
Machine Learning and Data Mining: 19 Mining Text And Web Data


Course "Machine Learning and Data Mining" for the degree of Computer Engineering at the Politecnico di Milano. In this lecture we overview text and web mining. The slides are mainly taken from Jiawei Han textbook.


Page 1: Machine Learning and Data Mining: 19 Mining Text And Web Data

Prof. Pier Luca Lanzi

Text and Web Mining
Machine Learning and Data Mining (Unit 19)

Page 2: Machine Learning and Data Mining: 19 Mining Text And Web Data


References

Jiawei Han and Micheline Kamber, "Data Mining: Concepts and Techniques", The Morgan Kaufmann Series in Data Management Systems (Second Edition)

Chapter 10, part 2

Web Mining Course by Gregory Piatetsky-Shapiro, available at www.kdnuggets.com

Page 3: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining Text Data: An Introduction

Data Mining / Knowledge Discovery

Structured Data | Multimedia | Free Text | Hypertext

Structured data:
HomeLoan(Loanee: Frank Rizzo, Lender: MWF, Agency: Lake View, Amount: $200,000, Term: 15 years)

Free text:
"Frank Rizzo bought his home from Lake View Real Estate in 1992. He paid $200,000 under a 15-year loan from MW Financial."

Hypertext:
<a href>Frank Rizzo</a> bought <a href>this home</a> from <a href>Lake View Real Estate</a> in <b>1992</b>. <p>… Loans($200K, [map], …)

Page 4: Machine Learning and Data Mining: 19 Mining Text And Web Data


Bag-of-Tokens Approaches

Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or …

nation – 5
civil – 1
war – 2
men – 2
died – 4
people – 5
Liberty – 1
God – 1
…

Feature Extraction

Loses all order-specific information! Severely limits context!

Documents → Token Sets
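The feature-extraction step is easy to sketch in code. The following is a minimal illustration (not from the original slides); the tokenizer and stop-word list are simplifications:

```python
import re
from collections import Counter

def bag_of_tokens(text, stop_words=frozenset({"a", "the", "and", "in", "on", "of", "to"})):
    """Reduce a document to unordered token counts (all order information is lost)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in stop_words)

doc = ("Four score and seven years ago our fathers brought forth on this "
       "continent, a new nation, conceived in Liberty, and dedicated to the "
       "proposition that all men are created equal.")
print(bag_of_tokens(doc).most_common(5))
```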

Page 5: Machine Learning and Data Mining: 19 Mining Text And Web Data


Natural Language Processing

"A dog is chasing a boy on the playground"

Lexical analysis (part-of-speech tagging):
A/Det dog/Noun is/Aux chasing/Verb a/Det boy/Noun on/Prep the/Det playground/Noun

Syntactic analysis (parsing): the words group into a Noun Phrase, a Complex Verb, a Noun Phrase, and a Prep Phrase, which combine into Verb Phrases and finally a Sentence

Semantic analysis:
Dog(d1). Boy(b1). Playground(p1). Chasing(d1,b1,p1).

Inference: Scared(x) if Chasing(_,x,_). ⇒ Scared(b1)

Pragmatic analysis (speech act): a person saying this may be reminding another person to get the dog back…

(Taken from ChengXiang Zhai, CS 397cxz – Fall 2003)

Page 6: Machine Learning and Data Mining: 19 Mining Text And Web Data


General NLP—Too Difficult!

Word-level ambiguity
“design” can be a noun or a verb (ambiguous POS)
“root” has multiple meanings (ambiguous sense)

Syntactic ambiguity
“natural language processing” (modification)
“A man saw a boy with a telescope.” (PP attachment)

Anaphora resolution
“John persuaded Bill to buy a TV for himself.” (himself = John or Bill?)

Presupposition
“He has quit smoking.” implies that he smoked before.

(Taken from ChengXiang Zhai, CS 397cxz – Fall 2003)

Humans rely on context to interpret (when possible).
This context may extend beyond a given document!

Page 7: Machine Learning and Data Mining: 19 Mining Text And Web Data


Shallow Linguistics

English Lexicon
Part-of-Speech Tagging
Word Sense Disambiguation
Phrase Detection / Parsing

Page 8: Machine Learning and Data Mining: 19 Mining Text And Web Data


WordNet

An extensive lexical network for the English language
Contains over 138,838 words
Several graphs, one for each part-of-speech
Synsets (synonym sets), each defining a semantic sense
Relationship information (antonym, hyponym, meronym, …)
Downloadable for free (UNIX, Windows)
Expanding to other languages (Global WordNet Association)
Funded >$3 million, mainly government (translation interest)
Founder George Miller, National Medal of Science, 1991

[Figure: synonym/antonym graph. Synonyms of "wet": watery, moist, damp. Synonyms of "dry": parched, anhydrous, arid. "wet" and "dry" are antonyms.]

Page 9: Machine Learning and Data Mining: 19 Mining Text And Web Data


Part-of-Speech Tagging

Training data (annotated text):
“This sentence serves as an example of annotated text…”
 Det N V1 P Det N P V2 N

POS Tagger: “This is a new sentence.” → This/Det is/Aux a/Det new/Adj sentence/N

Pick the most likely tag sequence:

$$p(w_1,\dots,w_k, t_1,\dots,t_k) = \begin{cases} p(t_1|w_1)\cdots p(t_k|w_k)\,p(w_1)\cdots p(w_k) & \text{independent assignment (most common tag)} \\ \prod_{i=1}^{k} p(w_i|t_i)\,p(t_i|t_{i-1}) & \text{partial dependency (HMM)} \end{cases}$$
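The HMM case above can be solved with Viterbi dynamic programming. A minimal sketch (the tag set and probability tables are invented toy values, not data from the lecture):

```python
import math

# Toy HMM: tag set and probability tables are invented for illustration.
tags = ["Det", "N", "V"]
trans = {("<s>", "Det"): 0.6, ("<s>", "N"): 0.3, ("<s>", "V"): 0.1,
         ("Det", "Det"): 0.05, ("Det", "N"): 0.9, ("Det", "V"): 0.05,
         ("N", "Det"): 0.2, ("N", "N"): 0.3, ("N", "V"): 0.5,
         ("V", "Det"): 0.6, ("V", "N"): 0.3, ("V", "V"): 0.1}
emit = {("Det", "the"): 0.7, ("Det", "a"): 0.3,
        ("N", "dog"): 0.5, ("N", "boy"): 0.5,
        ("V", "chases"): 1.0}

def viterbi(words):
    """Pick the tag sequence maximizing prod_i p(w_i|t_i) * p(t_i|t_{i-1})."""
    score = {"<s>": 0.0}   # log-prob of the best path ending in each tag
    path = {"<s>": []}
    for w in words:
        new_score, new_path = {}, {}
        for t in tags:
            e = emit.get((t, w), 1e-6)  # small floor for unseen (tag, word) pairs
            best_prev = max(score, key=lambda p: score[p] + math.log(trans.get((p, t), 1e-6)))
            new_score[t] = (score[best_prev]
                            + math.log(trans.get((best_prev, t), 1e-6)) + math.log(e))
            new_path[t] = path[best_prev] + [t]
        score, path = new_score, new_path
    return path[max(score, key=score.get)]

print(viterbi("the dog chases a boy".split()))  # ['Det', 'N', 'V', 'Det', 'N']
```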

Page 10: Machine Learning and Data Mining: 19 Mining Text And Web Data


Word Sense Disambiguation

Supervised Learning. Features:

Neighboring POS tags (N Aux V P N)
Neighboring words (linguistics are rooted in ambiguity)
Stemmed form (root)
Dictionary/Thesaurus entries of neighboring words
High co-occurrence words (plant, tree, origin, …)
Other senses of the word within the discourse

Algorithms:
Rule-based learning (e.g., IG-guided)
Statistical learning (e.g., Naïve Bayes)
Unsupervised learning (e.g., Nearest Neighbor)

Example: which sense of "root"?
“The difficulties of computational linguistics are rooted in ambiguity.”
 N Aux V P N

(Adapted from ChengXiang Zhai, CS 397cxz – Fall 2003)

Page 11: Machine Learning and Data Mining: 19 Mining Text And Web Data


Parsing

(Adapted from ChengXiang Zhai, CS 397cxz – Fall 2003)

Choose most likely parse tree…

[Figure: two candidate parse trees for "A dog is chasing a boy on the playground". In the first, the prepositional phrase "on the playground" attaches to the verb phrase (probability of this tree = 0.000015); in the second, it attaches to the noun phrase "a boy" (probability of this tree = 0.000011).]

Probabilistic CFG

Grammar:
S → NP VP (1.0)
NP → Det BNP (0.3)
NP → BNP (0.4)
NP → NP PP (0.3)
BNP → N
VP → V
VP → Aux V NP
VP → VP PP
PP → P NP (1.0)

Lexicon:
V → chasing (0.01)
Aux → is
N → dog (0.003)
N → boy
N → playground
Det → the
Det → a
P → on

Page 12: Machine Learning and Data Mining: 19 Mining Text And Web Data


Obstacles

Ambiguity
"A man saw a boy with a telescope."

Computational intensity
Imposes a context horizon.

Text Mining NLP Approach:
Locate promising fragments using fast IR methods (bag-of-tokens)
Only apply slow NLP techniques to promising fragments

Page 13: Machine Learning and Data Mining: 19 Mining Text And Web Data


Summary: Shallow NLP

However, shallow NLP techniques are feasible and useful:

Lexicon: machine-understandable linguistic knowledge (possible senses, definitions, synonyms, antonyms, typeof, etc.)

POS Tagging: limit ambiguity (word/POS), entity extraction

WSD: stem/synonym/hyponym matches (doc and query)
Query: "Foreign cars"; Document: "I'm selling a 1976 Jaguar…"

Parsing: logical view of information (inference?, translation?)
"A man saw a boy with a telescope."

Even without complete NLP, any additional knowledge extracted from text data can only be beneficial. Ingenuity will determine the applications.

Page 14: Machine Learning and Data Mining: 19 Mining Text And Web Data


References for NLP

C. D. Manning and H. Schütze, "Foundations of Statistical Natural Language Processing", MIT Press, 1999.
S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", Prentice Hall, 1995.
S. Chakrabarti, "Mining the Web: Statistical Analysis of Hypertext and Semi-Structured Data", Morgan Kaufmann, 2002.
G. Miller, R. Beckwith, C. Fellbaum, D. Gross, K. Miller, and R. Tengi, Five papers on WordNet, Princeton University, August 1993.
C. Zhai, Introduction to NLP, Lecture Notes for CS 397cxz, UIUC, Fall 2003.
M. Hearst, Untangling Text Data Mining, ACL'99, invited paper. http://www.sims.berkeley.edu/~hearst/papers/acl99/acl99-tdm.html
R. Sproat, Introduction to Computational Linguistics, LING 306, UIUC, Fall 2003.
A Road Map to Text Mining and Web Mining, University of Texas resource page. http://www.cs.utexas.edu/users/pebronia/text-mining/
Computational Linguistics and Text Mining Group, IBM Research, http://www.research.ibm.com/dssgrp/

Page 15: Machine Learning and Data Mining: 19 Mining Text And Web Data


Text Databases and Information Retrieval

Text databases (document databases)
Large collections of documents from various sources: news articles, research papers, books, digital libraries, e-mail messages, Web pages, library databases, etc.
Data stored is usually semi-structured
Traditional information retrieval techniques become inadequate for the increasingly vast amounts of text data

Information retrieval
A field developed in parallel with database systems
Information is organized into (a large number of) documents
Information retrieval problem: locating relevant documents based on user input, such as keywords or example documents

Page 16: Machine Learning and Data Mining: 19 Mining Text And Web Data


Information Retrieval (IR)

Typical Information Retrieval systems
Online library catalogs
Online document management systems

Information retrieval vs. database systems
Some DB problems are not present in IR, e.g., update, transaction management, complex objects
Some IR problems are not addressed well in DBMS, e.g., unstructured documents, approximate search using keywords and relevance

Page 17: Machine Learning and Data Mining: 19 Mining Text And Web Data


Basic Measures for Text Retrieval

Precision: the percentage of retrieved documents that are in fact relevant to the query (i.e., “correct” responses)

Recall: the percentage of documents that are relevant to the query and were, in fact, retrieved

$$\text{precision} = \frac{|\{\text{Relevant}\} \cap \{\text{Retrieved}\}|}{|\{\text{Retrieved}\}|}$$

$$\text{recall} = \frac{|\{\text{Relevant}\} \cap \{\text{Retrieved}\}|}{|\{\text{Relevant}\}|}$$

[Figure: Venn diagram over all documents showing the relevant set, the retrieved set, and their intersection (relevant & retrieved).]
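The two measures translate directly into code; a minimal sketch over sets of document ids (the example numbers are invented):

```python
def precision_recall(retrieved, relevant):
    """Precision = |relevant & retrieved| / |retrieved|; recall divides by |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# E.g. 3 of the 4 returned docs are relevant, out of 6 relevant docs overall.
print(precision_recall({"d1", "d2", "d3", "d9"},
                       {"d1", "d2", "d3", "d4", "d5", "d6"}))  # (0.75, 0.5)
```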

Page 18: Machine Learning and Data Mining: 19 Mining Text And Web Data


Information Retrieval Techniques

Basic Concepts
A document can be described by a set of representative keywords called index terms.
Different index terms have varying relevance when used to describe document contents.
This effect is captured through the assignment of numerical weights to each index term of a document (e.g., frequency, tf-idf).

DBMS Analogy
Index terms ↔ attributes
Weights ↔ attribute values

Page 19: Machine Learning and Data Mining: 19 Mining Text And Web Data


Information Retrieval Techniques

Index term (attribute) selection:
Stop list
Word stemming
Index term weighting methods
Term-document frequency matrices

Information Retrieval Models:
Boolean Model
Vector Model
Probabilistic Model

Page 20: Machine Learning and Data Mining: 19 Mining Text And Web Data


Boolean Model

Consider that index terms are either present or absent in a document
As a result, the index term weights are assumed to be all binary
A query is composed of index terms linked by three connectives: not, and, or
E.g.: car and repair, plane or airplane
The Boolean model predicts that each document is either relevant or non-relevant based on the match of the document to the query

Page 21: Machine Learning and Data Mining: 19 Mining Text And Web Data


Keyword-Based Retrieval

A document is represented by a string, which can be identified by a set of keywords

Queries may use expressions of keywords
E.g., car and repair shop, tea or coffee, DBMS but not Oracle
Queries and retrieval should consider synonyms, e.g., repair and maintenance

Major difficulties of the model
Synonymy: a keyword T does not appear anywhere in the document, even though the document is closely related to T, e.g., data mining
Polysemy: the same keyword may mean different things in different contexts, e.g., mining

Page 22: Machine Learning and Data Mining: 19 Mining Text And Web Data


Similarity-Based Retrieval in Text Data

Finds similar documents based on a set of common keywords

The answer should be based on the degree of relevance, considering the nearness of the keywords, the relative frequency of the keywords, etc.

Basic techniques

Stop list
Set of words that are deemed “irrelevant”, even though they may appear frequently
E.g., a, the, of, for, to, with, etc.
Stop lists may vary when the document set varies

Page 23: Machine Learning and Data Mining: 19 Mining Text And Web Data


Similarity-Based Retrieval in Text Data

Word stem
Several words are small syntactic variants of each other since they share a common word stem
E.g., drug, drugs, drugged

A term frequency table
Each entry frequent_table(i, j) = number of occurrences of word t_i in document d_j
Usually, the ratio is used instead of the absolute number of occurrences

Similarity metrics: measure the closeness of a document to a query (a set of keywords)
Relative term occurrences
Cosine distance:

$$\text{sim}(v_1, v_2) = \frac{v_1 \cdot v_2}{|v_1|\,|v_2|}$$
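The cosine measure translates directly into a sketch over sparse term-weight dictionaries (the example vectors are invented):

```python
import math

def cosine_sim(v1, v2):
    """sim(v1, v2) = (v1 . v2) / (|v1| |v2|) over sparse term->weight dicts."""
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

query = {"drug": 1, "repair": 1}
doc = {"drug": 3, "trial": 2, "patient": 4}
print(round(cosine_sim(query, doc), 3))
```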

Page 24: Machine Learning and Data Mining: 19 Mining Text And Web Data


Indexing Techniques

Inverted index
Maintains two hash- or B+-tree indexed tables:
• document_table: a set of document records <doc_id, postings_list>
• term_table: a set of term records <term, postings_list>
Answer query: find all docs associated with one or a set of terms (a code sketch follows this slide)
+ easy to implement
– does not handle synonymy and polysemy well, and postings lists could be too long (storage could be very large)

Signature file
Associate a signature with each document
A signature is a representation of an ordered list of terms that describe the document
Order is obtained by frequency analysis, stemming and stop lists
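A minimal in-memory sketch of the inverted-index idea above; real systems keep postings lists in disk-resident hash or B+-tree structures rather than Python dictionaries:

```python
from collections import defaultdict

class InvertedIndex:
    """term_table: term -> postings list of doc ids containing the term."""

    def __init__(self):
        self.term_table = defaultdict(set)
        self.document_table = {}  # doc_id -> original text

    def add(self, doc_id, text):
        self.document_table[doc_id] = text
        for term in text.lower().split():
            self.term_table[term].add(doc_id)

    def query(self, *terms):
        """Find all docs associated with a set of terms (conjunctive query)."""
        postings = [self.term_table.get(t, set()) for t in terms]
        return set.intersection(*postings) if postings else set()

idx = InvertedIndex()
idx.add(1, "car repair shop")
idx.add(2, "plane repair manual")
print(idx.query("repair"))          # {1, 2}
print(idx.query("car", "repair"))   # {1}
```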

Page 25: Machine Learning and Data Mining: 19 Mining Text And Web Data


Vector Space Model

Documents and user queries are represented as m-dimensional vectors, where m is the total number of index terms in the document collection.
The degree of similarity of the document d with regard to the query q is calculated as the correlation between the vectors that represent them, using measures such as the Euclidean distance or the cosine of the angle between these two vectors.

Page 26: Machine Learning and Data Mining: 19 Mining Text And Web Data


Latent Semantic Indexing (1)

Basic idea
Similar documents have similar word frequencies
Difficulty: the size of the term frequency matrix is very large
Use a singular value decomposition (SVD) technique to reduce the size of the frequency table
Retain the K most significant rows of the frequency table

Method

Create a term-by-document weighted frequency matrix A

SVD construction: A = U * S * V'

Define K and obtain U_k, S_k, and V_k

Create the query vector q'

Project q' into the term-document space: D_q = q' * U_k * S_k^(-1)

Calculate similarities: cos α = (D_q · D) / (||D_q|| ||D||)
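The method can be sketched with NumPy's SVD; the tiny term-by-document matrix and query below are invented for illustration:

```python
import numpy as np

# Rows = terms, columns = documents (weighted frequencies, invented).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U * S * V'
K = 2
Uk, Sk, Vk = U[:, :K], np.diag(s[:K]), Vt[:K, :].T

q = np.array([1.0, 0.0, 1.0, 0.0])   # query vector over terms
Dq = q @ Uk @ np.linalg.inv(Sk)      # project q into the K-dim space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for j, d in enumerate(Vk):           # each row of Vk is one document
    print(f"doc{j}: {cos(Dq, d):.3f}")
```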

Page 27: Machine Learning and Data Mining: 19 Mining Text And Web Data


Latent Semantic Indexing (2)

[Figure: a weighted term-by-document frequency matrix, queried with the terms "insulation" and "joint".]

Page 28: Machine Learning and Data Mining: 19 Mining Text And Web Data


Probabilistic Model

Basic assumption: given a user query, there is a set of documents that contains exactly the relevant documents and no others (the ideal answer set).
The querying process is a process of specifying the properties of an ideal answer set. Since these properties are not known at query time, an initial guess is made.
This initial guess allows the generation of a preliminary probabilistic description of the ideal answer set, which is used to retrieve a first set of documents.
An interaction with the user is then initiated with the purpose of improving the probabilistic description of the answer set.

Page 29: Machine Learning and Data Mining: 19 Mining Text And Web Data


Types of Text Data Mining

Keyword-based association analysis
Automatic document classification
Similarity detection
• Cluster documents by a common author
• Cluster documents containing information from a common source
Link analysis: unusual correlation between entities
Sequence analysis: predicting a recurring event
Anomaly detection: find information that violates usual patterns
Hypertext analysis
• Patterns in anchors/links
• Anchor text correlations with linked objects

Page 30: Machine Learning and Data Mining: 19 Mining Text And Web Data


Keyword-Based Association Analysis

Motivation
Collect sets of keywords or terms that occur frequently together and then find the association or correlation relationships among them

Association Analysis Process
Preprocess the text data by parsing, stemming, removing stop words, etc.
Invoke association mining algorithms
• Consider each document as a transaction
• View the set of keywords in the document as a set of items in the transaction
Term-level association mining
• No need for human effort in tagging documents
• The number of meaningless results and the execution time are greatly reduced
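A sketch of the transaction view described above: each document becomes a set of keywords, and frequent co-occurring pairs are counted Apriori-style against a minimum support threshold (the mini-corpus is invented):

```python
from itertools import combinations
from collections import Counter

docs = ["data mining for text data",
        "web mining and text mining",
        "text data and web search"]

# Each document is one transaction: its set of (non-stop) keywords.
stop = {"for", "and"}
transactions = [set(d.split()) - stop for d in docs]

min_support = 2
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent)  # ('data','text'), ('mining','text') and ('text','web') co-occur often
```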

Page 31: Machine Learning and Data Mining: 19 Mining Text And Web Data


Text Classification (1)

Motivation
Automatic classification for the large number of on-line text documents (Web pages, e-mails, corporate intranets, etc.)

Classification Process
Data preprocessing
Definition of training and test sets
Creation of the classification model using the selected classification algorithm
Classification model validation
Classification of new/unknown text documents

Text document classification differs from the classification of relational data
Document databases are not structured according to attribute-value pairs

Page 32: Machine Learning and Data Mining: 19 Mining Text And Web Data


Text Classification (2)

Classification Algorithms:
Support Vector Machines
K-Nearest Neighbors
Naïve Bayes
Neural Networks
Decision Trees
Association rule-based
Boosting

Page 33: Machine Learning and Data Mining: 19 Mining Text And Web Data


Document Clustering

Motivation
Automatically group related documents based on their contents
No predetermined training sets or taxonomies
Generate a taxonomy at runtime

Clustering Process
Data preprocessing: remove stop words, stem, feature extraction, lexical analysis, etc.
Hierarchical clustering: compute similarities applying clustering algorithms
Model-based clustering (neural network approach): clusters are represented by “exemplars” (e.g., SOM)

Page 34: Machine Learning and Data Mining: 19 Mining Text And Web Data


Text Categorization

Pre-given categories and labeled document examples (categories may form a hierarchy)
Classify new documents
A standard classification (supervised learning) problem

[Figure: a categorization system routes incoming documents into categories such as Sports, Business, Education, Science, …]

Page 35: Machine Learning and Data Mining: 19 Mining Text And Web Data


Applications

News article classification
Automatic email filtering
Webpage classification
Word sense disambiguation
… …

Page 36: Machine Learning and Data Mining: 19 Mining Text And Web Data


Categorization Methods

Manual: typically rule-based
Does not scale up (labor-intensive, rule inconsistency)
May be appropriate for special data on a particular domain

Automatic: typically exploiting machine learning techniques
Vector space model based
• Prototype-based (Rocchio)
• K-nearest neighbor (KNN)
• Decision tree (learn rules)
• Neural networks (learn non-linear classifier)
• Support Vector Machines (SVM)
Probabilistic or generative model based
• Naïve Bayes classifier

Page 37: Machine Learning and Data Mining: 19 Mining Text And Web Data


Vector Space Model

Represent a doc by a term vector
Term: basic concept, e.g., word or phrase
Each term defines one dimension
N terms define an N-dimensional space
Element of vector corresponds to term weight
E.g., d = (x1, …, xN), where xi is the “importance” of term i

A new document is assigned to the most likely category based on vector similarity

Page 38: Machine Learning and Data Mining: 19 Mining Text And Web Data


Illustration of the Vector Space Model

[Figure: documents plotted in a three-dimensional term space with axes Java, Microsoft and Starbucks; documents cluster into Category 1 (C1), Category 2 (C2) and Category 3 (C3), and a new doc is assigned to the closest cluster.]

Page 39: Machine Learning and Data Mining: 19 Mining Text And Web Data


What VS Model Does Not Specify?

How to select terms to capture “basic concepts”
Word stopping
• e.g., “a”, “the”, “always”, “along”
Word stemming
• e.g., “computer”, “computing”, “computerize” => “compute”
Latent semantic indexing

How to assign weights
Not all words are equally important: some are more indicative than others
• e.g., “algebra” vs. “science”

How to measure the similarity

Page 40: Machine Learning and Data Mining: 19 Mining Text And Web Data


How to Assign Weights

Two-fold heuristics based on frequency

TF (term frequency)
More frequent within a document → more relevant to semantics
E.g., “query” vs. “commercial”

IDF (inverse document frequency)
Less frequent among documents → more discriminative
E.g., “algebra” vs. “science”

Page 41: Machine Learning and Data Mining: 19 Mining Text And Web Data


Term Frequency Weighting

Weighting
The more frequent, the more relevant to the topic
E.g., “query” vs. “commercial”
Raw TF = f(t, d): how many times term t appears in doc d

Normalization
When document length varies, relative frequency is preferred
E.g., maximum frequency normalization

Page 42: Machine Learning and Data Mining: 19 Mining Text And Web Data


Inverse Document Frequency Weighting

Idea: terms that are less frequent among documents are more discriminative

Formula: given n, the total number of docs, and k, the number of docs in which term t appears (the document frequency, DF), a common form is

$$IDF(t) = \log\frac{n}{k}$$

Page 43: Machine Learning and Data Mining: 19 Mining Text And Web Data


TF-IDF Weighting

TF-IDF weighting: weight(t, d) = TF(t, d) * IDF(t)
Frequent within doc → high TF → high weight
Selective among docs → high IDF → high weight

Recall the VS model
Each selected term represents one dimension
Each doc is represented by a feature vector
The t-th coordinate of document d is the TF-IDF weight of term t
This is more reasonable

Just for illustration…
Many complex and more effective weighting variants exist in practice
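Putting TF and IDF together, a small corpus can be weighted as follows; this sketch uses raw TF and IDF(t) = log(n/k), one common variant among many:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """weight(t, d) = TF(t, d) * IDF(t), with IDF(t) = log(n / df(t))."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))           # document frequency: docs containing t
    idf = {t: math.log(n / k) for t, k in df.items()}
    return [{t: tf * idf[t] for t, tf in Counter(tokens).items()}
            for tokens in tokenized]

docs = ["text mining text", "web mining", "government president congress"]
for vec in tfidf_vectors(docs):
    print(vec)
```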

Page 44: Machine Learning and Data Mining: 19 Mining Text And Web Data


How to Measure Similarity?

Given two documents d1 and d2:

Similarity definition
dot product: sim(d1, d2) = d1 · d2
normalized dot product (or cosine): sim(d1, d2) = (d1 · d2) / (|d1| |d2|)

Page 45: Machine Learning and Data Mining: 19 Mining Text And Web Data


Illustrative Example

Term         text    mining  travel  map     search  engine  govern  president  congress
IDF (faked)  2.4     4.5     2.8     3.3     2.1     5.4     2.2     3.2        4.3
doc1         2(4.8)  1(4.5)                  1(2.1)  1(5.4)
doc2         1(2.4)          2(5.6)  1(3.3)
doc3                                                         1(2.2)  1(3.2)     1(4.3)
newdoc       1(2.4)  1(4.5)

Each entry is the raw term frequency, with the TF-IDF weight in parentheses.

To whom is newdoc more similar?

Sim(newdoc, doc1) = 4.8*2.4 + 4.5*4.5 = 31.77
Sim(newdoc, doc2) = 2.4*2.4 = 5.76
Sim(newdoc, doc3) = 0

Page 46: Machine Learning and Data Mining: 19 Mining Text And Web Data


Vector Space Model-Based Classifiers

What do we have so far?
A feature space with a similarity measure
This is a classic supervised learning problem
Search for an approximation to the classification hyperplane

VS model based classifiers
K-NN
Decision tree based
Neural networks
Support vector machines
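As one concrete instance, a prototype-based (Rocchio-style) classifier reduces to computing one centroid vector per category and assigning a new document to the nearest one; the toy vectors below are invented, and a real system would use TF-IDF features:

```python
import math

def add(u, v):
    """Sum two sparse term->weight vectors."""
    out = dict(u)
    for t, w in v.items():
        out[t] = out.get(t, 0.0) + w
    return out

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroids(training):
    """One prototype vector per category: the (unnormalized) sum of its docs."""
    protos = {}
    for label, vec in training:
        protos[label] = add(protos.get(label, {}), vec)
    return protos

training = [("tech", {"java": 2, "microsoft": 1}),
            ("tech", {"java": 1, "software": 1}),
            ("food", {"starbucks": 2, "coffee": 1})]
protos = centroids(training)
new_doc = {"java": 1, "coffee": 1}
print(max(protos, key=lambda c: cosine(new_doc, protos[c])))  # 'tech'
```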

Page 47: Machine Learning and Data Mining: 19 Mining Text And Web Data


Probabilistic Model

Category C is modeled as a probability distribution over predefined random events.
Random events model the process of generating documents.
Therefore, how likely a document d belongs to category C is measured through the probability for category C to generate d.

Page 48: Machine Learning and Data Mining: 19 Mining Text And Web Data


Quick Revisit of Bayes’ Rule

Category hypothesis space: H = {C1, …, Cn}
One document: D

$$P(C_i \mid D) = \frac{P(D \mid C_i)\, P(C_i)}{P(D)}$$

where P(C_i | D) is the posterior probability of C_i and P(D | C_i) is the document model for category C_i.

As we want to pick the most likely category C*, we can drop p(D):

$$C^* = \arg\max_C P(C \mid D) = \arg\max_C P(D \mid C)\, P(C)$$

Page 49: Machine Learning and Data Mining: 19 Mining Text And Web Data


Probabilistic Model: Multi-Bernoulli

Event: word presence or absence

D = (x_1, …, x_{|V|}), where x_i = 1 for presence of word w_i and x_i = 0 for absence

Parameters: {p(w_i=1|C), p(w_i=0|C)}, with p(w_i=1|C) + p(w_i=0|C) = 1

$$p(D=(x_1,\dots,x_{|V|}) \mid C) = \prod_{i=1}^{|V|} p(w_i = x_i \mid C) = \prod_{i:\,x_i=1} p(w_i=1 \mid C) \prod_{i:\,x_i=0} p(w_i=0 \mid C)$$

Page 50: Machine Learning and Data Mining: 19 Mining Text And Web Data


Probabilistic Model: Multinomial

Event: word selection/sampling

D = (n_1, …, n_{|V|}), where n_i is the frequency of word w_i and n = n_1 + … + n_{|V|}

Parameters: {p(w_i|C)}, with p(w_1|C) + … + p(w_{|V|}|C) = 1

$$p(D=(n_1,\dots,n_{|V|}) \mid C) = p(n \mid C) \binom{n}{n_1\,\cdots\,n_{|V|}} \prod_{i=1}^{|V|} p(w_i \mid C)^{n_i}$$

Page 51: Machine Learning and Data Mining: 19 Mining Text And Web Data


Parameter Estimation

Training examples: E(C_i) denotes the set of training documents labeled with category C_i
Vocabulary: V = {w_1, …, w_{|V|}}

Category prior:

$$p(C_i) = \frac{|E(C_i)|}{\sum_{j=1}^{k} |E(C_j)|}$$

Multi-Bernoulli doc model (smoothed), where δ(w_i, d) = 1 if w_i occurs in d and 0 otherwise:

$$p(w_i = 1 \mid C_j) = \frac{\sum_{d \in E(C_j)} \delta(w_i, d) + 0.5}{|E(C_j)| + 1}$$

Multinomial doc model (smoothed), where c(w_i, d) is the count of w_i in d:

$$p(w_i \mid C_j) = \frac{\sum_{d \in E(C_j)} c(w_i, d) + 1}{\sum_{m=1}^{|V|} \sum_{d \in E(C_j)} c(w_m, d) + |V|}$$

Page 52: Machine Learning and Data Mining: 19 Mining Text And Web Data


Classification of New Document

Multi-Bernoulli: d = (x_1, …, x_{|V|}), x_i ∈ {0,1}

$$C^* = \arg\max_C P(D \mid C) P(C) = \arg\max_C \prod_{i=1}^{|V|} p(w_i = x_i \mid C)\, P(C) = \arg\max_C \left[ \log p(C) + \sum_{i=1}^{|V|} \log p(w_i = x_i \mid C) \right]$$

Multinomial: d = (n_1, …, n_{|V|}), |d| = n = n_1 + … + n_{|V|}

$$C^* = \arg\max_C P(D \mid C) P(C) = \arg\max_C p(n \mid C) \prod_{i=1}^{|V|} p(w_i \mid C)^{n_i}\, P(C)$$
$$= \arg\max_C \left[ \log p(n \mid C) + \log p(C) + \sum_{i=1}^{|V|} n_i \log p(w_i \mid C) \right] \approx \arg\max_C \left[ \log p(C) + \sum_{i=1}^{|V|} n_i \log p(w_i \mid C) \right]$$
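A compact sketch of the multinomial classifier with the Laplace-smoothed estimates from the previous slide; the toy training set is invented:

```python
import math
from collections import Counter, defaultdict

def train(labeled_docs):
    """Estimate log p(C) and Laplace-smoothed log p(w|C) as on the slides."""
    docs_per_class = Counter(label for label, _ in labeled_docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, words in labeled_docs:
        word_counts[label].update(words)
        vocab.update(words)
    n = sum(docs_per_class.values())
    log_prior = {c: math.log(k / n) for c, k in docs_per_class.items()}
    log_cond = {}
    for c, counts in word_counts.items():
        total = sum(counts.values())
        log_cond[c] = {w: math.log((counts[w] + 1) / (total + len(vocab)))
                       for w in vocab}
    return log_prior, log_cond, vocab

def classify(words, log_prior, log_cond, vocab):
    """C* = argmax_C [log p(C) + sum_i n_i log p(w_i|C)], ignoring unseen words."""
    def score(c):
        return log_prior[c] + sum(log_cond[c][w] for w in words if w in vocab)
    return max(log_prior, key=score)

data = [("sports", "match goal team goal".split()),
        ("sports", "team win match".split()),
        ("business", "market stock profit".split())]
model = train(data)
print(classify("goal match".split(), *model))  # 'sports'
```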

Page 53: Machine Learning and Data Mining: 19 Mining Text And Web Data


Categorization Methods

Vector space model
K-NN
Decision tree
Neural network
Support vector machine

Probabilistic model
Naïve Bayes classifier

Many, many others and variants exist
e.g., Bim, Nb, Ind, Swap-1, LLSF, Widrow-Hoff, Rocchio, Gis-W, … …

Page 54: Machine Learning and Data Mining: 19 Mining Text And Web Data


Evaluation (1)

Effectiveness measure
Classic: Precision & Recall

• Precision

• Recall

Page 55: Machine Learning and Data Mining: 19 Mining Text And Web Data


Evaluation (2)

Benchmarks
Classic: Reuters collection
• A set of newswire stories classified under categories related to economics

Effectiveness
Difficulties of strict comparison
• different parameter settings
• different "splits" (or selections) between training and testing
• various optimizations … …
However, results are widely recognizable
• Best: boosting-based committee classifiers & SVM
• Worst: Naïve Bayes classifier

Need to consider other factors, especially efficiency

Page 56: Machine Learning and Data Mining: 19 Mining Text And Web Data


Summary: Text Categorization

Wide application domain
Comparable effectiveness to professionals

Manual text classification is not 100% accurate and is unlikely to improve substantially
Automatic text classification is growing at a steady pace

Prospects and extensions
Very noisy text, such as text from OCR
Speech transcripts

Page 57: Machine Learning and Data Mining: 19 Mining Text And Web Data


Research Problems in Text Mining

Google: what is the next step?
How to find pages that approximately match sophisticated documents, with incorporation of user profiles or preferences?
Looking back at Google: inverted indices
Construction of indices for sophisticated documents, with incorporation of user profiles or preferences
Similarity search of such pages using such indices

Page 58: Machine Learning and Data Mining: 19 Mining Text And Web Data


References for Text Mining

Fabrizio Sebastiani, "Machine Learning in Automated Text Categorization", ACM Computing Surveys, Vol. 34, No. 1, March 2002.
Soumen Chakrabarti, "Data mining for hypertext: A tutorial survey", ACM SIGKDD Explorations, 2000.
Cleverdon, "Optimizing convenient online access to bibliographic databases", Information Services and Use, 4(1), 37-47, 1984.
Yiming Yang, "An evaluation of statistical approaches to text categorization", Journal of Information Retrieval, 1:67-88, 1999.
Yiming Yang and Xin Liu, "A re-examination of text categorization methods", Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'99), pp. 42-49, 1999.

Page 59: Machine Learning and Data Mining: 19 Mining Text And Web Data


World Wide Web: a brief history

Who invented the wheel is unknown.
Who invented the World-Wide Web?

(Sir) Tim Berners-Lee, while working at CERN in 1989, invented the World Wide Web, including the URL scheme and HTML, and in 1990 wrote the first server and the first browser.
The Mosaic browser, developed by Marc Andreessen and Eric Bina at NCSA (National Center for Supercomputing Applications) in 1993, helped the web spread rapidly.
Mosaic was the basis for Netscape…

Page 60: Machine Learning and Data Mining: 19 Mining Text And Web Data


What is Web Mining?

Discovering interesting and useful information from Web content and usage

Examples
Web search, e.g., Google, Yahoo, MSN, Ask, …
Specialized search, e.g., Froogle (comparison shopping), job ads (Flipdog)
eCommerce
Recommendations (Netflix, Amazon, etc.)
Improving conversion rate: next best product to offer
Advertising, e.g., Google AdSense
Fraud detection: click fraud detection, …
Improving Web site design and performance

Page 61: Machine Learning and Data Mining: 19 Mining Text And Web Data


How does it differ from “classical” data mining?

The web is not a relation
Textual information and linkage structure

Usage data is huge and growing rapidly
Google’s usage logs are bigger than their web crawl
Data generated per day is comparable to the largest conventional data warehouses

Ability to react in real time to usage patterns
No human in the loop

Reproduced from Ullman & Rajaraman with permission

Page 62: Machine Learning and Data Mining: 19 Mining Text And Web Data


How big is the Web?

The number of pages is, technically, infinite
Because of dynamically generated content
Lots of duplication (30-40%)

The best estimate of “unique” static HTML pages comes from search engine claims
Google = 8 billion
Yahoo = 20 billion
Lots of marketing hype

Reproduced from Ullman & Rajaraman with permission

Page 63: Machine Learning and Data Mining: 19 Mining Text And Web Data


Netcraft Survey: 76,184,000 web sites (Feb 2006)

http://news.netcraft.com/archives/web_server_survey.html


Page 64: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining the World-Wide Web

The WWW is a huge, widely distributed, global information service center for
Information services: news, advertisements, consumer information, financial management, education, government, e-commerce, etc.
Hyper-link information
Access and usage information

The WWW provides rich sources for data mining

Challenges
Too huge for effective data warehousing and data mining
Too complex and heterogeneous: no standards and structure

Page 65: Machine Learning and Data Mining: 19 Mining Text And Web Data


Web Mining: A more challenging task

Searches for
Web access patterns
Web structures
Regularity and dynamics of Web contents

Problems
The “abundance” problem
Limited coverage of the Web: hidden Web sources, majority of data in DBMS
Limited query interface based on keyword-oriented search
Limited customization to individual users

Page 66: Machine Learning and Data Mining: 19 Mining Text And Web Data


Web Mining Taxonomy

Web Mining
• Web Content Mining
 • Web Page Content Mining
 • Search Result Mining
• Web Structure Mining
• Web Usage Mining
 • General Access Pattern Tracking
 • Customized Usage Tracking

Page 67: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining the World-Wide Web

Web Content Mining → Web Page Content Mining

Web Page Summarization
WebLog, WebOQL, …: Web structuring query languages; can identify information within given web pages
Ahoy!: uses heuristics to distinguish personal home pages from other web pages
ShopBot: looks for product prices within web pages

Page 68: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining the World-Wide Web

Web Content Mining → Search Result Mining

Search Engine Result Summarization
Clustering search results: categorizes documents using phrases in titles and snippets

Page 69: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining the World-Wide Web

Web Structure Mining

Using links: PageRank, CLEVER
Use interconnections between web pages to give weight to pages.

Using generalization: MLDB, VWV
Uses a multi-level database representation of the Web. Counters (popularity) and link lists are used for capturing structure.
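PageRank, named above, reduces to a few lines of power iteration. This sketch uses an invented four-page link graph and the conventional damping factor 0.85:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: page -> list of pages it links to. Returns page -> rank."""
    pages = set(links) | {q for targets in links.values() for q in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, targets in links.items():
            if targets:
                share = damping * rank[p] / len(targets)
                for q in targets:
                    new[q] += share
            else:  # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, r in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(r, 3))
```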

Page 70: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining the World-Wide Web

Web Usage Mining → General Access Pattern Tracking

Web log mining
Uses KDD techniques to understand general access patterns and trends.
Can shed light on better structure and grouping of resource providers.

Page 71: Machine Learning and Data Mining: 19 Mining Text And Web Data


Mining the World-Wide Web

Web Usage Mining → Customized Usage Tracking

Adaptive sites
Analyzes the access patterns of each user at a time.
The web site restructures itself automatically by learning from user access patterns.

Page 72: Machine Learning and Data Mining: 19 Mining Text And Web Data


Web Usage Mining

Mining Web log records to discover user access patterns of Web pages

Applications
Target potential customers for electronic commerce
Enhance the quality and delivery of Internet information services to the end user
Improve Web server system performance
Identify potential prime advertisement locations

Web logs provide rich information about Web dynamics
A typical Web log entry includes the URL requested, the IP address from which the request originated, and a timestamp

Page 73: Machine Learning and Data Mining: 19 Mining Text And Web Data


Web Usage Mining

Understanding is a prerequisite to improvement
1 Google, but 70,000,000+ web sites

What applications?

Simple and basic
• Monitor performance, bandwidth usage
• Catch errors (404 errors: pages not found)
• Improve web site design (shortcuts for frequent paths, remove links not used, etc.)
• …

Advanced and business critical
• eCommerce: improve conversion, sales, profit
• Fraud detection: click stream fraud, …
• …

Page 74: Machine Learning and Data Mining: 19 Mining Text And Web Data


Techniques for Web usage mining

Construct a multidimensional view of the Web log database
Perform multidimensional OLAP analysis to find the top N users, top N accessed Web pages, most frequently accessed time periods, etc.

Perform data mining on Web log records
Find association patterns, sequential patterns, and trends of Web accessing
May need additional information, e.g., user browsing sequences of the Web pages in the Web server buffer

Conduct studies to
Analyze system performance; improve system design by Web caching, Web page prefetching, and Web page swapping

Page 75: Machine Learning and Data Mining: 19 Mining Text And Web Data


References

Deng Cai, Shipeng Yu, Ji-Rong Wen and Wei-Ying Ma, "Extracting Content Structure for Web Pages based on Visual Representation", The Fifth Asia Pacific Web Conference, 2003.
Deng Cai, Shipeng Yu, Ji-Rong Wen and Wei-Ying Ma, "VIPS: a Vision-based Page Segmentation Algorithm", Microsoft Technical Report (MSR-TR-2003-79), 2003.
Shipeng Yu, Deng Cai, Ji-Rong Wen and Wei-Ying Ma, "Improving Pseudo-Relevance Feedback in Web Information Retrieval Using Web Page Segmentation", 12th International World Wide Web Conference (WWW2003), May 2003.
Ruihua Song, Haifeng Liu, Ji-Rong Wen and Wei-Ying Ma, "Learning Block Importance Models for Web Pages", 13th International World Wide Web Conference (WWW2004), May 2004.
Deng Cai, Shipeng Yu, Ji-Rong Wen and Wei-Ying Ma, "Block-based Web Search", SIGIR 2004, July 2004.
Deng Cai, Xiaofei He, Ji-Rong Wen and Wei-Ying Ma, "Block-Level Link Analysis", SIGIR 2004, July 2004.
Deng Cai, Xiaofei He, Wei-Ying Ma, Ji-Rong Wen and Hong-Jiang Zhang, "Organizing WWW Images Based on The Analysis of Page Layout and Web Link Structure", The IEEE International Conference on Multimedia and EXPO (ICME'2004), June 2004.
Deng Cai, Xiaofei He, Zhiwei Li, Wei-Ying Ma and Ji-Rong Wen, "Hierarchical Clustering of WWW Image Search Results Using Visual, Textual and Link Analysis", 12th ACM International Conference on Multimedia, Oct. 2004.