2010.02.01 – SLIDE 1 – IS 240 – Spring 2010

Prof. Ray Larson
University of California, Berkeley
School of Information

Principles of Information Retrieval
Lecture 4: IR System Elements (cont.)
2010.02.01 – SLIDE 2 – IS 240 – Spring 2010

Review
• Review
  – Elements of IR Systems
    • Collections, Queries
    • Text processing and the Zipf distribution
• Stemmers and morphological analysis (cont…)
• Inverted file indexes
2010.02.01 – SLIDE 3 – IS 240 – Spring 2010

Queries
• A query is some expression of a user's information needs
• Can take many forms:
  – Natural-language description of the need
  – Formal query in a query language
• Queries may not be accurate expressions of the information need
  – There are differences between a conversation with a person and a formal query expression
2010.02.01 – SLIDE 4 – IS 240 – Spring 2010

Collections of Documents…
• Documents
  – A document is a representation of some aggregation of information, treated as a unit
• Collection
  – A collection is some physical or logical aggregation of documents
• Let's take the simplest case and say we are dealing with a computer file of plain ASCII text, where each line represents the "unit", or document
2010.02.01 – SLIDE 5 – IS 240 – Spring 2010

How to search that collection?
• Manually?
  – cat, more
• Scan for strings?
  – grep
• Extract individual words to search? (see the sketch after this list)
  – "Tokenize" (a Unix pipeline):
      tr -sc 'A-Za-z' '\012' < TEXTFILE | sort | uniq -c
  – See "Unix for Poets" by Ken Church
• Put it in a DBMS and use pattern matching there…
  – assuming the lines are smaller than the text size limits for the DBMS
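For comparison, here is a minimal Python sketch of the same count-the-words pipeline (TEXTFILE and the letters-only token pattern are carried over from the command above):

    import re
    from collections import Counter

    with open("TEXTFILE") as f:
        # split on non-letters, like tr -sc 'A-Za-z' '\012'
        tokens = re.findall(r"[A-Za-z]+", f.read())
    # count duplicates, like sort | uniq -c
    for word, n in Counter(tokens).most_common(10):
        print(n, word)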
2010.02.01 – SLIDE 6 – IS 240 – Spring 2010

What about VERY big files?
• Scanning becomes a problem
• The nature of the problem starts to change as the scale of the collection increases
• A variant of Parkinson's Law that applies to databases:
  – Data expands to fill the space available to store it
2010.02.01 – SLIDE 7

Document Processing Steps
2010.02.01 – SLIDE 8 – IS 240 – Spring 2010

Structure of an IR System
[Diagram of an Information Storage and Retrieval System, adapted from Soergel, p. 19. Search line: interest profiles & queries are formulated in terms of descriptors and stored as profiles/search requests (Store 1). Storage line: documents & data are indexed (descriptive and subject) and stored as document representations (Store 2). The "rules of the game" = rules for subject indexing + a thesaurus, which consists of a lead-in vocabulary and an indexing language. Comparison/matching of the two stores yields potentially relevant documents.]
2010.02.01 – SLIDE 9 – IS 240 – Spring 2010

Query Processing
• In order to correctly match queries and documents, the queries must go through the same text processing steps as the documents did when they were stored
• In effect, the query is treated as if it were a document
• Exceptions (of course) include things like structured query languages, which must be parsed to extract the search terms and requested operations from the query
  – The search terms must still go through the same text processing steps as the documents…
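A minimal sketch of that symmetry (the normalize and stem functions here are illustrative assumptions, not the system's actual code):

    import re

    def stem(token):
        return token  # placeholder: plug in an S-stemmer or Porter stemmer here

    def normalize(text):
        # the ONE pipeline used for both documents and queries:
        # lowercase, tokenize, stem
        return [stem(t) for t in re.findall(r"[a-z]+", text.lower())]

    doc_terms = normalize("Now is the time for all good men...")
    query_terms = normalize("good men")  # the query is treated like a tiny document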
2010.02.01 – SLIDE 10 – IS 240 – Spring 2010

Steps in Query Processing
• Parsing and analysis of the query text (same as done for the document text)
  – Morphological analysis
  – Statistical analysis of text
2010.02.01 – SLIDE 11 – IS 240 – Spring 2010

Stemming and Morphological Analysis
• Goal: "normalize" similar words
• Morphology ("form" of words)
  – Inflectional morphology
    • E.g., inflect verb endings and noun number
    • Never changes grammatical class
      – dog, dogs
      – tengo, tienes, tiene, tenemos, tienen
  – Derivational morphology
    • Derives one word from another
    • Often changes grammatical class
      – build, building; health, healthy
2010.02.01 – SLIDE 12 – IS 240 – Spring 2010

Plotting Word Frequency by Rank
• Say, for a text with 100 tokens, count:
  – How many tokens occur 1 time (50)
  – How many tokens occur 2 times (20) …
  – How many tokens occur 7 times (10) …
  – How many tokens occur 12 times (1)
  – How many tokens occur 14 times (1)
• The things that occur most often share the highest rank (rank 1)
• Things that occur the fewest times have the lowest rank (rank n)
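A quick sketch of the rank/frequency bookkeeping described above (the toy text is invented for illustration):

    import re
    from collections import Counter

    text = "the the the the to to to of of now"  # hypothetical 10-token text
    freqs = Counter(re.findall(r"\w+", text))
    # rank 1 = most frequent token, rank n = least frequent
    for rank, (word, f) in enumerate(freqs.most_common(), start=1):
        print(rank, word, f)
    # prints: 1 the 4 / 2 to 3 / 3 of 2 / 4 now 1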
2010.02.01 – SLIDE 13 – IS 240 – Spring 2010

Many Similar Distributions…
• Words in a text collection
• Library book checkout patterns
• Bradford's and Lotka's laws
• Incoming web page requests (Nielsen)
• Outgoing web page requests (Cunha & Crovella)
• Document size on the Web (Cunha & Crovella)
2010.02.01 – SLIDE 14

Zipf Distribution (linear and log scale)
2010.02.01 – SLIDE 15 – IS 240 – Spring 2010

Resolving Power (van Rijsbergen 79)
The most frequent words are not the most descriptive.
2010.02.01 – SLIDE 16 – IS 240 – Spring 2010

Other Models
• Poisson distribution
• 2-Poisson model
• Negative binomial
• Katz K-mixture
  – See Church (SIGIR 1995)
2010.02.01 – SLIDE 17 – IS 240 – Spring 2010
2010.02.01 – SLIDE 20 – IS 240 – Spring 2010

Simple "S" Stemming
• IF a word ends in "ies", but not "eies" or "aies"
  – THEN "ies" → "y"
• IF a word ends in "es", but not "aes", "ees", or "oes"
  – THEN "es" → "e"
• IF a word ends in "s", but not "us" or "ss"
  – THEN "s" → NULL

Harman, JASIS, Jan. 1991
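A direct transcription of the three rules as a Python sketch (the first rule that matches wins):

    def s_stem(word):
        # Rule 1: "ies" -> "y", unless the word ends in "eies" or "aies"
        if word.endswith("ies") and not word.endswith(("eies", "aies")):
            return word[:-3] + "y"
        # Rule 2: "es" -> "e", unless the word ends in "aes", "ees", or "oes"
        if word.endswith("es") and not word.endswith(("aes", "ees", "oes")):
            return word[:-1]
        # Rule 3: drop a final "s", unless the word ends in "us" or "ss"
        if word.endswith("s") and not word.endswith(("us", "ss")):
            return word[:-1]
        return word

    print(s_stem("ponies"))  # pony
    print(s_stem("cats"))    # cat
    print(s_stem("glass"))   # glass (unchanged: ends in "ss")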
2010.02.01 – SLIDE 21 – IS 240 – Spring 2010

Stemmer Examples

Input           SMART stemmer (tstem)   Porter stemmer (pstem)   IAGO! stemmer (stem)
ate             ate                     at                       eat|2
apples          appl                    appl                     apple|1
formulae        formul                  formula                  formula|1
appendices      appendix                appendic                 appendix|1
implementation  imple                   implement                implementation|1
glasses         glass                   glass                    glasses|1

(The IAGO! stemmer reports each word as term|count.)
2010.02.01- SLIDE 22IS 240 – Spring 2010
Too Aggressive Too Timid
organization/organpolicy/police
execute/executivearm/army
european/europecylinder/cylindrical
create/creationsearch/searcher
Errors Generated by Porter Stemmer (Krovetz 93)
2010.02.01 – SLIDE 23 – IS 240 – Spring 2010

Automated Methods
• Stemmers:
  – Very dumb rules work well (for English)
  – Porter stemmer: iteratively remove suffixes
  – Improvement: pass results through a lexicon
• Newer stemmers are configurable (Snowball)
  – Demo…
• Powerful multilingual tools exist for morphological analysis
  – PC-KIMMO, Xerox lexical technology
  – Require a grammar and dictionary
  – Use "two-level" automata
  – WordNet "morpher"
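One way to try the Porter and Snowball stemmers today is through NLTK (a sketch; NLTK is my assumption here, not the demo tool used in the lecture):

    from nltk.stem import PorterStemmer, SnowballStemmer

    porter = PorterStemmer()
    snowball = SnowballStemmer("english")  # Snowball is configurable by language
    for w in ["apples", "formulae", "implementation", "organization"]:
        print(w, porter.stem(w), snowball.stem(w))
    # e.g. apples -> appl (both stemmers)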
2010.02.01 – SLIDE 24 – IS 240 – Spring 2010

WordNet
• Type "wn word" on a machine where WordNet is installed…
• Large exception dictionary (excerpt):
• Demo

aardwolves → aardwolf      abaci → abacus
abacuses → abacus          abbacies → abbacy
abhenries → abhenry        abilities → ability
abkhaz → abkhaz            abnormalities → abnormality
aboideaus → aboideau       aboideaux → aboideau
aboiteaus → aboiteau       aboiteaux → aboiteau
abos → abo                 abscissae → abscissa
abscissas → abscissa       absurdities → absurdity
…
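A sketch of how an exception dictionary like this is consulted: irregular forms are looked up first, and rule-based stemming applies only when no entry is found (entries taken from the excerpt above; the rule-based fallback is elided):

    exceptions = {
        "aardwolves": "aardwolf",
        "abaci": "abacus",
        "abilities": "ability",
    }

    def base_form(word):
        # exception list first; regular morphology rules otherwise
        return exceptions.get(word, word)

    print(base_form("abaci"))  # abacus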
2010.02.01 – SLIDE 25 – IS 240 – Spring 2010

Using NLP
• Strzalkowski (in Reader)

  Text → NLP representation → Database search
  NLP: tagger → parser → terms
2010.02.01 – SLIDE 26 – IS 240 – Spring 2010

Using NLP

INPUT SENTENCE
The former Soviet President has been a local hero ever since a Russian tank invaded Wisconsin.

TAGGED SENTENCE
The/dt former/jj Soviet/jj President/nn has/vbz been/vbn a/dt local/jj hero/nn ever/rb since/in a/dt Russian/jj tank/nn invaded/vbd Wisconsin/np ./per
2010.02.01 – SLIDE 27 – IS 240 – Spring 2010

Using NLP

TAGGED & STEMMED SENTENCE
the/dt former/jj soviet/jj president/nn have/vbz be/vbn a/dt local/jj hero/nn ever/rb since/in a/dt russian/jj tank/nn invade/vbd wisconsin/np ./per
2010.02.01 – SLIDE 28 – IS 240 – Spring 2010
Using NLP
PARSED SENTENCE[assert [[perf [have]][[verb[BE]] [subject [np[n PRESIDENT][t_pos THE] [adj[FORMER]][adj[SOVIET]]]] [adv EVER] [sub_ord[SINCE [[verb[INVADE]] [subject [np [n TANK][t_pos A] [adj [RUSSIAN]]]] [object [np [name [WISCONSIN]]]]]]]]]
2010.02.01 – SLIDE 29 – IS 240 – Spring 2010

Using NLP

EXTRACTED TERMS & WEIGHTS
Term              Weight      Term              Weight
president         2.623519    soviet            5.416102
president+soviet  11.556747   president+former  14.594883
hero              7.896426    hero+local        14.314775
invade            8.435012    tank              6.848128
tank+invade       17.402237   tank+russian      16.030809
russian           7.383342    wisconsin         7.785689
2010.02.01 – SLIDE 30 – IS 240 – Spring 2010

Same Sentence, Different System (Enju Parser)

ROOT ROOT ROOT ROOT -1 ROOT been be VBN VB 5
been be VBN VB 5 ARG1 President president NNP NNP 3
been be VBN VB 5 ARG2 hero hero NN NN 8
a a DT DT 6 ARG1 hero hero NN NN 8
a a DT DT 11 ARG1 tank tank NN NN 13
local local JJ JJ 7 ARG1 hero hero NN NN 8
The the DT DT 0 ARG1 President president NNP NNP 3
former former JJ JJ 1 ARG1 President president NNP NNP 3
Russian russian JJ JJ 12 ARG1 tank tank NN NN 13
Soviet soviet NNP NNP 2 MOD President president NNP NNP 3
invaded invade VBD VB 14 ARG1 tank tank NN NN 13
invaded invade VBD VB 14 ARG2 Wisconsin wisconsin NNP NNP 15
has have VBZ VB 4 ARG1 President president NNP NNP 3
has have VBZ VB 4 ARG2 been be VBN VB 5
since since IN IN 10 MOD been be VBN VB 5
since since IN IN 10 ARG1 invaded invade VBD VB 14
ever ever RB RB 9 ARG1 since since IN IN 10
2010.02.01 – SLIDE 31 – IS 240 – Spring 2010

Other Considerations
• Church (SIGIR 1995) looked at correlations between forms of words in texts:

           hostages    null
hostage    619 (a)     479 (b)
null       648 (c)     78223 (d)
2010.02.01 – SLIDE 32 – IS 240 – Spring 2010

Assumptions in IR
• Statistical independence of terms
• Dependence approximations
2010.02.01 – SLIDE 33 – IS 240 – Spring 2010

Statistical Independence
Two events x and y are statistically independent if the product of the probabilities of their happening individually equals the probability of their happening together:

  P(x) P(y) = P(x,y)
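A back-of-the-envelope check of this definition on word counts (all numbers here are invented for illustration):

    N = 100_000                      # hypothetical corpus size
    f_x, f_y, f_xy = 500, 200, 40    # hypothetical counts of x, y, and x-with-y
    p_x, p_y, p_xy = f_x / N, f_y / N, f_xy / N
    print(p_x * p_y)  # 1e-05: the joint probability expected under independence
    print(p_xy)       # 0.0004: the observed joint probability, 40x higher -> dependent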
2010.02.01 – SLIDE 34 – IS 240 – Spring 2010

Statistical Independence and Dependence
• What are examples of things that are statistically independent?
• What are examples of things that are statistically dependent?
2010.02.01 – SLIDE 35 – IS 240 – Spring 2010

Statistical Independence vs. Statistical Dependence
• How likely is a red car to drive by, given that we've seen a black one?
• How likely is the word "ambulance" to appear, given that we've seen "car accident"?
• The colors of cars driving by are independent (although more frequent colors are more likely)
• Words in text are not independent (although, again, more frequent words are more likely)
2010.02.01 – SLIDE 36 – IS 240 – Spring 2010

Lexical Associations
• Subjects write the first word that comes to mind
  – doctor/nurse; black/white (Palermo & Jenkins 64)
• Text corpora yield similar associations
• One measure: Mutual Information (Church and Hanks 89):

  I(x,y) = log2 [ P(x,y) / ( P(x) P(y) ) ]

• If word occurrences were independent, the numerator and denominator would be equal (if measured across a large collection)
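A sketch of how such association scores are computed from corpus counts, using one row of the table on the next slide as a check:

    import math

    def mutual_information(f_xy, f_x, f_y, N):
        # I(x,y) = log2( P(x,y) / (P(x) P(y)) ), with P(w) estimated as f(w)/N
        return math.log2((f_xy * N) / (f_x * f_y))

    # doctors/dentists row: f(x,y)=8, f(doctors)=1105, f(dentists)=44, N=15 million
    print(round(mutual_information(8, 1105, 44, 15_000_000), 1))  # 11.3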
2010.02.01 – SLIDE 37 – IS 240 – Spring 2010

Interesting Associations with "Doctor"
(AP Corpus, N = 15 million, Church & Hanks 89)

I(x,y)  f(x,y)  f(x)   x         f(y)   y
11.3    12      111    honorary  621    doctor
11.3    8       1105   doctors   44     dentists
10.7    30      1105   doctors   241    nurses
9.4     8       1105   doctors   154    treating
9.0     6       275    examined  621    doctor
8.9     11      1105   doctors   317    treat
8.7     25      621    doctor    1407   bills
2010.02.01- SLIDE 38IS 240 – Spring 2010
These associations were likely to happen because the non-doctor words shown here are very commonand therefore likely to co-occur with any noun.
Un-Interesting Associations with “Doctor”
I(x,y) f(x,y) f(x) x f(y) y0.960.950.93
64112
62128469084716
doctorais
7378511051105
withdoctorsdoctors
2010.02.01 – SLIDE 39 – IS 240 – Spring 2010

Query Processing
• Once the text is in a form to match against the indexes, the fun begins
  – What approach to use?
    • Boolean?
    • Extended Boolean?
    • Ranked?
      – Fuzzy sets?
      – Vector?
      – Probabilistic?
      – Language models?
      – Neural nets?
• Most of the next few weeks will be spent looking at these different approaches
2010.02.01 – SLIDE 40 – IS 240 – Spring 2010

Display and Formatting
• Have to present the results to the user
• Lots of different options here, mostly governed by:
  – How the actual document is stored
  – And whether the full document, or just the metadata about it, is presented
2010.02.01 – SLIDE 41 – IS 240 – Spring 2010

What to Do with Terms…
• Once terms have been extracted from the documents, they need to be stored in some way that lets you get back to the documents those terms came from
• The most common index structure used for this in IR systems is the "inverted file"
2010.02.01 – SLIDE 42 – IS 240 – Spring 2010

Boolean Implementation: Inverted Files
• We will look at "vector files" in detail later. But conceptually, an inverted file is a vector file "inverted" so that rows become columns and columns become rows:

docs  t1  t2  t3
D1    1   0   1
D2    1   0   0
D3    0   1   1
D4    1   0   0
D5    1   1   1
D6    1   1   0
D7    0   1   0
D8    0   1   0
D9    0   0   1
D10   0   1   1

Terms  D1  D2  D3  D4  D5  D6  D7  …
t1     1   1   0   1   1   1   0
t2     0   0   1   0   1   1   1
t3     1   0   1   0   1   0   0
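A tiny sketch of that inversion, using the matrix above:

    # rows = documents D1..D10, columns = terms t1..t3 (from the table above)
    doc_term = [
        [1, 0, 1], [1, 0, 0], [0, 1, 1], [1, 0, 0], [1, 1, 1],
        [1, 1, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 1, 1],
    ]
    # "inverting" the file: rows become columns and columns become rows
    term_doc = [list(col) for col in zip(*doc_term)]
    print(term_doc[0])  # t1 row: [1, 1, 0, 1, 1, 1, 0, 0, 0, 0]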
2010.02.01 – SLIDE 43 – IS 240 – Spring 2010

How Are Inverted Files Created?
• Documents are parsed to extract words (or stems), and these are saved with the document ID (a code sketch follows the table)

Doc 1: Now is the time for all good men to come to the aid of their country
Doc 2: It was a dark and stormy night in the country manor. The time was past midnight

The text processing steps yield:

Term      Doc #
now       1
is        1
the       1
time      1
for       1
all       1
good      1
men       1
to        1
come      1
to        1
the       1
aid       1
of        1
their     1
country   1
it        2
was       2
a         2
dark      2
and       2
stormy    2
night     2
in        2
the       2
country   2
manor     2
the       2
time      2
was       2
past      2
midnight  2
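A sketch of this parsing step on the two example documents (plain tokenization; stemming omitted):

    import re

    docs = {
        1: "Now is the time for all good men to come to the aid of their country",
        2: "It was a dark and stormy night in the country manor. "
           "The time was past midnight",
    }
    pairs = [(term, doc_id)
             for doc_id, text in docs.items()
             for term in re.findall(r"[a-z]+", text.lower())]
    print(pairs[:3])  # [('now', 1), ('is', 1), ('the', 1)]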
2010.02.01 – SLIDE 44 – IS 240 – Spring 2010

How Inverted Files Are Created
• After all documents have been parsed, the inverted file is sorted (by term, then by document ID). The unsorted list above becomes:

Term      Doc #
a         2
aid       1
all       1
and       2
come      1
country   1
country   2
dark      2
for       1
good      1
in        2
is        1
it        2
manor     2
men       1
midnight  2
night     2
now       1
of        1
past      2
stormy    2
the       1
the       1
the       2
the       2
their     1
time      1
time      2
to        1
to        1
was       2
was       2
2010.02.01 – SLIDE 45 – IS 240 – Spring 2010

How Inverted Files Are Created
• Multiple term entries for a single document are merged, and frequency information is added (see the sketch below):

Term      Doc #  Freq
a         2      1
aid       1      1
all       1      1
and       2      1
come      1      1
country   1      1
country   2      1
dark      2      1
for       1      1
good      1      1
in        2      1
is        1      1
it        2      1
manor     2      1
men       1      1
midnight  2      1
night     2      1
now       1      1
of        1      1
past      2      1
stormy    2      1
the       1      2
the       2      2
their     1      1
time      1      1
time      2      1
to        1      2
was       2      2
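Continuing the sketch from the parsing step: the sort and merge of the last two slides in a few lines (Counter does the merge-and-count work):

    from collections import Counter

    pairs.sort()                   # sort by term, then by doc id
    merged = Counter(pairs)        # merge duplicate (term, doc) entries, counting freq
    print(merged[("the", 1)])      # 2 -- 'the' occurs twice in Doc 1
    print(merged[("country", 2)])  # 1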
2010.02.01 – SLIDE 46 – IS 240 – Spring 2010

Inverted Files
• The file is commonly split into a Dictionary and a Postings file (a sketch follows the tables)

Dictionary:
Term      N docs  Tot Freq
a         1       1
aid       1       1
all       1       1
and       1       1
come      1       1
country   2       2
dark      1       1
for       1       1
good      1       1
in        1       1
is        1       1
it        1       1
manor     1       1
men       1       1
midnight  1       1
night     1       1
now       1       1
of        1       1
past      1       1
stormy    1       1
the       2       4
their     1       1
time      2       2
to        1       2
was       1       2

Postings, as (Doc #, Freq) pairs, one per row of the merged file above:
(2,1) (1,1) (1,1) (2,1) (1,1) (1,1) (2,1) (2,1) (1,1) (1,1) (2,1) (1,1) (2,1) (2,1) (1,1) (2,1) (2,1) (1,1) (1,1) (2,1) (2,1) (1,2) (2,2) (1,1) (1,1) (2,1) (1,2) (2,2)
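Continuing the sketch: splitting the merged file into the two structures (a simple in-memory stand-in for the on-disk layout):

    from collections import defaultdict

    postings = defaultdict(list)           # term -> [(doc_id, freq), ...]
    for (term, doc_id), freq in sorted(merged.items()):
        postings[term].append((doc_id, freq))

    dictionary = {term: (len(plist), sum(f for _, f in plist))  # (N docs, Tot Freq)
                  for term, plist in postings.items()}
    print(dictionary["the"])    # (2, 4)
    print(postings["country"])  # [(1, 1), (2, 1)]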
2010.02.01 – SLIDE 47 – IS 240 – Spring 2010

Inverted Files
• Permit fast search for individual terms
• The search result for each term is a list of document IDs (and, optionally, frequency and/or positional information)
• These lists can be used to solve Boolean queries (see the sketch below):
  – country: d1, d2
  – manor: d2
  – country AND manor: d2
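A sketch of Boolean AND as posting-list intersection, using the postings built above:

    def boolean_and(term_a, term_b, postings):
        docs_a = {doc for doc, _ in postings[term_a]}
        docs_b = {doc for doc, _ in postings[term_b]}
        return sorted(docs_a & docs_b)   # docs containing BOTH terms

    print(boolean_and("country", "manor", postings))  # [2]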
2010.02.01 – SLIDE 48 – IS 240 – Spring 2010

Inverted Files
• Lots of alternative implementations
  – E.g., Cheshire builds within-document frequency using a hash table during document parsing; the document IDs and frequency info are then stored in a BerkeleyDB B-tree index keyed by the term
2010.02.01 – SLIDE 49 – IS 240 – Spring 2010

B-tree (conceptual)
[Diagram: a conceptual B-tree. The root node holds the keys F | P | Z; its children hold B | D | F, H | L | P, and R | S | Z; the leaves hold terms such as Aces, Boilers, Cars, Devils, Flyers, Hawkeyes, Hoosiers, Minors, Panthers, and Seminoles.]
2010.02.01 – SLIDE 50 – IS 240 – Spring 2010

B-tree with Postings
[Diagram: the same B-tree, but each leaf term now points to its postings list of document IDs, e.g. 2,4,8,12; 5,7,200; 8,120.]
2010.02.01 – SLIDE 51 – IS 240 – Spring 2010

Inverted Files
• Permit fast search for individual terms
• The search result for each term is a list of document IDs (and, optionally, frequency, part of speech, and/or positional information)
• These lists can be used to solve Boolean queries:
  – country: d1, d2
  – manor: d2
  – country AND manor: d2