Information Retrieval
CSE 8337 (Part IV), Spring 2011

Some material for these slides obtained from:
• Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto, http://www.sims.berkeley.edu/~hearst/irbook/
• Data Mining: Introductory and Advanced Topics by Margaret H. Dunham, http://www.engr.smu.edu/~mhd/book
• Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze, http://informationretrieval.org
CSE 8337 Outline
• Introduction
• Text Processing
• Indexes
• Boolean Queries
• Web Searching/Crawling
• Vector Space Model
• Matching
• Evaluation
• Feedback/Expansion
Why System Evaluation?
• There are many retrieval models/algorithms/systems; which one is the best?
• What does "best" mean? IR evaluation may not actually look at traditional CS metrics of space/time.
• What is the best component for:
  • Ranking function (dot product, cosine, …)
  • Term selection (stopword removal, stemming, …)
  • Term weighting (TF, TF-IDF, …)
• How far down the ranked list will a user need to look to find some/all relevant documents?
Measures for a search engine
• How fast does it index?
  • Number of documents/hour (average document size)
• How fast does it search?
  • Latency as a function of index size
• Expressiveness of query language
  • Ability to express complex information needs
  • Speed on complex queries
• Uncluttered UI
• Is it free?
Measures for a search engine
• All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise.
• The key measure: user happiness. What is this?
  • Speed of response/size of index are factors.
  • But blindingly fast, useless answers won't make a user happy.
• Need a way of quantifying user happiness.
Happiness: elusive to measure
• Most common proxy: relevance of search results.
• But how do you measure relevance? We will detail a methodology here, then examine its issues.
• Relevance measurement requires 3 elements:
  1. A benchmark document collection
  2. A benchmark suite of queries
  3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document
Difficulties in Evaluating IR Systems
• Effectiveness is related to the relevancy of retrieved items.
• Relevancy is not typically binary but continuous.
• Even if relevancy is binary, it can be a difficult judgment to make.
• Relevancy, from a human standpoint, is:
  • Subjective: Depends upon a specific user's judgment.
  • Situational: Relates to the user's current needs.
  • Cognitive: Depends on human perception and behavior.
  • Dynamic: Changes over time.
How to perform evaluation
• Start with a corpus of documents.
• Collect a set of queries for this corpus.
• Have one or more human experts exhaustively label the relevant documents for each query.
• Typically assumes binary relevance judgments.
• Requires considerable human effort for large document/query corpora.
IR Evaluation Metrics
• Precision/Recall
  • P/R graph: regular, smoothing (interpolating, averaging)
  • P/R points
• ROC curve
• MAP
• R-precision
• F-measure
• E-measure
• Fallout
• Novelty
• Coverage
• Utility
• …
Precision and Recall

recall = (number of relevant documents retrieved) / (total number of relevant documents)

precision = (number of relevant documents retrieved) / (total number of documents retrieved)

Within the entire document collection:

              retrieved                 not retrieved
relevant      retrieved & relevant      not retrieved but relevant
irrelevant    retrieved & irrelevant    not retrieved & irrelevant
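The two definitions above can be sketched in a few lines of Python; the document IDs here are illustrative only:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, from sets of document IDs."""
    hits = len(retrieved & relevant)  # retrieved & relevant
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 4 docs retrieved, 6 relevant in the collection, 3 of the retrieved are relevant:
p, r = precision_recall({588, 589, 576, 590}, {588, 589, 590, 592, 772, 984})
# p = 3/4 = 0.75, r = 3/6 = 0.5
```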
Determining Recall is Difficult
• The total number of relevant items is sometimes not available:
  • Sample across the database and perform relevance judgments on these items.
  • Apply different retrieval algorithms to the same database for the same query. The aggregate of relevant items is taken as the total relevant set.
Trade-off between Recall and Precision
[Figure: precision (y-axis, 0 to 1) vs. recall (x-axis, 0 to 1). The ideal is the top-right corner. A system that returns most relevant documents but includes lots of junk sits at high recall, low precision; one that returns relevant documents but misses many useful ones sits at high precision, low recall.]
Recall-Precision Graph Example
[Figure: recall-precision graph, precision 0 to 1 on the y-axis, recall 0 to 1 on the x-axis.]
A precision-recall curve
[Figure: a sawtooth precision-recall curve, precision on the y-axis and recall on the x-axis, both 0.0 to 1.0.]
Recall-Precision Graph Smoothing
• Avoid sawtooth lines by smoothing.
• Interpolate for one query.
• Average across queries.
Interpolating a Recall/Precision Curve
• Interpolate a precision value for each standard recall level: rj ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}; r0 = 0.0, r1 = 0.1, …, r10 = 1.0.
• The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j+1)-th level:

  P(rj) = max P(r)  for rj ≤ r ≤ rj+1
Interpolated precision
• Idea: if locally precision increases with increasing recall, then you should get to count that.
• So you take the max of precisions to the right of the value. (Need not be at only standard levels.)
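A minimal sketch of 11-point interpolation, taking the maximum precision at any recall at or above each standard level (the example R/P pairs are the ones from the ranked-list example later in the deck):

```python
def interpolate_11pt(points):
    """points: list of (recall, precision) pairs for one query.
    Returns interpolated precision at the 11 standard recall levels
    0.0, 0.1, ..., 1.0: the max precision at any recall >= that level
    (0 where no such point exists)."""
    levels = [j / 10 for j in range(11)]
    return [max((p for r, p in points if r >= level), default=0.0)
            for level in levels]

curve = interpolate_11pt([(0.167, 1.0), (0.333, 1.0), (0.5, 0.75),
                          (0.667, 0.667), (0.833, 0.38)])
# levels 0.0-0.3 interpolate to 1.0, since precision is still 1.0 at recall 0.333
```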
Precision across queries
• Recall and precision are calculated for a specific query.
• Generally want a value for many queries.
• Calculate average precision/recall over a set of queries.
• Average precision at recall level r:

  P(r) = (1/Nq) Σ(i=1..Nq) Pi(r)

  Nq – number of queries
  Pi(r) – precision at recall level r for the i-th query
Average Recall/Precision Curve
Typically average performance over a large set of queries.
Compute average precision at each standard recall level across all queries.
Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.
Compare Two or More Systems
• The curve closest to the upper right-hand corner of the graph indicates the best performance.
[Figure: average precision/recall curves for a system without stemming (NoStem) vs. with stemming (Stem); precision on the y-axis, recall from 0.1 to 1 on the x-axis.]
Receiver Operating Characteristic (ROC) Curve
ROC Curve Data
• False positive rate vs. true positive rate.
• True positive rate: tp/(tp+fn)
  • Sensitivity – proportion of positive results received.
  • Same as recall.
• False positive rate: fp/(fp+tn) = 1 − specificity
  • Specificity – proportion of negative results not received: tn/(fp+tn).
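The rates above can be sketched directly from confusion counts; the counts in the example are made up for illustration:

```python
def roc_point(tp, fp, tn, fn):
    """One ROC point from confusion counts.
    Returns (false positive rate, true positive rate)."""
    tpr = tp / (tp + fn)  # sensitivity = recall
    fpr = fp / (fp + tn)  # 1 - specificity; same form as the fallout rate
    return fpr, tpr

fpr, tpr = roc_point(tp=3, fp=1, tn=7, fn=3)
# tpr = 3/6 = 0.5, fpr = 1/8 = 0.125
```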
Yet more evaluation measures…
• Mean average precision (MAP)
  • Average of the precision value obtained for the top k documents, each time a relevant doc is retrieved.
  • Avoids interpolation and use of fixed recall levels.
  • MAP for a query collection is the arithmetic average. Macro-averaging: each query counts equally.
• R-precision
  • If we have a known (though perhaps incomplete) set of relevant documents of size Rel, calculate the precision of the top Rel docs returned.
  • A perfect system could score 1.0.
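A sketch of MAP as the slide describes it. Note one assumption: this version averages over the relevant documents actually retrieved, whereas the standard TREC definition divides by the total number of relevant documents (so unretrieved relevant docs contribute zero).

```python
def average_precision(ranked_relevance):
    """ranked_relevance: booleans, True where the doc at that rank is relevant.
    Averages precision@k over the ranks k that hold a relevant doc."""
    precisions, hits = [], 0
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(runs):
    """Macro-average over queries: each query counts equally."""
    return sum(average_precision(r) for r in runs) / len(runs)

ap = average_precision([True, False, True, False])
# precision 1/1 at rank 1 and 2/3 at rank 3: AP = (1 + 2/3) / 2 = 5/6
```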
Variance
• For a test collection, it is usual that a system does poorly on some information needs (e.g., MAP = 0.1) and well on others (e.g., MAP = 0.7).
• Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query.
• That is, there are easy information needs and hard ones!
Evaluation
• Graphs are good, but people want summary measures!
• Precision at fixed retrieval level
  • Precision-at-k: precision of top k results.
  • Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages.
  • But: averages badly and has an arbitrary parameter k.
• 11-point interpolated average precision
  • The standard measure in the early TREC competitions: take the precision at 11 levels of recall varying from 0 to 1 by tenths, using interpolation (the value for 0 is always interpolated!), and average them.
  • Evaluates performance at all recall levels.
Typical (good) 11-point precisions
[Figure: SabIR/Cornell 8A1 11-point precision from TREC 8 (1999); precision on the y-axis and recall on the x-axis, both 0 to 1.]
Computing Recall/Precision Points
For a given query, produce the ranked list of retrievals.
Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures.
Mark each document in the ranked list that is relevant according to the gold standard.
Compute a recall/precision pair for each position in the ranked list that contains a relevant document.
Computing Recall/Precision Points: An Example (modified from [Salton83])

n    doc #   relevant
1    588     x
2    589     x
3    576
4    590     x
5    986
6    592     x
7    984
8    988
9    578
10   985
11   103
12   591
13   772     x
14   990

Let the total # of relevant docs = 6. Check each new recall point:
R=1/6=0.167; P=1/1=1
R=2/6=0.333; P=2/2=1
R=3/6=0.5;   P=3/4=0.75
R=4/6=0.667; P=4/6=0.667
R=5/6=0.833; P=5/13=0.38
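The worked example above can be reproduced with a short sketch that emits one recall/precision pair per rank holding a relevant document:

```python
def recall_precision_points(ranking, relevant, total_relevant):
    """One (recall, precision) pair per rank that holds a relevant doc."""
    points, hits = [], 0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / total_relevant, hits / k))
    return points

# The slide's ranked list; the x-marked docs are 588, 589, 590, 592, 772.
ranked = [588, 589, 576, 590, 986, 592, 984, 988, 578, 985, 103, 591, 772, 990]
pts = recall_precision_points(ranked, {588, 589, 590, 592, 772}, total_relevant=6)
# pts[2] is the third point: R = 3/6 = 0.5, P = 3/4 = 0.75
```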
F-Measure
• One measure of performance that takes into account both recall and precision.
• Harmonic mean of recall and precision:

  F = 2PR / (P + R) = 2 / (1/R + 1/P)

• Calculated at a specific document in the ranking.
• Compared to the arithmetic mean, both P and R need to be high for the harmonic mean to be high.
• A compromise between precision and recall.
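The harmonic mean, with the β weighting used in the next slides, in one small sketch:

```python
def f_measure(p, r, beta=1.0):
    """Weighted harmonic mean of precision and recall.
    beta = 1 gives the balanced F1 = 2PR / (P + R)."""
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta * beta
    return (b2 + 1) * p * r / (b2 * p + r)

f1 = f_measure(0.5, 0.75)  # 2 * 0.5 * 0.75 / 1.25 = 0.6
```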
A combined measure: F
• A combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean):

  F = 1 / (α(1/P) + (1−α)(1/R)) = (β² + 1)PR / (β²P + R)

• People usually use the balanced F1 measure, i.e., with β = 1 or α = ½.
E Measure (parameterized F Measure)
• A variant of the F measure that allows weighting emphasis on precision or recall:

  E = (β² + 1)PR / (β²P + R)

• The value of β controls the trade-off:
  • β = 1: Equally weight precision and recall (E = F).
  • β > 1: Weight precision more.
  • β < 1: Weight recall more.
Fallout Rate
• Problems with both precision and recall:
  • The number of irrelevant documents in the collection is not taken into account.
  • Recall is undefined when there is no relevant document in the collection.
  • Precision is undefined when no document is retrieved.

  Fallout = (no. of nonrelevant items retrieved) / (total no. of nonrelevant items in the collection)
Fallout
• Want fallout to be close to 0.
• In general, want to maximize recall and minimize fallout.
• Examine the fallout-recall graph. More systems-oriented than recall-precision.
Subjective Relevance Measures
• Novelty Ratio: The proportion of items retrieved and judged relevant by the user of which they were previously unaware. Measures the ability to find new information on a topic.
• Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search. Relevant when the user wants to locate documents they have seen before (e.g., the budget report for Year 2000).
Utility
• Subjective measure.
• Cost-benefit analysis for retrieved documents:
  • Cr – benefit of retrieving a relevant document
  • Cnr – cost of retrieving a nonrelevant document
  • Crn – cost of not retrieving a relevant document
  • Nr – number of relevant documents retrieved
  • Nnr – number of nonrelevant documents retrieved
  • Nrn – number of relevant documents not retrieved

  Utility = (Cr × Nr) − ((Cnr × Nnr) + (Crn × Nrn))
Other Factors to Consider
• User effort: Work required from the user in formulating queries, conducting the search, and screening the output.
• Response time: Time interval between receipt of a user query and the presentation of system responses.
• Form of presentation: Influence of search output format on the user's ability to utilize the retrieved materials.
• Collection coverage: Extent to which any/all relevant items are included in the document corpus.
Experimental Setup for Benchmarking
Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision.
Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments.
Performance data is valid only for the environment under which the system is evaluated.
Benchmarks
• A benchmark collection contains:
  • A set of standard documents and queries/topics.
  • A list of relevant documents for each query.
• Standard collections for traditional IR: TREC: http://trec.nist.gov/
[Diagram: the algorithm under test runs the standard queries against the standard document collection; the retrieved result is compared against the standard result to evaluate precision and recall.]
Benchmarking: The Problems
• Performance data is valid only for a particular benchmark.
• Building a benchmark corpus is a difficult task.
• Benchmark web corpora are just starting to be developed.
• Benchmark foreign-language corpora are just starting to be developed.
The TREC Benchmark
• TREC: Text REtrieval Conference (http://trec.nist.gov/). Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).
• Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.
• Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing.
• Participants submit the P/R values for the final document and query corpus and present their results at the conference.
The TREC Objectives
• Provide a common ground for comparing different IR techniques.
  – Same set of documents and queries, and same evaluation method.
• Sharing of resources and experiences in developing the benchmark.
  – With major sponsorship from government to develop large benchmark collections.
• Encourage participation from industry and academia.
• Development of new evaluation techniques, particularly for new applications.
  – Retrieval, routing/filtering, non-English collection, web-based collection, question answering.
From document collections to test collections
• Still need:
  • Test queries
  • Relevance assessments
• Test queries
  • Must be germane to the docs available.
  • Best designed by domain experts.
  • Random query terms generally not a good idea.
• Relevance assessments
  • Human judges, time-consuming.
  • Are human panels perfect?
Kappa measure for inter-judge (dis)agreement
• Kappa measure
  • Agreement measure among judges.
  • Designed for categorical judgments.
  • Corrects for chance agreement.
• Kappa = [P(A) − P(E)] / [1 − P(E)]
  • P(A) – proportion of the time judges agree.
  • P(E) – what agreement would be by chance.
  • Kappa = 0 for chance agreement, 1 for total agreement.
Kappa Measure: Example

Number of docs   Judge 1       Judge 2
300              Relevant      Relevant
70               Nonrelevant   Nonrelevant
20               Relevant      Nonrelevant
10               Nonrelevant   Relevant

P(A)? P(E)?
Kappa Example
• P(A) = 370/400 = 0.925
• P(nonrelevant) = (10+20+70+70)/800 = 0.2125
• P(relevant) = (10+20+300+300)/800 = 0.7875
• P(E) = 0.2125² + 0.7875² = 0.665
• Kappa = (0.925 − 0.665)/(1 − 0.665) = 0.776
• Kappa > 0.8: good agreement; 0.67 < Kappa < 0.8: "tentative conclusions" (Carletta '96)
• Depends on the purpose of the study.
• For > 2 judges: average pairwise kappas.
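The worked example can be checked in code; a small sketch, assuming the pooled-marginals chance model used on the slide (both judges' labels pooled to estimate P(E)):

```python
def kappa(counts):
    """counts: dict mapping (judge1_label, judge2_label) to doc counts.
    Kappa = (P(A) - P(E)) / (1 - P(E))."""
    total = sum(counts.values())
    p_agree = sum(n for (a, b), n in counts.items() if a == b) / total
    labels = {lab for pair in counts for lab in pair}
    p_chance = 0.0
    for label in labels:
        # pooled probability of this label across both judges' decisions
        p_label = sum(n for (a, b), n in counts.items()
                      for lab in (a, b) if lab == label) / (2 * total)
        p_chance += p_label ** 2
    return (p_agree - p_chance) / (1 - p_chance)

k = kappa({("R", "R"): 300, ("N", "N"): 70, ("R", "N"): 20, ("N", "R"): 10})
# P(A) = 0.925, P(rel) = 630/800, P(nonrel) = 170/800, kappa ~ 0.776
```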
Interjudge Agreement: TREC 3
Impact of Inter-judge Agreement
• Impact on an absolute performance measure can be significant (0.32 vs 0.39).
• Little impact on the ranking of different systems or relative performance.
• Suppose we want to know if algorithm A is better than algorithm B: a standard information retrieval experiment will give us a reliable answer to this question.
Critique of pure relevance
• Relevance vs. marginal relevance
  • A document can be redundant even if it is highly relevant.
  • Duplicates; the same information from different sources.
• Marginal relevance is a better measure of utility for the user.
• Using facts/entities as evaluation units more directly measures true relevance, but it is harder to create the evaluation set.
Can we avoid human judgment?
• No.
• Makes experimental work hard, especially on a large scale.
• In some very specific settings, can use proxies.
  • E.g.: for approximate vector space retrieval, we can compare the cosine distance closeness of the closest docs to those found by an approximate retrieval algorithm.
• But once we have test collections, we can reuse them (so long as we don't overtrain too badly).
Evaluation at large search engines
• Search engines have test collections of queries and hand-ranked results.
• Recall is difficult to measure on the web.
• Search engines often use precision at top k, e.g., k = 10 …
• … or measures that reward you more for getting rank 1 right than for getting rank 10 right: NDCG (Normalized Discounted Cumulative Gain).
• Search engines also use non-relevance-based measures.
  • Clickthrough on first result: not very reliable if you look at a single clickthrough, but pretty reliable in the aggregate.
  • Studies of user behavior in the lab.
  • A/B testing.
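The slide only names NDCG; a minimal sketch of one common formulation (log2 rank discount), with illustrative gain values:

```python
import math

def dcg(gains):
    """Discounted cumulative gain with a log2(rank + 1) discount;
    rank 1 counts fully, later ranks are dampened."""
    return sum(g / math.log2(k + 1) for k, g in enumerate(gains, start=1))

def ndcg(gains):
    """Normalize by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# A perfect ordering scores 1.0; putting the best result last scores less.
perfect = ndcg([3, 2, 0])
worse = ndcg([0, 2, 3])
```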
A/B testing
• Purpose: Test a single innovation.
• Prerequisite: You have a large search engine up and running.
• Have most users use the old system; divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation.
• Evaluate with an "automatic" measure like clickthrough on first result.
• Now we can directly see if the innovation does improve user happiness.
• Probably the evaluation methodology that large search engines trust most.
• In principle less powerful than doing a multivariate regression analysis, but easier to understand.
• Problems with A/B testing: http://www.sigkdd.org/explorations/issues/12-2-2010-12/v12-02-8-UR-Kohavi.pdf
CSE 8337 Outline
• Introduction
• Text Processing
• Indexes
• Boolean Queries
• Web Searching/Crawling
• Vector Space Model
• Matching
• Evaluation
• Feedback/Expansion
Query Operations Introduction
• IR queries as stated by the user may not be precise or effective.
• There are many techniques to improve a stated query and then process that query instead.
How can results be improved?
• Options for improving results:
  • Local methods: personalization, relevance feedback, pseudo relevance feedback.
  • Query expansion: local analysis, thesauri, automatic thesaurus generation.
  • Query assist.
Relevance Feedback
• Relevance feedback: user feedback on the relevance of docs in an initial set of results.
  • User issues a (short, simple) query.
  • The user marks some results as relevant or non-relevant.
  • The system computes a better representation of the information need based on feedback.
  • Relevance feedback can go through one or more iterations.
• Idea: it may be difficult to formulate a good query when you don't know the collection well, so iterate.
Relevance Feedback After initial retrieval results are
presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents.
Use this feedback information to reformulate the query.
Produce new results based on reformulated query.
Allows more interactive, multi-pass process.
Relevance feedback
• We will use "ad hoc retrieval" to refer to regular retrieval without relevance feedback.
• We now look at four examples of relevance feedback that highlight different aspects.
Similar pages
Relevance Feedback: Example Image search engine
Results for Initial Query
Relevance Feedback
Results after Relevance Feedback
Initial query/results
Initial query: new space satellite applications
1. 0.539, 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
2. 0.533, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
3. 0.528, 04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes
4. 0.526, 09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget
5. 0.525, 07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research
6. 0.524, 08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate
7. 0.516, 04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada
8. 0.509, 12/02/87, Telecommunications Tale of Two Companies
The user then marks relevant documents with "+".
Expanded query after relevance feedback
2.074 new, 15.106 space, 30.816 satellite, 5.660 application, 5.991 nasa, 5.196 eos, 4.196 launch, 3.972 aster, 3.516 instrument, 3.446 arianespace, 3.004 bundespost, 2.806 ss, 2.790 rocket, 2.053 scientist, 2.003 broadcast, 1.172 earth, 0.836 oil, 0.646 measure
Results for expanded query
1. 0.513, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
2. 0.500, 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
3. 0.493, 08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own
4. 0.493, 07/31/89, NASA Uses 'Warm' Superconductors For Fast Circuit
5. 0.492, 12/02/87, Telecommunications Tale of Two Companies
6. 0.491, 07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use
7. 0.490, 07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers
8. 0.490, 06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million
Relevance Feedback
• Use assessments by users as to the relevance of previously returned documents to create new (or modify old) queries.
• Technique:
  1. Increase the weights of terms from relevant documents.
  2. Decrease the weights of terms from nonrelevant documents.
Relevance Feedback Architecture
[Diagram: a Query String goes to the IR System, which ranks the Document corpus and returns Ranked Documents (1. Doc1, 2. Doc2, 3. Doc3, …). The user provides Feedback on these, which drives Query Reformulation into a Revised Query; the IR System then returns ReRanked Documents (1. Doc2, 2. Doc4, 3. Doc5, …).]
Query Reformulation
• Revise the query to account for feedback:
  • Query expansion: Add new terms to the query from relevant documents.
  • Term reweighting: Increase the weight of terms in relevant documents and decrease the weight of terms in irrelevant documents.
• Several algorithms exist for query reformulation.
Relevance Feedback in vector spaces
• We can modify the query based on relevance feedback and apply the standard vector space model.
• Use only the docs that were marked.
• Relevance feedback can improve recall and precision.
• Relevance feedback is most useful for increasing recall in situations where recall is important.
  • Users can be expected to review results and to take time to iterate.
The Theoretically Best Query
[Figure: documents in vector space, with relevant documents (o) clustered together and non-relevant documents (x) scattered around them; the optimal query vector points toward the relevant cluster and away from the non-relevant documents.]
Query Reformulation for Vectors
• Change the query vector using vector algebra.
• Add the vectors for the relevant documents to the query vector.
• Subtract the vectors for the irrelevant docs from the query vector.
• This adds both positively and negatively weighted terms to the query, as well as reweighting the initial terms.
Optimal Query
• Assume that the relevant set of documents Cr is known.
• Then the best query, which ranks all and only the relevant documents at the top, is:

  q_opt = (1/|Cr|) Σ(dj ∈ Cr) dj − (1/(N − |Cr|)) Σ(dj ∉ Cr) dj

  where N is the total number of documents.
Standard Rocchio Method
• Since all relevant documents are unknown, just use the known relevant (Dr) and irrelevant (Dn) sets of documents and include the initial query q:

  qm = α q + (β/|Dr|) Σ(dj ∈ Dr) dj − (γ/|Dn|) Σ(dj ∈ Dn) dj

  α: Tunable weight for the initial query.
  β: Tunable weight for relevant documents.
  γ: Tunable weight for irrelevant documents.
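A sketch of the Rocchio update on sparse term-weight dictionaries. One assumption beyond the formula above: negative weights are clipped to zero, a common practical choice; the term weights in the example are made up.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """qm = alpha*q + (beta/|Dr|) * sum(Dr) - (gamma/|Dn|) * sum(Dn),
    with documents and the query as {term: weight} dicts."""
    qm = {t: alpha * w for t, w in query.items()}
    for docs, coef in ((relevant, beta), (nonrelevant, -gamma)):
        if not docs:
            continue
        c = coef / len(docs)
        for d in docs:
            for t, w in d.items():
                qm[t] = qm.get(t, 0.0) + c * w
    return {t: w for t, w in qm.items() if w > 0}  # clip negative weights

q = rocchio({"space": 1.0},
            relevant=[{"space": 1.0, "satellite": 1.0}],
            nonrelevant=[{"oil": 1.0}])
# space: 1 + 0.75 = 1.75, satellite: 0.75; oil goes negative and is clipped out
```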
Relevance feedback on initial query
[Figure: the initial query vector amid known relevant documents (o) and known non-relevant documents (x); the revised query moves toward the relevant documents.]
Positive vs Negative Feedback
• Positive feedback is more valuable than negative feedback (so, set γ < β; e.g., γ = 0.25, β = 0.75).
• Many systems only allow positive feedback (γ = 0).
• Why?
Ide Regular Method
• Since more feedback should perhaps increase the degree of reformulation, do not normalize for the amount of feedback:

  qm = α q + β Σ(dj ∈ Dr) dj − γ Σ(dj ∈ Dn) dj

  α: Tunable weight for the initial query.
  β: Tunable weight for relevant documents.
  γ: Tunable weight for irrelevant documents.
Ide “Dec Hi” Method
• Bias towards rejecting just the highest ranked of the irrelevant documents:

  qm = α q + β Σ(dj ∈ Dr) dj − γ max(non-relevant)(dj)

  α: Tunable weight for the initial query.
  β: Tunable weight for relevant documents.
  γ: Tunable weight for the irrelevant document.
Comparison of Methods
• Overall, experimental results indicate no clear preference for any one of the specific methods.
• All methods generally improve retrieval performance (recall & precision) with feedback.
• Generally, just let the tunable constants equal 1.
Relevance Feedback: Assumptions
• A1: The user has sufficient knowledge for the initial query.
• A2: Relevance prototypes are "well-behaved".
  • Term distribution in relevant documents will be similar.
  • Term distribution in non-relevant documents will be different from those in relevant documents.
  • Either: All relevant documents are tightly clustered around a single prototype.
  • Or: There are different prototypes, but they have significant vocabulary overlap.
  • Similarities between relevant and irrelevant documents are small.
Violation of A1
• The user does not have sufficient initial knowledge.
• Examples:
  • Misspellings (Brittany Speers).
  • Cross-language information retrieval.
  • Mismatch of searcher's vocabulary vs. collection vocabulary (cosmonaut/astronaut).
Violation of A2
• There are several relevance prototypes.
• Examples:
  • Burma/Myanmar
  • Contradictory government policies
  • Pop stars that worked at Burger King
• Often: instances of a general concept.
• Good editorial content can address the problem.
  • E.g., a report on contradictory government policies.
Relevance Feedback: Problems
• Long queries are inefficient for a typical IR engine.
  • Long response times for the user.
  • High cost for the retrieval system.
  • Partial solution: only reweight certain prominent terms, perhaps the top 20 by term frequency.
• Users are often reluctant to provide explicit feedback.
• It's often harder to understand why a particular document was retrieved after applying relevance feedback.
• Why?
Evaluation of relevance feedback strategies
• Use q0 and compute a precision/recall graph; use qm and compute a precision/recall graph.
• Assess on all documents in the collection:
  • Spectacular improvements, but … it's cheating! Partly due to known relevant documents being ranked higher.
  • Must evaluate with respect to documents not seen by the user.
• Use documents in the residual collection (the set of documents minus those assessed relevant):
  • Measures are usually lower than for the original query, but this is a more realistic evaluation.
  • Relative performance can be validly compared.
• Empirically, one round of relevance feedback is often very useful. Two rounds is sometimes marginally useful.
Evaluation of relevance feedback
• Second method: assess only the docs not rated by the user in the first round.
  • Could make relevance feedback look worse than it really is.
  • Can still assess the relative performance of algorithms.
• Most satisfactory: use two collections, each with their own relevance assessments.
  • q0 and user feedback from the first collection.
  • qm run on the second collection and measured.
Why is Feedback Not Widely Used?
Users sometimes reluctant to provide explicit feedback.
Results in long queries that require more computation to retrieve, and search engines process lots of queries and allow little time for each one.
Makes it harder to understand why a particular document was retrieved.
Evaluation: Caveat
• True evaluation of usefulness must compare to other methods taking the same amount of time.
• Alternative to relevance feedback: the user revises and resubmits the query.
• Users may prefer revision/resubmission to having to judge the relevance of documents.
• There is no clear evidence that relevance feedback is the "best use" of the user's time.
Pseudo relevance feedback
• Pseudo-relevance feedback automates the "manual" part of true relevance feedback.
• Pseudo-relevance algorithm:
  1. Retrieve a ranked list of hits for the user's query.
  2. Assume that the top k documents are relevant.
  3. Do relevance feedback (e.g., Rocchio).
• Works very well on average.
• But can go horribly wrong for some queries. Several iterations can cause query drift. Why?
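The three steps above can be sketched as follows. The `rank_fn` interface and the example documents are assumptions for illustration; the update shown is a simplified Rocchio-style expansion with α = β = 1 and no negative term.

```python
def pseudo_feedback(query, rank_fn, corpus, k=10):
    """Pseudo-relevance feedback sketch: run the initial ranking, assume
    the top k docs are relevant, and fold their terms into the query.
    rank_fn(query, corpus) -> list of {term: weight} dicts, best first."""
    top = rank_fn(query, corpus)[:k]
    if not top:
        return dict(query)
    expanded = dict(query)
    for doc in top:  # average the assumed-relevant docs into the query
        for term, weight in doc.items():
            expanded[term] = expanded.get(term, 0.0) + weight / len(top)
    return expanded

docs = [{"space": 1.0, "satellite": 1.0}, {"oil": 1.0}]
expanded = pseudo_feedback({"space": 1.0}, lambda q, c: c, docs, k=1)
# only the top doc is assumed relevant: space = 2.0, satellite = 1.0
```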
PseudoFeedback Results
• Found to improve performance on the TREC competition ad-hoc retrieval task.
• Works even better if top documents must also satisfy additional boolean constraints in order to be used in feedback.
Relevance Feedback on the Web
• Some search engines offer a similar/related pages feature (this is a trivial form of relevance feedback):
  • Google
  • Altavista
• But some don't, because it's hard to explain to the average user:
  • Yahoo
• Excite initially had true relevance feedback, but abandoned it due to lack of use.
• α/β/γ ??
Excite Relevance Feedback (Spink et al. 2000)
• Only about 4% of query sessions from a user used the relevance feedback option.
  • Expressed as a "More like this" link next to each result.
• But about 70% of users only looked at the first page of results and didn't pursue things further.
  • So 4% is about 1/8 of the people extending search.
• Relevance feedback improved results about 2/3 of the time.
Query Expansion
• In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents.
• In query expansion, users give additional input (good/bad search term) on words or phrases.
How do we augment the user query?
• Manual thesaurus
  • E.g. MedLine: physician, syn: doc, doctor, MD, medico.
  • Can be query rather than just synonyms.
• Global analysis (static; of all documents in the collection):
  • Automatically derived thesaurus (co-occurrence statistics).
  • Refinements based on query log mining: common on the web.
• Local analysis (dynamic):
  • Analysis of documents in the result set.
Local vs. Global Automatic Analysis
• Local: documents retrieved are examined to automatically determine query expansion. No relevance feedback needed.
• Global: a thesaurus is used to help select terms for expansion.
Automatic Local Analysis
• At query time, dynamically determine similar terms based on analysis of the top-ranked retrieved documents.
• Base correlation analysis on only the "local" set of retrieved documents for a specific query.
• Avoids ambiguity by determining similar (correlated) terms only within relevant documents.
  • "Apple computer" → "Apple computer Powerbook laptop"
Automatic Local Analysis
• Expand the query with terms found in local clusters.
  • Dl – set of documents retrieved for query q.
  • Vl – set of words used in Dl.
  • Sl – set of distinct stems in Vl.
  • f(si,j) – frequency of stem si in document dj found in Dl.
• Construct a stem-stem association matrix.
Association Matrix

      w1    w2    w3   …   wn
w1    c11   c12   c13  …   c1n
w2    c21
w3    c31
⋮     ⋮
wn    cn1

cij: correlation factor between stem si and stem sj:

  cij = Σ(dk ∈ Dl) f(si,k) × f(sj,k)

f(i,k): frequency of term i in document k.
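The association matrix and the normalization from the next slide can be sketched together; the documents here are illustrative stem lists:

```python
from collections import Counter

def association_matrix(docs):
    """docs: list of stem lists (the local set D_l).
    c[si][sj] = sum over docs of f(si, d) * f(sj, d)."""
    stems = sorted({s for d in docs for s in d})
    freqs = [Counter(d) for d in docs]
    return {si: {sj: sum(f[si] * f[sj] for f in freqs) for sj in stems}
            for si in stems}

def normalized(c, si, sj):
    """s_ij = c_ij / (c_ii + c_jj - c_ij); 1 when the two stems have
    identical frequencies in every document."""
    return c[si][sj] / (c[si][si] + c[sj][sj] - c[si][sj])

m = association_matrix([["apple", "computer", "apple"], ["apple", "fruit"]])
# c(apple, apple) = 2*2 + 1*1 = 5; c(apple, computer) = 2*1 = 2
# normalized(apple, computer) = 2 / (5 + 1 - 2) = 0.5
```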
Normalized Association Matrix
Frequency-based correlation factors favor more frequent terms. Normalize the association scores:

  s_ij = c_ij / (c_ii + c_jj − c_ij)

The normalized score is 1 if two stems have the same frequency in all documents.
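A short sketch of the normalization (toy correlation values and names are mine):

```python
def normalize(c, si, sj):
    # s_ij = c_ij / (c_ii + c_jj - c_ij)
    return c[(si, sj)] / (c[(si, si)] + c[(sj, sj)] - c[(si, sj)])

# Stems x and y have identical frequencies in every document, so
# c_ii == c_jj == c_ij and the normalized score is exactly 1.
c = {("x", "x"): 4, ("y", "y"): 4, ("x", "y"): 4,
     ("x", "z"): 2, ("z", "z"): 9}
s_xy = normalize(c, "x", "y")   # 4 / (4 + 4 - 4) = 1.0
s_xz = normalize(c, "x", "z")   # 2 / (4 + 9 - 2) = 2/11
```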
Metric Correlation Matrix
Association correlation does not account for the proximity of terms in documents, just co-occurrence frequencies within documents. Metric correlations account for term proximity:

  c_ij = Σ_{k_u ∈ V_i} Σ_{k_v ∈ V_j} 1 / r(k_u, k_v)

where V_i is the set of all occurrences of term i in any document, and r(k_u, k_v) is the distance in words between occurrences k_u and k_v (∞ if k_u and k_v are occurrences in different documents).
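A sketch of the metric correlation, representing each occurrence as a (doc_id, word_position) pair (names and the toy data are mine):

```python
def metric_corr(occ_i, occ_j):
    # c_ij = sum over occurrence pairs (ku, kv) of 1 / r(ku, kv);
    # pairs in different documents contribute 0 (distance treated as infinite).
    total = 0.0
    for doc_u, pos_u in occ_i:
        for doc_v, pos_v in occ_j:
            if doc_u == doc_v:
                total += 1.0 / abs(pos_u - pos_v)
    return total

occ_apple = [(0, 3), (1, 7)]
occ_pie   = [(0, 4), (2, 1)]
c = metric_corr(occ_apple, occ_pie)   # only the doc-0 pair counts: 1/|3-4| = 1.0
```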
Normalized Metric Correlation Matrix
Normalize scores to account for term frequencies:

  s_ij = c_ij / (|V_i| · |V_j|)
Query Expansion with Correlation Matrix
For each term i in the query, expand the query with the n terms j having the highest values of c_ij (or s_ij).
This adds semantically related terms in the “neighborhood” of the query terms.
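A sketch of the selection step over a correlation matrix (function name and toy values are mine):

```python
def expand_query(query, c, n=2):
    # For each query term i, add the n terms j with the highest c_ij.
    expanded = list(query)
    for qi in query:
        neighbors = sorted((j for (i, j) in c if i == qi and j not in expanded),
                           key=lambda j: c[(qi, j)], reverse=True)
        expanded += neighbors[:n]
    return expanded

c = {("apple", "comput"): 9, ("apple", "pie"): 7, ("apple", "fruit"): 4}
q = expand_query(["apple"], c, n=2)   # -> ["apple", "comput", "pie"]
```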
Problems with Local Analysis
Term ambiguity may introduce irrelevant statistically correlated terms.
  "Apple computer" → "Apple red fruit computer"
Since terms are highly correlated anyway, expansion may not retrieve many additional documents.
Automatic Global Analysis
Determine term similarity through a pre-computed statistical analysis of the complete corpus.
Compute association matrices which quantify term correlations in terms of how frequently they co-occur.
Expand queries with statistically most similar terms.
Automatic Global Analysis
There are two modern variants, based on a thesaurus-like structure built using all documents in the collection:
  Query expansion based on a similarity thesaurus
  Query expansion based on a statistical thesaurus
Thesaurus
A thesaurus provides information on synonyms and semantically related words and phrases.
Example:
  physician
    syn: ||croaker, doc, doctor, MD, medical, mediciner, medico, ||sawbones
    rel: medic, general practitioner, surgeon
Thesaurus-based Query Expansion
For each term, t, in a query, expand the query with synonyms and related words of t from the thesaurus.
May weight added terms less than original query terms.
Generally increases recall.
May significantly decrease precision, particularly with ambiguous terms.
  "interest rate" → "interest rate fascinate evaluate"
Similarity Thesaurus
The similarity thesaurus is based on term-to-term relationships rather than on a matrix of co-occurrence.
These relationships are not derived directly from co-occurrence of terms inside documents.
They are obtained by considering the terms as concepts in a concept space.
In this concept space, each term is indexed by the documents in which it appears.
Terms assume the original role of documents, while documents are interpreted as indexing elements.
Similarity Thesaurus
The following definitions establish the proper framework:
  t: number of terms in the collection
  N: number of documents in the collection
  f_{i,j}: frequency of occurrence of term k_i in document d_j
  t_j: vocabulary of document d_j
  itf_j: inverse term frequency for document d_j
Similarity Thesaurus
Inverse term frequency for document d_j:

  itf_j = log(t / |t_j|)

To each term k_i is associated a vector

  k_i = (w_{i,1}, w_{i,2}, ..., w_{i,N})
Similarity Thesaurus
where w_{i,j} is a weight associated to the index-document pair [k_i, d_j]. These weights are computed as follows:

  w_{i,j} = (0.5 + 0.5 · f_{i,j} / max_j(f_{i,j})) · itf_j
            / sqrt( Σ_{l=1}^{N} (0.5 + 0.5 · f_{i,l} / max_l(f_{i,l}))² · itf_l² )

where max_j(f_{i,j}) is the maximum frequency of term k_i over all documents.
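A sketch of these weights (names are mine; setting w_{i,j} = 0 for documents where the term does not occur is an assumption on my part, a common convention):

```python
import math

def itf(num_terms_in_collection, doc_vocab_size):
    # itf_j = log(t / |t_j|)
    return math.log(num_terms_in_collection / doc_vocab_size)

def term_vector(freqs, itfs):
    # k_i = (w_i,1 ... w_i,N): weight of each document in term i's vector.
    # Numerator: (0.5 + 0.5 * f_ij / max_j f_ij) * itf_j; denominator:
    # Euclidean norm of the numerators over all N documents.
    fmax = max(freqs)
    raw = [(0.5 + 0.5 * f / fmax) * it if f > 0 else 0.0
           for f, it in zip(freqs, itfs)]
    norm = math.sqrt(sum(r * r for r in raw))
    return [r / norm for r in raw]

v = term_vector([2, 0, 1], [1.0, 1.0, 1.0])   # -> [0.8, 0.0, 0.6]
```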
Similarity Thesaurus
The relationship between two terms k_u and k_v is computed as a correlation factor c_{u,v} given by

  c_{u,v} = k_u · k_v = Σ_{d_j} w_{u,j} · w_{v,j}

The global similarity thesaurus is built through the computation of the correlation factor c_{u,v} for each pair of indexing terms [k_u, k_v] in the collection.
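A sketch of building the thesaurus from the term vectors (names and toy vectors are mine):

```python
def similarity_thesaurus(term_vectors):
    # c_uv = k_u . k_v = sum_j w_uj * w_vj, precomputed once per term pair.
    return {(u, v): sum(wu * wv for wu, wv in zip(tu, tv))
            for u, tu in term_vectors.items()
            for v, tv in term_vectors.items()}

vecs = {"apple": [0.8, 0.0, 0.6], "fruit": [0.6, 0.0, 0.8]}
thes = similarity_thesaurus(vecs)   # thes[("apple", "fruit")] ~ 0.96
```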
Similarity Thesaurus
This computation is expensive, but the global similarity thesaurus has to be computed only once and can be updated incrementally.
Query Expansion based on a Similarity Thesaurus
Query expansion is done in three steps:
  1. Represent the query in the concept space used for representation of the index terms.
  2. Based on the global similarity thesaurus, compute a similarity sim(q, k_v) between each term k_v correlated to the query terms and the whole query q.
  3. Expand the query with the top r ranked terms according to sim(q, k_v).
Query Expansion – step one
To the query q is associated a vector q in the term-concept space given by

  q = Σ_{k_i ∈ q} w_{i,q} · k_i

where w_{i,q} is a weight associated to the index-query pair [k_i, q].
Query Expansion – step two
Compute a similarity sim(q, k_v) between each term k_v and the user query q:

  sim(q, k_v) = q · k_v = Σ_{k_u ∈ q} w_{u,q} · c_{u,v}

where c_{u,v} is the correlation factor.
Query Expansion – step three
Add the top r ranked terms according to sim(q, k_v) to the original query q to form the expanded query q'.
To each expansion term k_v in the query q' is assigned a weight w_{v,q'} given by

  w_{v,q'} = sim(q, k_v) / Σ_{k_u ∈ q} w_{u,q}

The expanded query q' is then used to retrieve new documents for the user.
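The three steps can be sketched end-to-end as follows (function name, toy thesaurus, and weights are mine):

```python
def expand(query_weights, thes, r=1):
    # Step 1: the query is the weighted sum of its term vectors (implicit).
    # Step 2: sim(q, kv) = sum over query terms ku of w_u,q * c_u,v.
    # Step 3: add the top-r terms with w_v,q' = sim(q, kv) / sum_u w_u,q.
    vocab = {v for (_, v) in thes}
    sims = {v: sum(w * thes[(u, v)] for u, w in query_weights.items())
            for v in vocab if v not in query_weights}
    top = sorted(sims, key=sims.get, reverse=True)[:r]
    wsum = sum(query_weights.values())
    expanded = dict(query_weights)
    expanded.update({v: sims[v] / wsum for v in top})
    return expanded

thes = {("apple", "apple"): 1.0, ("apple", "fruit"): 0.9, ("apple", "pie"): 0.4,
        ("fruit", "apple"): 0.9, ("fruit", "fruit"): 1.0, ("fruit", "pie"): 0.2,
        ("pie", "apple"): 0.4, ("pie", "fruit"): 0.2, ("pie", "pie"): 1.0}
q_expanded = expand({"apple": 1.0}, thes, r=1)   # -> {"apple": 1.0, "fruit": 0.9}
```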
Query Expansion Sample
  Doc1 = D, D, A, B, C, A, B, C
  Doc2 = E, C, E, A, A, D
  Doc3 = D, C, B, B, D, A, B, C, A
  Doc4 = A
  c(A,A) = 10.991   c(A,C) = 10.781   c(A,D) = 10.781   ...
  c(D,E) = 10.398   c(B,E) = 10.396   c(E,E) = 10.224
Query Expansion Sample
  Query: q = A E E
  sim(q,A) = 24.298   sim(q,C) = 23.833   sim(q,D) = 23.833   sim(q,B) = 23.830   sim(q,E) = 23.435
  New query: q' = A C D E E
  w(A,q') = 6.88   w(C,q') = 6.75   w(D,q') = 6.75   w(E,q') = 6.64
WordNet
A more detailed database of semantic relationships between English words.
Developed by the famous cognitive psychologist George Miller and a team at Princeton University.
About 144,000 English words.
Nouns, adjectives, verbs, and adverbs grouped into about 109,000 synonym sets called synsets.
WordNet Synset Relationships
  Antonym: front → back
  Attribute: benevolence → good (noun to adjective)
  Pertainym: alphabetical → alphabet (adjective to noun)
  Similar: unquestioning → absolute
  Cause: kill → die
  Entailment: breathe → inhale
  Holonym: chapter → text (part-of)
  Meronym: computer → cpu (whole-of)
  Hyponym: tree → plant (specialization)
  Hypernym: fruit → apple (generalization)
WordNet Query Expansion
  Add synonyms in the same synset.
  Add hyponyms to add specialized terms.
  Add hypernyms to generalize a query.
  Add other related terms to expand the query.
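A toy sketch of relation-based expansion. The hand-built relation table below is a hypothetical stand-in for WordNet's links; a real system would query the WordNet database itself:

```python
# Hypothetical, hand-built stand-in for WordNet relations (assumption:
# these entries are illustrative only, not real WordNet data).
RELATIONS = {
    "tree": {"synonyms": [],
             "hyponyms": ["oak", "pine"],   # specializations
             "hypernyms": ["plant"]},       # generalization
}

def wordnet_expand(query, relation):
    # Expand each query term with the terms of one relation type.
    out = list(query)
    for t in query:
        out += RELATIONS.get(t, {}).get(relation, [])
    return out

q = wordnet_expand(["tree"], "hyponyms")   # -> ["tree", "oak", "pine"]
```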
Statistical Thesaurus
Existing human-developed thesauri are not easily available in all languages.
Human thesauri are limited in the type and range of synonymy and semantic relations they represent.
Semantically related terms can be discovered from statistical analysis of corpora.
Query Expansion Based on a Statistical Thesaurus
The global thesaurus is composed of classes which group correlated terms in the context of the whole collection.
Such correlated terms can then be used to expand the original user query.
These terms must be low-frequency terms.
However, it is difficult to cluster low-frequency terms.
To circumvent this problem, we cluster documents into classes instead and use the low-frequency terms in these documents to define our thesaurus classes.
This algorithm must produce small and tight clusters.
Query Expansion based on a Statistical Thesaurus
Use the thesaurus classes for query expansion.
Compute an average term weight wt_C for each thesaurus class C:

  wt_C = ( Σ_{i=1}^{|C|} w_{i,C} ) / |C|
Query Expansion based on a Statistical Thesaurus
wt_C can be used to compute a thesaurus class weight w_C as

  w_C = wt_C / (0.5 · |C|)
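A small sketch of the two class weights. Note the denominator 0.5 · |C| is my reconstruction of a garbled slide formula, so treat it as an assumption:

```python
def class_weights(term_weights_in_class):
    # wt_C = average of the w_i,C over the |C| terms in class C;
    # w_C = wt_C / (0.5 * |C|) -- denominator reconstructed from a
    # garbled slide, treat as an assumption.
    size = len(term_weights_in_class)
    wt_c = sum(term_weights_in_class) / size
    w_c = wt_c / (0.5 * size)
    return wt_c, w_c

wt_c, w_c = class_weights([0.4, 0.8])   # wt_C = 0.6, w_C = 0.6 / 1.0 = 0.6
```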
Query Expansion Sample
  Doc1 = D, D, A, B, C, A, B, C
  Doc2 = E, C, E, A, A, D
  Doc3 = D, C, B, B, D, A, B, C, A
  Doc4 = A
  Parameters: TC = 0.90, NDC = 2.00, MIDF = 0.2
  Pairwise similarities: sim(1,3) = 0.99, sim(1,2) = 0.40, sim(2,3) = 0.29, sim(4,1) = sim(4,2) = sim(4,3) = 0.00
  Cluster hierarchy: C1,3 (0.99) → C1,3,2 (0.29) → C1,3,2,4 (0.00)
  idf: A = 0.0, B = 0.3, C = 0.12, D = 0.12, E = 0.60
  q = A E E  →  q' = A B E E
Query Expansion based on a Statistical Thesaurus
Problems with this approach:
  Initialization of the parameters TC, NDC, and MIDF.
  TC depends on the collection.
  Inspection of the cluster hierarchy is almost always necessary for assisting with the setting of TC.
  A high value of TC might yield classes with too few terms.
Complete Link Algorithm
This is a document clustering algorithm that produces small and tight clusters:
  1. Place each document in a distinct cluster.
  2. Compute the similarity between all pairs of clusters.
  3. Determine the pair of clusters [C_u, C_v] with the highest inter-cluster similarity.
  4. Merge the clusters C_u and C_v.
  5. Verify a stop criterion. If this criterion is not met, go back to step 2.
  6. Return a hierarchy of clusters.
The similarity between two clusters is defined as the minimum of the similarities between all pairs of inter-cluster documents.
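A runnable sketch of the steps above, using the pairwise similarities from the sample slide; the TC-style threshold as stop criterion and all names are my choices:

```python
def complete_link(docs, sim, threshold):
    # Complete-link agglomerative clustering: cluster-cluster similarity is
    # the MINIMUM pairwise document similarity, yielding small, tight clusters.
    clusters = [[d] for d in docs]                    # 1. one doc per cluster
    while len(clusters) > 1:
        best, pair = None, None
        for a in range(len(clusters)):                # 2.-3. most similar pair
            for b in range(a + 1, len(clusters)):
                s = min(sim[(x, y)] for x in clusters[a] for y in clusters[b])
                if best is None or s > best:
                    best, pair = s, (a, b)
        if best < threshold:                          # 5. stop criterion
            break
        a, b = pair                                   # 4. merge
        clusters[a] += clusters.pop(b)
    return clusters

sim = {}
for x, y, s in [(1, 2, 0.40), (1, 3, 0.99), (2, 3, 0.29),
                (1, 4, 0.00), (2, 4, 0.00), (3, 4, 0.00)]:
    sim[(x, y)] = sim[(y, x)] = s
clusters = complete_link([1, 2, 3, 4], sim, threshold=0.90)   # TC = 0.90
# -> [[1, 3], [2], [4]]: only Doc1 and Doc3 are similar enough to merge
```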
Selecting the terms that compose each class
Given the document cluster hierarchy for the whole collection, the terms that compose each class of the global thesaurus are selected as follows. Obtain from the user three parameters:
  TC: threshold class
  NDC: number of documents in a class
  MIDF: minimum inverse document frequency
Selecting the terms that compose each class
Use the parameter TC as a threshold value for determining the document clusters that will be used to generate thesaurus classes.
This threshold has to be surpassed by sim(C_u, C_v) if the documents in clusters C_u and C_v are to be selected as sources of terms for a thesaurus class.
Selecting the terms that compose each class
Use the parameter NDC as a limit on the size of clusters (number of documents) to be considered.
A low value of NDC might restrict the selection to the smaller cluster Cu+v
Selecting the terms that compose each class
Consider the set of documents in each document cluster pre-selected above.
Only the lower-frequency terms are used as sources of terms for the thesaurus classes.
The parameter MIDF defines the minimum value of inverse document frequency for any term which is selected to participate in a thesaurus class.
Global vs. Local Analysis
Global analysis requires intensive term correlation computation only once, at system development time.
Local analysis requires intensive term correlation computation for every query at run time (although the number of terms and documents is smaller than in global analysis).
But local analysis gives better results.
Example of manual thesaurus
Thesaurus-based query expansion
For each term t in a query, expand the query with synonyms and related words of t from the thesaurus.
  feline → feline cat
May weight added terms less than original query terms.
Generally increases recall.
Widely used in many science/engineering fields.
May significantly decrease precision, particularly with ambiguous terms.
  "interest rate" → "interest rate fascinate evaluate"
There is a high cost of manually producing a thesaurus, and of updating it for scientific changes.
Automatic Thesaurus Generation
Attempt to generate a thesaurus automatically by analyzing the collection of documents.
Fundamental notion: similarity between two words.
  Definition 1: Two words are similar if they co-occur with similar words.
  Definition 2: Two words are similar if they occur in a given grammatical relation with the same words.
You can harvest, peel, eat, prepare, etc. apples and pears, so apples and pears must be similar.
Co-occurrence-based similarity is more robust; grammatical relations are more accurate.
Co-occurrence Thesaurus
The simplest way to compute one is based on term-term similarities in C = AAᵀ, where A is the M × N term-document matrix (M terms t_i, N documents d_j).
  w_{i,j} = (normalized) weight for (t_i, d_j)
For each term t_i, pick the terms with the highest values in C.
What does C contain if A is a term-document incidence (0/1) matrix?
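A sketch of C = AAᵀ with a 0/1 incidence matrix (plain lists instead of a matrix library; names are mine). It also answers the question above: each entry C[i][j] counts the documents in which terms i and j co-occur, and the diagonal gives each term's document frequency:

```python
def cooccurrence(A):
    # C = A * A^T for an M-terms x N-docs matrix A. With a 0/1 incidence
    # matrix, C[i][j] is the number of documents containing both term i
    # and term j; C[i][i] is term i's document frequency.
    M, N = len(A), len(A[0])
    return [[sum(A[i][k] * A[j][k] for k in range(N)) for j in range(M)]
            for i in range(M)]

A = [[1, 1, 0, 0],   # term 0 occurs in docs 0, 1
     [1, 0, 1, 0],   # term 1 occurs in docs 0, 2
     [0, 0, 1, 1]]   # term 2 occurs in docs 2, 3
C = cooccurrence(A)  # C[0][1] == 1: terms 0 and 1 share one document
```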
Automatic Thesaurus Generation Example
Automatic Thesaurus Generation Discussion
Quality of associations is usually a problem.
Term ambiguity may introduce irrelevant statistically correlated terms.
  "Apple computer" → "Apple red fruit computer"
Problems:
  False positives: words deemed similar that are not.
  False negatives: words deemed dissimilar that are similar.
Since terms are highly correlated anyway, expansion may not retrieve many additional documents.
Query Expansion Conclusions
Expansion of queries with related terms can improve performance, particularly recall.
However, must select similar terms very carefully to avoid problems, such as loss of precision.
Conclusion
A thesaurus is an efficient method to expand queries.
The computation is expensive, but it is executed only once.
Query expansion based on a similarity thesaurus may use high-frequency terms to expand the query.
Query expansion based on a statistical thesaurus needs well-defined parameters.
Query assist
Would you expect such a feature to increase the query volume at a search engine?
Query assist
Generally done by query log mining.
Recommend frequent recent queries that contain the partial string typed by the user.
A ranking problem!
  View each prior query as a doc.
  Rank-order those matching the partial string …
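A toy sketch of log-based query assist; ranking purely by log frequency is my simplification of the frequency/recency ranking described above, and the log itself is made up:

```python
from collections import Counter

def suggest(partial, query_log, k=3):
    # Rank prior queries containing the partial string by log frequency
    # (a simple stand-in for combined frequency/recency ranking).
    counts = Counter(q for q in query_log if partial in q)
    return [q for q, _ in counts.most_common(k)]

log = ["cheap flights", "cheap hotels", "cheap flights",
       "flight status", "weather"]
top = suggest("cheap", log, k=2)   # -> ["cheap flights", "cheap hotels"]
```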