Statistical Language Models for Information Retrieval Tutorial at ACM SIGIR 2006 Aug. 6, 2006 ChengXiang Zhai Department of Computer Science University of Illinois, Urbana-Champaign http://www-faculty.cs.uiuc.edu/~czhai [email protected]


Page 1

Statistical Language Models

for Information Retrieval

Tutorial at ACM SIGIR 2006

Aug. 6, 2006

ChengXiang Zhai

Department of Computer Science

University of Illinois, Urbana-Champaign

http://www-faculty.cs.uiuc.edu/~czhai

[email protected]

Page 2

Goal of the Tutorial

• Introduce the emerging area of applying statistical language models (SLMs) to information retrieval (IR).

• Targeted audience:
  – IR practitioners who are interested in acquiring advanced modeling techniques

– IR researchers who are looking for new research problems in IR models

• Accessible to anyone with basic knowledge of probability and statistics

Page 3

Scope of the Tutorial

• What will be covered:
  – Brief background on IR and SLMs
  – Review of recent applications of unigram SLMs in IR
  – Details of some specific methods that are either empirically effective or theoretically important
  – A framework for systematically exploring SLMs in IR
  – Outstanding research issues in applying SLMs to IR

• What will not be covered:
  – Traditional IR methods (see any IR textbook, e.g., [Baeza-Yates & Ribeiro-Neto 99, Grossman & Frieder 04])
  – Implementation of IR systems (see [Witten et al. 99])
  – Discussion of high-order or other complex SLMs (see [Manning & Schutze 99] and [Jelinek 98])
  – Application of SLMs in supervised learning, e.g., TDT, text categorization (see publications in Machine Learning, Speech Recognition, and Natural Language Processing)

Page 4

Tutorial Outline

1. Introduction

2. The Basic Language Modeling Approach

3. More Advanced Language Models

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary

Page 5

Part 1: Introduction

1. Introduction- Information Retrieval (IR)

- Statistical Language Models (SLMs)

- Applications of SLMs to IR

2. The Basic Language Modeling Approach

3. More Advanced Language Models

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary

We are here

Page 6

What is Information Retrieval (IR)?

• Narrow sense (= ad hoc text retrieval)
  – Given a collection of text documents (information items)
  – Given a text query from a user (information need)
  – Retrieve relevant documents from the collection

• A broader sense of IR may include
  – Retrieving non-textual information (e.g., images)
  – Other tasks (e.g., filtering, categorization, or summarization)

• In this tutorial, IR ≈ ad hoc text retrieval
• Ad hoc text retrieval is fundamental to IR and has many applications (e.g., search engines, digital libraries, …)

Page 7

Formalization of IR Tasks

• Vocabulary V = {w1, w2, …, wN} of a language
• Query q = q1,…,qm, where qi ∈ V
• Document di = di1,…,di,mi, where dij ∈ V
• Collection C = {d1, …, dk}
• Set of relevant documents R(q) ⊆ C
  – Generally unknown and user-dependent
  – The query is a "hint" about which documents are in R(q)
• Task = compute R'(q), an approximation of R(q)

Page 8

Computing R’(q): Doc Selection vs. Ranking

++

+ +- - -

- - - -

- - - --

- - +- -

Doc Selectionf(d,q)=?

++

++

- -+

-+

--

- --

---

1

0

R’(q)True R(q)

R(q) = {d∈C|f(d,q)>θ}, where f(d,q) ∈ℜ is a ranking function; θ is a cutoff implicitly set by the user

R(q)={d∈C|f(d,q)=1}, where f(d,q) ∈{0,1}is an indicator function (classifier)

0.98 d1 +0.95 d2 +0.83 d3 -0.80 d4 +0.76 d5 -0.56 d6 -0.34 d7 -0.21 d8 +0.21 d9 -

Doc Rankingf(d,q)=?

R’(q)

θ=0.77

Page 9

Problems with Doc Selection

• The classifier is unlikely to be accurate
  – "Over-constrained" query (terms are too specific): no relevant documents found
  – "Under-constrained" query (terms are too general): over-delivery
  – It is extremely hard to find the right balance between these two extremes

• Even if the classifier is accurate, not all relevant documents are equally relevant

• Relevance is a matter of degree!

Page 10

Ranking is often preferred

• A user can stop browsing anywhere, so the boundary/cutoff is controlled by the user
  – High-recall users would view more items
  – High-precision users would view only a few

• Theoretical justification: Probability Ranking Principle [Robertson 77], Risk Minimization [Zhai 02, Zhai & Lafferty 06]

• The retrieval problem is now reduced to defining a ranking function f such that, for all q, d1, d2: f(q,d1) > f(q,d2) iff p(Relevant|q,d1) > p(Relevant|q,d2)

• Function f is an operational definition of relevance

• Most IR research is centered on finding a good f…

Page 11

Two Well-Known Traditional Retrieval Formulas [Singhal 01]

Key retrieval heuristics: TF (term frequency) and IDF (inverse document frequency), plus length normalization — e.g., the Okapi/BM25 term-frequency component

  (k1 + 1)·tf / [ k1·( (1 − b) + b·dl/avdl ) + tf ]

Other heuristics: stemming, stop-word removal, phrases.

Similar quantities will occur in the LMs…

[Sparck Jones 72, Salton & Buckley 88, Singhal et al. 96, Robertson & Walker 94, Fang et al. 04]

Page 12

Feedback in IR

[Diagram: the user's query goes to the retrieval engine, which searches the document collection and returns scored results (d1 3.5, d2 2.4, …, dk 0.5, …). Feedback ("learn from examples") produces an updated query:
  – Relevance feedback: the user judges documents (d1 +, d2 −, d3 +, …, dk −, …)
  – Pseudo feedback: assume the top 10 docs are relevant (d1 +, d2 +, d3 +, …, dk −, …)]

Page 13

Feedback in IR (cont.)

• An essential component in any IR method

• Relevance feedback is always desirable, but a user may not be willing to provide explicit judgments

• Pseudo/automatic feedback is always possible, and often improves performance on average through

– Exploiting word co-occurrences

– Enriching a query with additional related words

– Indirectly addressing issues such as ambiguous words and synonyms

• Implicit feedback is a good compromise

Page 14

Evaluation of Retrieval Performance

Example ranked list (total # relevant docs = 8):
  1. d1 √  2. d2 √  3. d3 ×  4. d4 √  5. d5 ×  6. d6 ×  7. d7 ×  8. d8 ×  9. d9 ×  10. d10 √

As a SET of results:
  precision = #(relevant ∩ retrieved) / #retrieved = 4/10 = 0.4
  recall    = #(relevant ∩ retrieved) / #relevant  = 4/8 = 0.5

As a ranked list: plot a precision-recall (PR) curve.
[Figure: PR curves for three rankings A, B, C — A > C and B > C, but is A > B?]

How do we compare different rankings? Which is the best? Summarize a ranking with a single number:

  AvgPrec = (1/k) · Σ_{i=1..k} p_i

where p_i = precision at the rank where the i-th relevant doc is retrieved (p_i = 0 if the i-th relevant doc is not retrieved) and k is the total # of relevant docs.

Average precision is sensitive to the position of each relevant doc:
  AvgPrec = (1/1 + 2/2 + 3/4 + 4/10 + 0 + 0 + 0 + 0)/8 = 0.394
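To make the AvgPrec computation concrete, here is a minimal Python sketch (not from the tutorial) that scores a ranked list against a set of relevant doc ids; the four unretrieved relevant docs are given hypothetical ids d11–d14 so that k = 8, reproducing the 0.394 above.

```python
def average_precision(ranked_docs, relevant_docs):
    """AvgPrec = (1/k) * sum of precision at each relevant doc's rank;
    unretrieved relevant docs contribute 0 (k = total # of relevant docs)."""
    hits = 0
    precision_sum = 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant_docs:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_docs)

# The ranked list from the slide; d11-d14 are hypothetical ids for the
# four relevant docs that were not retrieved (8 relevant docs in total).
ranked = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"]
relevant = {"d1", "d2", "d4", "d10", "d11", "d12", "d13", "d14"}
print(round(average_precision(ranked, relevant), 3))  # 0.394
```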

Page 15

Part 1: Introduction (cont.)

1. Introduction- Information Retrieval (IR)

- Statistical Language Models (SLMs)

- Application of SLMs to IR

2. The Basic Language Modeling Approach

3. More Advanced Language Models

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary

We are here

Page 16

What is a Statistical LM?

• A probability distribution over word sequences

– p(“Today is Wednesday”) ≈ 0.001

– p(“Today Wednesday is”) ≈ 0.0000000000001

– p(“The eigenvalue is positive”) ≈ 0.00001

• Context/topic dependent!

• Can also be regarded as a probabilistic mechanism for “generating” text, thus also called a “generative” model

Page 17

Why is a LM Useful?

• Provides a principled way to quantify the uncertainties associated with natural language

• Allows us to answer questions like:
  – Given that we see "John" and "feels", how likely will we see "happy" as opposed to "habit" as the next word? (speech recognition)
  – Given that we observe "baseball" three times and "game" once in a news article, how likely is it about "sports"? (text categorization, information retrieval)
  – Given that a user is interested in sports news, how likely would the user use "baseball" in a query? (information retrieval)

Page 18

Source-Channel Framework
(Model of a Communication System [Shannon 48])

[Diagram: Source → Transmitter (encoder) → Noisy Channel → Receiver (decoder) → Destination; the source emits X with P(X), the channel produces Y with P(Y|X), and the receiver recovers X' from Y via P(X|Y)]

  X̂ = argmax_X p(X|Y) = argmax_X p(Y|X)·p(X)    (Bayes' rule)

When X is text, p(X) is a language model.

Many examples:
  Speech recognition:    X = word sequence,    Y = speech signal
  Machine translation:   X = English sentence, Y = Chinese sentence
  OCR error correction:  X = correct word,     Y = erroneous word
  Information retrieval: X = document,         Y = query
  Summarization:         X = summary,          Y = document

Page 19

The Simplest Language Model (Unigram Model)

• Generate a piece of text by generating each word independently

• Thus, p(w1 w2 ... wn)=p(w1)p(w2)…p(wn)

• Parameters: {p(wi)} p(w1)+…+p(wN)=1 (N is voc. size)

• Essentially a multinomial distribution over words

• A piece of text can be regarded as a sample drawn according to this word distribution

Page 20

Text Generation with Unigram LM

[Diagram: a (unigram) language model θ gives p(w|θ); sampling from it generates a document d, and p(d|θ) varies according to d]

  Topic 1 (text mining): text 0.2, mining 0.1, association 0.01, clustering 0.02, …, food 0.00001, …
  Topic 2 (health):      food 0.25, nutrition 0.1, healthy 0.05, diet 0.02, …

Sampling from Topic 1 tends to produce a text-mining paper; sampling from Topic 2 a food-nutrition paper.

Page 21

Estimation of Unigram LM

[Diagram: estimate the (unigram) language model p(w|θ) = ? from an observed document]

Document word counts (total # words = 100) and their relative-frequency estimates:
  text 10 → 10/100,  mining 5 → 5/100,  association 3 → 3/100,  database 3 → 3/100,  algorithm 2 → 2/100,  …,  query 1 → 1/100,  efficient 1 → 1/100,  …

How good is the estimated model? It gives our document sample the highest probability, but it doesn't generalize well… More about this later…
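A minimal Python sketch of this maximum likelihood estimate (not from the slides; the word "other" is a hypothetical filler used to pad the toy document to 100 words): p(w|θ) is just the relative frequency of w in the document.

```python
from collections import Counter

def mle_unigram(doc_tokens):
    """Maximum likelihood unigram LM: p(w|d) = c(w,d) / |d|."""
    counts = Counter(doc_tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Toy document mirroring the slide's counts (10 "text", 5 "mining", ...).
doc = ["text"] * 10 + ["mining"] * 5 + ["association"] * 3 + ["database"] * 3
doc += ["algorithm"] * 2 + ["query"] + ["efficient"] + ["other"] * 75  # pad to 100 words
model = mle_unigram(doc)
print(model["text"], model["query"])   # 0.1 0.01
print(model.get("clustering", 0.0))    # 0.0 -- unseen words get zero probability
```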

Page 22

More Sophisticated LMs

• N-gram language models

– In general, p(w1 w2 ... wn)=p(w1)p(w2|w1)…p(wn|w1 …wn-1)

– n-gram: conditioned only on the past n-1 words

– E.g., bigram: p(w1 ... wn)=p(w1)p(w2|w1) p(w3|w2) …p(wn|wn-1)

• Remote-dependence language models (e.g., Maximum Entropy model)

• Structured language models (e.g., probabilistic context-free grammar)

• Will barely be covered in this tutorial. If interested, read [Jelinek 98, Manning & Schutze 99, Rosenfeld 00]

Page 23

Why Just Unigram Models?

• Difficulty in moving toward more complex models

– They involve more parameters, so need more data to estimate (A doc is an extremely small sample)

– They increase the computational complexity significantly, both in time and space

• Capturing word order or structure may not add so much value for “topical inference”

• But, using more sophisticated models can still be expected to improve performance ...

Page 24

Evaluation of SLMs

• Direct evaluation criterion: How well does the model fit the data to be modeled?
  – Example measures: data likelihood, perplexity, cross entropy, Kullback-Leibler divergence (mostly equivalent)

• Indirect evaluation criterion: Does the model help improve the performance of the task?
  – The specific measure is task-dependent
  – For retrieval, we look at whether a model helps improve retrieval accuracy
  – We hope more "reasonable" LMs would achieve better retrieval performance

Page 25

Part 1: Introduction (cont.)

1. Introduction- Information Retrieval (IR)

- Statistical Language Models (SLMs)

- Application of SLMs to IR

2. The Basic Language Modeling Approach

3. More Advanced Language Models

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary

We are here

Page 26

Representative LMs for IR

[Timeline figure, 1998–2005+, grouped by theme:]

• Basic LM (query likelihood): query-likelihood scoring [Ponte & Croft 98; Hiemstra & Kraaij 99; Miller et al. 99]; parameter sensitivity [Ng 00]
• Improved basic LM: beyond unigram [Song & Croft 99]; translation model [Berger & Lafferty 99]; smoothing examined [Zhai & Lafferty 01a]; two-stage LMs [Zhai & Lafferty 02]; term-specific smoothing [Hiemstra 02]; title LM [Jin et al. 02]; URL prior [Kraaij et al. 02]; Bayesian query likelihood [Zaragoza et al. 03]; concept likelihood [Srikanth & Srihari 03]; time prior [Li & Croft 03]; dependency LM [Gao et al. 04]; cluster LM [Kurland & Lee 04]; cluster smoothing [Liu & Croft 04; Tao et al. 06]; parsimonious LM [Hiemstra et al. 04]; thesauri [Cao et al. 05]
• Theoretical justification: [Lafferty & Zhai 01a, 01b]
• Query/relevance models & feedback: Markov-chain query model [Lafferty & Zhai 01b]; relevance LM [Lavrenko & Croft 01]; model-based feedback [Zhai & Lafferty 01b]; relevant-query feedback [Nallapati et al. 03]; pseudo query [Kurland et al. 05]; query expansion [Bai et al. 05]; robust estimation [Tao & Zhai 06]
• Special IR tasks: [Xu & Croft 99; Xu et al. 01; Lavrenko et al. 02; Zhang et al. 02; Cronen-Townsend et al. 02; Si et al. 02; Ogilvie & Callan 03; Zhai et al. 03; Kurland & Lee 05; Shen et al. 05; Tan et al. 06]
• Dissertations: Ponte 98; Berger 01; Hiemstra 01; Zhai 02; Lavrenko 04; Kraaij 04; Srikanth 04

Page 27

Ponte & Croft’s Pioneering Work [Ponte & Croft 98]

• Contribution 1:
  – A new "query likelihood" scoring method: p(Q|D)
  – [Maron and Kuhns 60] had the idea of query likelihood, but didn't work out how to estimate p(Q|D)

• Contribution 2:
  – Connecting LMs with text representation and weighting in IR
  – [Wong & Yao 89] had the idea of representing text with a multinomial distribution (relative frequency), but didn't study the estimation problem

• Good performance is reported using the simple query likelihood method

Page 28

Early Work (1998-1999)

• At about the same time as SIGIR 98, in TREC 7, two groups explored similar ideas independently: BBN [Miller et al., 99] & Univ. of Twente [Hiemstra & Kraaij 99]

• In TREC-8, Ng from MIT motivated the same query likelihood method in a different way [Ng 99]

• All follow the simple query likelihood method; they differ in how the model is estimated and in the event model for the query

• All show promising empirical results

• Main problems:
  – Feedback is explored heuristically
  – Lack of understanding of why the method works…

Page 29

Later Work (1999-)

• Attempt to understand why LMs work [Zhai & Lafferty 01a, Lafferty & Zhai 01a, Ponte 01, Greiff & Morgan 03, Sparck Jones et al. 03, Lavrenko 04]

• Further extend/improve the basic LMs [Song & Croft 99, Berger & Lafferty 99, Jin et al. 02, Nallapati & Allan 02, Hiemstra 02, Zaragoza et al. 03, Srikanth & Srihari 03, Nallapati et al 03, Li &Croft 03, Gao et al. 04, Liu & Croft 04, Kurland & Lee 04,Hiemstra et al. 04,Cao et al. 05, Tao et al. 06]

• Explore alternative ways of using LMs for retrieval (mostly query/relevance model estimation) [Xu & Croft 99, Lavrenko & Croft 01, Lafferty & Zhai 01a, Zhai & Lafferty 01b, Lavrenko 04, Kurland et al. 05, Bai et al. 05,Tao & Zhai 06]

• Explore the use of SLMs for special retrieval tasks [Xu & Croft 99, Xu et al. 01, Lavrenko et al. 02, Cronen-Townsend et al. 02, Zhang et al. 02, Ogilvie & Callan 03, Zhai et al. 03, Kurland & Lee 05, Shen et al. 05]

Page 30

Part 2: The Basic LM Approach

1. Introduction
2. The Basic Language Modeling Approach
   - Query Likelihood Document Ranking
   - Smoothing of Language Models
   - Why does it work?
   - Variants of the basic LM
3. More Advanced Language Models
4. Language Models for Special Retrieval Tasks
5. A General Framework for Applying SLMs to IR
6. Summary

We are here

Page 31

The Basic LM Approach [Ponte & Croft 98]

[Diagram: estimate a language model from each document, then ask which model would most likely have generated the query]

  Text mining paper    →  LM: text ?, mining ?, association ?, clustering ?, …, food ?, …
  Food nutrition paper →  LM: food ?, nutrition ?, healthy ?, diet ?, …

Query = "data mining algorithms"
Which model would most likely have generated this query?

Page 32

Ranking Docs by Query Likelihood

[Diagram: each document di has a document LM θdi; documents are ranked by the query likelihood p(q|θdi)]

  d1 → θd1 → p(q|θd1)
  d2 → θd2 → p(q|θd2)
  …
  dN → θdN → p(q|θdN)

Page 33

Modeling Queries: Different Assumptions

• Multi-Bernoulli: modeling word presence/absence
  – q = (x1, …, x|V|), xi = 1 for presence of word wi, xi = 0 for absence
  – Parameters: {p(wi=1|d), p(wi=0|d)}, with p(wi=1|d) + p(wi=0|d) = 1

    p(q = (x1,…,x|V|) | d) = Π_{i=1..|V|} p(wi = xi | d) = Π_{i: xi=1} p(wi=1|d) · Π_{i: xi=0} p(wi=0|d)

• Multinomial (unigram LM): modeling word frequency
  – q = q1,…,qm, where qj is a query word; c(wi,q) is the count of word wi in query q
  – Parameters: {p(wi|d)}, with p(w1|d) + … + p(w|V||d) = 1

    p(q = q1…qm | d) = Π_{j=1..m} p(qj|d) = Π_{i=1..|V|} p(wi|d)^c(wi,q)

[Ponte & Croft 98] uses multi-Bernoulli; most other work uses multinomial.
The multinomial model seems to work better [Song & Croft 99, McCallum & Nigam 98, Lavrenko 04].

Page 34

Retrieval as LM Estimation

• Document ranking based on query likelihood:

  log p(q|d) = Σ_{i=1..m} log p(qi|d) = Σ_{i=1..|V|} c(wi,q) · log p(wi|d),   where q = q1 q2 … qm

  (p(wi|d) is the document language model)

• Retrieval problem ≈ estimation of p(wi|d)
• Smoothing is an important issue, and distinguishes different approaches
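As a minimal sketch (assuming a smoothed document model is already available as a dict of word probabilities), the log query likelihood is a count-weighted sum of log word probabilities; the document models and numbers below are hypothetical.

```python
import math
from collections import Counter

def log_query_likelihood(query_tokens, p_w_given_d):
    """log p(q|d) = sum over query words of c(w,q) * log p(w|d).
    p_w_given_d must assign a non-zero probability to every query word,
    which is exactly why smoothing is needed."""
    counts = Counter(query_tokens)
    return sum(c * math.log(p_w_given_d[w]) for w, c in counts.items())

# Toy smoothed document models (hypothetical numbers).
d1 = {"data": 0.01, "mining": 0.02, "algorithms": 0.001}
d2 = {"data": 0.02, "mining": 0.01, "algorithms": 0.002}
query = ["data", "mining", "algorithms"]
models = {"d1": d1, "d2": d2}
ranked = sorted(models, key=lambda d: log_query_likelihood(query, models[d]), reverse=True)
print(ranked)  # ['d2', 'd1']
```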

Page 35

How to Estimate p(w|d)?

• Simplest solution: Maximum Likelihood Estimator– P(w|d) = relative frequency of word w in d

– What if a word doesn’t appear in the text? P(w|d)=0

• In general, what probability should we give a word that has not been observed?

• If we want to assign non-zero probabilities to such words, we’ll have to discount the probabilities of observed words

• This is what “smoothing” is about …

Page 36

Part 2: The Basic LM Approach (cont.)

1. Introduction
2. The Basic Language Modeling Approach
   - Query Likelihood Document Ranking
   - Smoothing of Language Models
   - Why does it work?
   - Variants of the basic LM
3. More Advanced Language Models
4. Language Models for Special Retrieval Tasks
5. A General Framework for Applying SLMs to IR
6. Summary

We are here

Page 37

Language Model Smoothing (Illustration)

[Figure: P(w) over words w — the maximum likelihood estimate is spiky and gives zero probability to unseen words; the smoothed LM shifts some probability mass onto them]

  p_ML(w) = count of w / count of all words

Page 38

How to Smooth?

• All smoothing methods try to
  – discount the probability of words seen in a document
  – re-allocate the extra counts so that unseen words will have a non-zero count

• Method 1, Additive smoothing [Chen & Goodman 98]: add a constant δ to the count of each word, e.g., "add one" (Laplace):

  p(w|d) = [ c(w,d) + 1 ] / [ |d| + |V| ]

  where c(w,d) is the count of w in d, |d| is the length of d (total counts), and |V| is the vocabulary size.

Page 39

Improve Additive Smoothing

• Should all unseen words get equal probabilities?
• We can use a reference model to discriminate unseen words:

  p(w|d) = p_DML(w|d)         if w is seen in d
           α_d · p(w|REF)      otherwise

  where p_DML(w|d) is the discounted ML estimate, p(w|REF) is the reference language model, and the normalizer

  α_d = [ 1 − Σ_{w seen} p_DML(w|d) ] / Σ_{w unseen} p(w|REF)

  distributes the probability mass reserved for unseen words.

Page 40

Other Smoothing Methods

• Method 2, Absolute discounting [Ney et al. 94]: subtract a constant δ from the count of each seen word

  p(w|d) = [ max(c(w,d) − δ, 0) + δ·|d|_u·p(w|REF) ] / |d|

  where |d|_u is the number of unique words in d.

• Method 3, Linear interpolation (Jelinek-Mercer) [Jelinek & Mercer 80]: "shrink" uniformly toward p(w|REF)

  p(w|d) = (1 − λ) · c(w,d)/|d| + λ · p(w|REF)

  where c(w,d)/|d| is the ML estimate and λ is the smoothing parameter.

Page 41

Other Smoothing Methods (cont.)

• Method 4, Dirichlet prior / Bayesian [MacKay & Peto 95, Zhai & Lafferty 01a, Zhai & Lafferty 02]: assume µ·p(w|REF) pseudo counts

  p(w|d) = [ c(w,d) + µ·p(w|REF) ] / ( |d| + µ )  =  |d|/(|d|+µ) · c(w,d)/|d| + µ/(|d|+µ) · p(w|REF)

  where µ is the smoothing parameter.

• Method 5, Good-Turing [Good 53]: assume the total count of unseen events is n1 (the number of singletons), and adjust the counts of seen events in the same way

  p(w|d) = c*(w,d) / |d|,   where c*(w,d) = ( c(w,d) + 1 ) · n_{c(w,d)+1} / n_{c(w,d)}
  (n_r = the number of words with count r; e.g., 0* = n1/n0, 1* = 2·n2/n1, …)

  What if n_{c(w,d)+1} = 0? What about p(w|REF)? Heuristics needed.
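A minimal Python sketch of Jelinek-Mercer and Dirichlet-prior smoothing plugged into query-likelihood scoring (assumptions: whitespace tokenization, the collection itself serves as the reference model p(w|REF), and a tiny floor guards truly out-of-vocabulary words):

```python
import math
from collections import Counter

def collection_model(docs):
    """Reference LM p(w|REF) estimated from the whole collection."""
    counts = Counter(w for d in docs for w in d)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def p_jm(w, doc_counts, doc_len, p_ref, lam=0.5):
    """Jelinek-Mercer: (1 - lambda) * c(w,d)/|d| + lambda * p(w|REF)."""
    return (1 - lam) * doc_counts.get(w, 0) / doc_len + lam * p_ref.get(w, 1e-10)

def p_dirichlet(w, doc_counts, doc_len, p_ref, mu=2000):
    """Dirichlet prior: (c(w,d) + mu * p(w|REF)) / (|d| + mu)."""
    return (doc_counts.get(w, 0) + mu * p_ref.get(w, 1e-10)) / (doc_len + mu)

def score(query, doc, p_ref, smooth=p_dirichlet, **kw):
    """log p(q|d) with the chosen smoothing method."""
    counts, n = Counter(doc), len(doc)
    return sum(math.log(smooth(w, counts, n, p_ref, **kw)) for w in query)

docs = ["the data mining algorithms".split(), "the food nutrition paper".split()]
p_ref = collection_model(docs)
query = "data mining".split()
print(sorted(range(len(docs)), key=lambda i: score(query, docs[i], p_ref), reverse=True))  # [0, 1]
```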

Page 42

So, which method is the best?

It depends on the data and the task!

Cross validation is generally used to choose the best method and/or set the smoothing parameters…

For retrieval, Dirichlet prior performs well…

Backoff smoothing [Katz 87] doesn’t work well due to a lack of 2nd-stage smoothing…

Note that many other smoothing methods existSee [Chen & Goodman 98] and other publications in speech recognition…

Page 43

Comparison of Three Methods [Zhai & Lafferty 01a]

  Query type | Jelinek-Mercer | Dirichlet | Abs. Discounting
  Title      | 0.228          | 0.256     | 0.237
  Long       | 0.278          | 0.276     | 0.260

[Figure: relative precision of JM, Dirichlet, and absolute discounting for title vs. long queries]

Comparison is performed on a variety of test collections.

Page 44

Part 2: The Basic LM Approach (cont.)

1. Introduction
2. The Basic Language Modeling Approach
   - Query Likelihood Document Ranking
   - Smoothing of Language Models
   - Why does it work?
   - Variants of the basic LM
3. More Advanced Language Models
4. Language Models for Special Retrieval Tasks
5. A General Framework for Applying SLMs to IR
6. Summary

We are here

Page 45

Understanding Smoothing

The general smoothing scheme:

  p(w|d) = p_DML(w|d)         if w is seen in d
           α_d · p(w|REF)      otherwise

where p_DML(w|d) is the discounted ML estimate and p(w|REF) is the reference language model.

Retrieval formula using the general smoothing scheme (the key rewriting step; similar rewritings are very common when using LMs for IR):

  log p(q|d) = Σ_{w∈V} c(w,q) · log p(w|d)
             = Σ_{w: c(w,d)>0} c(w,q) · log p_DML(w|d) + Σ_{w: c(w,d)=0} c(w,q) · log [ α_d · p(w|REF) ]
             = Σ_{w: c(w,d)>0} c(w,q) · log [ p_DML(w|d) / (α_d · p(w|REF)) ] + |q| · log α_d + Σ_{w∈V} c(w,q) · log p(w|REF)

Page 46

Smoothing & TF-IDF Weighting [Zhai & Lafferty 01a]

• Plugging the general smoothing scheme into the query-likelihood retrieval formula, we obtain

  log p(q|d) = Σ_{w: c(w,d)>0, c(w,q)>0} c(w,q) · log [ p_DML(w|d) / (α_d · p(w|REF)) ] + |q| · log α_d + Σ_{w∈V} c(w,q) · log p(w|REF)

  – The first sum runs over words in both the query and the document; c(w,q)·p_DML(w|d) acts like TF weighting, and dividing by p(w|REF) gives IDF-like weighting
  – |q| · log α_d acts like document length normalization (a long doc is expected to have a smaller α_d)
  – The last sum does not depend on the document and can be ignored for ranking

• Smoothing with p(w|C) ≈ TF-IDF + length normalization: smoothing implements traditional retrieval heuristics
• LMs with simple smoothing can be computed as efficiently as traditional retrieval models

Page 47

The Dual-Role of Smoothing [Zhai & Lafferty 02]

[Figure: retrieval performance vs. the smoothing parameter for keyword queries and for verbose queries (short and long versions of each)]

Why does query type affect smoothing sensitivity?

Page 48

Another Reason for Smoothing

Query = "the algorithms for data mining"

  p_DML(w|d1):  the 0.04   algorithms 0.001   for 0.02   data 0.002   mining 0.003
  p_DML(w|d2):  the 0.02   algorithms 0.001   for 0.01   data 0.003   mining 0.004

Intuitively, d2 should have a higher score ("data" and "mining" are the content words), but p(q|d1) > p(q|d2)…
  p("algorithms"|d1) = p("algorithms"|d2),  p("data"|d1) < p("data"|d2),  p("mining"|d1) < p("mining"|d2)

So we should make p("the") and p("for") less different across documents, and smoothing helps achieve this goal:

  p(w|REF):           the 0.2     algorithms 0.00001   for 0.2     data 0.00001   mining 0.00001
  Smoothed p(w|d1):   the 0.184   algorithms 0.000109  for 0.182   data 0.000209  mining 0.000309
  Smoothed p(w|d2):   the 0.182   algorithms 0.000109  for 0.181   data 0.000309  mining 0.000409

  After smoothing with p(w|d) = 0.1·p_DML(w|d) + 0.9·p(w|REF):  p(q|d1) < p(q|d2)!

Page 49

Two-stage Smoothing [Zhai & Lafferty 02]

  p(w|d) = (1 − λ) · [ c(w,d) + µ·p(w|C) ] / ( |d| + µ )  +  λ · p(w|U)

• Stage 1: explain unseen words — Dirichlet prior (Bayesian) smoothing with the collection LM p(w|C), controlled by µ
• Stage 2: explain noise in the query — a two-component mixture with the user background model p(w|U), controlled by λ (p(w|U) can be approximated by p(w|C))
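A minimal sketch of the two-stage formula, assuming p(w|U) is approximated by the collection LM p(w|C) as the slide notes; it plugs into the same query-likelihood scoring loop as the earlier smoothing sketch.

```python
def two_stage_p(w, doc_counts, doc_len, p_coll, mu=2000, lam=0.1):
    """Two-stage smoothing:
    p(w|d) = (1 - lambda) * (c(w,d) + mu * p(w|C)) / (|d| + mu) + lambda * p(w|U),
    with the user background model p(w|U) approximated by the collection LM p(w|C)."""
    p_c = p_coll.get(w, 1e-10)
    dirichlet = (doc_counts.get(w, 0) + mu * p_c) / (doc_len + mu)
    return (1 - lam) * dirichlet + lam * p_c
```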

Page 50

Estimating µ using leave-one-out [Zhai & Lafferty 02]

[Diagram: each word occurrence wi of a document d is held out in turn, and its probability p(wi | d − wi) is computed from the Dirichlet-smoothed model estimated on the rest of the document]

Leave-one-out log-likelihood of the collection C = {d1, …, dN}:

  l_{-1}(µ|C) = Σ_{i=1..N} Σ_{w∈V} c(w,di) · log [ ( c(w,di) − 1 + µ·p(w|C) ) / ( |di| − 1 + µ ) ]

Maximum likelihood estimator (computed with Newton's method):

  µ̂ = argmax_µ l_{-1}(µ|C)
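A minimal sketch of the leave-one-out objective; for simplicity it picks µ from a small grid rather than using Newton's method as in [Zhai & Lafferty 02], and it assumes p_coll is the collection LM built from the same documents (so every word is covered):

```python
import math
from collections import Counter

def loo_log_likelihood(mu, docs, p_coll):
    """l_{-1}(mu|C) = sum over docs and words of
    c(w,d) * log( (c(w,d) - 1 + mu * p(w|C)) / (|d| - 1 + mu) )."""
    ll = 0.0
    for doc in docs:
        counts, n = Counter(doc), len(doc)
        for w, c in counts.items():
            ll += c * math.log((c - 1 + mu * p_coll[w]) / (n - 1 + mu))
    return ll

def estimate_mu(docs, p_coll, grid=(100, 500, 1000, 2000, 3000, 5000)):
    """Grid-search substitute for the Newton's-method optimization."""
    return max(grid, key=lambda mu: loo_log_likelihood(mu, docs, p_coll))
```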

Page 51

Why would "leave-one-out" work?

  20 words by author 1:  abc abc ab c d d abc cd d d abd ab ab ab ab cd d e cd e
  20 words by author 2:  abc abc ab c d d abe cb e f acf fb ef aff abef cdc db ge f s

Suppose we keep sampling and get 10 more words. Which author is likely to "write" more new words?

Now, suppose we leave "e" out…

  p_{-1}("e"|author1) = [ 1 + µ·p("e"|REF) ] / ( 19 + µ )   (one of author 1's two "e"s remains)  →  µ doesn't have to be big
  p_{-1}("e"|author2) = [ 0 + µ·p("e"|REF) ] / ( 19 + µ )   (author 2's only "e" was held out)    →  µ must be big: more smoothing

The amount of smoothing is closely related to the underlying vocabulary size.

Page 52

Estimating λ using Mixture Model [Zhai & Lafferty 02]

Stage 1: smooth each document LM with the Dirichlet prior, using the µ̂ estimated by leave-one-out:

  p(qj|di) = [ c(qj,di) + µ̂·p(qj|C) ] / ( |di| + µ̂ )

Stage 2: treat the query Q = q1…qm as a sample from a mixture over documents, where each component interpolates the smoothed document LM with the user background model p(·|U):

  p(Q|λ,U) = Σ_{i=1..N} π_i · Π_{j=1..m} [ (1 − λ)·p(qj|di) + λ·p(qj|U) ]

  λ̂ = argmax_λ p(Q|λ,U)

The maximum likelihood estimate of λ is computed with the Expectation-Maximization (EM) algorithm.

Page 53

Automatic 2-stage results ≈ Optimal 1-stage results [Zhai & Lafferty 02]

Average precision (3 databases × 4 query types, 150 topics); * indicates a significant difference:

  Collection  Query  Optimal-JM  Optimal-Dir  Auto-2stage
  AP88-89     SK     20.3%       23.0%        22.2%*
  AP88-89     LK     36.8%       37.6%        37.4%
  AP88-89     SV     18.8%       20.9%        20.4%
  AP88-89     LV     28.8%       29.8%        29.2%
  WSJ87-92    SK     19.4%       22.3%        21.8%*
  WSJ87-92    LK     34.8%       35.3%        35.8%
  WSJ87-92    SV     17.2%       19.6%        19.9%
  WSJ87-92    LV     27.7%       28.2%        28.8%*
  ZIFF1-2     SK     17.9%       21.5%        20.0%
  ZIFF1-2     LK     32.6%       32.6%        32.2%
  ZIFF1-2     SV     15.6%       18.5%        18.1%
  ZIFF1-2     LV     26.7%       27.9%        27.9%*

Completely automatic tuning of parameters IS POSSIBLE!

Page 54

The Notion of Relevance

[Diagram: a taxonomy of retrieval models by how they formalize relevance]

• Relevance ≈ similarity ∆(Rep(q), Rep(d))
  – Different representations & similarity measures: vector space model (Salton et al., 75); prob. distr. model (Wong & Yao, 89)

• Relevance ≈ probability of relevance P(r=1|q,d), r ∈ {0,1}
  – Generative models
    • Doc generation: classical prob. model (Robertson & Sparck Jones, 76)
    • Query generation: basic LM approach (Ponte & Croft, 98) — initially, LMs were applied to IR in this way
  – Regression model (Fox 83)

• Relevance ≈ probabilistic inference P(d→q) or P(q→d)
  – Different inference systems: prob. concept space model (Wong & Yao, 95); inference network model (Turtle & Croft, 91)

Later, LMs came to be used along the other lines as well.

Page 55

Justification of Query Likelihood [Lafferty & Zhai 01a]

• The general probabilistic retrieval model
  – Define P(Q,D|R)
  – Compute P(R|Q,D) using Bayes' rule
  – Rank documents by the odds

    O(R=1|Q,D) = [ P(Q,D|R=1) / P(Q,D|R=0) ] · [ P(R=1) / P(R=0) ]    (the second factor is ignored for ranking)

• Special cases
  – Document "generation": P(Q,D|R) = P(D|Q,R)·P(Q|R)
  – Query "generation": P(Q,D|R) = P(Q|D,R)·P(D|R)

Doc generation leads to the classic Robertson-Sparck Jones model; query generation leads to the query likelihood LM approach.

Page 56

Query Generation [Lafferty & Zhai 01a]

  O(R=1|Q,D) ∝ P(Q,D|R=1) / P(Q,D|R=0)
             = [ P(Q|D,R=1)·P(D|R=1) ] / [ P(Q|D,R=0)·P(D|R=0) ]
             ≈ P(Q|D,R=1) · [ P(D|R=1) / P(D|R=0) ]        (assume P(Q|D,R=0) ≈ P(Q|R=0))

i.e., the query likelihood p(q|θd) times a document prior. Assuming a uniform prior, we have

  O(R=1|Q,D) ∝ P(Q|D,R=1)

Computing P(Q|D,R=1) generally involves two steps:
 (1) estimate a language model based on D
 (2) compute the query likelihood according to the estimated model

P(Q|D) = P(Q|D,R=1): the probability that a user who likes D would pose query Q — a relevance-based interpretation of the so-called "document language model".

Page 57

Part 2: The Basic LM Approach (cont.)

1. Introduction
2. The Basic Language Modeling Approach
   - Query Likelihood Document Ranking
   - Smoothing of Language Models
   - Why does it work?
   - Variants of the basic LM
3. More Advanced Language Models
4. Language Models for Special Retrieval Tasks
5. A General Framework for Applying SLMs to IR
6. Summary

We are here

Page 58

Variants of the Basic LM Approach

• Different smoothing strategies
  – Hidden Markov Models (essentially linear interpolation) [Miller et al. 99]
  – Smoothing with an IDF-like reference model [Hiemstra & Kraaij 99]
  – Performance tends to be similar to the basic LM approach
  – Many other possibilities for smoothing [Chen & Goodman 98]

• Different priors
  – Link information as prior leads to significant improvement of Web entry-page retrieval performance [Kraaij et al. 02]
  – Time as prior [Li & Croft 03]
  – PageRank as prior [Kurland & Lee 05]

• Passage retrieval [Liu & Croft 02]

Page 59

Part 3: More Advanced LMs

1. Introduction

2. The Basic Language Modeling Approach

3. More Advanced Language Models - Improving the basic LM approach

- Feedback and alternative ways of using LMs

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary

We are here

Page 60

Improving the Basic LM Approach

• Capturing limited dependencies
  – Bigrams/trigrams [Song & Croft 99]; grammatical dependency [Nallapati & Allan 02, Srikanth & Srihari 03, Gao et al. 04]
  – Generally insignificant improvement compared with other extensions such as feedback

• Full Bayesian query likelihood [Zaragoza et al. 03]
  – Performance similar to the basic LM approach

• Translation model for p(Q|D,R) [Berger & Lafferty 99, Jin et al. 02, Cao et al. 05]
  – Addresses polysemy and synonyms; improves over the basic LM methods, but computationally expensive

• Cluster-based smoothing/scoring [Liu & Croft 04, Kurland & Lee 04, Tao et al. 06]
  – Improves over the basic LM, but computationally expensive

• Parsimonious LMs [Hiemstra et al. 04]
  – Use a mixture model to "factor out" non-discriminative words

Page 61

Translation Models

• Directly model the "translation" relationship between words in the query and words in a document
• When relevance judgments are available, (q,d) pairs serve as data to train the translation model
• Without relevance judgments, we can use synthetic data [Berger & Lafferty 99], <title, body> pairs [Jin et al. 02], or thesauri [Cao et al. 05]

Basic translation model:

  p(Q|D,R) = Π_{i=1..m} Σ_{wj∈V} p_t(qi|wj) · p(wj|D)

where p_t(qi|wj) is the translation model and p(wj|D) is the regular document LM.

Page 62

Cluster-based Smoothing/Scoring

• Cluster-based smoothing: smooth a document LM with a cluster of similar documents [Liu & Croft 04]; improves over the basic LM, but insignificantly
• Document expansion smoothing: smooth a document LM with the neighboring documents (essentially one cluster per document) [Tao et al. 06]; improves over the basic LM more significantly
• Cluster-based query likelihood: similar to the translation model, but "translate" the whole document to the query through a set of clusters [Kurland & Lee 04]:

  p(Q|D,R) = Σ_{C∈Clusters} p(Q|C) · p(C|D)

  where p(Q|C) is the likelihood of Q given cluster C and p(C|D) is how likely doc D belongs to cluster C. Only effective when interpolated with the basic LM scores.

Page 63

Part 3: More Advanced LMs (cont.)

1. Introduction

2. The Basic Language Modeling Approach

3. More Advanced Language Models - Improving the basic LM approach

- Feedback and Alternative ways of using LMs

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary

We are here

Page 64

Feedback and Doc/Query Generation

  Query likelihood ("language model"):  O(R=1|Q,D) ∝ P(Q|D,R=1)                  ("relevant query" model)
  Classic prob. model:                  O(R=1|Q,D) ∝ P(D|Q,R=1) / P(D|Q,R=0)     (relevant vs. non-relevant doc models)

[Diagram: both models can be estimated from judged (query, doc, relevance) examples, e.g. (q1,d1,1), (q1,d2,1), (q1,d3,1), (q1,d4,0), (q1,d5,0), …, (q3,d1,1), (q4,d1,1), (q5,d1,1), (q6,d2,1), (q6,d3,0)]

Initial retrieval:
  – query as "relevant doc" vs. doc as "relevant query"
  – P(Q|D,R=1) is more accurate

Feedback:
  – P(D|Q,R=1) can be improved for the current query and future docs (doc-based feedback)
  – P(Q|D,R=1) can also be improved, but only for the current doc and future queries (query-based feedback)

Page 65

Difficulty in Feedback with Query Likelihood

• Traditional query expansion [Ponte 98, Miller et al. 99, Ng 99]

– Improvement is reported, but there is a conceptual inconsistency

– What’s an expanded query, a piece of text or a set of terms?

• Avoid expansion
  – Query term reweighting [Hiemstra 01, Hiemstra 02]

– Translation models [Berger & Lafferty 99, Jin et al. 02]

– Only achieving limited feedback

• Doing relevant query expansion instead [Nallapati et al 03]

• The difficulty is due to the lack of a query/relevance model

• The difficulty can be overcome with alternative ways of using LMs for retrieval (e.g., relevance model [Lavrenko & Croft 01] , Query model estimation [Lafferty & Zhai 01b; Zhai & Lafferty 01b])

Page 66

Two Alternative Ways of Using LMs

• Classic probabilistic model: doc generation as opposed to query generation

  O(R=1|Q,D) ∝ P(D|Q,R=1) / P(D|Q,R=0) ≈ P(D|Q,R=1) / P(D)

  – Natural for relevance feedback
  – Challenge: estimating P(D|Q,R=1) without relevance feedback; the relevance model [Lavrenko & Croft 01] provides a good solution

• Probabilistic distance model: similar to the vector-space model, but with LMs as opposed to TF-IDF weight vectors
  – A popular distance function: Kullback-Leibler (KL) divergence, covering query likelihood as a special case

    score(Q,D) = −D(θ_Q || θ_D), essentially Σ_{w∈V} p(w|θ_Q) · log p(w|θ_D)

  – Retrieval is now to estimate the query & doc models, and feedback is treated as query-LM updating [Lafferty & Zhai 01b; Zhai & Lafferty 01b]

Both methods outperform the basic LM significantly.
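A minimal sketch of KL-divergence scoring; since the query-model entropy term is constant per query, the score reduces to the cross-entropy-style sum above (the document model is assumed to be smoothed):

```python
import math

def kl_score(query_model, doc_model):
    """score(Q,D) = sum over w of p(w|theta_Q) * log p(w|theta_D)
    (equals -D(theta_Q || theta_D) up to a document-independent constant).
    doc_model should be smoothed; a tiny floor guards missing entries."""
    return sum(p_q * math.log(doc_model.get(w, 1e-10))
               for w, p_q in query_model.items() if p_q > 0)
```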

Page 67

Relevance Model Estimation [Lavrenko & Croft 01]

• Question: How to estimate P(D|Q,R) (or p(w|Q,R)) without relevant documents?
• Key idea:
  – Treat the query as observations about p(w|Q,R)
  – Approximate the model space with document models

• Two methods for decomposing p(w,Q):

  – Independent sampling (Bayesian model averaging):

    p(w|Q,R) = ∫ p(w|θ_D)·p(θ_D|Q,R) dθ_D ∝ ∫ p(w|θ_D)·p(R|θ_D)·p(Q|θ_D) dθ_D
             ≈ Σ_{D∈C} p(w|θ_D)·p(R|θ_D)·p(Q|θ_D) ∝ Σ_{D∈C} p(w|θ_D) · Π_{j=1..m} p(qj|θ_D)

  – Conditional sampling: p(w,Q) = p(w)·p(Q|w)

    p(w|Q,R=1) ∝ p(w)·p(Q|w) = p(w) · Π_{i=1..m} Σ_{D∈C} p(qi|D)·p(D|w),
    with p(D|w) = p(w|D)·p(D) / p(w)     (original formula in [Lavrenko & Croft 01])
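A minimal sketch of the independent-sampling estimate (the last line of that decomposition), assuming smoothed unigram models for a set of top-ranked documents and a uniform prior over them:

```python
def relevance_model(query_tokens, doc_models):
    """p(w|Q,R) proportional to sum over docs of p(w|theta_D) * prod_j p(q_j|theta_D).
    doc_models: list of smoothed unigram LMs (dicts), e.g. for the top-k retrieved docs."""
    rm = {}
    for dm in doc_models:
        query_lik = 1.0
        for q in query_tokens:
            query_lik *= dm.get(q, 1e-10)   # smoothed models should cover query words
        for w, p in dm.items():
            rm[w] = rm.get(w, 0.0) + p * query_lik
    norm = sum(rm.values())
    return {w: p / norm for w, p in rm.items()}
```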

Page 68

Kernel-based Allocation [Lavrenko 04]

• A general generative model for text (an infinite mixture model):

  p(w1 … wn) = ∫ Π_{i=1..n} p(wi|θ) · p(θ) dθ,   with the kernel-based density  p(θ) = (1/N) Σ_{w⃗∈T} K_{w⃗}(θ)

  where T is the training data and the kernel K_{w⃗}(θ) ≈ similarity(w⃗, θ).

• Choices of the kernel function:
  – Delta kernel:

    p(w1 … wn) = (1/N) Σ_{w⃗∈T} Π_{i=1..n} p(wi|w⃗)

    i.e., the average probability of w1…wn over all training points
  – Dirichlet kernel: allows a training point to "spread" its influence

Page 69

Query Model Estimation [Lafferty & Zhai 01b, Zhai & Lafferty 01b]

• Question: How to estimate a better query model than the ML estimate based on the original query?

• "Massive feedback": improve a query model through co-occurrence patterns learned from
  – A document-term Markov chain that outputs the query [Lafferty & Zhai 01b]
  – Thesauri, corpus [Bai et al. 05, Collins-Thompson & Callan 05]

• Model-based feedback: improve the estimate of the query model by exploiting pseudo-relevance feedback
  – Update the query model by interpolating the original query model with a learned feedback model [Zhai & Lafferty 01b]
  – Estimate a more integrated mixture model using pseudo-feedback documents [Tao & Zhai 06]

Page 70

Feedback as Model Interpolation [Zhai & Lafferty 01b]

[Diagram: the query Q gives a query model θ_Q; a document D gives θ_D; documents are scored by −D(θ_Q || θ_D). The feedback docs F = {d1, d2, …, dn} give a feedback model θ_F (via a generative model or divergence minimization), which is interpolated with the query model:]

  θ_Q' = (1 − α)·θ_Q + α·θ_F

  α = 0: no feedback, θ_Q' = θ_Q
  α = 1: full feedback, θ_Q' = θ_F
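A minimal sketch of the interpolation step; the updated model θ_Q' can then be plugged into the KL-divergence scoring shown earlier.

```python
def interpolate_query_model(theta_q, theta_f, alpha=0.5):
    """theta_Q' = (1 - alpha) * theta_Q + alpha * theta_F (dicts of word probabilities)."""
    vocab = set(theta_q) | set(theta_f)
    return {w: (1 - alpha) * theta_q.get(w, 0.0) + alpha * theta_f.get(w, 0.0)
            for w in vocab}
```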

Page 71

θ_F Estimation Method I: Generative Mixture Model

[Diagram: each word in the feedback docs F = {D1, …, Dn} is generated either from the topic model p(w|θ) (with probability 1−λ) or from the background model p(w|C) (with probability λ)]

  log p(F|θ) = Σ_{D∈F} Σ_{w∈D} c(w,D) · log [ (1 − λ)·p(w|θ) + λ·p(w|C) ]

  θ_F = argmax_θ log p(F|θ)    (maximum likelihood)

λ controls how much background (non-topical) word mass is factored out; the learned topic model is called a "parsimonious language model" in [Hiemstra et al. 04].
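A minimal EM sketch for this mixture model, with λ held fixed and the background model p(w|C) given; word counts are pooled over the feedback documents:

```python
from collections import Counter

def estimate_feedback_model(feedback_docs, p_background, lam=0.5, iters=30):
    """EM for: log p(F|theta) = sum_w c(w,F) * log((1-lam)*p(w|theta) + lam*p(w|C)).
    E-step: prob. that each occurrence of w came from the topic model.
    M-step: re-estimate p(w|theta) from the topic-attributed counts."""
    counts = Counter(w for d in feedback_docs for w in d)
    vocab = list(counts)
    theta = {w: 1.0 / len(vocab) for w in vocab}          # uniform initialization
    for _ in range(iters):
        # E-step
        z = {}
        for w in vocab:
            topic = (1 - lam) * theta[w]
            z[w] = topic / (topic + lam * p_background.get(w, 1e-10))
        # M-step
        weighted = {w: counts[w] * z[w] for w in vocab}
        total = sum(weighted.values())
        theta = {w: weighted[w] / total for w in vocab}
    return theta
```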

Page 72

θ_F Estimation Method II: Empirical Divergence Minimization

[Diagram: θ_F should be close to the feedback document models θ_{d1}, …, θ_{dn} and far from the collection background model θ_C]

Empirical divergence of θ over F = {D1, …, Dn}, with background model C and weight λ:

  D_e(θ, F, C) = (1/|F|) · Σ_{i=1..n} D(θ || θ_{di}) − λ·D(θ || θ_C)

Divergence minimization:

  θ_F = argmin_θ D_e(θ, F, C)

Page 73

Example of Feedback Query Model

TREC topic 412: "airport security" (mixture model approach; feedback model θ_F estimated from the top 10 docs of a Web database)

  λ = 0.9                          λ = 0.7
  W              p(W|θ_F)          W              p(W|θ_F)
  security       0.0558            the            0.0405
  airport        0.0546            security       0.0377
  beverage       0.0488            airport        0.0342
  alcohol        0.0474            beverage       0.0305
  bomb           0.0236            alcohol        0.0304
  terrorist      0.0217            to             0.0268
  author         0.0206            of             0.0241
  license        0.0188            and            0.0214
  bond           0.0186            author         0.0156
  counter-terror 0.0173            bomb           0.0150
  terror         0.0142            terrorist      0.0137
  newsnet        0.0129            in             0.0135
  attack         0.0124            license        0.0127
  operation      0.0121            state          0.0127
  headline       0.0121            by             0.0125

Page 74

Model-based Feedback Improves over the Simple LM [Zhai & Lafferty 01b]

  Collection  Metric   Simple LM   Mixture     Improv.   Div.Min.    Improv.
  AP88-89     AvgPr    0.21        0.296       +41%      0.295       +40%
  AP88-89     InitPr   0.617       0.591       -4%       0.617       +0%
  AP88-89     Recall   3067/4805   3888/4805   +27%      3665/4805   +19%
  TREC8       AvgPr    0.256       0.282       +10%      0.269       +5%
  TREC8       InitPr   0.729       0.707       -3%       0.705       -3%
  TREC8       Recall   2853/4728   3160/4728   +11%      3129/4728   +10%
  WEB         AvgPr    0.281       0.306       +9%       0.312       +11%
  WEB         InitPr   0.742       0.732       -1%       0.728       -2%
  WEB         Recall   1755/2279   1758/2279   +0%       1798/2279   +2%

Translation models, relevance models, and feedback-based query models have all been shown to improve performance significantly over the simple LMs. (Parameter tuning is necessary in many cases, but see [Tao & Zhai 06] for "parameter-free" pseudo feedback.)

Page 75

Part 4: LMs for Special Retrieval Tasks

1. Introduction
2. The Basic Language Modeling Approach
3. More Advanced Language Models
4. Language Models for Special Retrieval Tasks
   - Cross-lingual IR
   - Distributed IR
   - Structured document retrieval
   - Personalized/context-sensitive search
   - Modeling redundancy
   - Predicting query difficulty
   - Subtopic retrieval
5. A General Framework for Applying SLMs to IR
6. Summary

We are here

Page 76

Cross-Lingual IR

• Use a query in language A (e.g., English) to retrieve documents in language B (e.g., Chinese)

• Method 1: cross-lingual p(Q|D,R) [Xu et al. 01]

  p(Q|D,R) = Π_{i=1..m} [ α·p(qi|REF_English) + (1 − α)·Σ_{c∈V_Chinese} p(c|D)·p_trans(qi|c) ]

  where p_trans(qi|c) translates a Chinese word c into the English query word qi, estimated with a bilingual lexicon or parallel corpora.

• Method 2: cross-lingual p(D|Q,R) [Lavrenko et al. 02] — estimate a Chinese relevance model from the English query:

  p(c|Q,R) = p(c, q1…qm) / p(q1…qm)

  Using paired models estimated from parallel corpora:
    p(c, q1…qm) = Σ_{(M_E,M_C)} p(M_E,M_C) · p(c|M_C) · Π_{i=1..m} p(qi|M_E)

  Using a translation model:
    p(c, q1…qm) = Σ_{M_C} p(M_C) · p(c|M_C) · Π_{i=1..m} p(qi|M_C),   where p(qi|M_C) = Σ_{c'∈V_Chinese} p_trans(qi|c')·p(c'|M_C)

Page 77

Distributed IR

• Retrieve documents from multiple collections
• The task is generally decomposed into two subtasks: collection selection and result fusion

• Using LMs for collection selection [Xu & Croft 99, Si et al. 02]
  – Treat collection selection as "retrieving collections" as opposed to "documents"
  – Estimate each collection model by maximum likelihood [Si et al. 02] or clustering [Xu & Croft 99]

• Using LMs for result fusion [Si et al. 02]
  – Assume query likelihood scoring for all collections, but on each collection a distinct reference LM is used for smoothing
  – Adjust the biased score p(Q|D,Collection) to recover the fair score p(Q|D)

Page 78

Structured Document Retrieval [Ogilvie & Callan 03]

[Diagram: a document D with parts D1 (title), D2 (abstract), D3 (body part 1), …, Dk]

Select a part Dj and generate each query word from that part's LM:

  Q = q1 q2 … qm
  p(Q|D,R=1) = Π_{i=1..m} p(qi|D,R=1) = Π_{i=1..m} Σ_{j=1..k} s(Dj|D,R=1) · p(qi|Dj,R=1)

  where the "part selection" probability s(Dj|D,R=1) serves as the weight for part Dj and can be trained using EM.

• Want to combine different parts of a document with appropriate weights
• Anchor text can be treated as a "part" of a document
• Applicable to XML retrieval
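A minimal sketch of this part-mixture query likelihood, assuming each part already has a smoothed LM and the part-selection weights are given (in practice trained with EM):

```python
import math

def structured_query_likelihood(query_tokens, part_models, part_weights):
    """log p(Q|D) = sum over query words of log( sum_j s(D_j|D) * p(q|D_j) )."""
    score = 0.0
    for q in query_tokens:
        score += math.log(sum(w * pm.get(q, 1e-10)
                              for pm, w in zip(part_models, part_weights)))
    return score
```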

Page 79

Personalized/Context-Sensitive Search[Shen et al. 05, Tan et al. 06]

• User information and search context can be used to estimate a better query model

Context-independent query LM:

  \hat{\theta}_Q = \arg\max_{\theta} p(\theta \mid Query, Collection)

Context-sensitive query LM:

  \hat{\theta}_Q = \arg\max_{\theta} p(\theta \mid Query, User, SearchContext, Collection)

Refinement of this model leads to specific retrieval formulas. Simple models often end up interpolating many unigram language models based on different sources of evidence, e.g., short-term search history [Shen et al. 05] or long-term search history [Tan et al. 06] (see the sketch below).
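A minimal sketch of that interpolation, assuming unigram LMs (word-to-probability dicts) have already been estimated from the query and from history evidence; the fixed weights stand in for whatever estimation scheme [Shen et al. 05] or [Tan et al. 06] would actually use.

```python
def context_sensitive_query_lm(query_lm, history_lms, weights):
    """Interpolate the current query LM with LMs estimated from search-history evidence.

    weights[0] is the weight on the query LM; the remaining weights (all summing to 1
    together with weights[0]) go to the history LMs, in order.
    """
    vocab = set(query_lm) | {w for lm in history_lms for w in lm}
    mixed = {}
    for w in vocab:
        mixed[w] = weights[0] * query_lm.get(w, 0.0) + sum(
            wt * lm.get(w, 0.0) for wt, lm in zip(weights[1:], history_lms))
    return mixed
```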

Page 80

Modeling Redundancy

• Given two documents D1 and D2, decide how redundant D1 (or D2) is w.r.t. D2 (or D1)

• Redundancy of D1 ≈ “to what extent can D1 be explained by a model estimated based on D2”

• Use a unigram mixture model [Zhai 02]

• See [Zhang et al. 02] for a 3-component redundancy model

• Along a similar line, we could measure document similarity in an asymmetric way [Kurland & Lee 05]

  \log p(D_1 \mid \lambda, \theta_{D_2}) = \sum_{w \in V} c(w, D_1)\, \log \big[ \lambda\, p(w \mid \theta_{D_2}) + (1-\lambda)\, p(w \mid REF) \big]

  \lambda^* = \arg\max_{\lambda} \log p(D_1 \mid \lambda, \theta_{D_2})

  (p(w|θ_{D_2}): LM for D_2; p(w|REF): reference LM; λ* is the maximum likelihood estimator, computed with the EM algorithm, and serves as the measure of redundancy)
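A minimal EM sketch for the mixing weight λ in the two-component mixture above; the smoothing constants and iteration count are illustrative choices, not those of [Zhai 02].

```python
from collections import Counter

def redundancy(d1_words, d2_lm, ref_lm, iters=50):
    """EM estimate of lambda in p(w) = lambda*p(w|theta_D2) + (1-lambda)*p(w|REF);
    the converged lambda is the redundancy of D1 w.r.t. D2."""
    counts = Counter(d1_words)
    lam = 0.5
    for _ in range(iters):
        # E-step: posterior probability that each occurrence of w came from the D2 model
        post = {w: lam * d2_lm.get(w, 1e-9) /
                   (lam * d2_lm.get(w, 1e-9) + (1 - lam) * ref_lm.get(w, 1e-9))
                for w in counts}
        # M-step: re-estimate lambda from the expected counts
        lam = sum(counts[w] * post[w] for w in counts) / sum(counts.values())
    return lam
```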

Page 81

Predicting Query Difficulty [Cronen-Townsend et al. 02]

• Observations:
  – Discriminative queries tend to be easier
  – Comparison of the query model and the collection model can indicate how discriminative a query is
• Method:
  – Define "query clarity" as the KL-divergence between an estimated query model (or relevance model) and the collection LM
  – An enriched query LM can be estimated by exploiting pseudo feedback (e.g., a relevance model)
• Correlation between the clarity scores and retrieval performance is found

  clarity(Q) = \sum_{w} p(w \mid \theta_Q)\, \log \frac{p(w \mid \theta_Q)}{p(w \mid Collection)}
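A minimal sketch of the clarity score as the KL-divergence above, assuming the query LM and collection LM are plain word-to-probability dicts.

```python
import math

def clarity(query_lm, collection_lm, eps=1e-12):
    """Query clarity: KL divergence between the (possibly feedback-enriched) query LM
    and the collection LM; larger values suggest a more discriminative, easier query."""
    return sum(p * math.log(p / max(collection_lm.get(w, eps), eps))
               for w, p in query_lm.items() if p > 0)
```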

Page 82

Subtopic Retrieval [Zhai 02, Zhai et al 03]

• Subtopic retrieval: aim at retrieving as many distinct subtopics of the query topic as possible
  – E.g., retrieve "different applications of robotics"
  – Need to go beyond independent relevance
• Two methods explored in [Zhai 02]
  – Maximal Marginal Relevance:
    • Maximizing subtopic coverage indirectly through redundancy elimination
    • LMs can be used to model redundancy (see the sketch below)
  – Maximal Diverse Relevance:
    • Maximizing subtopic coverage directly through subtopic modeling
    • Define a retrieval function based on a subtopic representation of query and documents
    • Mixture LMs can be used to model subtopics (essentially clustering)
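A generic MMR-style re-ranking sketch (not the exact formulation of [Zhai 02]); the relevance scores and the redundancy function, e.g., the λ from the Modeling Redundancy slide, are assumed to be supplied.

```python
def mmr_rerank(candidates, relevance, redundancy, beta=0.5, k=10):
    """Maximal Marginal Relevance re-ranking: trade off relevance against redundancy
    with respect to the documents already selected.

    candidates : list of doc ids
    relevance  : dict doc -> relevance score (e.g., query likelihood)
    redundancy : function (doc, selected_doc) -> redundancy score
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr(d):
            red = max((redundancy(d, s) for s in selected), default=0.0)
            return beta * relevance[d] - (1 - beta) * red
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected
```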

Page 83

Unigram Mixture Models

• Each subtopic is modeled with one unigram LM
• A document is treated as observations from a mixture model involving many subtopic LMs
• Two different sampling strategies to generate a document (contrasted in the sketch below):
  – Strategy 1 (Document Clustering): choose a subtopic model and generate all the words in a document using the same model
  – Strategy 2 (Aspect Models [Hofmann 99; Blei et al 02]): use a (potentially) different subtopic model when generating each word in a document, so two words in a document may be generated using different LMs
• For subtopic retrieval, we assume a document may have multiple subtopics, so Strategy 2 is more appropriate
• Many other applications…
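The two sampling strategies can be contrasted with a small generative sketch; the topic LMs, mixing weights, and document length are toy placeholders.

```python
import random

def sample_doc_strategy1(topic_lms, topic_prior, length):
    """Document-clustering view: pick one subtopic LM and generate the whole document from it."""
    lm = random.choices(topic_lms, weights=topic_prior, k=1)[0]
    words, probs = zip(*lm.items())
    return [random.choices(words, weights=probs, k=1)[0] for _ in range(length)]

def sample_doc_strategy2(topic_lms, topic_mix, length):
    """Aspect-model view: a (potentially) different subtopic LM for every word position."""
    doc = []
    for _ in range(length):
        lm = random.choices(topic_lms, weights=topic_mix, k=1)[0]
        words, probs = zip(*lm.items())
        doc.append(random.choices(words, weights=probs, k=1)[0])
    return doc
```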

Page 84

Aspect Models

[Figure: k subtopic LMs P(w|τ_1), P(w|τ_2), …, P(w|τ_k) are mixed to generate each word w of a document D = d_1 … d_n]

  p(w \mid \tau_1, \ldots, \tau_k, \lambda_1, \ldots, \lambda_k) = \sum_{i=1}^{k} \lambda_i \, p(w \mid \tau_i)

Probabilistic LSI [Hofmann 99]: each document D has its own set of λ's (flexible aspect distribution, but needs regularization)

  p(D \mid \tau_1, \ldots, \tau_k, \lambda_1^D, \ldots, \lambda_k^D) = \prod_{i=1}^{n} \sum_{a=1}^{k} \lambda_a^D \, p(d_i \mid \tau_a)

Latent Dirichlet Allocation [Blei et al 02, Minka & Lafferty 03]: the λ's are drawn from a common Dirichlet distribution, so λ is now regularized

  p(D \mid \tau_1, \ldots, \tau_k, \alpha_1, \ldots, \alpha_k) = \int_{\Lambda} \Big[ \prod_{i=1}^{n} \sum_{a=1}^{k} p(d_i \mid \tau_a)\, p(a \mid \lambda) \Big] Dir(\lambda \mid \alpha_1, \ldots, \alpha_k)\, d\lambda
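A minimal sketch of the PLSI-style document likelihood above, with document-specific mixing weights passed in rather than learned; fitting the τ's and λ's (e.g., by EM or variational inference) is not shown.

```python
import math

def aspect_log_likelihood(doc_words, topic_lms, doc_mix, eps=1e-12):
    """log p(D | tau_1..k, lambda^D) under the aspect model: each word is drawn
    from a mixture of subtopic LMs with document-specific weights."""
    ll = 0.0
    for w in doc_words:
        p_w = sum(lam * lm.get(w, 0.0) for lam, lm in zip(doc_mix, topic_lms))
        ll += math.log(max(p_w, eps))
    return ll
```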

Page 85

Part 5: A General Framework for Applying SLMs to IR

1. Introduction

2. The Basic Language Modeling Approach

3. More Advanced Language Models

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR
- Risk minimization framework
- Special cases

6. Summary

We are here

Page 86

Risk Minimization: Motivation

• Long-standing IR Challenges

– Improve IR theory
  • Develop theoretically sound and empirically effective models
  • Go beyond the limited traditional notion of relevance (independent, topical relevance)
– Improve IR practice
  • Optimize retrieval parameters automatically

• SLMs are very promising tools …

– How can we systematically exploit SLMs in IR?

– Can SLMs offer anything hard/impossible to achieve in traditional IR?

Page 87

Idea 1: Retrieval as Decision-Making
(A more general notion of relevance)

Given a query,
- Which documents should be selected? (D)
- How should these docs be presented to the user? (π)
Choose: (D, π)

[Figure: the same query can lead to different presentation choices, e.g., a ranked list (1 2 3 4), an unordered subset, or a clustering]

Page 88

Idea 2: Systematic Language Modeling

[Figure: Documents are mapped to document language models (DOC MODELING), the Query to a query language model (QUERY MODELING), and the User to a loss function (USER MODELING); together these determine the retrieval decision]

Page 89

Generative Model of Document & Query [Lafferty & Zhai 01b]

[Figure: generative model of document and query]
- User U generates a query model θ_Q ~ p(θ_Q | U), which in turn generates the observed query q ~ p(q | θ_Q, U)
- Source S generates a document model θ_D ~ p(θ_D | S), which in turn generates the observed document d ~ p(d | θ_D, S)
- Relevance R ~ p(R | θ_Q, θ_D) is only partially observed; the models θ_Q and θ_D are inferred

Page 90

Applying Bayesian Decision Theory [Lafferty & Zhai 01b, Zhai 02, Zhai & Lafferty 06]

[Figure: the observed query q (from user U) and document set C (from source S) induce hidden models θ_q, θ_1, …, θ_N; each candidate choice (D_1, π_1), (D_2, π_2), …, (D_n, π_n) incurs a loss L]

RISK MINIMIZATION: choose the (D, π) with the smallest Bayes risk

  (D^*, \pi^*) = \arg\min_{(D, \pi)} \int_{\Theta} L(D, \pi, \theta)\, p(\theta \mid q, U, C, S)\, d\theta

  (θ: hidden; q, U, C, S: observed; L: loss; the integral is the Bayes risk for choice (D, π))

Page 91

Special Cases

• Set-based models (choose D) → Boolean model
• Ranking models (choose π)
  – Independent loss
    • Relevance-based loss → Probabilistic relevance model, Generative Relevance Theory, Two-stage LM
    • Distance-based loss → Vector-space Model, KL-divergence model
  – Dependent loss
    • MMR loss → Subtopic retrieval model
    • MDR loss → Subtopic retrieval model

Page 92

Optimal Ranking for Independent Loss

Decision space = {rankings}

  \pi^* = \arg\min_{\pi} \int_{\Theta} L(\pi, \theta_1, \ldots, \theta_N)\, p(\theta_1, \ldots, \theta_N \mid q, U, C, S)\, d\theta

Sequential browsing with an independent (additive) loss,

  L(\pi, \theta_1, \ldots, \theta_N) = \sum_{j=1}^{N} s_j\, l(\theta_{\pi_j}) \qquad (s_j: \text{position-dependent weight}),

makes the Bayes risk decompose into independent per-document risks (independent risk = independent scoring):

  \pi^* = \arg\min_{\pi} \sum_{j=1}^{N} s_j \int_{\Theta} l(\theta_{\pi_j})\, p(\theta_{\pi_j} \mid q, U, C, S)\, d\theta_{\pi_j}

  r(d_k \mid q, U, C, S) = \int_{\Theta} l(\theta_k)\, p(\theta_k \mid q, U, C, S)\, d\theta_k

Ranking based on r(d_k | q, U, C, S): the "risk ranking principle" [Zhai 02]
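A minimal sketch of the risk ranking principle, approximating the integral with a sum over a discrete set of candidate models; the loss and posterior structures are illustrative placeholders, not part of [Zhai 02].

```python
def rank_by_expected_risk(doc_ids, loss, posterior):
    """Approximate r(d_k|q,U,C,S) = sum_theta l(theta) p(theta|q,U,C,S) over a
    discrete set of candidate models, then rank documents by ascending risk.

    loss      : dict doc_id -> {theta_label: loss value l(theta)}
    posterior : dict doc_id -> {theta_label: posterior probability}
    """
    def risk(d):
        return sum(posterior[d][t] * loss[d][t] for t in posterior[d])
    return sorted(doc_ids, key=risk)
```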

Page 93

Automatic Parameter Tuning

• Retrieval parameters are needed to

– model different user preferences

– customize a retrieval model to specific queries and documents

• Retrieval parameters in traditional models
  – EXTERNAL to the model, hard to interpret

– Parameters are introduced heuristically to implement “intuition”

– No principles to quantify them, must set empirically through many experiments

– Still no guarantee for new queries/documents

• Language models make it possible to estimate parameters…

Page 94

Parameter Setting in Risk Minimization

[Figure: the query is turned into a query language model and the documents into document language models, which feed the loss function together with the user; the query model parameters and doc model parameters are estimated, while the user model parameters are set]

Page 95

Generative Relevance Hypothesis [Lavrenko 04]

• Generative Relevance Hypothesis:
  – For a given information need, queries expressing that need and documents relevant to that need can be viewed as independent random samples from the same underlying generative model
• A special case of risk minimization when document models and query models are in the same space
• Implications for retrieval models: "the same underlying generative model" makes it possible to
  – Match queries and documents even if they are in different languages or media
  – Estimate/improve a relevant document model based on example queries or vice versa

Page 96

Risk Minimization: Summary

• Risk minimization is a general probabilistic retrieval framework
  – Retrieval as a decision problem (= risk minimization)
  – Separate/flexible language models for queries and docs
• Advantages
  – A unified framework for existing models

– Automatic parameter tuning due to LMs

– Allows for modeling complex retrieval tasks

• Lots of potential for exploring LMs…

• For more information, see [Zhai 02]

Page 97

Part 6: Summary

1. Introduction

2. The Basic Language Modeling Approach

3. More Advanced Language Models

4. Language Models for Special Retrieval Tasks

5. A General Framework for Applying SLMs to IR

6. Summary
– SLMs vs. traditional methods: Pros & Cons

– What we have achieved so far

– Challenges and future directions

We are here

Page 98

SLMs vs. Traditional IR

• Pros:

– Statistical foundations (better parameter setting)

– More principled way of handling term weighting

– More powerful for modeling subtopics, passages,..

– Leverage LMs developed in related areas

– Empirically as effective as well-tuned traditional models with potential for automatic parameter tuning

• Cons:
  – Lack of discrimination (a common problem with generative models)

– Less robust in some cases (e.g., when queries are semi-structured)

– Computationally complex

– Empirically, performance appears to be inferior to well-tuned full-fledged traditional methods (at least, no evidence for beating them)

Page 99

What We Have Achieved So Far

• Framework and justification for using LMs for IR

• Several effective models are developed
  – Basic LM with Dirichlet prior smoothing is a reasonable baseline
  – Basic LM with informative priors often improves performance
  – Translation model handles polysemy & synonyms
  – Relevance model incorporates LMs into the classic probabilistic IR model
  – KL-divergence model ties feedback with query model estimation
  – Mixture models can model redundancy and subtopics
• Completely automatic tuning of parameters is possible
• LMs can be applied to virtually any retrieval task with great potential for modeling complex IR problems

Page 100

Challenges and Future Directions

• Challenge 1: Establish a robust and effective LM that

– Optimizes retrieval parameters automatically

– Performs as well as or better than well-tuned traditional retrieval methods with pseudo feedback

– Is as efficient as traditional retrieval methods

• Challenge 2: Demonstrate consistent and substantial improvement by going beyond unigram LMs
  – Model limited dependency between terms

– Derive more principled weighting methods for phrases

Can LMs consistently (convincingly) outperform traditional methods without sacrificing efficiency?

Can we do much better by going beyond unigram LMs?

Page 101

Challenges and Future Directions (cont.)

• Challenge 3: Develop LMs that can support "life-time learning"
  – Develop LMs that can improve accuracy for a current query through learning from past relevance judgments

– Support collaborative information retrieval

• Challenge 4: Develop LMs that can model document structures and subtopics
  – Recognize query-specific boundaries of relevant passages

– Passage-based/subtopic-based feedback

– Combine different structural components of a document

How can we learn effectively from past relevance judgments?

How can we break the document unit in a principled way?

Page 102

Challenges and Future Directions (cont.)

• Challenge 5: Develop LMs to support personalized search
  – Infer and track a user's interests with LMs

– Incorporate user’s preferences and search context in retrieval

– Customize/organize search results according to user’s interests

• Challenge 6: Generalize LMs to handle relational data
  – Develop LMs for semi-structured data (e.g., XML)

– Develop LMs to handle structured queries

– Develop LMs for keyword search in relational databases

How can we exploit user information and search context to improve search?

What role can LMs play when combining text with relational data?

Page 103

Challenges and Future Directions (cont.)

• Challenge 7: Develop LMs for hypertext retrieval
  – Combine LMs with link information

– Modeling and exploiting anchor text

– Develop a unified LM for hypertext search

• Challenge 8: Develop LMs for retrieval with complex information needs, e.g.,
  – Subtopic retrieval

– Readability constrained retrieval

– Entity retrieval (e.g. expert search)

How can we exploit LMs to develop models for complex retrieval tasks?

How can we develop an effective unified retrieval model for Web search?

Page 104

References

[Agichtein & Cucerzan 05] E. Agichtein and S. Cucerzan, Predicting accuracy of extracting information from unstructured text collections, Proceedings of ACM CIKM 2005, pages 413-420.
[Baeza-Yates & Ribeiro-Neto 99] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, Addison-Wesley, 1999.
[Bai et al. 05] Jing Bai, Dawei Song, Peter Bruza, Jian-Yun Nie, Guihong Cao, Query expansion using term relationships in language models for information retrieval, Proceedings of ACM CIKM 2005, pages 688-695.
[Berger & Lafferty 99] A. Berger and J. Lafferty, Information retrieval as statistical translation, Proceedings of ACM SIGIR 1999, pages 222-229.
[Berger 01] A. Berger, Statistical machine learning for information retrieval, Ph.D. dissertation, Carnegie Mellon University, 2001.
[Blei et al. 02] D. Blei, A. Ng, and M. Jordan, Latent Dirichlet allocation, in T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, MIT Press, Cambridge, MA, 2002.
[Cao et al. 05] Guihong Cao, Jian-Yun Nie, Jing Bai, Integrating word relationships into language models, Proceedings of ACM SIGIR 2005, pages 298-305.
[Carbonell & Goldstein 98] J. Carbonell and J. Goldstein, The use of MMR, diversity-based reranking for reordering documents and producing summaries, Proceedings of ACM SIGIR 1998, pages 335-336.
[Chen & Goodman 98] S. F. Chen and J. T. Goodman, An empirical study of smoothing techniques for language modeling, Technical Report TR-10-98, Harvard University.
[Collins-Thompson & Callan 05] K. Collins-Thompson and J. Callan, Query expansion using random walk models, Proceedings of ACM CIKM 2005, pages 704-711.
[Cronen-Townsend et al. 02] S. Cronen-Townsend, Y. Zhou, and W. B. Croft, Predicting query performance, Proceedings of ACM SIGIR 2002.
[Croft & Lafferty 03] W. B. Croft and J. Lafferty (eds), Language Modeling and Information Retrieval, Kluwer Academic Publishers, 2003.
[Fang et al. 04] H. Fang, T. Tao and C. Zhai, A formal study of information retrieval heuristics, Proceedings of ACM SIGIR 2004, pages 49-56.
[Fox 83] E. Fox, Extending the Boolean and Vector Space Models of Information Retrieval with P-Norm Queries and Multiple Concept Types, PhD thesis, Cornell University, 1983.

Page 105

References (cont.)

[Fuhr 01] N. Fuhr, Language models and uncertain inference in information retrieval, Proceedings of the Language Modeling and IR workshop, pages 6-11.
[Gao et al. 04] J. Gao, J. Nie, G. Wu, and G. Cao, Dependence language model for information retrieval, Proceedings of ACM SIGIR 2004.
[Good 53] I. J. Good, The population frequencies of species and the estimation of population parameters, Biometrika, 40(3 and 4):237-264, 1953.
[Greiff & Morgan 03] W. Greiff and W. Morgan, Contributions of Language Modeling to the Theory and Practice of IR, in W. B. Croft and J. Lafferty (eds), Language Modeling for Information Retrieval, Kluwer Academic Publishers, 2003.
[Grossman & Frieder 04] D. Grossman and O. Frieder, Information Retrieval: Algorithms and Heuristics, 2nd Ed, Springer, 2004.
[He & Ounis 05] Ben He and Iadh Ounis, A study of the Dirichlet priors for term frequency normalisation, Proceedings of ACM SIGIR 2005, pages 465-471.
[Hiemstra & Kraaij 99] D. Hiemstra and W. Kraaij, Twenty-One at TREC-7: Ad-hoc and Cross-language track, Proceedings of the Seventh Text REtrieval Conference (TREC-7), 1999.
[Hiemstra 01] D. Hiemstra, Using Language Models for Information Retrieval, PhD dissertation, University of Twente, Enschede, The Netherlands, January 2001.
[Hiemstra 02] D. Hiemstra, Term-specific smoothing for the language modeling approach to information retrieval: the importance of a query term, Proceedings of ACM SIGIR 2002, pages 35-41.
[Hiemstra et al. 04] D. Hiemstra, S. Robertson, and H. Zaragoza, Parsimonious language models for information retrieval, Proceedings of ACM SIGIR 2004.
[Hofmann 99] T. Hofmann, Probabilistic latent semantic indexing, Proceedings of ACM SIGIR 1999, pages 50-57.
[Jelinek 98] F. Jelinek, Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA, 1998.
[Jelinek & Mercer 80] F. Jelinek and R. L. Mercer, Interpolated estimation of Markov source parameters from sparse data, in E. S. Gelsema and L. N. Kanal, editors, Pattern Recognition in Practice, North-Holland, Amsterdam, 1980.
[Jeon et al. 03] J. Jeon, V. Lavrenko and R. Manmatha, Automatic Image Annotation and Retrieval using Cross-media Relevance Models, Proceedings of ACM SIGIR 2003.

Page 106

References (cont.)

[Jin et al. 02] R. Jin, A. Hauptmann, and C. Zhai, Title language models for information retrieval, Proceedings of ACM SIGIR 2002.
[Kalt 96] T. Kalt, A new probabilistic model of text classification and retrieval, University of Massachusetts Technical Report TR98-18, 1996.
[Katz 87] S. M. Katz, Estimation of probabilities from sparse data for the language model component of a speech recognizer, IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35:400-401.
[Kraaij et al. 02] W. Kraaij, T. Westerveld, D. Hiemstra, The Importance of Prior Probabilities for Entry Page Search, Proceedings of ACM SIGIR 2002, pages 27-34.
[Kraaij 04] W. Kraaij, Variations on Language Modeling for Information Retrieval, Ph.D. thesis, University of Twente, 2004.
[Kurland & Lee 04] O. Kurland and L. Lee, Corpus structure, language models, and ad hoc information retrieval, Proceedings of ACM SIGIR 2004.
[Kurland et al. 05] Oren Kurland, Lillian Lee, Carmel Domshlak, Better than the real thing?: iterative pseudo-query processing using cluster-based language models, Proceedings of ACM SIGIR 2005, pages 19-26.
[Kurland & Lee 05] Oren Kurland and Lillian Lee, PageRank without hyperlinks: structural re-ranking using links induced by language models, Proceedings of ACM SIGIR 2005, pages 306-313.
[Lafferty & Zhai 01a] J. Lafferty and C. Zhai, Probabilistic IR models based on query and document generation, Proceedings of the Language Modeling and IR workshop, pages 1-5.
[Lafferty & Zhai 01b] J. Lafferty and C. Zhai, Document language models, query models, and risk minimization for information retrieval, Proceedings of ACM SIGIR 2001, pages 111-119.
[Lavrenko & Croft 01] V. Lavrenko and W. B. Croft, Relevance-based language models, Proceedings of ACM SIGIR 2001, pages 120-127.
[Lavrenko et al. 02] V. Lavrenko, M. Choquette, and W. Croft, Cross-lingual relevance models, Proceedings of ACM SIGIR 2002, pages 175-182.
[Lavrenko 04] V. Lavrenko, A generative theory of relevance, Ph.D. thesis, University of Massachusetts, 2004.
[Li & Croft 03] X. Li and W. B. Croft, Time-Based Language Models, Proceedings of ACM CIKM 2003.
[Liu & Croft 02] X. Liu and W. B. Croft, Passage retrieval based on language models, Proceedings of ACM CIKM 2002, pages 15-19.

Page 107

References (cont.)

[Liu & Croft 04] X. Liu and W. B. Croft, Cluster-based retrieval using language models, Proceedings of ACM SIGIR 2004.
[MacKay & Peto 95] D. MacKay and L. Peto, A hierarchical Dirichlet language model, Natural Language Engineering, 1(3):289-307, 1995.
[Maron & Kuhns 60] M. E. Maron and J. L. Kuhns, On relevance, probabilistic indexing and information retrieval, Journal of the ACM, 7:216-244.
[McCallum & Nigam 98] A. McCallum and K. Nigam, A comparison of event models for Naïve Bayes text classification, AAAI-1998 Learning for Text Categorization Workshop, pages 41-48, 1998.
[Miller et al. 99] D. R. H. Miller, T. Leek, and R. M. Schwartz, A hidden Markov model information retrieval system, Proceedings of ACM SIGIR 1999, pages 214-221.
[Minka & Lafferty 03] T. Minka and J. Lafferty, Expectation-propagation for the generative aspect model, Proceedings of UAI 2002, pages 352-359.
[Nallanati & Allan 02] Ramesh Nallapati and James Allan, Capturing term dependencies using a language model based on sentence trees, Proceedings of ACM CIKM 2002, pages 383-390.
[Nallanati et al 03] R. Nallapati, W. B. Croft, and J. Allan, Relevant query feedback in statistical language modeling, Proceedings of ACM CIKM 2003.
[Ney et al. 94] H. Ney, U. Essen, and R. Kneser, On structuring probabilistic dependencies in stochastic language modeling, Computer Speech and Language, 8(1):1-28.
[Ng 00] K. Ng, A maximum likelihood ratio information retrieval model, in E. Voorhees and D. Harman, editors, Proceedings of the Eighth Text REtrieval Conference (TREC-8), pages 483-492, 2000.
[Ogilvie & Callan 03] P. Ogilvie and J. Callan, Combining Document Representations for Known Item Search, Proceedings of ACM SIGIR 2003, pages 143-150.
[Ponte & Croft 98] J. M. Ponte and W. B. Croft, A language modeling approach to information retrieval, Proceedings of ACM SIGIR 1998, pages 275-281.
[Ponte 98] J. M. Ponte, A language modeling approach to information retrieval, PhD dissertation, University of Massachusetts, Amherst, MA, September 1998.

Page 108

References (cont.)

[Ponte 01] J. Ponte, Is information retrieval anything more than smoothing?, Proceedings of the Workshop on Language Modeling and Information Retrieval, pages 37-41, 2001.
[Robertson & Sparck Jones 76] S. Robertson and K. Sparck Jones, Relevance Weighting of Search Terms, JASIS, 27:129-146, 1976.
[Robertson 77] S. E. Robertson, The probability ranking principle in IR, Journal of Documentation, 33:294-304, 1977.
[Robertson & Walker 94] S. E. Robertson and S. Walker, Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval, Proceedings of ACM SIGIR 1994, pages 232-241.
[Rosenfeld 00] R. Rosenfeld, Two decades of statistical language modeling: where do we go from here?, Proceedings of the IEEE, volume 88.
[Salton et al. 75] G. Salton, A. Wong and C. S. Yang, A vector space model for automatic indexing, Communications of the ACM, 18(11):613-620.
[Salton & Buckley 88] G. Salton and C. Buckley, Term weighting approaches in automatic text retrieval, Information Processing and Management, 24(5):513-523, 1988.
[Shannon 48] C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal, 27:379-423 and 623-656, 1948.
[Shen et al. 05] X. Shen, B. Tan, and C. Zhai, Context-sensitive information retrieval with implicit feedback, Proceedings of ACM SIGIR 2005.
[Si et al. 02] L. Si, R. Jin, J. Callan and P. Ogilvie, A Language Model Framework for Resource Selection and Results Merging, Proceedings of ACM CIKM 2002.
[Singhal et al. 96] A. Singhal, C. Buckley, and M. Mitra, Pivoted document length normalization, Proceedings of ACM SIGIR 1996.
[Singhal 01] A. Singhal, Modern Information Retrieval: A Brief Overview, IEEE Data Engineering Bulletin, 24(4):35-43, 2001.
[Song & Croft 99] F. Song and W. B. Croft, A general language model for information retrieval, Proceedings of ACM CIKM 1999.
[Sparck Jones 72] K. Sparck Jones, A statistical interpretation of term specificity and its application in retrieval, Journal of Documentation, 28:11-21, 1972 (reprinted in 60:493-502, 2004).

Page 109

References (cont.)

[Sparck Jones et al. 00] K. Sparck Jones, S. Walker, and S. E. Robertson, A probabilistic model of information retrieval: development and comparative experiments, parts 1 and 2, Information Processing and Management, 36(6):779-808 and 809-840.
[Sparck Jones et al. 03] K. Sparck Jones, S. Robertson, D. Hiemstra, H. Zaragoza, Language Modeling and Relevance, in W. B. Croft and J. Lafferty (eds), Language Modeling for Information Retrieval, Kluwer Academic Publishers, 2003.
[Srikanth & Srihari 03] M. Srikanth, R. K. Srihari, Exploiting Syntactic Structure of Queries in a Language Modeling Approach to IR, Proceedings of ACM CIKM 2003.
[Srikanth 04] M. Srikanth, Exploiting query features in language modeling approach for information retrieval, Ph.D. dissertation, State University of New York at Buffalo, 2004.
[Tan et al. 06] Bin Tan, Xuehua Shen, and ChengXiang Zhai, Mining long-term search history to improve search accuracy, Proceedings of ACM KDD 2006.
[Tao et al. 06] Tao Tao, Xuanhui Wang, Qiaozhu Mei, and ChengXiang Zhai, Language model information retrieval with document expansion, Proceedings of HLT/NAACL 2006.
[Tao & Zhai 06] Tao Tao and ChengXiang Zhai, Regularized estimation of mixture models for robust pseudo-relevance feedback, Proceedings of ACM SIGIR 2006.
[Turtle & Croft 91] H. Turtle and W. B. Croft, Evaluation of an inference network-based retrieval model, ACM Transactions on Information Systems, 9(3):187-222.
[van Rijsbergen 86] C. J. van Rijsbergen, A non-classical logic for information retrieval, The Computer Journal, 29(6).
[Witten et al. 99] I. H. Witten, A. Moffat, and T. C. Bell, Managing Gigabytes: Compressing and Indexing Documents and Images, Academic Press, San Diego, 2nd edition, 1999.
[Wong & Yao 89] S. K. M. Wong and Y. Y. Yao, A probability distribution model for information retrieval, Information Processing and Management, 25(1):39-53.
[Wong & Yao 95] S. K. M. Wong and Y. Y. Yao, On modeling information retrieval with probabilistic inference, ACM Transactions on Information Systems, 13(1):69-99.
[Xu & Croft 99] J. Xu and W. B. Croft, Cluster-based language models for distributed retrieval, Proceedings of ACM SIGIR 1999, pages 15-19.
[Xu et al. 01] J. Xu, R. Weischedel, and C. Nguyen, Evaluating a probabilistic model for cross-lingual information retrieval, Proceedings of ACM SIGIR 2001, pages 105-110.

Page 110

References (cont.)

[Zaragoza et al. 03] H. Zaragoza, D. Hiemstra and M. Tipping, Bayesian extension to the language model for ad hoc information retrieval, Proceedings of ACM SIGIR 2003, pages 4-9.
[Zhai & Lafferty 01a] C. Zhai and J. Lafferty, A study of smoothing methods for language models applied to ad hoc information retrieval, Proceedings of ACM SIGIR 2001, pages 334-342.
[Zhai & Lafferty 01b] C. Zhai and J. Lafferty, Model-based feedback in the language modeling approach to information retrieval, Proceedings of ACM CIKM 2001.
[Zhai & Lafferty 02] C. Zhai and J. Lafferty, Two-stage language models for information retrieval, Proceedings of ACM SIGIR 2002, pages 49-56.
[Zhai et al. 03] C. Zhai, W. Cohen, and J. Lafferty, Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval, Proceedings of ACM SIGIR 2003.
[Zhai & Lafferty 06] C. Zhai and J. Lafferty, A risk minimization framework for information retrieval, Information Processing and Management, 42(1):31-55, Jan. 2006.
[Zhai 02] C. Zhai, Language Modeling and Risk Minimization in Text Retrieval, Ph.D. thesis, Carnegie Mellon University, 2002.
[Zhang et al. 02] Y. Zhang, J. Callan, and T. Minka, Novelty and redundancy detection in adaptive filtering, Proceedings of ACM SIGIR 2002, pages 81-88.