Statistical NLP, Winter 2009. Lecture 5: Unsupervised text categorization. Roger Levy. Many thanks to Tom Griffiths for use of his slides.


Page 1

Statistical NLP, Winter 2009

Lecture 5: Unsupervised text categorization

Roger Levy

Many thanks to Tom Griffiths for use of his slides

Page 2

Where we left off

• Supervised text categorization through Naïve Bayes
• Generative model: first generate a document category, then words in the document (unigram model)
• Inference: obtain posterior over document categories using Bayes' rule (argmax to choose the category)

[Graphical model: Cat → w1 w2 w3 w4 w5 w6 …]

$$P(\mathrm{Cat} \mid w_{1 \ldots n}) = \frac{P(w_{1 \ldots n} \mid \mathrm{Cat})\, P(\mathrm{Cat})}{P(w_{1 \ldots n})}$$
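A minimal sketch of this posterior computation in log space (my own illustration, not from the slides; the array layout and function name are assumptions):

```python
import numpy as np

def naive_bayes_posterior(doc, log_prior, log_likelihood):
    """P(Cat | w_1..n) via Bayes' rule, computed in log space.

    doc:            list of word ids w_1..n
    log_prior:      (C,) array of log P(Cat)
    log_likelihood: (C, V) array of log P(w | Cat)
    """
    # log P(w_1..n | Cat) + log P(Cat), one value per category
    log_joint = log_prior + log_likelihood[:, doc].sum(axis=1)
    # subtract log P(w_1..n), the log-sum over categories
    log_post = log_joint - np.logaddexp.reduce(log_joint)
    return np.exp(log_post)  # argmax of this picks the category
```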

Page 3

What we’re doing today

• Supervised categorization requires hand-labeling documents
• This can be extremely time-consuming
• Unlabeled documents are cheap
• So we'd really like to do unsupervised text categorization
• Today we'll look at unsupervised learning within the Naïve Bayes model

Page 4

Compact graphical model representations

• We’re going to lean heavily on graphical model representations here.

• We’ll use a more compact notation:

[Left: the full model, Cat → w1 w2 w3 w4 … wn. Right: the compact version, Cat → w1 inside a box (a "plate") labeled n, read as "generate a word from Cat n times".]

Page 5

• Now suppose that Cat isn't observed
• We need to learn two distributions:
  • P(Cat)
  • P(w|Cat)
• How do we do this?
• We might use the method of maximum likelihood (MLE); a minimal EM sketch follows below
• But it turns out that the likelihood surface is highly non-convex, and lots of information isn't contained in a point estimate
• Alternative: Bayesian methods

[Plate diagram as before: Cat → w1, inside a plate n.]
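As a concrete illustration of the MLE route (and of why it yields only a point estimate), here is a small EM sketch for the Naïve Bayes model with Cat unobserved. This is my own illustration, not course code; different seeds land in different local optima of the non-convex likelihood:

```python
import numpy as np

def em_naive_bayes(counts, C, iters=100, seed=0):
    """EM for Naive Bayes with latent Cat. counts: (D, V) doc-term matrix."""
    rng = np.random.default_rng(seed)
    D, V = counts.shape
    p_cat = np.full(C, 1.0 / C)              # P(Cat)
    p_w = rng.dirichlet(np.ones(V), size=C)  # P(w | Cat), random init
    for _ in range(iters):
        # E step: posterior responsibility of each category for each document
        log_r = np.log(p_cat) + counts @ np.log(p_w).T   # (D, C)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M step: reestimate both distributions from expected counts
        p_cat = r.mean(axis=0)
        p_w = r.T @ counts + 1e-9            # small floor avoids log(0)
        p_w /= p_w.sum(axis=1, keepdims=True)
    return p_cat, p_w
```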

Page 6

Bayesian document categorization

[Graphical model: Dirichlet priors on P(Cat) and P(w|Cat); for each of D documents, Cat → w1 inside a plate nD, nested inside a plate D.]

Page 7

Latent Dirichlet allocation (Blei, Ng, & Jordan, 2001; 2003)

θ^(d) ~ Dirichlet(α): distribution over topics for each document
z_i ~ Discrete(θ^(d)): topic assignment for each word
φ^(j) ~ Dirichlet(β), j = 1 … T: distribution over words for each topic
w_i ~ Discrete(φ^(z_i)): word generated from assigned topic

α and β are Dirichlet priors.

[Plate diagram: z_i → w_i inside a plate N_d, nested in a plate D; θ^(d) sits in the document plate, φ^(j) in a plate T.]

Main difference from Naïve Bayes: one topic per word, not one per document. (A forward sampler illustrating this generative process follows.)
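To make the generative story concrete, a short forward sampler (my own sketch; the sizes, seed, and function name are arbitrary):

```python
import numpy as np

def generate_lda_corpus(D=100, T=2, V=10, N=20, alpha=1.0, beta=1.0, seed=0):
    """Sample a corpus from the LDA generative process above."""
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet(beta * np.ones(V), size=T)   # phi^(j) ~ Dirichlet(beta)
    docs = []
    for _ in range(D):
        theta = rng.dirichlet(alpha * np.ones(T))    # theta^(d) ~ Dirichlet(alpha)
        z = rng.choice(T, size=N, p=theta)           # z_i ~ Discrete(theta^(d))
        docs.append(np.array([rng.choice(V, p=phi[j]) for j in z]))  # w_i ~ phi^(z_i)
    return docs, phi
```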

Page 8

A generative model for documents

word          P(w|Cat = 1)   P(w|Cat = 2)
HEART         0.2            0.0
LOVE          0.2            0.0
SOUL          0.2            0.0
TEARS         0.2            0.0
JOY           0.2            0.0
SCIENTIFIC    0.0            0.2
KNOWLEDGE     0.0            0.2
WORK          0.0            0.2
RESEARCH      0.0            0.2
MATHEMATICS   0.0            0.2

(topic 1 and topic 2)

Page 9

Choose mixture weights for each document, generate “bag of words”

{P(z = 1), P(z = 2)}:

{0, 1}:       MATHEMATICS KNOWLEDGE RESEARCH WORK MATHEMATICS RESEARCH WORK SCIENTIFIC MATHEMATICS WORK
{0.25, 0.75}: SCIENTIFIC KNOWLEDGE MATHEMATICS SCIENTIFIC HEART LOVE TEARS KNOWLEDGE HEART
{0.5, 0.5}:   MATHEMATICS HEART RESEARCH LOVE MATHEMATICS WORK TEARS SOUL KNOWLEDGE HEART
{0.75, 0.25}: WORK JOY SOUL TEARS MATHEMATICS TEARS LOVE LOVE LOVE SOUL
{1, 0}:       TEARS LOVE JOY SOUL LOVE TEARS SOUL SOUL TEARS JOY

Page 10

Dirichlet priors

• Multivariate equivalent of the Beta distribution
• Hyperparameters determine the form of the prior:

$$p(\theta \mid \alpha) = \frac{\Gamma(T\alpha)}{\Gamma(\alpha)^{T}} \prod_{j=1}^{T} \theta_j^{\alpha - 1}$$

(the symmetric case, with a single hyperparameter α shared by all T components)
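A quick way to build intuition for α (my own sketch, using numpy's Dirichlet sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5
for alpha in (10.0, 1.0, 0.1):
    print(alpha, np.round(rng.dirichlet(alpha * np.ones(T)), 3))
# large alpha: draws lie near the uniform distribution
# small alpha: draws concentrate mass on a few components (sparse theta)
```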

Page 11

Matrix factorization interpretation (Hofmann, 1999)

[Diagram: the words × documents matrix P(w) factorizes into a words × topics matrix P(w|z) times a topics × documents matrix P(z).]

Maximum-likelihood estimation is finding the factorization that minimizes KL divergence.
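A tiny numerical check of this interpretation (my own sketch): multiplying the two conditional-probability matrices gives a words-by-documents matrix whose columns are distributions and whose rank is at most the number of topics.

```python
import numpy as np

rng = np.random.default_rng(0)
W, T, D = 50, 3, 20
p_w_given_z = rng.dirichlet(np.ones(W), size=T).T   # (W, T): columns are P(w|z)
p_z_given_d = rng.dirichlet(np.ones(T), size=D).T   # (T, D): columns are P(z|d)
p_w_given_d = p_w_given_z @ p_z_given_d             # (W, D): columns are P(w|d)
assert np.allclose(p_w_given_d.sum(axis=0), 1.0)    # each column sums to 1
print(np.linalg.matrix_rank(p_w_given_d))           # at most T (here, 3)
```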

Page 12

Interpretable topics

Each column of the figure shows words from a single topic, ordered by P(w|z):

• STORY STORIES TELL CHARACTER CHARACTERS AUTHOR READ TOLD SETTING TALES PLOT TELLING SHORT FICTION ACTION TRUE EVENTS TELLS TALE NOVEL
• MIND WORLD DREAM DREAMS THOUGHT IMAGINATION MOMENT THOUGHTS OWN REAL LIFE IMAGINE SENSE CONSCIOUSNESS STRANGE FEELING WHOLE BEING MIGHT HOPE
• WATER FISH SEA SWIM SWIMMING POOL LIKE SHELL SHARK TANK SHELLS SHARKS DIVING DOLPHINS SWAM LONG SEAL DIVE DOLPHIN UNDERWATER
• DISEASE BACTERIA DISEASES GERMS FEVER CAUSE CAUSED SPREAD VIRUSES INFECTION VIRUS MICROORGANISMS PERSON INFECTIOUS COMMON CAUSING SMALLPOX BODY INFECTIONS CERTAIN
• FIELD MAGNETIC MAGNET WIRE NEEDLE CURRENT COIL POLES IRON COMPASS LINES CORE ELECTRIC DIRECTION FORCE MAGNETS BE MAGNETISM POLE INDUCED
• SCIENCE STUDY SCIENTISTS SCIENTIFIC KNOWLEDGE WORK RESEARCH CHEMISTRY TECHNOLOGY MANY MATHEMATICS BIOLOGY FIELD PHYSICS LABORATORY STUDIES WORLD SCIENTIST STUDYING SCIENCES
• BALL GAME TEAM FOOTBALL BASEBALL PLAYERS PLAY FIELD PLAYER BASKETBALL COACH PLAYED PLAYING HIT TENNIS TEAMS GAMES SPORTS BAT TERRY
• JOB WORK JOBS CAREER EXPERIENCE EMPLOYMENT OPPORTUNITIES WORKING TRAINING SKILLS CAREERS POSITIONS FIND POSITION FIELD OCCUPATIONS REQUIRE OPPORTUNITY EARN ABLE

Page 13

Handling multiple senses

The same topics as on the previous page (each column a single topic, words ordered by P(w|z)). The point here: a polysemous word can receive high probability in several topics, one per sense. FIELD, for instance, appears in the magnetism topic, the science topic, and the jobs topic.

Page 14

Natural statistics

Documents → Semantic representation

The observed data are the example documents from the earlier slides (bags of words such as "MATHEMATICS KNOWLEDGE RESEARCH WORK MATHEMATICS RESEARCH WORK SCIENTIFIC MATHEMATICS WORK" and "TEARS LOVE JOY SOUL LOVE TEARS SOUL SOUL TEARS JOY"). The semantic representation to be recovered is the underlying pair of topics:

• HEART LOVE SOUL TEARS JOY …
• SCIENTIFIC KNOWLEDGE WORK RESEARCH MATHEMATICS …

(The "?" marks the inference problem: going from documents back to topics.)

Page 15

Inverting the generative model

• Maximum likelihood estimation (EM): e.g. Hofmann (1999)
• Deterministic approximate algorithms:
  • variational EM: Blei, Ng & Jordan (2001; 2003)
  • expectation propagation: Minka & Lafferty (2002)
• Markov chain Monte Carlo:
  • full Gibbs sampler: Pritchard et al. (2000)
  • collapsed Gibbs sampler: Griffiths & Steyvers (2004)

Page 16

The collapsed Gibbs sampler

• Using conjugacy of the Dirichlet and multinomial distributions, integrate out the continuous parameters:

$$P(\mathbf{w} \mid \mathbf{z}) = \int_{\Delta_{TW}} P(\mathbf{w} \mid \mathbf{z}, \Phi)\, p(\Phi)\, d\Phi = \left(\frac{\Gamma(W\beta)}{\Gamma(\beta)^{W}}\right)^{\!T} \prod_{j=1}^{T} \frac{\prod_{w} \Gamma(n_j^{(w)} + \beta)}{\Gamma(n_j^{(\cdot)} + W\beta)}$$

$$P(\mathbf{z}) = \int_{\Delta_{DT}} P(\mathbf{z} \mid \Theta)\, p(\Theta)\, d\Theta = \left(\frac{\Gamma(T\alpha)}{\Gamma(\alpha)^{T}}\right)^{\!D} \prod_{d=1}^{D} \frac{\prod_{j} \Gamma(n_j^{(d)} + \alpha)}{\Gamma(n_{\cdot}^{(d)} + T\alpha)}$$

• This defines a distribution on discrete ensembles z:

$$P(\mathbf{z} \mid \mathbf{w}) = \frac{P(\mathbf{w} \mid \mathbf{z})\, P(\mathbf{z})}{\sum_{\mathbf{z}'} P(\mathbf{w} \mid \mathbf{z}')\, P(\mathbf{z}')}$$

Here $n_j^{(w)}$ is the number of times word $w$ is assigned to topic $j$, $n_j^{(d)}$ is the number of words in document $d$ assigned to topic $j$, and a dot marks summation over that index.

Page 17

The collapsed Gibbs sampler

• Sample each z_i conditioned on z_{-i}:

$$P(z_i = j \mid \mathbf{z}_{-i}, \mathbf{w}) \propto \frac{n_{-i,j}^{(w_i)} + \beta}{n_{-i,j}^{(\cdot)} + W\beta} \cdot \frac{n_{-i,j}^{(d_i)} + \alpha}{n_{-i,\cdot}^{(d_i)} + T\alpha}$$

• This is nicer than your average Gibbs sampler:
  • memory: counts can be cached in two sparse matrices
  • optimization: no special functions, simple arithmetic
  • the distributions on Φ and Θ are analytic given z and w, and can later be found for each sample

(A sketch implementation follows.)
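Here is a compact, deliberately unoptimized sketch of the sampler built around this conditional (my own illustration; variable names are mine). Note that the document-length denominator n_{-i,·}^{(d_i)} + Tα is the same for every candidate topic j, so it drops out of the unnormalized probabilities:

```python
import numpy as np

def collapsed_gibbs_lda(docs, T, V, alpha=0.1, beta=0.01, iters=1000, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of arrays of word ids."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    nwt = np.zeros((V, T))   # n_j^(w): word-topic counts
    ndt = np.zeros((D, T))   # n_j^(d): document-topic counts
    nt = np.zeros(T)         # n_j^(.): total words per topic
    z = [rng.integers(T, size=len(doc)) for doc in docs]   # random init
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            nwt[w, z[d][i]] += 1; ndt[d, z[d][i]] += 1; nt[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                j = z[d][i]                       # remove w_i: counts become n_{-i}
                nwt[w, j] -= 1; ndt[d, j] -= 1; nt[j] -= 1
                # unnormalized conditional over topics (doc-length term omitted)
                p = (nwt[w] + beta) / (nt + V * beta) * (ndt[d] + alpha)
                j = rng.choice(T, p=p / p.sum())  # draw z_i from the conditional
                z[d][i] = j
                nwt[w, j] += 1; ndt[d, j] += 1; nt[j] += 1
    return z, nwt, ndt
```

Given a sample z, point estimates of Φ and Θ follow from the cached counts, e.g. the standard posterior mean φ̂_j^(w) = (n_j^(w) + β) / (n_j^(·) + Wβ).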

Page 18

Gibbs sampling in LDA

i    w_i           d_i   z_i
1    MATHEMATICS   1     2
2    KNOWLEDGE     1     2
3    RESEARCH      1     1
4    WORK          1     2
5    MATHEMATICS   1     1
6    RESEARCH      1     2
7    WORK          1     2
8    SCIENTIFIC    1     1
9    MATHEMATICS   1     2
10   WORK          1     1
11   SCIENTIFIC    2     1
12   KNOWLEDGE     2     1
…    …             …     …
50   JOY           5     2

iteration 1 (initial assignments)

Pages 19-25

Gibbs sampling in LDA (animation frames): during iteration 2, each z_i is resampled in turn from P(z_i | z_{-i}, w); the "?" advances down the z column one word at a time as assignments are redrawn (the first draws here: z_1 = 2, z_2 = 1, z_3 = 1, z_4 = 2, …).

Page 26

Gibbs sampling in LDA

i    w_i           d_i   z_i (iter. 1)   z_i (iter. 2)   z_i (iter. 1000)
1    MATHEMATICS   1     2               2               2
2    KNOWLEDGE     1     2               1               2
3    RESEARCH      1     1               1               2
4    WORK          1     2               2               1
5    MATHEMATICS   1     1               2               2
6    RESEARCH      1     2               2               2
7    WORK          1     2               2               2
8    SCIENTIFIC    1     1               1               1
9    MATHEMATICS   1     2               2               2
10   WORK          1     1               2               2
11   SCIENTIFIC    2     1               1               2
12   KNOWLEDGE     2     1               2               2
…    …             …     …               …               …
50   JOY           5     2               1               1

Page 27

Effects of hyperparameters

• α and β control the relative sparsity of Φ and Θ:
  • smaller α: fewer topics per document
  • smaller β: fewer words per topic
• Good assignments z are a compromise in sparsity, scored by the two expressions below (a sketch of the computation follows)

[Figure: plot of log Γ(x) against x.]

$$P(\mathbf{w} \mid \mathbf{z}) = \left(\frac{\Gamma(W\beta)}{\Gamma(\beta)^{W}}\right)^{\!T} \prod_{j=1}^{T} \frac{\prod_{w} \Gamma(n_j^{(w)} + \beta)}{\Gamma(n_j^{(\cdot)} + W\beta)} \qquad
P(\mathbf{z}) = \left(\frac{\Gamma(T\alpha)}{\Gamma(\alpha)^{T}}\right)^{\!D} \prod_{d=1}^{D} \frac{\prod_{j} \Gamma(n_j^{(d)} + \alpha)}{\Gamma(n_{\cdot}^{(d)} + T\alpha)}$$
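These products of Γ functions can be evaluated directly in log space to score competing assignments z; a minimal sketch using scipy's gammaln (function and variable names mine; W in the slides is the vocabulary size, V in the code):

```python
import numpy as np
from scipy.special import gammaln

def log_joint(nwt, ndt, alpha, beta):
    """log P(w|z) + log P(z) from word-topic counts nwt (V, T)
    and document-topic counts ndt (D, T)."""
    V, T = nwt.shape
    D = ndt.shape[0]
    log_p_w = (T * (gammaln(V * beta) - V * gammaln(beta))
               + gammaln(nwt + beta).sum()
               - gammaln(nwt.sum(axis=0) + V * beta).sum())
    log_p_z = (D * (gammaln(T * alpha) - T * gammaln(alpha))
               + gammaln(ndt + alpha).sum()
               - gammaln(ndt.sum(axis=1) + T * alpha).sum())
    return log_p_w + log_p_z
```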

Page 28

Varying α

decreasing α increases sparsity [figure]

Page 29

Varying β

decreasing β increases sparsity? [figure]

Page 30

Number of words per topic

[Figure: number of words assigned to each topic (x-axis: topic; y-axis: number of words).]

Page 31

Extension: a model for meetings (Purver, Kording, Griffiths, & Tenenbaum, 2006)

θ^(u) | s_u = 0 ~ Delta(θ^(u-1)): no topic shift, carry over the previous utterance's distribution
θ^(u) | s_u = 1 ~ Dirichlet(α): topic shift, draw a fresh distribution over topics
z_i ~ Discrete(θ^(u))
φ^(j) ~ Dirichlet(β)
w_i ~ Discrete(φ^(z_i))

[Plate diagram: z_i → w_i inside a plate N_u, nested in a plate U over utterances; the switch variable s_u links θ^(u-1) to θ^(u); φ^(j) in a plate T.]

(A forward-simulation sketch follows.)
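A forward-simulation sketch of the utterance-level dynamics (my own illustration; the slide does not specify a prior on s_u, so a Bernoulli switch probability is assumed here):

```python
import numpy as np

def simulate_topic_drift(U=50, T=10, alpha=0.5, p_switch=0.1, seed=0):
    """theta^(u) either copies theta^(u-1) (s_u = 0) or is redrawn (s_u = 1)."""
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet(alpha * np.ones(T))   # theta^(1)
    thetas = [theta]
    for u in range(1, U):
        if rng.random() < p_switch:             # s_u = 1: topic shift
            theta = rng.dirichlet(alpha * np.ones(T))
        # s_u = 0: theta^(u) = theta^(u-1), the Delta case
        thetas.append(theta)
    return thetas
```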

Page 32

Sample of ICSI meeting corpus (25 meetings)

• no it's o_k.
• it's it'll work.
• well i can do that.
• but then i have to end the presentation in the middle so i can go back to open up javabayes.
• o_k fine.
• here let's see if i can.
• alright.
• very nice.
• is that better.
• yeah.
• o_k.
• uh i'll also get rid of this click to add notes.
• o_k. perfect
• NEW TOPIC (not supplied to algorithm)
• so then the features we decided or we decided we were talked about.
• right.
• uh the the prosody the discourse verb choice.
• you know we had a list of things like to go and to visit and what not.
• the landmark-iness of uh.
• i knew you'd like that.
• nice coinage.
• thank you.

Page 33

Topic segmentation applied to meetings

[Figure: inferred segmentation of a meeting transcript (left) and inferred topics (right).]

Page 34

Comparison with human judgments

Topics recovered are much more coherent than those found using random segmentation, no segmentation, or an HMM

Page 35

Learning the number of topics

• Can use standard Bayes factor methods to evaluate models of different dimensionality
  • e.g. importance sampling via MCMC
• Alternative: nonparametric Bayes
  • fixed number of topics per document, unbounded number of topics per corpus (Blei, Griffiths, Jordan, & Tenenbaum, 2004)
  • unbounded number of topics for both: the hierarchical Dirichlet process (Teh, Jordan, Beal, & Blei, 2004)

Page 36

The Author-Topic model (Rosen-Zvi, Griffiths, Smyth, & Steyvers, 2004)

x_i ~ Uniform(A^(d)): the author of each word is chosen uniformly at random from the document's authors
θ^(a) ~ Dirichlet(α): each author has a distribution over topics
z_i ~ Discrete(θ^(x_i))
φ^(j) ~ Dirichlet(β)
w_i ~ Discrete(φ^(z_i))

[Plate diagram: x_i → z_i → w_i inside plates N_d and D; θ^(a) in a plate A over authors; φ^(j) in a plate T.]

(A sketch of the generative step follows.)
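A sketch of one document's generative process under this model (my own illustration; names and sizes are assumptions):

```python
import numpy as np

def generate_author_topic_doc(doc_authors, theta, phi, N=30, seed=0):
    """doc_authors: array of author ids A^(d); theta: (A, T) per-author
    topic distributions; phi: (T, V) per-topic word distributions."""
    rng = np.random.default_rng(seed)
    words = []
    for _ in range(N):
        x = rng.choice(doc_authors)                       # x_i ~ Uniform(A^(d))
        z = rng.choice(theta.shape[1], p=theta[x])        # z_i ~ Discrete(theta^(x_i))
        words.append(rng.choice(phi.shape[1], p=phi[z]))  # w_i ~ Discrete(phi^(z_i))
    return np.array(words)
```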

Page 37

Four example topics from NIPS

TOPIC 19
WORD         PROB.    AUTHOR         PROB.
LIKELIHOOD   0.0539   Tresp_V        0.0333
MIXTURE      0.0509   Singer_Y       0.0281
EM           0.0470   Jebara_T       0.0207
DENSITY      0.0398   Ghahramani_Z   0.0196
GAUSSIAN     0.0349   Ueda_N         0.0170
ESTIMATION   0.0314   Jordan_M       0.0150
LOG          0.0263   Roweis_S       0.0123
MAXIMUM      0.0254   Schuster_M     0.0104
PARAMETERS   0.0209   Xu_L           0.0098
ESTIMATE     0.0204   Saul_L         0.0094

TOPIC 24
WORD         PROB.    AUTHOR         PROB.
RECOGNITION  0.0400   Simard_P       0.0694
CHARACTER    0.0336   Martin_G       0.0394
CHARACTERS   0.0250   LeCun_Y        0.0359
TANGENT      0.0241   Denker_J       0.0278
HANDWRITTEN  0.0169   Henderson_D    0.0256
DIGITS       0.0159   Revow_M        0.0229
IMAGE        0.0157   Platt_J        0.0226
DISTANCE     0.0153   Keeler_J       0.0192
DIGIT        0.0149   Rashid_M       0.0182
HAND         0.0126   Sackinger_E    0.0132

TOPIC 29
WORD           PROB.  AUTHOR         PROB.
REINFORCEMENT  0.0411 Singh_S        0.1412
POLICY         0.0371 Barto_A        0.0471
ACTION         0.0332 Sutton_R       0.0430
OPTIMAL        0.0208 Dayan_P        0.0324
ACTIONS        0.0208 Parr_R         0.0314
FUNCTION       0.0178 Dietterich_T   0.0231
REWARD         0.0165 Tsitsiklis_J   0.0194
SUTTON         0.0164 Randlov_J      0.0167
AGENT          0.0136 Bradtke_S      0.0161
DECISION       0.0118 Schwartz_A     0.0142

TOPIC 87
WORD         PROB.    AUTHOR         PROB.
KERNEL       0.0683   Smola_A        0.1033
SUPPORT      0.0377   Scholkopf_B    0.0730
VECTOR       0.0257   Burges_C       0.0489
KERNELS      0.0217   Vapnik_V       0.0431
SET          0.0205   Chapelle_O     0.0210
SVM          0.0204   Cristianini_N  0.0185
SPACE        0.0188   Ratsch_G       0.0172
MACHINES     0.0168   Laskov_P       0.0169
REGRESSION   0.0155   Tipping_M      0.0153
MARGIN       0.0151   Sollich_P      0.0141

Page 38

Who wrote what? (In the two abstracts below, the number after each content word gives its inferred author assignment: 1 = Scholkopf_B, 2 = Darwiche_A.)

A method1 is described which like the kernel1 trick1 in support1 vector1 machines1 SVMs1 lets us generalize distance1 based2 algorithms to operate in feature1 spaces usually nonlinearly related to the input1 space This is done by identifying a class of kernels1 which can be represented as norm1 based2 distances1 in Hilbert spaces It turns1 out that common kernel1 algorithms such as SVMs1 and kernel1 PCA1 are actually really distance1 based2 algorithms and can be run2 with that class of kernels1 too As well as providing1 a useful new insight1 into how these algorithms work the present2 work can form the basis1 for conceiving new algorithms

This paper presents2 a comprehensive approach for model2 based2 diagnosis2 which includes proposals for characterizing and computing2 preferred2 diagnoses2 assuming that the system2 description2 is augmented with a system2 structure2 a directed2 graph2 explicating the interconnections between system2 components2 Specifically we first introduce the notion of a consequence2 which is a syntactically2 unconstrained propositional2 sentence2 that characterizes all consistency2 based2 diagnoses2 and show2 that standard2 characterizations of diagnoses2 such as minimal conflicts1 correspond to syntactic2 variations1 on a consequence2 Second we propose a new syntactic2 variation on the consequence2 known as negation2 normal form NNF and discuss its merits compared to standard variations Third we introduce a basic algorithm2 for computing consequences in NNF given a structured system2 description We show that if the system2 structure2 does not contain cycles2 then there is always a linear size2 consequence2 in NNF which can be computed in linear time2 For arbitrary1 system2 structures2 we show a precise connection between the complexity2 of computing2 consequences and the topology of the underlying system2 structure2 Finally we present2 an algorithm2 that enumerates2 the preferred2 diagnoses2 characterized by a consequence2 The algorithm2 is shown1 to take linear time2 in the size2 of the consequence2 if the preference criterion1 satisfies some general conditions

Written by (1): Scholkopf_B

Written by (2): Darwiche_A

Page 39

Analysis of PNAS abstracts

• Test topic models with a real database of scientific papers from PNAS
• All 28,154 abstracts from 1991-2001
• All words occurring in at least five abstracts and not on a "stop" list (20,551)
• Total of 3,026,970 tokens in corpus

(Griffiths & Steyvers, 2004)
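The vocabulary filtering described above is easy to reproduce; a sketch (function and variable names mine):

```python
from collections import Counter

def build_vocab(tokenized_abstracts, stop_list, min_doc_freq=5):
    """Keep words that occur in at least min_doc_freq abstracts
    and are not on the stop list."""
    doc_freq = Counter()
    for abstract in tokenized_abstracts:
        doc_freq.update(set(abstract))   # count documents, not tokens
    return sorted(w for w, df in doc_freq.items()
                  if df >= min_doc_freq and w not in stop_list)
```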

Page 40

A selection of topics (each column one topic, words ordered by P(w|z)):

• FORCE SURFACE MOLECULES SOLUTION SURFACES MICROSCOPY WATER FORCES PARTICLES STRENGTH POLYMER IONIC ATOMIC AQUEOUS MOLECULAR PROPERTIES LIQUID SOLUTIONS BEADS MECHANICAL
• HIV VIRUS INFECTED IMMUNODEFICIENCY CD4 INFECTION HUMAN VIRAL TAT GP120 REPLICATION TYPE ENVELOPE AIDS REV BLOOD CCR5 INDIVIDUALS ENV PERIPHERAL
• MUSCLE CARDIAC HEART SKELETAL MYOCYTES VENTRICULAR MUSCLES SMOOTH HYPERTROPHY DYSTROPHIN HEARTS CONTRACTION FIBERS FUNCTION TISSUE RAT MYOCARDIAL ISOLATED MYOD FAILURE
• STRUCTURE ANGSTROM CRYSTAL RESIDUES STRUCTURES STRUCTURAL RESOLUTION HELIX THREE HELICES DETERMINED RAY CONFORMATION HELICAL HYDROPHOBIC SIDE DIMENSIONAL INTERACTIONS MOLECULE SURFACE
• NEURONS BRAIN CORTEX CORTICAL OLFACTORY NUCLEUS NEURONAL LAYER RAT NUCLEI CEREBELLUM CEREBELLAR LATERAL CEREBRAL LAYERS GRANULE LABELED HIPPOCAMPUS AREAS THALAMIC
• TUMOR CANCER TUMORS HUMAN CELLS BREAST MELANOMA GROWTH CARCINOMA PROSTATE NORMAL CELL METASTATIC MALIGNANT LUNG CANCERS MICE NUDE PRIMARY OVARIAN

Page 41

Cold topics / Hot topics

[Figure: topic probability over the years 1991-2001 for the coldest and hottest topics.]

Page 42

Cold topics / Hot topics

Hot topics (rising over 1991-2001):

• topic 2: SPECIES GLOBAL CLIMATE CO2 WATER ENVIRONMENTAL YEARS MARINE CARBON DIVERSITY OCEAN EXTINCTION TERRESTRIAL COMMUNITY ABUNDANCE
• topic 134: MICE DEFICIENT NORMAL GENE NULL MOUSE TYPE HOMOZYGOUS ROLE KNOCKOUT DEVELOPMENT GENERATED LACKING ANIMALS REDUCED
• topic 179: APOPTOSIS DEATH CELL INDUCED BCL CELLS APOPTOTIC CASPASE FAS SURVIVAL PROGRAMMED MEDIATED INDUCTION CERAMIDE EXPRESSION

Page 43

Cold topics / Hot topics

Hot topics (as on the previous page): topics 2, 134, and 179.

Cold topics (declining over 1991-2001):

• topic 37: CDNA AMINO SEQUENCE ACID PROTEIN ISOLATED ENCODING CLONED ACIDS IDENTITY CLONE EXPRESSED ENCODES RAT HOMOLOGY
• topic 289: KDA PROTEIN PURIFIED MOLECULAR MASS CHROMATOGRAPHY POLYPEPTIDE GEL SDS BAND APPARENT LABELED IDENTIFIED FRACTION DETECTED
• topic 75: ANTIBODY ANTIBODIES MONOCLONAL ANTIGEN IGG MAB SPECIFIC EPITOPE HUMAN MABS RECOGNIZED SERA EPITOPES DIRECTED NEUTRALIZING