
Page 1

Binary Classification

Wei Xu (many slides from Greg Durrett and Vivek Srikumar)

Page 2

Administrivia

‣ Readings on course website

‣ Homework 1 is due January 17.

‣ Please look at the assignment, if you haven't.

‣ If this seems like it'll be challenging for you, come and talk to me (this is smaller-scale than the later assignments, which are smaller-scale than the final project)

Page 3

Alternatives

‣ LING 5801 or LING 5802: covers similar topics as 5525 at a more moderate difficulty level.

‣ CSE 5522 or 5524: CSE 5525 could be challenging for students who haven't taken 5522 or 5523. For undergraduates and non-CS majors, it is recommended to take 5522 first (though not required).

‣ CSE 5539s (2 credit hours): taught by tenure-track faculty members.

‣ LING 7890.08 (1 or 2 credit hours): Clippers Seminar

Page 4

This Lecture

‣ Linear classification fundamentals

‣ Three discriminative models: logistic regression, perceptron, SVM

‣ Naive Bayes, maximum likelihood in generative models

‣ Different motivations but very similar update rules/inference!

Page 5

Classification

Page 6

Classification

‣ Embed data point in a feature space: $f(x) \in \mathbb{R}^n$

[Figure: clusters of + and − points in feature space, separated by a line]

‣ Data point $x$ with label $y \in \{0, 1\}$ (in this lecture, $x$ and $f(x)$ are interchangeable)

‣ Linear decision rule: $w^\top f(x) + b > 0$

‣ Can delete the bias if we augment the feature space with a constant feature: $f(x) = [0.5, 1.6, 0.3]$ becomes $[0.5, 1.6, 0.3, 1]$, and the rule is simply $w^\top f(x) > 0$

Page 7

Linear functions are powerful!

$$f(x) = [x_1, x_2] \qquad \text{vs.} \qquad f(x) = [x_1, x_2, x_1^2, x_2^2, x_1 x_2]$$

[Figure: data that is not linearly separable in $(x_1, x_2)$ becomes linearly separable once quadratic features are added]

‣ The "kernel trick" does this for "free," but is too expensive to use in NLP applications: training is $O(n^2)$ instead of $O(n \cdot (\text{num feats}))$

https://www.quora.com/Why-is-kernelized-SVM-much-slower-than-linear-SVM
http://ciml.info/dl/v0_99/ciml-v0_99-ch11.pdf
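To make the feature-expansion point concrete, here is a minimal sketch (not from the slides; the circular toy data and the helper name quadratic_features are invented for illustration) showing data that no linear classifier separates in the raw space but that separates perfectly after the quadratic map:

```python
import numpy as np

def quadratic_features(X):
    """Map [x1, x2] to [x1, x2, x1^2, x2^2, x1*x2] (explicit quadratic map)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([x1, x2, x1**2, x2**2, x1 * x2], axis=1)

# Points inside a circle are +, outside are -: not linearly separable in 2D.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(int)

# In the quadratic feature space, w = [0, 0, -1, -1, 0] with a constant
# bias feature weighted 0.5 separates the classes perfectly.
F = np.hstack([quadratic_features(X), np.ones((len(X), 1))])
w = np.array([0.0, 0.0, -1.0, -1.0, 0.0, 0.5])
preds = (F @ w > 0).astype(int)
print("accuracy:", (preds == y).mean())  # 1.0
```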

Page 8

Classification: Sentiment Analysis

this movie was great! would watch again → Positive

that film was awful, I'll never watch again → Negative

‣ Surface cues can basically tell you what's going on here: presence or absence of certain words (great, awful)

‣ Steps to classification:

‣ Turn examples like this into feature vectors

‣ Pick a model / learning algorithm

‣ Train weights on data to get our classifier

Page 9

Feature Representation

this movie was great! would watch again → Positive

‣ Convert this example to a vector using bag-of-words features

‣ Requires indexing the features (mapping them to axes)

f(x) = [ [contains the]  [contains a]  [contains was]  [contains movie]  [contains film]  … ]
     = [       0              0              1                1                 0         … ]
          position 0     position 1     position 2       position 3        position 4

‣ Very large vector space (size of vocabulary), sparse features

‣ More sophisticated feature mappings possible (tf-idf), as well as lots of other features: character n-grams, parts of speech, lemmas, …
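A minimal sketch (not from the slides) of this indexing-plus-featurization step with binary bag-of-words features; the helper names and toy corpus are invented for illustration:

```python
import numpy as np

def build_index(docs):
    """Map each word in the corpus to an axis (feature index)."""
    vocab = {}
    for doc in docs:
        for word in doc.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def bag_of_words(doc, vocab):
    """Binary bag-of-words vector: 1 if the word appears in the doc, else 0."""
    f = np.zeros(len(vocab))
    for word in doc.lower().split():
        if word in vocab:
            f[vocab[word]] = 1.0
    return f

docs = ["this movie was great ! would watch again",
        "that film was awful , I'll never watch again"]
vocab = build_index(docs)
print(bag_of_words("this movie was great", vocab))
```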

Page 10

Naive Bayes

Page 11

Naive Bayes

‣ Data point $x = (x_1, \ldots, x_n)$, label $y \in \{0, 1\}$

‣ Formulate a probabilistic model that places a distribution $P(x, y)$

[Diagram: Bayes net in which $y$ generates each feature $x_i$, $i = 1, \ldots, n$]

‣ Compute $P(y|x)$, predict $\mathrm{argmax}_y\, P(y|x)$ to classify

$$P(y|x) = \frac{P(y)\,P(x|y)}{P(x)} \qquad \text{(Bayes' Rule)}$$

$$\propto P(y)\,P(x|y) \qquad \text{(the constant } P(x) \text{ is irrelevant for finding the max)}$$

$$= P(y)\prod_{i=1}^{n} P(x_i|y) \qquad \text{("naive" assumption)}$$

$$\mathrm{argmax}_y\, P(y|x) = \mathrm{argmax}_y \log P(y|x) = \mathrm{argmax}_y \left[\log P(y) + \sum_{i=1}^{n}\log P(x_i|y)\right] \qquad \text{a linear model!}$$

Page 12

Naive Bayes Example

x = "it was great"

$$P(y|x) \propto P(y)\prod_{i=1}^{n} P(x_i|y)$$

$$\mathrm{argmax}_y\, P(y|x) = \mathrm{argmax}_y \log P(y|x) = \mathrm{argmax}_y \left[\log P(y) + \sum_{i=1}^{n}\log P(x_i|y)\right]$$

Page 13

Maximum Likelihood Estimation

‣ Data points $(x_j, y_j)$ provided ($j$ indexes over examples)

‣ Find values of $P(y)$, $P(x_i|y)$ that maximize data likelihood (generative):

$$\prod_{j=1}^{m} P(y_j, x_j) = \prod_{j=1}^{m} P(y_j)\left[\prod_{i=1}^{n} P(x_{ji}|y_j)\right]$$

($j$ ranges over data points, $i$ over features; $x_{ji}$ is the $i$th feature of the $j$th example)

Page 14

Maximum Likelihood Estimation

‣ Imagine a coin flip which is heads with probability $p$

‣ Observe (H, H, H, T) and maximize likelihood:

$$\prod_{j=1}^{m} P(y_j) = p^3(1-p)$$

‣ Easier: maximize log likelihood:

$$\sum_{j=1}^{m} \log P(y_j) = 3\log p + \log(1-p)$$

[Figure: log likelihood as a function of $p$ on $[0, 1]$, peaking at $P(H) = 0.75$; plotted with http://fooplot.com/]

‣ Maximum likelihood parameters for binomial/multinomial = read counts off of the data + normalize
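A quick numerical check of this claim (a sketch, not from the slides): the log likelihood $3\log p + \log(1-p)$ peaks at $p = 3/4$, which is exactly "read counts off the data and normalize" (3 heads out of 4 flips).

```python
import numpy as np

def log_likelihood(p):
    """Log likelihood of observing (H, H, H, T) under a coin with P(H) = p."""
    return 3 * np.log(p) + np.log(1 - p)

p_grid = np.linspace(0.01, 0.99, 9999)
p_best = p_grid[np.argmax(log_likelihood(p_grid))]
print(p_best)  # ~0.75, i.e., 3 heads / 4 flips
```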

Page 15

Maximum Likelihood Estimation

‣ Data points $(x_j, y_j)$ provided ($j$ indexes over examples)

‣ Find values of $P(y)$, $P(x_i|y)$ that maximize data likelihood (generative):

$$\prod_{j=1}^{m} P(y_j, x_j) = \prod_{j=1}^{m} P(y_j)\left[\prod_{i=1}^{n} P(x_{ji}|y_j)\right]$$

‣ Equivalent to maximizing the logarithm of the data likelihood:

$$\sum_{j=1}^{m} \log P(y_j, x_j) = \sum_{j=1}^{m} \left[\log P(y_j) + \sum_{i=1}^{n}\log P(x_{ji}|y_j)\right]$$

($x_{ji}$ is the $i$th feature of the $j$th example)

Page 16

Maximum Likelihood for Naive Bayes

+ this movie was great! would watch again
− that film was awful, I'll never watch again
− I didn't really like that movie
− dry and a bit distasteful, it misses the mark
− great potential but ended up being a flop
+ I liked it well enough for an action flick
+ I expected a great film and left happy
+ brilliant directing and stunning visuals

$$P(+) = \frac{1}{2} \qquad P(-) = \frac{1}{2} \qquad P(\text{great}|+) = \frac{1}{2} \qquad P(\text{great}|-) = \frac{1}{4}$$

For x = "it was great":

$$P(y|x) \propto \begin{bmatrix} P(+)\,P(\text{great}|+) \\ P(-)\,P(\text{great}|-) \end{bmatrix} = \begin{bmatrix} 1/4 \\ 1/8 \end{bmatrix} = \begin{bmatrix} 2/3 \\ 1/3 \end{bmatrix}$$

(the last step normalizes the two scores so they sum to 1)
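A small sketch (not from the slides) that reproduces these estimates by reading counts off the data and normalizing, using a Bernoulli presence/absence event model and scoring with the word "great" alone, as the slide does:

```python
data = [("this movie was great ! would watch again", 1),
        ("that film was awful , I'll never watch again", 0),
        ("I didn't really like that movie", 0),
        ("dry and a bit distasteful , it misses the mark", 0),
        ("great potential but ended up being a flop", 0),
        ("I liked it well enough for an action flick", 1),
        ("I expected a great film and left happy", 1),
        ("brilliant directing and stunning visuals", 1)]

# P(y): fraction of documents with each label (counts, normalized).
n_pos = sum(y for _, y in data)
p_pos = n_pos / len(data)  # 4/8 = 1/2

# P(great|y): fraction of documents of class y containing "great"
# (multivariate Bernoulli event model: per-document presence/absence).
def has_great(doc):
    return "great" in doc.split()

p_great_pos = sum(has_great(d) for d, y in data if y == 1) / n_pos                 # 2/4
p_great_neg = sum(has_great(d) for d, y in data if y == 0) / (len(data) - n_pos)   # 1/4

# Score "it was great" using only the word "great", as the slide does:
pos = p_pos * p_great_pos        # 1/2 * 1/2 = 1/4
neg = (1 - p_pos) * p_great_neg  # 1/2 * 1/4 = 1/8
print(pos / (pos + neg), neg / (pos + neg))  # 2/3, 1/3
```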

Page 17

Naive Bayes: Summary

‣ Model:

$$P(x, y) = P(y)\prod_{i=1}^{n} P(x_i|y)$$

‣ Learning: maximize $P(x, y)$ by reading counts off the data

‣ Inference:

$$\mathrm{argmax}_y \log P(y|x) = \mathrm{argmax}_y \left[\log P(y) + \sum_{i=1}^{n}\log P(x_i|y)\right]$$

‣ Alternatively: $\log P(y = +|x) - \log P(y = -|x) > 0$

$$\Leftrightarrow \; \log\frac{P(y=+)}{P(y=-)} + \sum_{i=1}^{n}\log\frac{P(x_i|y=+)}{P(x_i|y=-)} > 0$$

[Diagram: Bayes net in which $y$ generates each feature $x_i$, $i = 1, \ldots, n$]

Page 18

Problems with Naive Bayes

‣ Naive Bayes is naive, but another problem is that it's generative: it spends capacity modeling $P(x, y)$, when what we care about is $P(y|x)$

‣ Correlated features compound: beautiful and gorgeous are not independent!

"the film was beautiful, stunning cinematography and gorgeous sets, but boring" → −

$P(x_{\text{beautiful}}|+) = 0.1 \qquad P(x_{\text{beautiful}}|-) = 0.01$
$P(x_{\text{stunning}}|+) = 0.1 \qquad P(x_{\text{stunning}}|-) = 0.01$
$P(x_{\text{gorgeous}}|+) = 0.1 \qquad P(x_{\text{gorgeous}}|-) = 0.01$
$P(x_{\text{boring}}|+) = 0.01 \qquad P(x_{\text{boring}}|-) = 0.1$

(The three correlated "positive" words each contribute a factor of 10 toward +, overwhelming the single factor of 10 that boring contributes toward −: $0.1^3 \cdot 0.01 \gg 0.01^3 \cdot 0.1$, so Naive Bayes predicts + for this negative review.)

‣ Discriminative models model $P(y|x)$ directly (SVMs, most neural networks, …)

Page 19

Homework 1 Demo (Numpy)

‣ Multivariate Bernoulli or Multinomial Naive Bayes

‣ Use log probabilities

‣ Smoothing (ALPHA)

‣ Use Numpy vector/matrix operations. Avoid using for loops. (A sketch along these lines follows below.)
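As a concrete illustration of these points (a sketch under assumptions, not the homework's reference solution): a multinomial Naive Bayes trained with vectorized Numpy operations, log probabilities, and add-ALPHA smoothing. The array shapes and function names are invented for the example.

```python
import numpy as np

def train_multinomial_nb(X, y, alpha=1.0):
    """X: (num_docs, vocab_size) word-count matrix; y: (num_docs,) 0/1 labels.
    Returns log P(y) and log P(word|y) with add-alpha smoothing, no for loops."""
    log_prior = np.log(np.array([np.mean(y == 0), np.mean(y == 1)]))
    counts = np.stack([X[y == 0].sum(axis=0), X[y == 1].sum(axis=0)])  # (2, V)
    smoothed = counts + alpha
    log_likelihood = np.log(smoothed) - np.log(smoothed.sum(axis=1, keepdims=True))
    return log_prior, log_likelihood

def predict(X, log_prior, log_likelihood):
    # Score: log P(y) + sum_i count_i * log P(word_i | y), computed as a matmul.
    scores = X @ log_likelihood.T + log_prior  # (num_docs, 2)
    return scores.argmax(axis=1)

# Tiny smoke test with a made-up 4-word vocabulary.
X = np.array([[2, 0, 1, 0], [0, 3, 0, 1], [1, 0, 2, 0]])
y = np.array([1, 0, 1])
lp, ll = train_multinomial_nb(X, y, alpha=1.0)
print(predict(X, lp, ll))  # recovers [1, 0, 1] on this toy data
```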

Page 20

Logistic Regression

Page 21

Logistic Regression

$$P(y = +|x) = \mathrm{logistic}(w^\top x) = \frac{\exp\left(\sum_{i=1}^{n} w_i x_i\right)}{1 + \exp\left(\sum_{i=1}^{n} w_i x_i\right)}$$

‣ To learn weights: maximize discriminative log likelihood of data $P(y|x)$

$$\mathcal{L}(x_j, y_j = +) = \log P(y_j = +|x_j) = \sum_{i=1}^{n} w_i x_{ji} - \log\left(1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)\right)$$

(the sum is over features)

Page 22

Logistic Regression

$$\mathcal{L}(x_j, y_j = +) = \log P(y_j = +|x_j) = \sum_{i=1}^{n} w_i x_{ji} - \log\left(1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)\right)$$

$$\frac{\partial \mathcal{L}(x_j, y_j)}{\partial w_i} = x_{ji} - \frac{\partial}{\partial w_i}\log\left(1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)\right)$$

$$= x_{ji} - \frac{1}{1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)}\,\frac{\partial}{\partial w_i}\left(1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)\right) \qquad \text{(deriv of log)}$$

$$= x_{ji} - \frac{1}{1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)}\,x_{ji}\exp\left(\sum_{i=1}^{n} w_i x_{ji}\right) \qquad \text{(deriv of exp)}$$

$$= x_{ji} - x_{ji}\,\frac{\exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)}{1 + \exp\left(\sum_{i=1}^{n} w_i x_{ji}\right)} = x_{ji}\left(1 - P(y_j = +|x_j)\right)$$

Page 23

Logistic Regression

‣ Gradient of $w_i$ on a positive example: $x_{ji}\left(1 - P(y_j = +|x_j)\right)$
If P(+) is close to 1, make very little update; otherwise make $w_i$ look more like $x_{ji}$, which will increase P(+)

‣ Gradient of $w_i$ on a negative example: $x_{ji}\left(-P(y_j = +|x_j)\right)$
If P(+) is close to 0, make very little update; otherwise make $w_i$ look less like $x_{ji}$, which will decrease P(+)

‣ Recall that $y_j = 1$ for positive instances, $y_j = 0$ for negative instances.

‣ Can combine these gradients as $x_j\left(y_j - P(y_j = 1|x_j)\right)$
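A minimal sketch (not from the slides) of stochastic gradient ascent using this combined update; the toy data, epoch count, and learning rate are placeholders.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, epochs=100, lr=0.1):
    """Stochastic gradient ascent on the log likelihood.
    For every example: w += lr * x_j * (y_j - P(y=1|x_j))."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_j, y_j in zip(X, y):
            w += lr * x_j * (y_j - logistic(w @ x_j))
    return w

# Toy data: feature 0 signals positive, feature 1 signals negative.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1, 0, 1, 0])
w = train_logistic_regression(X, y)
print(w, logistic(X @ w))  # w[0] > 0 > w[1]; probabilities near y
```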

Page 24

Regularization

‣ Regularizing an objective can mean many things, including an L2-norm penalty to the weights:

$$\sum_{j=1}^{m} \mathcal{L}(x_j, y_j) - \lambda\lVert w\rVert_2^2$$

‣ Keeping weights small can prevent overfitting

‣ For most of the NLP models we build, explicit regularization isn't necessary:

‣ Early stopping

‣ Large numbers of sparse features are hard to overfit in a really bad way

‣ For neural networks: dropout and gradient clipping

Page 25

Logistic Regression: Summary

‣ Model:

$$P(y = +|x) = \frac{\exp\left(\sum_{i=1}^{n} w_i x_i\right)}{1 + \exp\left(\sum_{i=1}^{n} w_i x_i\right)}$$

‣ Learning: gradient ascent on the (regularized) discriminative log-likelihood

‣ Inference: $\mathrm{argmax}_y\, P(y|x)$, fundamentally same as Naive Bayes

$$P(y = 1|x) \geq 0.5 \;\Leftrightarrow\; w^\top x \geq 0$$

Page 26

Perceptron/SVM

Page 27

Perceptron

‣ Simple error-driven learning approach similar to logistic regression

‣ Decision rule: $w^\top x > 0$

‣ If incorrect: if positive, $w \leftarrow w + x$; if negative, $w \leftarrow w - x$

(compare logistic regression: $w \leftarrow w + x\left(1 - P(y = 1|x)\right)$ on positives, $w \leftarrow w - x\,P(y = 1|x)$ on negatives)

‣ Guaranteed to eventually separate the data if the data are separable

http://ciml.info/dl/v0_99/ciml-v0_99-ch04.pdf
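A minimal perceptron training loop implementing exactly this decision rule and update (a sketch; the toy data and epoch count are made up):

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    """Perceptron: on a mistake, add x for positives, subtract x for negatives."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_j, y_j in zip(X, y):
            pred = 1 if w @ x_j > 0 else 0
            if pred != y_j:
                w += x_j if y_j == 1 else -x_j
    return w

X = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])  # last column = bias feature
y = np.array([1, 0])
print(train_perceptron(X, y))
```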

Page 28

Support Vector Machines

‣ Many separating hyperplanes; is there a best one?

[Figure: + and − point clouds with several possible separating lines]

Page 29

Support Vector Machines

‣ Many separating hyperplanes; is there a best one?

[Figure: the max-margin separator, with the margin marked between the hyperplane and the closest points]

Page 30

Support Vector Machines

‣ Constraint formulation: find $w$ via the following quadratic program:

Minimize $\lVert w\rVert_2^2$
s.t. $\forall j \;\; w^\top x_j \geq 1$ if $y_j = 1$, $\;\; w^\top x_j \leq -1$ if $y_j = 0$

(minimizing the norm with a fixed margin $\Leftrightarrow$ maximizing the margin)

As a single constraint: $\forall j \;\; (2y_j - 1)(w^\top x_j) \geq 1$

‣ Generally no solution (data is generally non-separable); need slack!

Page 31

N-Slack SVMs

Minimize $\lambda\lVert w\rVert_2^2 + \sum_{j=1}^{m}\xi_j$
s.t. $\forall j \;\; (2y_j - 1)(w^\top x_j) \geq 1 - \xi_j$, $\;\; \forall j \;\; \xi_j \geq 0$

‣ The slack variables $\xi_j$ are a "fudge factor" to make all constraints satisfied

‣ Take the gradient of the objective:

$$\frac{\partial}{\partial w_i}\xi_j = 0 \;\text{ if }\; \xi_j = 0 \qquad \frac{\partial}{\partial w_i}\xi_j = (2y_j - 1)x_{ji} \;\text{ if }\; \xi_j > 0 \quad (= x_{ji} \text{ if } y_j = 1, \; -x_{ji} \text{ if } y_j = 0)$$

‣ Looks like the perceptron! But updates more frequently
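To make the connection to the perceptron concrete, here is a hedged sketch (not from the slides) of per-example subgradient training on the hinge loss, ignoring the regularizer: whenever an example is not classified correctly with margin at least 1, step toward $(2y_j - 1)x_j$.

```python
import numpy as np

def train_svm_subgradient(X, y, epochs=100, lr=0.1):
    """Per-example hinge-loss update, ignoring the regularizer:
    if (2y - 1) * (w @ x) < 1, move w toward (2y - 1) * x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_j, y_j in zip(X, y):
            sign = 2 * y_j - 1  # +1 for positive, -1 for negative
            if sign * (w @ x_j) < 1:  # margin violated (or example wrong)
                w += lr * sign * x_j
    return w

X = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])  # last column = bias feature
y = np.array([1, 0])
w = train_svm_subgradient(X, y)
print(w, X @ w)  # scores end up with margin >= 1 and opposite signs
```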

Page 32

Gradients on Positive Examples

Logistic regression: $x\left(1 - P(y = 1|x)\right) = x\left(1 - \mathrm{logistic}(w^\top x)\right)$

Perceptron: $x$ if $w^\top x < 0$, else $0$

SVM (ignoring regularizer): $x$ if $w^\top x < 1$, else $0$

[Figure: loss as a function of $w^\top x$ for the 0-1, hinge (SVM), logistic, and perceptron losses]

*gradients are for maximizing things, which is why they are flipped

Page 33

Comparing Gradient Updates (Reference)

(y = 1 for pos, 0 for neg)

Logistic regression (unregularized): $x\left(y - P(y = 1|x)\right) = x\left(y - \mathrm{logistic}(w^\top x)\right)$

Perceptron: $(2y - 1)x$ if classified incorrectly, $0$ else

SVM: $(2y - 1)x$ if not classified correctly with margin of 1, $0$ else

Page 34

Optimization (next time…)

‣ Range of techniques from simple gradient descent (works pretty well) to more complex methods (can work better)

‣ Most methods boil down to: take a gradient and a step size, apply the gradient update times the step size; incorporate estimated curvature information to make the update more effective

Page 35

Sentiment Analysis

this movie was great! would watch again → +
the movie was gross and overwrought, but I liked it → +
this movie was not really very enjoyable → −

‣ Bag-of-words doesn't seem sufficient (discourse structure, negation)

‣ There are some ways around this: extract a bigram feature for "not X" for all X following the not

Bo Pang, Lillian Lee, Shivakumar Vaithyanathan (2002)

Page 36

Sentiment Analysis

‣ Simple feature sets can do pretty well!

[Table: results from Bo Pang, Lillian Lee, Shivakumar Vaithyanathan (2002)]

Page 37

Sentiment Analysis

[Table: accuracies of Naive Bayes and SVM variants from Wang and Manning (2012); for comparison, Kim (2014) CNNs: 81.5, 89.5]

‣ Before neural nets had taken off, results weren't that great

‣ Naive Bayes is doing well!

‣ Ng and Jordan (2002): NB can be better for small data

Page 38

Recap

‣ Logistic regression:

$$P(y = 1|x) = \frac{\exp\left(\sum_{i=1}^{n} w_i x_i\right)}{1 + \exp\left(\sum_{i=1}^{n} w_i x_i\right)}$$

Decision rule: $P(y = 1|x) \geq 0.5 \;\Leftrightarrow\; w^\top x \geq 0$

Gradient (unregularized): $x\left(y - P(y = 1|x)\right)$

‣ SVM:

Decision rule: $w^\top x \geq 0$

(Sub)gradient (unregularized): $0$ if correct with margin of 1, else $x(2y - 1)$

Page 39

Recap

‣ Logistic regression, SVM, and perceptron are closely related

‣ SVM and perceptron inference require taking maxes; logistic regression has a similar update but is "softer" due to its probabilistic nature

‣ All gradient updates: "make it look more like the right thing and less like the wrong thing"