CS 188 SECTION 12
These slides are on Piazza! Search for “Daylen’s slides”
CORRECTION: LAPLACE SMOOTHING
▪ Laplace’s estimate (extended): pretend you saw every outcome k extra times

    P_LAP,k(x) = (c(x) + k) / (N + k|X|),   where |X| is the number of values X can take on

▪ What’s Laplace with k = 0? k is the strength of the prior
▪ Laplace for conditionals: smooth each condition independently:

    P_LAP,k(x | y) = (c(x, y) + k) / (c(y) + k|X|)

[Example figure: observed draws r, r, b]
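The extended estimate can be sketched in a few lines of Python (the function name and example data are illustrative, not from the course code):

```python
from collections import Counter

def laplace_estimate(samples, domain, k=1):
    """Laplace-smoothed estimate: P(x) = (c(x) + k) / (N + k * |X|),
    where |X| is the number of values X can take on."""
    counts = Counter(samples)
    n = len(samples)
    return {x: (counts[x] + k) / (n + k * len(domain)) for x in domain}

# Example draws r, r, b over the domain {r, b}:
probs = laplace_estimate(["r", "r", "b"], domain=["r", "b"], k=1)
# P(r) = (2 + 1) / (3 + 2) = 0.6,  P(b) = (1 + 1) / (3 + 2) = 0.4
```

With k = 0 this reduces to the unsmoothed empirical estimate; larger k pulls the distribution toward uniform, which is why k acts as the strength of the prior.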
CALCULUS REVIEW SECTIONS
➤ Session 1: today; 6-7:30 pm, Soda 405: single variable calculus
➤ Session 2: today; 7:30-9 pm, Soda 405: identical content to session 1
➤ Session 3: tomorrow; 6-7:30 pm, Soda 380: multivariable calculus
➤ Session 4: tomorrow; 7:30-9 pm, Soda 380: identical content to session 3
UPCOMING DEADLINES
➤ Project 5 due today @ 5pm
➤ HW 6 due Wednesday @ 11:59 pm
➤ Project 6 due Sunday @ 5pm
➤ Final Exam next Thursday
BINARY PERCEPTRONS
Linear Classifiers
▪ Inputs are feature values
▪ Each feature has a weight
▪ Sum is the activation
▪ If the activation is:
  ▪ Positive, output +1
  ▪ Negative, output -1
[Diagram: features f1, f2, f3 weighted by w1, w2, w3, summed (Σ), then tested > 0?]
BINARY PERCEPTRONS
Learning: Binary Perceptron
▪ Start with weights = 0
▪ For each training instance:
  ▪ Classify with current weights
  ▪ If correct (i.e., y = y*), no change!
  ▪ If wrong: adjust the weight vector by adding or subtracting the feature vector. Subtract if y* is -1.
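The learning rule above can be sketched as follows (the function name and toy data are illustrative):

```python
def perceptron_train(data, n_features, epochs=10):
    """Binary perceptron. data: list of (feature_vector, label), label in {+1, -1}."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for f, y_star in data:
            activation = sum(wi * fi for wi, fi in zip(w, f))
            y = 1 if activation > 0 else -1      # classify with current weights
            if y != y_star:                      # wrong: add/subtract the feature vector
                w = [wi + y_star * fi for wi, fi in zip(w, f)]
    return w

# Toy separable data; feature 0 is a bias feature that is always 1.
data = [([1, 0, 0], -1), ([1, 0, 1], -1), ([1, 1, 0], -1), ([1, 1, 1], +1)]
w = perceptron_train(data, n_features=3)
```

Note that the update w ← w + y*·f is exactly "add if y* is +1, subtract if y* is -1"; on separable data the loop stops changing w once every example is classified correctly.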
MULTICLASS PERCEPTRONS
Multiclass Decision Rule
▪ If we have multiple classes:
  ▪ A weight vector for each class
  ▪ Score (activation) of a class y
  ▪ Prediction: highest score wins
▪ Binary = multiclass where the negative class has weight zero
Learning: Multiclass Perceptron
▪ Start with all weights = 0
▪ Pick up training examples one by one
▪ Predict with current weights
▪ If correct, no change!
▪ If wrong: lower the score of the wrong answer, raise the score of the right answer
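A minimal sketch of this learning loop, assuming one weight vector per class and ties broken by class order (the 3-class toy data is illustrative):

```python
def multiclass_perceptron_train(data, n_features, classes, epochs=10):
    """Multiclass perceptron. data: list of (feature_vector, label)."""
    w = {y: [0.0] * n_features for y in classes}   # a weight vector per class
    for _ in range(epochs):
        for f, y_star in data:
            # predict with current weights: highest score wins
            y = max(classes, key=lambda c: sum(wi * fi for wi, fi in zip(w[c], f)))
            if y != y_star:
                # lower the score of the wrong answer, raise the right answer
                w[y] = [wi - fi for wi, fi in zip(w[y], f)]
                w[y_star] = [wi + fi for wi, fi in zip(w[y_star], f)]
    return w

classes = ["a", "b", "c"]
data = [([1, 2, 0], "a"), ([1, 0, 2], "b"), ([1, -2, -2], "c")]
w = multiclass_perceptron_train(data, n_features=3, classes=classes)
```

The binary perceptron falls out as the special case where the negative class keeps weight zero, so only the sign of the positive class's score matters.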
OTHER CLASSIFIERS DISCUSSED
➤ Support Vector Machines
Support Vector Machines
▪ Maximizing the margin: good according to intuition, theory, practice
▪ Only support vectors matter; other training examples are ignorable
▪ Support vector machines (SVMs) find the separator with max margin
▪ Basically, SVMs are MIRA where you optimize over all examples at once
[Figure: MIRA separator vs. max-margin SVM separator]
OTHER CLASSIFIERS DISCUSSED
➤ Nearest Neighbors
Parametric/Non-Parametric
▪ Parametric models:
  ▪ Fixed set of parameters
  ▪ More data means better settings
▪ Non-parametric models:
  ▪ Complexity of the classifier increases with data
  ▪ Better in the limit, often worse in the non-limit
▪ (K)NN is non-parametric
[Figure: truth vs. nearest-neighbor fits with 2, 10, 100, and 10000 examples]
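A (K)NN classifier can be sketched in a few lines (names and data are illustrative). Note there are no learned parameters at all, only the stored training set, which is why the classifier's complexity grows with the data:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbor classification: majority vote among the k closest points."""
    def dist2(a, b):                      # squared Euclidean distance
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(train, key=lambda pt: dist2(pt[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((0, 0), "red"), ((1, 0), "red"), ((0, 1), "red"),
         ((5, 5), "blue"), ((6, 5), "blue"), ((5, 6), "blue")]
pred = knn_predict(train, (0.5, 0.5))     # all 3 nearest neighbors are red → "red"
```

With more training examples the decision boundary tracks the truth more closely (better in the limit), but with only a handful of points it can be badly wrong (worse in the non-limit).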
WORKSHEET