
Page 1

Classification - Introduction to Supervised Learning

Ana Karina Fermin

ISEFAR
http://fermin.perso.math.cnrs.fr/

Page 2

Motivation

Credit Default, Credit Score, Bank Risk, Market Risk Management

Data: Client profile, client credit history...
Input: Client profile
Output: Credit risk

Page 3

Motivation

Marketing: advertisement, recommendation...

Data: User profile, web site history...
Input: User profile, current web page
Output: Advertisement with price, recommendation...

Page 4

Motivation

Spam detection (Text classification)

Data: Email collection
Input: Email
Output: Spam or No Spam

Page 5

Motivation

Face Detection

Data: Annotated database of images
Input: Sub-window in the image
Output: Presence or absence of a face...

Page 6

Motivation

Number Recognition

Data: Annotated database of images (each image is represented by a vector of 28 × 28 = 784 pixel intensities)
Input: Image
Output: Corresponding number

Page 7

Machine Learning

A definition by Tom Mitchell (http://www.cs.cmu.edu/~tom/):
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

Page 8

Supervised Learning

Supervised Learning Framework

Input measurement: $X = (X^{(1)}, X^{(2)}, \dots, X^{(d)}) \in \mathcal{X}$
Output measurement: $Y \in \mathcal{Y}$
$(X, Y) \sim \mathbb{P}$, with $\mathbb{P}$ unknown.
Training data: $\mathcal{D}_n = \{(X_1, Y_1), \dots, (X_n, Y_n)\}$ (i.i.d. $\sim \mathbb{P}$)
Often $X \in \mathbb{R}^d$ and $Y \in \{-1, 1\}$ (classification), or $X \in \mathbb{R}^d$ and $Y \in \mathbb{R}$ (regression).

A classifier is a function in $\mathcal{F} = \{f : \mathcal{X} \to \mathcal{Y} \text{ measurable}\}$.

Goal

Construct a good classifier $\hat{f}$ from the training data.

Need to specify the meaning of "good".
Formally, classification and regression are the same problem!
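To make the framework concrete, here is a minimal sketch in R (the language used for the course's numerical experiments) that simulates a training set $\mathcal{D}_n$ with $X \in \mathbb{R}^2$ and $Y \in \{-1, 1\}$; the two-Gaussian choice of $\mathbb{P}$ is arbitrary and purely illustrative.

```r
# Minimal sketch: simulate a training set Dn = {(X_i, Y_i)} with
# X in R^2 and Y in {-1, +1}, drawn i.i.d. from a toy distribution P
# (two Gaussian clouds, an arbitrary illustrative choice).
set.seed(42)
n <- 200
Y <- ifelse(runif(n) < 0.5, -1L, 1L)           # balanced labels
X <- matrix(rnorm(2 * n), ncol = 2) + 1.5 * Y  # class-dependent mean shift
Dn <- data.frame(X1 = X[, 1], X2 = X[, 2], Y = factor(Y))
head(Dn)
```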

Page 9

Loss and Probabilistic Framework

Loss function

A loss function $\ell(f(x), y)$ measures how well $f(x)$ "predicts" $y$.
Examples:
Prediction loss: $\ell(Y, f(X)) = \mathbf{1}_{Y \neq f(X)}$
Quadratic loss: $\ell(Y, f(X)) = |Y - f(X)|^2$

Risk of a generic classifier

The risk is measured as the average loss for a new couple:

$$R(f) = \mathbb{E}[\ell(Y, f(X))] = \mathbb{E}_X\big[\mathbb{E}_{Y|X}[\ell(Y, f(X))]\big]$$

Examples:
Prediction loss: $\mathbb{E}[\ell(Y, f(X))] = \mathbb{P}\{Y \neq f(X)\}$
Quadratic loss: $\mathbb{E}[\ell(Y, f(X))] = \mathbb{E}\big[|Y - f(X)|^2\big]$

Beware: as $\hat{f}$ depends on $\mathcal{D}_n$, $R(\hat{f})$ is a random variable!
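Continuing the sketch above, both example losses have empirical counterparts that estimate the corresponding risks; the classifier used here (the sign of $X^{(1)} + X^{(2)}$) is a hypothetical rule chosen only for illustration.

```r
# Empirical counterparts of the two example losses for a fixed,
# hypothetical classifier: predict the sign of X1 + X2.
f <- function(x1, x2) ifelse(x1 + x2 >= 0, 1L, -1L)

pred <- f(Dn$X1, Dn$X2)
y    <- as.integer(as.character(Dn$Y))

mean(pred != y)     # empirical 0/1 loss, estimates P(Y != f(X))
mean((y - pred)^2)  # empirical quadratic loss, estimates E|Y - f(X)|^2
```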

Page 10

Supervised Learning

Experience, Task and Performance measure

Training data: $\mathcal{D}_n = \{(X_1, Y_1), \dots, (X_n, Y_n)\}$ (i.i.d. $\sim \mathbb{P}$)
Predictor: $f : \mathcal{X} \to \mathcal{Y}$, measurable
Cost/loss function: $\ell(f(X), Y)$ measures how well $f(X)$ "predicts" $Y$
Risk: $R(f) = \mathbb{E}[\ell(Y, f(X))]$
Often $\ell(f(X), Y) = |f(X) - Y|^2$ or $\ell(f(X), Y) = \mathbf{1}_{Y \neq f(X)}$

Goal

Learn a rule to construct a classifier $\hat{f} \in \mathcal{F}$ from the training data $\mathcal{D}_n$ such that the risk $R(\hat{f})$ is small on average or with high probability with respect to $\mathcal{D}_n$.

Page 11

Machine Learning

Goal

Learn a rule to construct a classifier $\hat{f} \in \mathcal{F}$ from the training data $\mathcal{D}_n$ such that the risk $R(\hat{f})$ is small on average or with high probability with respect to $\mathcal{D}_n$.

Canonical example: Empirical Risk Minimizer

One restricts $f$ to a subset of functions $\mathcal{S} = \{f_\theta, \theta \in \Theta\}$.
One replaces the minimization of the average loss by the minimization of the empirical loss:

$$\hat{f} = f_{\hat{\theta}} = \operatorname*{argmin}_{f_\theta,\, \theta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} \ell(Y_i, f_\theta(X_i))$$

Examples:
Linear regression
Linear discrimination with $\mathcal{S} = \{x \mapsto \operatorname{sign}(\beta^T x + \beta_0) : \beta \in \mathbb{R}^d, \beta_0 \in \mathbb{R}\}$ (see the sketch below)
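As an illustration of empirical risk minimization over the linear discrimination family (not code from the course), here is a brute-force grid search over a unit-norm direction and an offset, reusing the simulated Dn from the earlier sketch.

```r
# ERM sketch: minimize the empirical 0/1 loss over the linear family
# f(x) = sign(beta^T x + beta0), with beta restricted to unit vectors
# (cos a, sin a), via a simple grid search over (a, beta0).
erm_linear <- function(X, y, n_grid = 100) {
  angles  <- seq(0, pi, length.out = n_grid)
  offsets <- seq(-4, 4, length.out = n_grid)
  best <- list(risk = Inf)
  for (a in angles) {
    beta  <- c(cos(a), sin(a))
    score <- X %*% beta
    for (b0 in offsets) {
      risk <- mean(sign(score + b0) != y)   # empirical 0/1 risk
      if (risk < best$risk) best <- list(risk = risk, beta = beta, beta0 = b0)
    }
  }
  best
}

X <- as.matrix(Dn[, c("X1", "X2")])
y <- as.integer(as.character(Dn$Y))
fit <- erm_linear(X, y)
fit$risk   # training error of the empirical risk minimizer
```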

Page 12

Example: TwoClass Dataset

Synthetic dataset:
Two features/covariates.
Two classes.

Dataset from Applied Predictive Modeling, M. Kuhn and K. Johnson, Springer.
Numerical experiments with R and the caret package.
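For reference, a sketch of how such a two-class dataset might be loaded and fitted with caret; the object names twoClassData, predictors, and classes are assumed to come from the AppliedPredictiveModeling package and should be checked.

```r
# Sketch: load the book's two-class data and fit a linear rule with caret.
# Assumes the AppliedPredictiveModeling package exposes `twoClassData`
# (objects `predictors` and `classes`); adjust the names if they differ.
library(AppliedPredictiveModeling)
library(caret)

data(twoClassData)                      # loads `predictors` and `classes`
fit <- train(x = predictors, y = classes,
             method = "glm",            # logistic regression: a linear rule
             trControl = trainControl(method = "cv", number = 10))
fit
```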

Page 13

Example: Linear Discrimination

Page 14

Example: More Complex Model

Page 15

Under-fitting / Over-fitting Issue

Page 16

Under-fitting / Over-fitting Issue

Page 17

Over-fitting Issue

Error behaviour

The learning/training error (the error made on the learning/training set) decays when the complexity of the model increases.
The behaviour is quite different when the error is computed on new observations (generalization error).

Overfitting for complex models: the learned parameters are too specific to the learning set!
This is the general situation! (Think of polynomial fit...)
Need to use another criterion than the training error!
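The polynomial-fit remark can be checked directly. Here is a small R sketch (toy data; the sample sizes and degrees are arbitrary choices) contrasting training and test mean squared errors as the degree grows.

```r
# Train/test gap for polynomial fits of increasing degree on toy data:
# the training MSE keeps decreasing, the test MSE eventually increases.
set.seed(1)
x_tr <- runif(30, -1, 1);  y_tr <- sin(3 * x_tr) + rnorm(30, sd = 0.3)
x_te <- runif(300, -1, 1); y_te <- sin(3 * x_te) + rnorm(300, sd = 0.3)

for (d in c(1, 3, 10, 20)) {
  fit  <- lm(y_tr ~ poly(x_tr, d))   # orthogonal polynomial basis
  e_tr <- mean(residuals(fit)^2)
  e_te <- mean((y_te - predict(fit, newdata = data.frame(x_tr = x_te)))^2)
  cat(sprintf("degree %2d: train MSE %.3f, test MSE %.3f\n", d, e_tr, e_te))
}
```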

Page 18

Binary Classification Loss Issue

Empirical Risk Minimizer

$$\hat{f} = \operatorname*{argmin}_{f \in \mathcal{S}} \frac{1}{n} \sum_{i=1}^{n} \ell^{0/1}(Y_i, f(X_i))$$

Classification loss: $\ell^{0/1}(y, f(x)) = \mathbf{1}_{y \neq f(x)}$

Not convex and not smooth!
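One way to see the difficulty: as a function of its parameters, the empirical 0/1 risk is piecewise constant. A sketch using the simulated Dn from earlier, with the hypothetical one-parameter sub-family $f_b(x) = \operatorname{sign}(x^{(1)} - b)$ chosen so the risk can be plotted against a single parameter.

```r
# The empirical 0/1 risk is piecewise constant in the parameters,
# hence neither convex nor smooth. Risk of f_b(x) = sign(x1 - b)
# as the threshold b varies, on the simulated Dn.
y <- as.integer(as.character(Dn$Y))
b_grid <- seq(-4, 4, length.out = 400)
risk_b <- sapply(b_grid, function(b) mean(ifelse(Dn$X1 >= b, 1L, -1L) != y))
plot(b_grid, risk_b, type = "s",
     xlab = "threshold b", ylab = "empirical 0/1 risk")
```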

Page 19

Statistical Point of View: Ideal Solution and Estimation

The best solution $f^*$ (which is independent of $\mathcal{D}_n$) is

$$f^* = \operatorname*{arg\,min}_{f \in \mathcal{F}} R(f) = \operatorname*{arg\,min}_{f \in \mathcal{F}} \mathbb{E}[\ell(Y, f(X))] = \operatorname*{arg\,min}_{f \in \mathcal{F}} \mathbb{E}_X\big[\mathbb{E}_{Y|X}[\ell(Y, f(X))]\big]$$

Bayes Predictor (ideal solution)

In binary classification with the 0-1 loss:

$$f^*(X) = \begin{cases} +1 & \text{if } \mathbb{P}\{Y = +1 \mid X\} \geq \mathbb{P}\{Y = -1 \mid X\} \\ -1 & \text{otherwise} \end{cases}$$

Issue: the explicit solution requires knowing $\mathbb{E}[Y \mid X]$ for all values of $X$!
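To make the Bayes predictor concrete, here is a toy model (a hypothetical choice, not from the slides) where the conditional probabilities are known by construction: if $X \mid Y = y \sim \mathcal{N}(y, 1)$ with balanced classes, then $f^*(x) = \operatorname{sign}(x)$, and its risk can be checked by simulation.

```r
# Toy model with an explicit Bayes predictor: X | Y = y ~ N(y, 1),
# P(Y = +1) = P(Y = -1) = 1/2. Then P(Y = +1 | X = x) >= 1/2 exactly
# when x >= 0, so f*(x) = sign(x).
set.seed(7)
n <- 1e5
y <- ifelse(runif(n) < 0.5, -1, 1)
x <- rnorm(n, mean = y)

f_star <- function(x) ifelse(x >= 0, 1, -1)
mean(f_star(x) != y)   # Monte Carlo estimate of the Bayes risk
pnorm(-1)              # exact Bayes risk for this model: P(N(0,1) < -1)
```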

Page 20

Conditional prob. and Bayes Predictor

Page 21

Machine Learning

Page 22

Methods (in the Master ISEFAR):

1. k-Nearest Neighbors (see the sketch below)
2. Generative Modeling (Naive Bayes, LDA, QDA)
3. Logistic Modeling
4. SVM
5. Tree-Based Methods (M. Zetlaoui course, Apprentissage)
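As a taste of the first method in the list, a minimal k-nearest-neighbors fit with caret on the simulated Dn from the earlier sketch; the grid of k values is an arbitrary choice.

```r
# Sketch: k-nearest-neighbors with caret on the simulated Dn,
# selecting k by 10-fold cross-validation.
library(caret)
knn_fit <- train(Y ~ X1 + X2, data = Dn,
                 method = "knn",
                 tuneGrid = data.frame(k = c(1, 5, 15, 51)),
                 trControl = trainControl(method = "cv", number = 10))
knn_fit$bestTune   # the value of k selected by cross-validation
```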

Page 23

Bibliography

T. Hastie, R. Tibshirani, and J. Friedman (2009). The Elements of Statistical Learning. Springer Series in Statistics.

G. James, D. Witten, T. Hastie, and R. Tibshirani (2013). An Introduction to Statistical Learning with Applications in R. Springer Texts in Statistics.

B. Schölkopf and A. Smola (2002). Learning with Kernels. The MIT Press.