Machine Learning Overview
Tamara Berg, Language and Vision
Reminders
• HW1 – due Feb 16
• Discussion leaders for Feb 17/24 should schedule a meeting with me soon
Types of ML algorithms
• Unsupervised – Algorithms operate on unlabeled examples
• Supervised – Algorithms operate on labeled examples
• Semi/Partially-supervised – Algorithms combine both labeled and unlabeled examples
Unsupervised Learning
K-means clustering
• Want to minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k:

  D(X, M) = Σ_k Σ_{point i in cluster k} (x_i - m_k)^2

Algorithm:
• Randomly initialize K cluster centers
• Iterate until convergence:
  • Assign each data point to the nearest center
  • Recompute each cluster center as the mean of all points assigned to it

source: Svetlana Lazebnik
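A minimal Python/NumPy sketch of the k-means loop just described; the toy 2-D data, the choice K = 3, and the fixed iteration cap are illustrative assumptions, not part of the slides.

    import numpy as np

    def kmeans(X, K, n_iters=100, seed=0):
        # Minimize the sum of squared distances between points and their nearest centers.
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=K, replace=False)]   # random initialization
        for _ in range(n_iters):
            # Assign each data point to the nearest center
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recompute each center as the mean of the points assigned to it
            new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                    else centers[k] for k in range(K)])
            if np.allclose(new_centers, centers):   # converged
                break
            centers = new_centers
        return centers, labels

    # Toy data: three 2-D blobs
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in ([0, 0], [5, 5], [0, 5])])
    centers, labels = kmeans(X, K=3)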
Supervised Learning
Slides from Dan Klein
Example: Image classification
Input: an image; desired output: its category label (apple, pear, tomato, cow, dog, horse).
Slide credit: Svetlana Lazebnik
Slide from Dan Klein: http://yann.lecun.com/exdb/mnist/index.html
Example: Seismic data
[Figure: surface wave magnitude vs. body wave magnitude, separating nuclear explosions from earthquakes]
Slide credit: Svetlana Lazebnik
Slide from Dan Klein
The basic classification framework

y = f(x)   (x: input, f: classification function, y: output)

• Learning: given a training set of labeled examples {(x1,y1), …, (xN,yN)}, estimate the parameters of the prediction function f
• Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x)

Slide credit: Svetlana Lazebnik
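As a concrete (if toy) illustration of the learning/inference split, here is a hedged sketch using scikit-learn; the logistic-regression classifier and the made-up fruit features are my own choices, not from the slides.

    from sklearn.linear_model import LogisticRegression

    # Learning: estimate the parameters of f from labeled examples {(x_i, y_i)}
    X_train = [[5.0, 3.5], [6.1, 2.8], [4.9, 3.4], [6.7, 2.9]]   # made-up feature vectors
    y_train = ["apple", "pear", "apple", "pear"]                  # their labels
    f = LogisticRegression().fit(X_train, y_train)

    # Inference: apply f to a never-before-seen test example x
    x_test = [[5.8, 3.0]]
    print(f.predict(x_test))   # predicted value y = f(x)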
Some ML classification methods
• Nearest neighbor: Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; …
• Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; …
• Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; …
• Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; …
Slide credit: Antonio Torralba
Example: Training and testing
• Key challenge: generalization to unseen examples
Training set (labels known) Test set (labels unknown)
Slide credit: Svetlana Lazebnik

Slide credit: Dan Klein
Classification by Nearest Neighbor
Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in?
Slide from Min-Yen Kan
Classification by Nearest Neighbor
Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find most similar doc)
Slide from Min-Yen Kan
Classification by kNN
Classify the test document as the majority class of the k documents "nearest" to the query document.
Slide from Min-Yen Kan
Classification by kNN
What are the features? What's the training data? Testing data? Parameters?
Slides from Min-Yen Kan
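A rough Python sketch of kNN document classification over word-count vectors, in the spirit of the Min-Yen Kan slides; the tiny "documents", the choice k = 3, and cosine similarity as the vector-similarity measure are all illustrative assumptions. Note that the vector space has one dimension per vocabulary word.

    import numpy as np
    from collections import Counter
    from sklearn.feature_extraction.text import CountVectorizer

    train_docs = ["stock market trading gains", "bank shares market fall",
                  "match goal score team win", "coach team season playoff"]
    train_labels = ["finance", "finance", "sports", "sports"]

    vec = CountVectorizer()
    X = vec.fit_transform(train_docs).toarray().astype(float)

    def knn_classify(query, k=3):
        q = vec.transform([query]).toarray().astype(float)[0]
        # Cosine similarity between the query vector and every training document
        sims = X @ q / (np.linalg.norm(X, axis=1) * (np.linalg.norm(q) + 1e-12))
        nearest = np.argsort(-sims)[:k]
        # Majority class among the k "nearest" documents
        return Counter(train_labels[i] for i in nearest).most_common(1)[0][0]

    print(knn_classify("the team wins the match"))   # -> "sports"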
NN for vision
• Fast Pose Estimation with Parameter Sensitive Hashing. Shakhnarovich, Viola, Darrell.
• J. Hays and A. Efros, IM2GPS: estimating geographic information from a single image, CVPR 2008.
Decision tree classifier
Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Slide credit: Svetlana Lazebnik
Decision tree classifier
Slide credit: Svetlana Lazebnik
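A possible sketch of the restaurant-waiting problem with scikit-learn's DecisionTreeClassifier; the handful of training rows and their WillWait labels are invented for illustration (the slides do not list the training data), and the ordinal encoding of the categorical attributes is one simple choice.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.preprocessing import OrdinalEncoder

    # A few invented examples using the attributes listed above
    cols = ["Alternate", "Bar", "FriSat", "Hungry", "Patrons", "Price",
            "Raining", "Reservation", "Type", "WaitEstimate"]
    rows = pd.DataFrame([
        ["Yes", "No", "No", "Yes", "Some", "$$$", "No", "Yes", "French", "0-10"],
        ["Yes", "No", "No", "Yes", "Full", "$", "No", "No", "Thai", "30-60"],
        ["No", "Yes", "No", "No", "Some", "$", "No", "No", "Burger", "0-10"],
        ["Yes", "No", "Yes", "No", "Full", "$$$", "No", "Yes", "French", ">60"],
    ], columns=cols)
    will_wait = ["Yes", "No", "Yes", "No"]        # invented labels

    enc = OrdinalEncoder()                        # map categorical attributes to integers
    X = enc.fit_transform(rows)
    tree = DecisionTreeClassifier(criterion="entropy").fit(X, will_wait)

    query = enc.transform(rows.iloc[[0]])         # classify a (seen) example
    print(tree.predict(query))                    # -> ['Yes']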
Linear classifier
• Find a linear function to separate the classes
f(x) = sgn(w1 x1 + w2 x2 + … + wD xD) = sgn(w · x)
Slide credit: Svetlana Lazebnik
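A tiny NumPy illustration of the decision rule f(x) = sgn(w · x); the particular weight vector and test points are arbitrary.

    import numpy as np

    w = np.array([2.0, -1.0])            # weights w1, w2 (arbitrary)

    def f(x):
        return np.sign(w @ x)            # sgn(w1*x1 + w2*x2)

    print(f(np.array([1.0, 1.0])))       # +1.0  (2*1 - 1*1 > 0)
    print(f(np.array([-1.0, 2.0])))      # -1.0  (2*(-1) - 1*2 < 0)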
Discriminant Function
• It can be an arbitrary function of x, such as:
  – Nearest Neighbor
  – Decision Tree
  – Linear functions: g(x) = wT x + b
Slide credit: Jinwei Gu
Linear Discriminant Function
• g(x) is a linear function:

  g(x) = wT x + b

• The decision boundary wT x + b = 0 is a hyperplane in the feature space; points with wT x + b > 0 are classified as +1, and points with wT x + b < 0 as -1.
[Figure: +1 and -1 points in the (x1, x2) plane with the separating hyperplane]
Slide credit: Jinwei Gu
Linear Discriminant Function
• How would you classify these points using a linear discriminant function in order to minimize the error rate?
• Infinite number of answers!
• Which one is the best?
[Figure: +1 and -1 points in the (x1, x2) plane with several possible separating lines]
Slide credit: Jinwei Gu
Large Margin Linear Classifier
• The linear discriminant function (classifier) with the maximum margin is the best.
• The margin is the width by which the boundary could be increased before hitting a data point (the "safe zone").
• Why is it the best? Strong generalization ability.
[Figure: margin around the decision boundary in the (x1, x2) plane]
Linear SVM. Slide credit: Jinwei Gu
Large Margin Linear Classifier
[Figure: decision boundary wT x + b = 0 with margin boundaries wT x + b = 1 and wT x + b = -1; the points x+ and x- lying on the margin boundaries are the support vectors]
Slide credit: Jinwei Gu
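A brief scikit-learn sketch of fitting a max-margin linear classifier; the toy points, the linear-kernel SVC, and the large C (to approximate a hard margin) are illustrative choices rather than anything specified on the slides.

    import numpy as np
    from sklearn.svm import SVC

    # Toy linearly separable data: +1 in the upper right, -1 in the lower left
    X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],
                  [0.0, 0.0], [0.5, 1.0], [1.0, 0.5]])
    y = np.array([1, 1, 1, -1, -1, -1])

    clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin

    w, b = clf.coef_[0], clf.intercept_[0]
    print("decision boundary: %.2f x1 + %.2f x2 + %.2f = 0" % (w[0], w[1], b))
    print("support vectors:")                      # the points on wT x + b = +/-1
    print(clf.support_vectors_)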
Boosting
• A simple algorithm for learning robust classifiers
  – Freund & Schapire, 1995
  – Friedman, Hastie, Tibshirani, 1998
• Provides an efficient algorithm for sparse visual feature selection
  – Tieu & Viola, 2000
  – Viola & Jones, 2003
• Easy to implement; doesn't require external optimization tools.
Slide credit: Antonio Torralba
Boosting
• Defines a classifier using an additive model:

  H(x) = α1 h1(x) + α2 h2(x) + α3 h3(x) + …

  where H(x) is the strong classifier, each ht(x) is a weak classifier applied to the features vector x, and each αt is a weight.
• We need to define a family of weak classifiers; each ht is chosen from that family.
Slide credit: Antonio Torralba
Adaboost
Slide credit: Antonio Torralba
Boosting
• It is a sequential procedure. Each data point xt has a class label yt ∈ {+1, -1} and a weight wt = 1.
Slide credit: Antonio Torralba
Toy example
• The weak learners are from the family of lines.
• A weak learner h with p(error) = 0.5 is at chance.
• Each data point has a class label yt ∈ {+1, -1} and a weight wt = 1.
Slide credit: Antonio Torralba
Toy example
• This one seems to be the best.
• This is a "weak classifier": it performs slightly better than chance.
• Each data point has a class label yt ∈ {+1, -1} and a weight wt = 1.
Slide credit: Antonio Torralba
Toy example
• After each round, we update the weights: wt ← wt exp{-yt Ht}, so points that the current classifier gets wrong receive larger weights, and the next weak classifier concentrates on them.
• Each data point keeps its class label yt ∈ {+1, -1}.
Slide credit: Antonio Torralba
Toy example
• The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f1, f2, f3, f4.
Slide credit: Antonio Torralba
Adaboost
Slide credit: Antonio Torralba
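A compact from-scratch sketch of the AdaBoost loop in the toy example: decision stumps ("lines") as the family of weak classifiers, and weights updated multiplicatively so that misclassified points count more in the next round. The stump search, the number of rounds, and the data are all simplified assumptions.

    import numpy as np

    def best_stump(X, y, w):
        # Exhaustively pick the axis-aligned threshold with lowest weighted error
        best = None
        for dim in range(X.shape[1]):
            for thr in np.unique(X[:, dim]):
                for sign in (+1, -1):
                    pred = sign * np.where(X[:, dim] < thr, 1, -1)
                    err = np.sum(w * (pred != y))
                    if best is None or err < best[0]:
                        best = (err, dim, thr, sign)
        return best

    def adaboost(X, y, n_rounds=4):
        w = np.ones(len(y)) / len(y)                 # initial weights
        stumps = []
        for _ in range(n_rounds):
            err, dim, thr, sign = best_stump(X, y, w)
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))   # weight of this weak classifier
            pred = sign * np.where(X[:, dim] < thr, 1, -1)
            w = w * np.exp(-alpha * y * pred)        # wt <- wt exp{-yt * alpha * ht(xt)}
            w = w / w.sum()
            stumps.append((alpha, dim, thr, sign))
        return stumps

    def strong_classifier(stumps, X):
        # Combination of all the weak classifiers, weighted by alpha
        H = sum(a * s * np.where(X[:, d] < t, 1, -1) for a, d, t, s in stumps)
        return np.sign(H)

    X = np.array([[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]], dtype=float)
    y = np.array([1, 1, 1, -1, -1, -1])
    print(strong_classifier(adaboost(X, y), X))      # reproduces the training labels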
Semi-Supervised Learning
Supervised learning has many successes:
• recognize speech
• steer a car
• classify documents
• classify proteins
• recognize faces, objects in images
• ...
Slide Credit: Avrim Blum
However, for many problems, labeled data can be rare or expensive: you need to pay someone to do it, it may require special testing, etc. Unlabeled data is much cheaper.

Examples: speech, images, medical outcomes, customer modeling, protein sequences, web pages. [From Jerry Zhu]

Can we make use of cheap unlabeled data?

Slide Credit: Avrim Blum
Semi-Supervised Learning
• Can we use unlabeled data to augment a small labeled sample to improve learning?
• But unlabeled data is missing the most important info (the labels)! But maybe it still has useful regularities that we can use.
Slide Credit: Avrim Blum
Method 1: EM

How to use unlabeled data
• One way is to use the EM (Expectation-Maximization) algorithm.
• The EM algorithm is a popular iterative algorithm for maximum likelihood estimation in problems with missing data.
• The EM algorithm consists of two steps:
  – Expectation step: fill in the missing data.
  – Maximization step: calculate a new maximum a posteriori estimate for the parameters.
Example Algorithm
1. Train a classifier with only the labeled documents.
2. Use it to probabilistically classify the unlabeled documents.
3. Use ALL the documents to train a new classifier.
4. Iterate steps 2 and 3 to convergence.
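Here is a hedged sketch of those four steps using scikit-learn's multinomial Naive Bayes; treating the predicted class probabilities of the unlabeled documents as confidence-weighted soft labels is one simple way to realize the "probabilistically classify" step, and the tiny documents and labels are invented.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    labeled_docs   = ["cheap meds buy now", "meeting agenda for monday"]
    labeled_y      = np.array([1, 0])                  # 1 = spam, 0 = ham (invented)
    unlabeled_docs = ["buy cheap pills now", "agenda attached for the meeting"]

    vec = CountVectorizer()
    X_lab = vec.fit_transform(labeled_docs).toarray()
    X_unl = vec.transform(unlabeled_docs).toarray()

    # 1. Train a classifier with only the labeled documents.
    clf = MultinomialNB().fit(X_lab, labeled_y)

    prev = None
    for _ in range(20):                                # 4. iterate steps 2 and 3
        # 2. Probabilistically classify the unlabeled documents.
        proba = clf.predict_proba(X_unl)
        soft_y, conf = proba.argmax(axis=1), proba.max(axis=1)
        # 3. Use ALL the documents to train a new classifier,
        #    weighting each unlabeled document by the confidence of its soft label.
        X_all = np.vstack([X_lab, X_unl])
        y_all = np.concatenate([labeled_y, soft_y])
        w_all = np.concatenate([np.ones(len(labeled_y)), conf])
        clf = MultinomialNB().fit(X_all, y_all, sample_weight=w_all)
        pred = clf.predict(X_unl)
        if prev is not None and np.array_equal(pred, prev):
            break                                      # convergence: labels stopped changing
        prev = pred

    print(clf.predict(X_unl))                          # -> [1 0] for these toy inputs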
Method 2: Co-Training
Co-training [Blum & Mitchell '98]
Many problems have two different sources of info ("features"/"views") you can use to determine the label.
E.g., classifying faculty webpages: can use words on the page or words on links pointing to the page.
[Figure: a link "My Advisor" pointing to Prof. Avrim Blum's page; x = link info & text info, x1 = link info, x2 = text info]
Slide Credit: Avrim Blum
Co-training
Idea: Use the small labeled sample to learn initial rules.
– E.g., "my advisor" pointing to a page is a good indicator that it is a faculty home page.
– E.g., "I am teaching" on a page is a good indicator that it is a faculty home page.
Slide Credit: Avrim Blum
Co-training
• Then look for unlabeled examples ⟨x1, x2⟩ where one view is confident and the other is not, and have it label the example for the other.
• Train 2 classifiers, one on each type of info, using each to help train the other.
Slide Credit: Avrim Blum
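A schematic co-training loop under the same assumptions: two views per example (say link info x1 and text info x2), one classifier per view, and any unlabeled example one view labels confidently while the other does not is added to the other view's training set. The logistic-regression classifiers, the 0.9 confidence threshold, and the random toy features are all illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def cotrain(X1, X2, y, X1u, X2u, rounds=5, thresh=0.9):
        # X1/X1u: view 1 (link info), X2/X2u: view 2 (text info); row i of each is example <x1, x2>
        L1_X, L1_y = list(X1), list(y)         # training set for the view-1 classifier
        L2_X, L2_y = list(X2), list(y)         # training set for the view-2 classifier
        unlabeled = list(range(len(X1u)))
        for _ in range(rounds):
            c1 = LogisticRegression().fit(np.array(L1_X), L1_y)
            c2 = LogisticRegression().fit(np.array(L2_X), L2_y)
            still = []
            for i in unlabeled:
                p1 = c1.predict_proba([X1u[i]])[0]
                p2 = c2.predict_proba([X2u[i]])[0]
                if p1.max() >= thresh and p2.max() < thresh:
                    # view 1 is confident, view 2 is not: label the example for view 2
                    L2_X.append(X2u[i]); L2_y.append(c1.classes_[p1.argmax()])
                elif p2.max() >= thresh and p1.max() < thresh:
                    L1_X.append(X1u[i]); L1_y.append(c2.classes_[p2.argmax()])
                else:
                    still.append(i)
            unlabeled = still
        return c1, c2

    # Invented two-view data: 3 faculty pages (label 1) and 3 non-faculty pages (label 0)
    rng = np.random.default_rng(0)
    shift = np.array([[2, 2]] * 3 + [[-2, -2]] * 3)
    X1 = rng.normal(size=(6, 2)) + shift
    X2 = rng.normal(size=(6, 2)) + shift
    y = [1, 1, 1, 0, 0, 0]
    c1, c2 = cotrain(X1, X2, y, rng.normal(size=(10, 2)), rng.normal(size=(10, 2)))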
Co-training vs. EM
• Co-training splits features, EM does not.
• Co-training incrementally uses the unlabeled data.
• EM probabilistically labels all the data at each round; EM iteratively uses the unlabeled data.
Generative vs Discriminative
• Discriminative version: build a classifier to discriminate between monkeys and non-monkeys, i.e., model P(monkey | image).
• Generative version: build a model of the joint distribution P(image, monkey).
• We can use Bayes rule to compute P(monkey | image) if we know P(image, monkey).
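A small numerical illustration, with a made-up joint table, of how Bayes rule recovers P(monkey | image) from the generative model P(image, monkey).

    # Invented joint probabilities P(image, label) over two image "types"
    joint = {
        ("furry_image",  "monkey"):     0.20,
        ("furry_image",  "not_monkey"): 0.10,
        ("smooth_image", "monkey"):     0.05,
        ("smooth_image", "not_monkey"): 0.65,
    }

    def p_monkey_given_image(image):
        # Bayes rule: P(monkey | image) = P(image, monkey) / P(image),
        # where P(image) is obtained by summing the joint over the labels.
        p_image = joint[(image, "monkey")] + joint[(image, "not_monkey")]
        return joint[(image, "monkey")] / p_image

    print(p_monkey_given_image("furry_image"))    # 0.20 / 0.30 = 0.667
    print(p_monkey_given_image("smooth_image"))   # 0.05 / 0.70 = 0.071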