
Statistical Learning of Multi-View Face Detection

Microsoft Research Asia
Stan Li, Long Zhu, Zhen Qiu Zhang, Andrew Blake, Hong Jiang Zhang, Harry Shum

Presented by Derek Hoiem

Overview

Viola-Jones AdaBoost
FloatBoost Approach
Multi-View Face Detection
FloatBoost Results
FloatBoost vs. AdaBoost
FloatBoost Discussion

Face Detection Overview

[Diagram: a classifier labels each image window as Object or Non-Object]

Evaluate windows at all locations in many scales

Viola-Jones AdaBoost

Weak classifiers formed out of simple features

In sequential stages, features are selected and weak classifiers trained with emphasis on misclassified examples

Integral images and a cascaded classifier allow real-time face detection

Viola-Jones Features

For a 24 x 24 image: 190,800 semi-continuous features
Computed in constant time using the integral image
Weak classifiers consist of a threshold on a filter response

Feature types: Vertical, Horizontal, On-Off-On, Diagonal

Integral Image

[Diagram: two adjacent rectangles with integral-image values I1 through I8 at their corners]

y = I8 - I7 - I6 + I5 + I4 - I3 - I2 + I1
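As a concrete illustration (a minimal NumPy sketch, not the authors' code), the integral image turns any rectangle sum into four corner lookups, so a rectangle feature becomes a signed combination of corner values like the expression above:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[0:y, 0:x]; padded with a zero row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def horizontal_two_rect_feature(ii, x, y, w, h):
    """Left-half minus right-half rectangle sums (a 'Horizontal' feature);
    constant time regardless of rectangle size."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

# Example on a random 24 x 24 window:
window = np.random.randint(0, 256, (24, 24))
ii = integral_image(window)
value = horizontal_two_rect_feature(ii, x=2, y=3, w=4, h=6)
```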

Cascade of Classifiers

[Diagram: an input signal (image window) passes through Stage 1 (1 weak classifier), Stage 2 (5 weak classifiers), ..., Stage N (1200 weak classifiers); at each stage roughly 40% of windows are passed on and about 60% are rejected as Class 2 (Non-Face), so that about 99.999% of non-face windows are discarded in total and only windows surviving every stage are labeled Class 1 (Face)]
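A hedged sketch of how a single window flows through such a cascade; the stage representation and names are illustrative, not the paper's code. Cheap early stages reject most windows, and only the survivors pay for the later, larger stages:

```python
def evaluate_cascade(window, stages):
    """stages: list of (weak_classifiers, stage_threshold) pairs, where each
    weak classifier is a callable mapping a window to a real-valued vote.
    A window must pass every stage to be labeled Class 1 (Face)."""
    for weak_classifiers, stage_threshold in stages:
        score = sum(h(window) for h in weak_classifiers)
        if score < stage_threshold:
            return False   # early rejection: most non-face windows exit here
    return True            # survived all N stages
```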

Viola-Jones AdaBoost Algorithm

Strong classifier formed from weak classifiers:

At each stage, a new weak classifier is chosen to minimize a bound on the classification error (confidence weighted):

This gives the form for our weak classifier:
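In the standard confidence-rated (real) AdaBoost formulation these bullets describe, the strong classifier, the error bound, and the resulting weak-classifier form can be written as follows (a textbook statement rather than a transcription of the slide images, with w^{(m-1)} the example weights entering round m):

```latex
F_M(x) = \sum_{m=1}^{M} h_m(x), \qquad H_M(x) = \operatorname{sign}\big(F_M(x)\big)

J(F_M) = \mathbb{E}\big[\, e^{-y\, F_M(x)} \,\big]

h_m(x) = \frac{1}{2}\,\ln\frac{P\big(y = +1 \mid x,\, w^{(m-1)}\big)}{P\big(y = -1 \mid x,\, w^{(m-1)}\big)}
```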

Viola-Jones AdaBoost Algorithm
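The algorithm itself appears as a figure on the original slide. As an illustration only, here is a compact Python sketch of discrete AdaBoost with single-feature threshold weak classifiers, to make the select-and-reweight loop concrete; the candidate-threshold grid, the names, and the use of the discrete rather than confidence-rated update are simplifications, not the paper's implementation:

```python
import numpy as np

def train_adaboost(F, y, n_rounds):
    """F: (n_samples, n_features) precomputed feature responses; y in {-1, +1}.
    Returns the strong classifier as a list of (feature, threshold, polarity, alpha)."""
    n, d = F.shape
    w = np.full(n, 1.0 / n)                      # example weights
    strong = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # greedy search over all features
            # coarse candidate thresholds (the full Viola-Jones search is exhaustive)
            for thr in np.percentile(F[:, j], [10, 25, 50, 75, 90]):
                for pol in (+1, -1):
                    pred = np.where(F[:, j] >= thr, pol, -pol)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = float(np.clip(err, 1e-10, 1 - 1e-10))
        alpha = 0.5 * np.log((1 - err) / err)    # confidence weight of this round
        pred = np.where(F[:, j] >= thr, pol, -pol)
        w *= np.exp(-alpha * y * pred)           # emphasize misclassified examples
        w /= w.sum()
        strong.append((j, thr, pol, alpha))
    return strong

def predict(strong, f_row):
    """Sign of the weighted vote of the selected weak classifiers."""
    total = sum(alpha * (pol if f_row[j] >= thr else -pol)
                for j, thr, pol, alpha in strong)
    return 1 if total >= 0 else -1
```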

Viola-Jones AdaBoost: Pros and Cons

Pros: very fast; moderately high accuracy; simple implementation/concept

Cons: greedy search through feature space; highly constrained features; very high training time

FloatBoost

Weak classifiers formed out of simple features

In each stage, the weak classifier that reduces error most is added

In each stage, if any previously added classifier contributes to error reduction less than the latest addition, this classifier is removed

Result is a smaller feature set with the same classification accuracy

MS FloatBoost Features

For a 20 x 20 image: over 290,000 features (~500K?)
Computed in constant time using the integral image
Weak classifiers consist of a threshold on a filter response

[Figure: the Microsoft (FloatBoost) rectangle feature set compared with the Viola-Jones feature set]

FloatBoost Algorithm
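The algorithm appears as a figure on the original slide. Below is a hedged Python sketch of the floating-search selection loop described on the previous slide: greedily add the weak classifier that lowers a training cost most, then conditionally remove earlier choices whenever dropping one improves on the best cost previously recorded for that ensemble size. The names candidates, cost, and max_size are placeholders, not the paper's API:

```python
def floatboost_select(candidates, cost, max_size):
    """candidates: pool of weak classifiers; cost(subset): training cost of the
    ensemble built from that subset (lower is better). Returns a selected subset
    built by add-one / conditionally-remove-one (floating search) steps."""
    selected = []
    best_cost = {0: float("inf")}        # best cost seen for each ensemble size
    while len(selected) < max_size:
        # Forward step: greedily add the candidate that lowers the cost most.
        remaining = [h for h in candidates if h not in selected]
        if not remaining:
            break
        best_h = min(remaining, key=lambda h: cost(selected + [h]))
        selected.append(best_h)
        best_cost[len(selected)] = cost(selected)
        # Backward steps: drop any earlier choice whose removal beats the best
        # known cost at the smaller size; otherwise resume adding.
        while len(selected) > 1:
            weakest = min(selected,
                          key=lambda h: cost([g for g in selected if g is not h]))
            reduced = [g for g in selected if g is not weakest]
            if cost(reduced) < best_cost.get(len(reduced), float("inf")):
                selected = reduced
                best_cost[len(selected)] = cost(selected)
            else:
                break
    return selected
```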

FloatBoost Weak Classifiers

Can be portrayed as density estimation on single variables using average shifted histograms with weighted examples

Each weak classifier is a 2-bin histogram from weighted examples (see the sketch below)

Weights serve to eliminate overcounting due to dependent variables

Strong classifier is a combination of estimated weighted PDFs for the selected features
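A minimal sketch of one such histogram-based weak classifier, assuming feature values are binned and the classifier outputs half the log-ratio of the weighted class-conditional histograms; the bin count, smoothing constant, and function names are illustrative:

```python
import numpy as np

def fit_histogram_weak_classifier(feature_values, y, weights, n_bins=2, eps=1e-6):
    """Build a lookup-table weak classifier h(x) = 0.5 * ln(P_+ / P_-) per bin,
    where P_+ and P_- are weighted histograms of the feature for each class."""
    edges = np.histogram_bin_edges(feature_values, bins=n_bins)
    pos = np.histogram(feature_values[y == +1], bins=edges,
                       weights=weights[y == +1])[0] + eps
    neg = np.histogram(feature_values[y == -1], bins=edges,
                       weights=weights[y == -1])[0] + eps
    lut = 0.5 * np.log(pos / neg)          # confidence-rated output per bin

    def h(value):
        b = np.clip(np.searchsorted(edges, value, side="right") - 1, 0, n_bins - 1)
        return lut[b]
    return h
```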

Multi-View Face Detection: Head Rotations

In-Plane Rotations: -45 to 45 degrees

Out-of-Plane Rotations: -90 to 90 degrees; moderate nodding

Multi-View Face Detection: Detector Pyramid

Multi-View Face Detection: Merging Results

[Figure: detections from the Frontal, Right Side, and Left Side detectors are merged into a single final result]
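The merging rule itself is only shown as a figure; one plausible scheme (illustrative, not the paper's method) is greedy non-maximum suppression across the Frontal, Right Side, and Left Side channels, keeping the highest-scoring detection among overlapping boxes:

```python
def merge_detections(detections, iou_threshold=0.3):
    """detections: list of (box, score, view) from the view-specific detectors,
    where box = (x, y, w, h). Greedily keep the highest-scoring box and drop
    any overlapping box from the same or another view channel."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    kept = []
    for box, score, view in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score, view))
    return kept
```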

Multi-View Face Detection: Summary

Simple, rectangular features are used

FloatBoost selects and trains the weak classifiers

A cascade of strong classifiers makes up the overall detector

A coarse-to-fine evaluation is used to efficiently find a broad range of out-of-plane rotated faces (see the sketch below)
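A hedged sketch of that coarse-to-fine evaluation, assuming the detector pyramid is represented as a tree of view-range detectors; the layout and names are illustrative:

```python
def detect_multiview(window, pyramid):
    """pyramid: list of (detector, children) pairs; each detector is a callable
    returning True if the window may contain a face within its view range, and
    children is a (possibly empty) list of finer-view sub-detectors.
    Accept the window if some root-to-leaf chain of detectors accepts it."""
    for detector, children in pyramid:
        if not detector(window):
            continue                      # coarse rejection prunes this view branch
        if not children or detect_multiview(window, children):
            return True                   # reached and passed a finest-view detector
    return False
```

For example, the root could be a single full-view detector whose children are left-profile, frontal, and right-profile detectors; the paper's exact partition of view ranges may differ.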

Results: Frontal (MIT+CMU)

20x20 images
3000 original faces, 6000 total
100,000 non-faces

[Figures: ROC curves comparing Schneiderman with FloatBoost/AdaBoost/RBK, and FloatBoost vs. AdaBoost]

Results: MS AdaBoost vs. Viola-Jones AdaBoost

More flexible features
Confidence-weighted AdaBoost
Smaller image size

Results: Profile

No Quantitative Results!!!

FloatBoost vs. AdaBoost

FloatBoost finds a more potent set of weak classifiers through a less greedy search

FloatBoost results in a faster, more accurate classifier

FloatBoost requires longer training times (5 times longer)

FloatBoost vs. AdaBoost: 1 strong classifier, 4000 objects, 4000 non-objects, 99.5% fixed detection

FloatBoost: Pros

Very Fast Detection (5 fps multi-view)

Fairly High Accuracy

Simple Implementation

FloatBoost: Cons

Very long training time

Not the highest accuracy

Does it work well for non-frontal faces and other objects?
