Agenda


• Introduction

• Bag-of-words models

• Visual words with spatial location

• Part-based models

• Discriminative methods

• Segmentation and recognition

• Recognition-based image retrieval

• Datasets & Conclusions

Classifier based methods

Object detection and recognition is formulated as a classification problem.

The image is partitioned into a set of overlapping windows, and a decision is made at each window about whether it contains a target object or not.

[Figure: a bag of image patches mapped into some feature space; the decision boundary separates "computer screen" from "background". Where are the screens?]
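To make the formulation concrete, here is a minimal sliding-window sketch, assuming a grayscale image array and some window classifier; the window size, stride, and the classify_window function are illustrative assumptions, not from the slides:

```python
def sliding_window_detect(image, classify_window, win=(64, 64), stride=16):
    """Scan overlapping windows and keep those the classifier accepts.

    `image` is a 2-D array (e.g. a grayscale numpy image);
    `classify_window` is any scoring function for a win-sized patch.
    Names and default sizes are illustrative assumptions.
    """
    H, W = image.shape[:2]
    wh, ww = win
    detections = []
    for y in range(0, H - wh + 1, stride):
        for x in range(0, W - ww + 1, stride):
            patch = image[y:y + wh, x:x + ww]
            score = classify_window(patch)
            if score > 0:                # positive side of the decision boundary
                detections.append((x, y, score))
    return detections
```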

Discriminative methods

10^6 examples

• Nearest neighbor
• Neural networks
• Support Vector Machines and Kernels
• Conditional Random Fields

Nearest Neighbors

10^6 examples

Shakhnarovich, Viola, Darrell 2003

Difficult due to high intrinsic dimensionality of images:
– lots of data needed
– slow neighbor lookup

Torralba, Fergus, Freeman 2008

Multi-layer Hubel-Wiesel architectures

Neural networks

LeCun, Bottou, Bengio, Haffner 1998
Rowley, Baluja, Kanade 1998
Hinton & Salakhutdinov 2006
Ranzato, Huang, Boureau, LeCun 2007

Riesenhuber & Poggio 1999
Serre, Wolf, Poggio 2005
Mutch & Lowe 2006

Biologically inspired

Support Vector Machines

Heisele, Serre, Poggio, 2001

Face detection

Pyramid Match Kernel

Combining Multiple Kernels

Varma & Ray 2007
Bosch, Munoz, Zisserman 2007

Grauman & Darrell 2005
Lazebnik, Schmid, Ponce 2006

Conditional Random Fields
Kumar & Hebert 2003

Quattoni, Collins, Darrell 2004

More in Segmentation section

Boosting

• A simple algorithm for learning robust classifiers
– Freund & Schapire, 1995
– Friedman, Hastie, Tibshirani, 1998

• Provides an efficient algorithm for sparse visual feature selection
– Tieu & Viola, 2000
– Viola & Jones, 2003

• Easy to implement; does not require external optimization tools.

A simple object detector with Boosting

Download:

• Toolbox for manipulating the dataset

• Code and dataset

Matlab code

• Gentle boosting

• Object detector using a part based model

Dataset with cars and computer monitors

http://people.csail.mit.edu/torralba/iccv2005/

Boosting

Boosting fits the additive model by minimizing the exponential loss over the training samples.

The exponential loss is a differentiable upper bound to the misclassification error.
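For reference, the additive model and exponential loss referred to above, written in standard boosting notation (M weak classifiers, N training samples; the symbols follow common usage rather than the original slide graphics):

```latex
% Strong classifier: additive combination of M weak classifiers
F(x) = \sum_{m=1}^{M} f_m(x)

% Exponential loss over the N training samples (x_i, y_i), with y_i \in \{-1,+1\}
L(F) = \sum_{i=1}^{N} \exp\bigl(-y_i \, F(x_i)\bigr)
```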

Weak classifiers

• The input is a set of weighted training samples (x, y, w).
• Regression stumps: simple but commonly used in object detection.

Four parameters (the feature index k, the threshold θ, and the two output values a, b):

f_m(x) = a · [x_k < θ] + b · [x_k ≥ θ]

a = E_w( y · [x_k < θ] ),   b = E_w( y · [x_k ≥ θ] )
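A minimal sketch of fitting one regression stump to weighted samples along a single feature, following the definitions above; function and variable names are my own, and the search over the feature index k happens outside this function:

```python
import numpy as np

def fit_stump(x, y, w):
    """Fit a regression stump f(x) = a*[x < theta] + b*[x >= theta]
    to one feature column x, labels y in {-1,+1}, and weights w.
    A sketch of the weak learner described above; names are illustrative.
    """
    best = (np.inf, None, 0.0, 0.0)
    for theta in np.unique(x):
        below = x < theta
        wl, wr = w[below].sum(), w[~below].sum()
        # Weighted means of y on each side of the threshold
        a = (w[below] * y[below]).sum() / (wl + 1e-12)
        b = (w[~below] * y[~below]).sum() / (wr + 1e-12)
        pred = np.where(below, a, b)
        err = (w * (y - pred) ** 2).sum()     # weighted squared error
        if err < best[0]:
            best = (err, theta, a, b)
    return best  # (error, theta, a, b); the feature index is chosen by the caller
```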

From images to features: A myriad of weak detectors

We will now define a family of visual features that can be used as weak classifiers (“weak detectors”)

Each feature takes an image as input and outputs a binary response; such a feature is a weak detector.

A myriad of weak detectors

• Yuille, Snow, Nitzberg, 1998
• Amit, Geman 1998
• Papageorgiou, Poggio, 2000
• Heisele, Serre, Poggio, 2001
• Agarwal, Awan, Roth, 2004
• Schneiderman, Kanade 2004
• Carmichael, Hebert 2004
• …

Weak detectors

Textures of textures
Tieu and Viola, CVPR 2000

Every combination of three filters generates a different feature

This gives thousands of features. Boosting selects a sparse subset, so computations at test time are very efficient. Boosting also avoids overfitting to some extent.
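A rough sketch of the idea: chain filters from a small bank and reduce each cascade of responses to one scalar feature. The specific filter bank, rectification, and pooling below are assumptions for illustration, not the exact recipe of Tieu and Viola:

```python
import itertools
import numpy as np
from scipy.ndimage import convolve

# Tiny illustrative filter bank (the real bank is much larger)
BANK = [
    np.array([[1, -1]], float),            # horizontal difference
    np.array([[1], [-1]], float),          # vertical difference
    np.ones((3, 3), float) / 9.0,          # local average
]

def texture_features(image):
    """One scalar feature per ordered triple of filters."""
    feats = []
    for f1, f2, f3 in itertools.product(BANK, repeat=3):
        r = np.abs(convolve(image.astype(float), f1))   # filter, rectify
        r = np.abs(convolve(r, f2))
        r = np.abs(convolve(r, f3))
        feats.append(r.sum())              # pool to a single number
    return np.array(feats)
```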

Haar wavelets

Haar filters and integral image
Viola and Jones, ICCV 2001

The average intensity in a block is computed with just four sums, independently of the block size.
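A small sketch of the integral-image trick: after one cumulative-sum pass, any box sum (and hence the block average) needs only four lookups, whatever the block size. Names are illustrative:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum with a zero-padded first row/column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four entries of the integral image."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# Average intensity of a block, independent of its size:
# box_sum(ii, y0, x0, y1, x1) / ((y1 - y0) * (x1 - x0))
```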

Haar wavelets
Papageorgiou & Poggio (2000)

Polynomial SVM

Edges and chamfer distance

Gavrila, Philomin, ICCV 1999

Edge fragments

Weak detector = k edge fragments and a threshold. The chamfer distance uses 8 orientation planes.

Opelt, Pinz, Zisserman, ECCV 2006

Histograms of oriented gradients

• Dalal & Triggs, 2005

• Shape context: Belongie, Malik, Puzicha, NIPS 2000
• SIFT: D. Lowe, ICCV 1999

Weak detectors

Part based: similar to part-based generative models. We create weak detectors by using parts and voting for the object center location.

Car model Screen model

These features are used for the detector on the course web site.

Weak detectors

First we collect a set of part templates from a set of training objects.
Vidal-Naquet, Ullman, Nature Neuroscience 2003
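One possible sketch of this collection step, assuming part locations and object centers are annotated in the training images; the annotation format and patch size are assumptions:

```python
import numpy as np

def collect_part_templates(train_images, part_locations, object_centers, size=12):
    """Return a list of (template, offset_to_center) pairs.

    part_locations[i] and object_centers[i] are (y, x) coordinates in image i;
    this structure is assumed for illustration only.
    """
    half = size // 2
    templates = []
    for img, (py, px), (cy, cx) in zip(train_images, part_locations, object_centers):
        patch = img[py - half:py + half, px - half:px + half].astype(np.float64)
        patch -= patch.mean()                      # normalise for correlation
        templates.append((patch, (cy - py, cx - px)))
    return templates
```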

Weak detectors

We now define a family of “weak detectors” as:

[Figure: part template ∗ image → thresholded response map]

Better than chance

Weak detectors

We can do a better job using filtered images.

Still a weak detector, but better than before.

[Figure: filtered part template ∗ filtered image → thresholded response map]
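A hedged sketch of such a weak detector: correlate an edge-filtered part template with the edge-filtered image, shift the response by the part's offset so it votes for the object center, and threshold. The Sobel filtering and the threshold are illustrative choices, not the course's exact features:

```python
import numpy as np
from scipy.ndimage import sobel, shift
from scipy.signal import correlate2d

def weak_detector(image, template, offset, threshold):
    """Binary response map: does this part vote for an object center here?"""
    # Work on edge-filtered versions of both the image and the template
    img_f = np.abs(sobel(image.astype(np.float64)))
    tpl_f = np.abs(sobel(template.astype(np.float64)))
    response = correlate2d(img_f, tpl_f, mode="same")
    # Move votes from the part location to the predicted object center
    response = shift(response, offset, order=0, cval=0.0)
    return response > threshold
```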

Example: screen detection

Feature output

Thresholded output

Weak ‘detector’: produces many false alarms.

Example: screen detection

Feature output

Thresholded output

Strong classifier at iteration 1

Example: screen detection

Feature output

Thresholded output

Strong classifier

Second weak ‘detector’: produces a different set of false alarms.

Example: screen detection

+

Feature output

Thresholded output

Strong classifier

Strong classifier at iteration 2

Example: screen detection

+

Feature output

Thresholded output

Strong classifier

Strong classifier at iteration 10

Example: screen detection

+

Feature output

Thresholded output

Strong classifier

Adding features

Final classification

Strong classifier at iteration 200
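A compact sketch of the loop behind these iterations: at each round a weak classifier is fit on the weighted data, added to the strong classifier, and the weights are updated so the next round concentrates on the remaining false alarms and misses. This is generic gentle boosting with stumps, not the course's Matlab implementation:

```python
import numpy as np

def gentle_boost(X, y, n_rounds=10):
    """Gentle boosting with regression stumps.

    X: (n_samples, n_features) array of weak-detector outputs,
    y: labels in {-1, +1}.  A generic sketch for illustration.
    """
    w = np.ones(len(y)) / len(y)
    stumps = []
    for _ in range(n_rounds):
        best = None
        for k in range(X.shape[1]):                 # choose feature k, threshold, a, b
            for theta in np.unique(X[:, k]):
                below = X[:, k] < theta
                a = (w[below] * y[below]).sum() / (w[below].sum() + 1e-12)
                b = (w[~below] * y[~below]).sum() / (w[~below].sum() + 1e-12)
                pred = np.where(below, a, b)
                err = (w * (y - pred) ** 2).sum()   # weighted squared error
                if best is None or err < best[0]:
                    best = (err, k, theta, a, b, pred)
        _, k, theta, a, b, pred = best
        stumps.append((k, theta, a, b))
        w *= np.exp(-y * pred)                      # focus on current false alarms / misses
        w /= w.sum()
    return stumps

def strong_classify(x, stumps):
    """Sum the weak outputs and threshold at zero."""
    score = sum(a if x[k] < theta else b for k, theta, a, b in stumps)
    return 1 if score > 0 else -1
```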

Cascade of classifiers
Fleuret and Geman 2001; Viola and Jones 2001

We want the complexity of the 3-feature classifier with the performance of the 100-feature classifier:

[Figure: precision vs. recall (both 0–100%) for classifiers with 3, 30, and 100 features]

Select a threshold with high recall for each stage.

We increase precision using the cascade
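A possible sketch of cascade evaluation, assuming each stage is a small classifier whose threshold was picked for very high recall; a window is rejected as soon as any stage says no, so most background windows are discarded cheaply:

```python
def cascade_predict(window_features, stages):
    """stages: list of (score_fn, threshold) pairs, cheapest stages first.

    score_fn maps a feature vector to a real score; this pair structure
    is an assumption for illustration.
    """
    for score_fn, threshold in stages:
        if score_fn(window_features) < threshold:   # high-recall threshold per stage
            return False        # early reject: most background windows stop here
    return True                 # survived every stage: accept the window
```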

Some goals for object recognition

• Able to detect and recognize many object classes

• Computationally efficient

• Able to deal with data-starved situations:

– Some training samples might be harder to collect than others

– We want on-line learning to be fast

Shared features

• Is learning the 1000th object class easier than learning the first?

• Can we transfer knowledge from one object to another?

• Are the shared properties interesting by themselves?

Shared features

Screen detector

Car detector

Face detector

• Independent binary classifiers:

Torralba, Murphy, Freeman. CVPR 2004. PAMI 2007

Screen detector

Car detector

Face detector

• Binary classifiers that share features:

50 training samples/class
29 object classes
2000 entries in the dictionary

Results averaged over 20 runs. Error bars = 80% interval.

Krempp, Geman, & Amit, 2002

Torralba, Murphy, Freeman. CVPR 2004

Shared features

Class-specific features

Generalization as a function of object similarities

12 viewpoints
12 unrelated object classes

[Figure: area under ROC vs. number of training samples per class, for the two settings above; K = 2.1 and K = 4.8]

Torralba, Murphy, Freeman. CVPR 2004. PAMI 2007

Sharing patches

• Bart and Ullman, 2004

For a new class, use only features similar to features that were good for other classes:

Proposed Dog features

Sharing transformations

Miller, E., Matsakis, N., and Viola, P. (2000). Learning from one example through shared densities on transforms. In IEEE Computer Vision and Pattern Recognition.

Transformations are shared and can be learned from other tasks.

Some references on multiclass

• Caruana 1997
• Schapire, Singer, 2000
• Thrun, Pratt 1997
• Krempp, Geman, Amit, 2002
• E. L. Miller, Matsakis, Viola, 2000
• Mahamud, Hebert, Lafferty, 2001
• Fink 2004
• LeCun, Huang, Bottou, 2004
• Holub, Welling, Perona, 2005
• …
