Learning deformable models. Yali Amit (University of Chicago), Alain Trouvé (CMLA, Cachan).


Page 1: Learning deformable models

Learning deformable models

Yali Amit, University of Chicago

Alain Trouvé, CMLA Cachan.

Page 2: Learning deformable models

Why modeling?

Generative models for object appearance allow us to move from learned objects to online decisions on object configurations. Probability models can be composed, and parameters can be estimated online.

Generative models allow us to learn sequentially and still be able to discriminate between objects: sequential learning of new objects and sequential learning of sub-classes.

Proper modeling and accounting of invariances allows us to learn with small samples; large background samples are not necessary.

Page 3: Learning deformable models

Modeling object appearance

Object classes are recognized in data modulo strong variations, both geometric and photometric. The variations are modeled as a group action on the data. The data is noisy and sampled discretely. Object appearance is modeled through group actions on a template, which then undergoes some degradation to become the observed data.
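As a rough illustration of this generative view, here is a minimal sketch in Python/NumPy of an observation produced by applying a group element (a translation together with an affine contrast change) to a template and then adding noise. The function name `make_observation` and the specific choices of group and noise model are illustrative assumptions, not the exact formulation used in the talk.

```python
import numpy as np

def make_observation(template, shift, gain, offset, sigma, rng):
    """Hypothetical generative step: a group action (translation plus an
    affine contrast change) applied to a template, then additive noise."""
    deformed = np.roll(template, shift, axis=(0, 1))  # geometric action: shift
    photometric = gain * deformed + offset            # photometric action: contrast
    return photometric + sigma * rng.standard_normal(template.shape)

# Toy usage: a bar-shaped template observed under a shift, contrast change and noise.
rng = np.random.default_rng(0)
template = np.zeros((16, 16))
template[4:12, 6:10] = 1.0
x = make_observation(template, shift=(2, -1), gain=0.8, offset=0.1, sigma=0.05, rng=rng)
```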

Page 4: Learning deformable models

As vectors these images are very far apart. Modulo translation, rotation and contrast they are identical except for the noise: a lower-dimensional parameterization.

This structure could not be discovered through direct measurements on the data (dictionary world or manifold world).
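A minimal sketch of measuring distance modulo the group action, assuming a brute-force search over a small grid of rotations and shifts (using `scipy.ndimage.rotate`) and standardization to remove contrast. The crude grid search is only meant to illustrate that the distance is taken after quotienting out the group, not to reflect the authors' actual method.

```python
import numpy as np
from scipy.ndimage import rotate

def standardize(x):
    """Remove contrast: zero mean, unit variance."""
    return (x - x.mean()) / (x.std() + 1e-8)

def distance_modulo_group(a, b, max_shift=3, angles=range(-20, 25, 5)):
    """Minimum L2 distance over a small grid of rotations and shifts of b."""
    a = standardize(a)
    best = np.inf
    for angle in angles:
        rb = rotate(b, angle, reshape=False, order=1)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                cand = standardize(np.roll(rb, (dy, dx), axis=(0, 1)))
                best = min(best, float(np.linalg.norm(a - cand)))
    return best
```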

Page 5: Learning deformable models

Mathematical formulation

Page 6: Learning deformable models

Template estimation

Page 7: Learning deformable models

Unobserved deformations

Page 8: Learning deformable models

Example: handwritten digits

No modeling of contrast → contrast sensitive.

One way to avoid modeling a certain kind of variability is to `mod' it out: binary oriented edges. (Can't add binary images...)
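A minimal sketch of such a transform, under assumed choices (finite-difference gradients, a percentile threshold on the gradient magnitude, eight orientation bins); the point is that the resulting binary maps depend on local gradient orientation rather than on absolute gray levels.

```python
import numpy as np

def oriented_edges(img, n_orient=8, pct=80):
    """Recode an image as binary oriented-edge maps: one 0/1 map per
    quantized gradient orientation. Thresholding the gradient magnitude at
    an image-relative percentile discards absolute contrast (assumed scheme)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                     # orientation in (-pi, pi]
    strong = mag > np.percentile(mag, pct)
    bins = np.floor((ang + np.pi) / (2 * np.pi) * n_orient).astype(int) % n_orient
    maps = np.zeros((n_orient,) + img.shape, dtype=np.uint8)
    for k in range(n_orient):
        maps[k] = (strong & (bins == k)).astype(np.uint8)
    return maps
```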

Page 9: Learning deformable models

Transforming to oriented edges

[Figure: original image and its oriented edge data.]

Page 10: Learning deformable models

Deforming the data

Page 11: Learning deformable models

Simplest background model

Page 12: Learning deformable models

Mixtures

Page 13: Learning deformable models

Mixture models for the `micro-world'

Page 14: Learning deformable models

Mixture models for the `micro-world'

Page 15: Learning deformable models

Modulo deformations

Page 16: Learning deformable models
Page 17: Learning deformable models

Structured library of parts

A mixture of models for local image windows (the parts) is used to recode the image data at a much lower spatial resolution with little loss of information.

A mixture of deformable models (rotations) imposes a geometric structure on this code: it tells us which parts are similar.
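A minimal sketch of this recoding, with k-means standing in for the mixture-model estimation of the part library; the window size, stride and number of parts are illustrative assumptions.

```python
import numpy as np

def extract_windows(img, win=8, stride=4):
    """All win x win windows on a coarse grid; the stride sets the resolution loss."""
    H, W = img.shape
    return np.array([img[i:i + win, j:j + win].ravel()
                     for i in range(0, H - win + 1, stride)
                     for j in range(0, W - win + 1, stride)], dtype=float)

def learn_parts(windows, n_parts=50, n_iter=20, seed=0):
    """Toy k-means dictionary of parts (a stand-in for the mixture of local models)."""
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), n_parts, replace=False)].copy()
    for _ in range(n_iter):
        labels = ((windows[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in range(n_parts):
            if np.any(labels == k):
                centers[k] = windows[labels == k].mean(0)
    return centers

def recode(img, centers, win=8, stride=4):
    """Replace each coarse-grid window by the index of its best-matching part."""
    w = extract_windows(img, win, stride)
    return ((w[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
```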

Page 18: Learning deformable models

Part based representation

Because the parts are structured, not much information is lost at the lower resolution, and much invariance is gained.

Now estimate Bernoulli mixture models for each object class on the coarse part-based representation, or estimate a hierarchy of mixture models.
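A minimal sketch of EM for a mixture of Bernoulli products on binary feature vectors (one vector per image, e.g. the part or edge indicators); the number of components, the smoothing constant and the initialization are illustrative assumptions.

```python
import numpy as np

def em_bernoulli_mixture(X, n_comp=5, n_iter=50, eps=1e-3, seed=0):
    """EM for a mixture of Bernoulli products on binary data X (N x D).
    Returns mixing weights pi (n_comp,) and probabilities p (n_comp x D)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    p = rng.uniform(0.25, 0.75, size=(n_comp, D))
    pi = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibilities from component log-likelihoods
        logp = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: update weights and Bernoulli probabilities (with smoothing)
        nk = r.sum(0)
        pi = (nk + eps) / (N + n_comp * eps)
        p = (r.T @ X + eps) / (nk[:, None] + 2 * eps)
    return pi, p
```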

Page 19: Learning deformable models

Simple non-linear deformations

Page 20: Learning deformable models

Patchwork model: gray levels

Page 21: Learning deformable models

Training a POP model

Simple approximation: train each window separately with a full E-step in the EM algorithm, assuming a homogeneous background model outside the window.

This works for binary features but not so well for gray-level models. For gray-level data: use the current estimates for all other windows, at the optimal instantiation for each training sample, as the background; this gives an iterative optimization of the full likelihood.
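A minimal sketch of the per-window approximation for binary features, under assumed specifics: a small Bernoulli template placed at an unknown shift inside the window, a fixed homogeneous background probability for uncovered pixels, and a full E-step over all placements. This is a reconstruction for illustration, not the exact POP training procedure.

```python
import numpy as np

def train_window_em(windows, temp_shape=(8, 8), p_bg=0.1, n_iter=20, eps=1e-3):
    """EM over the hidden placement of a Bernoulli template inside each
    training window; pixels not covered by the template are scored with a
    homogeneous background probability p_bg.

    windows : (N, H, W) binary edge data for one window across training images
    Returns the estimated template probabilities (h, w)."""
    N, H, W = windows.shape
    h, w = temp_shape
    offsets = [(dy, dx) for dy in range(H - h + 1) for dx in range(W - w + 1)]
    p = np.full((h, w), 0.5)
    bg_ll = windows * np.log(p_bg) + (1 - windows) * np.log(1 - p_bg)  # (N, H, W)
    for _ in range(n_iter):
        # E-step: posterior over placements of the template
        ll = np.zeros((N, len(offsets)))
        for s, (dy, dx) in enumerate(offsets):
            sub = windows[:, dy:dy + h, dx:dx + w]
            fg = (sub * np.log(p) + (1 - sub) * np.log(1 - p)).sum((1, 2))
            bg = bg_ll.sum((1, 2)) - bg_ll[:, dy:dy + h, dx:dx + w].sum((1, 2))
            ll[:, s] = fg + bg
        ll -= ll.max(1, keepdims=True)
        r = np.exp(ll)
        r /= r.sum(1, keepdims=True)
        # M-step: responsibility-weighted average of the covered sub-windows
        num = np.full((h, w), eps)
        den = 2 * eps
        for s, (dy, dx) in enumerate(offsets):
            num += (r[:, s, None, None] * windows[:, dy:dy + h, dx:dx + w]).sum(0)
            den += r[:, s].sum()
        p = num / den
    return p
```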

Page 22: Learning deformable models

Training a POP model continued

Page 23: Learning deformable models

Mixture models based on parts on a coarse grid → POP models for each component based on oriented edge data → for each component of the POP model compute the mean image modulo shift → produce a gray-level POP model from the image means.

Page 24: Learning deformable models
Page 25: Learning deformable models

Conclusion

Importance of modeling variability as a hidden random variable. Estimation of templates and mixtures through EM-type algorithms.

Local world: parts, dictionaries with symmetries. Global objects: non-linear deformations.

Instead of modeling variability, maximize over simple subsets of deformations applied to object parts. (Needs formalization.)

For object recognition there is rich structure in the subject matter beyond linear operations in function spaces. Distances should not be measured directly in observation space; the `manifold' is defined through the group action.

A wide range of open questions, both theoretical and applied, is waiting to be studied.