Spatial Transformers in Feed-Forward Networks
Max Jaderberg, Karen Simonyan, Andrew Zisserman and Koray Kavukcuoglu
Google DeepMind and University of Oxford

Source: lear.inrialpes.fr/workshop/allegro2015/slides/zisserman02.pdf


Page 1

Spatial Transformers in Feed-Forward Networks

Max Jaderberg, Karen Simonyan, Andrew Zisserman and Koray Kavukcuoglu

Google DeepMind and University of Oxford

Page 2

ConvNets

• Interleaving convolutional layers with max-pooling layers allows translation invariance.

- Pooling is simplistic
- Only small invariances per pooling layer
- Limited spatial transformation
- Pools across entire image
+ Exceptionally effective

• Can we do better?

Page 3

Motivation 1: transformations of input data

Rotated MNIST (+/- 90°)

Page 4

Motivation 2: attention

Page 5

Conditional Spatial Warping

• Conditional on the input feature map, spatially warp the image.

+ Transforms data to a space expected by subsequent layers

+ Intelligently select features of interest (attention)
+ Invariant to more generic warping


Page 6

Conditional Spatial Warping

[Diagram: input → spatial transformer → network → output]

Page 7

[Diagram: Spatial Transformer module, with a localisation net, grid generator and sampler mapping input feature map U to output feature map V]

A differentiable module for spatially transforming data, conditional on the data itself

Page 8

Sampling Grid

Can parameterise, e.g., an affine transformation.

Warp a regular grid by the affine transformation:

$$
\begin{pmatrix} x^s_i \\ y^s_i \end{pmatrix}
= \mathcal{T}_\theta(G_i)
= A_\theta \begin{pmatrix} x^t_i \\ y^t_i \\ 1 \end{pmatrix}
= \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}
\begin{pmatrix} x^t_i \\ y^t_i \\ 1 \end{pmatrix}
$$
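As a concrete illustration (not code from the slides), here is a minimal NumPy sketch of a grid generator: it builds a regular grid of target coordinates normalised to [-1, 1] and maps it through a 2x3 affine matrix theta to obtain source sampling coordinates. The function name affine_grid and the shapes are illustrative assumptions.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Generate source sampling coordinates for a 2x3 affine matrix theta.

    Target coordinates (x_t, y_t) lie on a regular H x W grid normalised
    to [-1, 1]; each is mapped to (x_s, y_s) = theta @ (x_t, y_t, 1).
    Returns an (H, W, 2) array of source coordinates.
    """
    xt, yt = np.meshgrid(np.linspace(-1.0, 1.0, W),
                         np.linspace(-1.0, 1.0, H))
    # Homogeneous target coordinates, shape (3, H*W)
    grid = np.stack([xt.ravel(), yt.ravel(), np.ones(H * W)], axis=0)
    src = theta @ grid                      # shape (2, H*W)
    return src.T.reshape(H, W, 2)

# The identity transform returns the regular grid unchanged
theta_identity = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0]])
grid = affine_grid(theta_identity, H=28, W=28)
assert np.allclose(grid[0, 0], [-1.0, -1.0]) and np.allclose(grid[-1, -1], [1.0, 1.0])
```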

Page 9

Can parameterise attention

Sampling Grid

Warp a regular grid by a constrained affine transformation (isotropic scale and translation), giving an attention window:

$$
\begin{pmatrix} x^s_i \\ y^s_i \end{pmatrix}
= \mathcal{T}_\theta(G_i)
= A_\theta \begin{pmatrix} x^t_i \\ y^t_i \\ 1 \end{pmatrix}
= \begin{bmatrix} s & 0 & t_x \\ 0 & s & t_y \end{bmatrix}
\begin{pmatrix} x^t_i \\ y^t_i \\ 1 \end{pmatrix}
$$
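To make the attention case concrete, the short sketch below (an illustrative assumption, not from the slides) builds the constrained matrix from an isotropic scale s and translation (t_x, t_y) and shows that, with s < 1, the resulting sampling grid covers only a sub-window of the input, i.e. a differentiable crop.

```python
import numpy as np

# Attention as a constrained affine transform: isotropic scale s and
# translation (t_x, t_y). With s < 1 the grid samples a sub-window of U.
s, tx, ty = 0.5, 0.25, -0.25
theta_attention = np.array([[s,   0.0, tx],
                            [0.0, s,   ty]])

# Source coordinates for a 28x28 target grid, coordinates in [-1, 1]
H, W = 28, 28
xt, yt = np.meshgrid(np.linspace(-1.0, 1.0, W), np.linspace(-1.0, 1.0, H))
grid = np.stack([xt.ravel(), yt.ravel(), np.ones(H * W)], axis=0)
src = (theta_attention @ grid).T.reshape(H, W, 2)

# The sampled window spans [t - s, t + s] on each normalised axis,
# here x in [-0.25, 0.75] and y in [-0.75, 0.25]: a half-size crop.
print(src[..., 0].min(), src[..., 0].max())   # -0.25 0.75
print(src[..., 1].min(), src[..., 1].max())   # -0.75 0.25
```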

Page 10

[Figure: sampling grids for the identity transformation and for a general affine transformation]

Page 11

Sampler

Sample the input feature map U to produce the output feature map V (i.e. texture mapping), e.g. for bilinear interpolation:

$$
V^c_i = \sum_n^H \sum_m^W U^c_{nm} \, \max(0, 1 - |x^s_i - m|) \, \max(0, 1 - |y^s_i - n|)
$$

Gradients are defined to allow backpropagation, e.g.:

$$
\frac{\partial V^c_i}{\partial U^c_{nm}} = \sum_n^H \sum_m^W \max(0, 1 - |x^s_i - m|) \, \max(0, 1 - |y^s_i - n|)
$$
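For completeness, here is a small NumPy sketch (an illustration, not the authors' implementation) that applies the bilinear kernel above literally: each output value sums over all input pixels weighted by max(0, 1 - |x_s - m|) * max(0, 1 - |y_s - n|), so only the four nearest pixels contribute, which is what an efficient implementation would exploit.

```python
import numpy as np

def bilinear_sample(U, src_grid):
    """Sample feature map U (C, H, W) at source coordinates src_grid.

    src_grid has shape (H_out, W_out, 2), holding (x_s, y_s) in input
    pixel coordinates. Follows the slide's bilinear kernel directly:
    V[c, i] = sum_{n, m} U[c, n, m] * max(0, 1-|x_s-m|) * max(0, 1-|y_s-n|).
    """
    C, H, W = U.shape
    H_out, W_out, _ = src_grid.shape
    m = np.arange(W)   # x (column) pixel indices
    n = np.arange(H)   # y (row) pixel indices
    V = np.zeros((C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            xs, ys = src_grid[i, j]
            wx = np.maximum(0.0, 1.0 - np.abs(xs - m))   # (W,)
            wy = np.maximum(0.0, 1.0 - np.abs(ys - n))   # (H,)
            V[:, i, j] = np.einsum('chw,h,w->c', U, wy, wx)
    return V

# Sampling on the identity grid (pixel coordinates) reproduces U exactly
C, H, W = 1, 5, 5
U = np.arange(C * H * W, dtype=float).reshape(C, H, W)
ys, xs = np.meshgrid(np.arange(H, dtype=float), np.arange(W, dtype=float),
                     indexing='ij')
identity_grid = np.stack([xs, ys], axis=-1)
assert np.allclose(bilinear_sample(U, identity_grid), U)
```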

Page 12

[Diagram: Spatial Transformer module, with a localisation net, grid generator and sampler mapping input feature map U to output feature map V]

A differentiable module for spatially transforming data, conditional on the data itself

Page 13

Spatial Transformer Networks

• A Spatial Transformer is differentiable, and so can be inserted at any point in a feed-forward network and trained by backpropagation

Example: digit classification, loss: cross-entropy for 10-way classification

[Diagram: input → CNN → digit class (0 to 9); input → ST → CNN → digit class (0 to 9)]
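As an illustration of how such a network might be wired up (a minimal sketch using PyTorch's affine_grid/grid_sample, not the exact architecture from the slides; all layer sizes and names are assumptions), an affine ST is placed in front of a small digit classifier and the whole model is trained end-to-end with cross-entropy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STClassifier(nn.Module):
    """Affine spatial transformer followed by a small CNN digit classifier."""

    def __init__(self):
        super().__init__()
        # Localisation net: regresses the 6 affine parameters from the input
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
        )
        self.fc_theta = nn.Linear(32, 6)
        # Start from the identity transform, a common initialisation for STs
        self.fc_theta.weight.data.zero_()
        self.fc_theta.bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # Classifier applied to the transformed (rectified) image
        self.cls = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 7 * 7, 10),
        )

    def forward(self, x):
        theta = self.fc_theta(self.loc(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)  # differentiable sampler
        return self.cls(x)

model = STClassifier()
x = torch.randn(4, 1, 28, 28)                     # batch of 28x28 digits
loss = F.cross_entropy(model(x), torch.tensor([3, 1, 4, 1]))
loss.backward()  # gradients flow through the sampler into the localisation net
```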

Page 14

MNIST Digit Classification

Training data: 6000 examples of each digit

Testing data: 10k images

Can achieve testing error of 0.23%

Page 15

Task: classify MNIST digits

• Training and test images randomly rotated by ±90°
• Fully connected network with an affine ST on the input

[Figure: network input vs. spatial transformer output]

Performance (% error):

• FCN: 2.1
• CNN: 1.2
• ST-FCN: 1.2
• ST-CNN: 0.7

Page 16

Generalization 1: transformations

• Affine transformation – 6 parameters

• Projective transformation – 8 parameters

• Thin plate spline transformation

• Etc

Any transformation where parameters can be regressed

Page 17

Rotated MNIST

[Figure: rotated MNIST examples for ST-FCN Affine and ST-FCN Thin Plate Spline; columns show Input, ST and Output]

Page 18

Rotated, Translated & Scaled MNIST

[Figure: rotated, translated and scaled MNIST examples for ST-FCN Projective and ST-FCN Thin Plate Spline; columns show Input, ST and Output]

Page 19

Translated Cluttered MNIST

[Figure: translated cluttered MNIST examples for ST-FCN Affine and ST-CNN Affine; columns show Input, ST and Output]

Page 20

Results on performance

R: rotation; RTS: rotation, translation and scale; P: projective; E: elastic

Page 21

Generalization 2: Multiple spatial transformers

• Spatial Transformers can be inserted before/after conv layers, before/after max-pooling

• Can also have multiple Spatial Transformers at the same level

[Diagram: network with ST1, ST2a, ST2b and ST3 interleaved with conv1, conv2 and conv3, outputting a digit class (0 to 9)]
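A minimal sketch of the "multiple STs at the same level" idea (assumed layer sizes and class name, not the architecture from the slides): each ST regresses its own affine parameters from the shared input and produces its own transformed view, which downstream branches can then process separately.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelST(nn.Module):
    """Two spatial transformers applied in parallel to the same feature map."""

    def __init__(self, channels):
        super().__init__()
        self.loc = nn.ModuleList()
        for _ in range(2):
            # Tiny localisation head per ST, initialised to the identity transform
            fc = nn.Linear(channels, 6)
            fc.weight.data.zero_()
            fc.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
            self.loc.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), fc))

    def forward(self, x):
        outs = []
        for loc in self.loc:
            theta = loc(x).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            outs.append(F.grid_sample(x, grid, align_corners=False))
        return outs  # two independently transformed views of x

feat = torch.randn(4, 16, 28, 28)
v1, v2 = ParallelST(16)(feat)
```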

Page 22

Task: Add digits in two images

MNIST digits under rotation, translation and scale

Architecture

Page 23

Task: Add digits in two images

MNIST 2-channel addition

Add up the digits, one per channel. Random per-channel rotation, scale and translation.

[Diagram: 2-channel input (chan1, chan2); SpatialTransformer1 and SpatialTransformer2 each produce a rectified channel]

SpatialTransformer1 automatically specialises to rectify channel 1.

SpatialTransformer2 automatically specialises to rectify channel 2.

Page 24

Task: Add digits in two images

Performance (% error)

MNIST digits under rotation, translation and scale

Page 25

Applications and comparisons with the state of the art

Page 26


Street View House Numbers (SVHN)

200k real images of house numbers collected from Street View. Between 1 and 5 digits in each number.

Architecture: 4 spatial transformer + conv layers, 4 conv layers, 3 fc layers, 5 character output layers

[Diagram: network with ST1 and ST2 interleaved with conv1, conv2, ..., fc3]

Page 27

SVHN 64×64

• CNN: 4.0% error (single model), Goodfellow et al. 2013

• Attention: 3.9% error (ensemble with MC averaging), Ba et al., ICLR 2015

• ST net: 3.6% error (single model)

Page 28

SVHN 128×128

• CNN: 5.6% error (single model)

• Attention: 4.5% error (ensemble with MC averaging), Ba et al., ICLR 2015

• ST net: 3.9% error (single model)

Page 29

Fine Grained Visual Categorization

CUB-200-2011 birds dataset

• 200 species of birds
• 6k training images
• 5.8k test images

Page 30

Spatial Transformer Network

• Pre-train Inception networks on ImageNet

• Train the spatial transformer network on fine-grained multi-way classification

Page 31

CUB Performance

Page 32

Summary

● Spatial Transformers allow dynamic, conditional cropping and warping of images/feature maps.

● Can be constrained and used as a very fast attention mechanism.

● Spatial Transformer Networks localise and rectify objects automatically, achieving state-of-the-art results.

● Can be used as a generic localisation mechanism which can be learnt with backprop.