
Page 1: Motion   Segmentation

1

Motion Segmentation

By Hadas Shahar

(and John Y.A. Wang, and Edward H. Adelson, and Wikipedia and YouTube)

Page 2: Motion   Segmentation

2

Introduction

•When given a video input, we would like to divide it into segments according to the different movement types.

•This is useful for object tracking and video analysis.

Page 3: Motion   Segmentation

3

Session Map

•Building Blocks:
▫Layered Image Representation
▫Optic Flow Estimation
▫Affine Motion Estimation

•Algorithm Walkthrough
•Examples

Page 4: Motion   Segmentation

4

Session Map

•Building Blocks:
▫Layered Image Representation
▫Optic Flow Estimation
▫Affine Motion Estimation

•Algorithm Walkthrough
•Examples

Page 5: Motion   Segmentation

5

Layered Image Representation

•Given a simple movement, what would be the best way to represent it?

•Which parameters would you select to represent the following scene?

Page 6: Motion   Segmentation

6

Layered Image Representation

For any movement we would like to have 3 maps:

•The Intensity Map

•The Alpha Channel (or- opacity)

•The Warp Map (or- optic flow)
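To make the representation concrete, here is a minimal sketch of such a layer as a data structure, together with back-to-front compositing. The class and field names are illustrative, not taken from Wang and Adelson's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Layer:
    """One motion layer; field names are illustrative, not from the original paper."""
    intensity: np.ndarray   # H x W intensity (texture) map of the layer
    alpha: np.ndarray       # H x W opacity map in [0, 1]; 1 = layer fully visible here
    warp: np.ndarray        # H x W x 2 per-pixel flow (the warp map / optic flow)

def composite(layers):
    """Composite layers into a single frame, assuming they are ordered back to front."""
    out = np.zeros_like(layers[0].intensity, dtype=float)
    for layer in layers:
        out = layer.alpha * layer.intensity + (1.0 - layer.alpha) * out
    return out
```

With these three maps per layer, replaying the movement amounts to warping each layer by its flow and compositing again.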

Page 7: Motion   Segmentation

7

For example, if we take a scene of a hand moving over a background, we would like to get:

Page 8: Motion   Segmentation

8

•Given these maps, it’s possible to easily reproduce the occurring movement.

•But how can we produce these maps?

Page 9: Motion   Segmentation

9

Session Map

•Building Blocks:
▫Layered Image Representation
▫Optic Flow Estimation
▫Affine Motion Estimation

•Algorithm Walkthrough
•Examples

Page 10: Motion   Segmentation

10

Optic Flow Estimation (this is the heavy part)

•The Optical Flow is a field of vectors describing the movement in the image

•For example:

Page 11: Motion   Segmentation

11

Optic Flow Estimation (this is the heavy part)

•Note! Optical Flow doesn’t describe the occurring movement, but the movement we perceive.

Look at the Barber’s pole for example

Page 12: Motion   Segmentation

12

Optic Flow Estimation (this is the heavy part)

• The actual motion is RIGHT

• But the perceived motion (or- Optical Flow) is UP

Page 13: Motion   Segmentation

13

Optic Flow Estimation- the Lucas-Kanade method

In order to identify movements correctly, we have to work with several assumptions:

•Brightness Consistency- the movement won’t affect the brightness of the object

•Constant Motion in a neighborhood- neighboring pixels will move together

Page 14: Motion   Segmentation

14

Optic Flow Estimation- the Lucas-Kanade method

Definitions:

x(t) is the point x at time t, where x = (x, y).

I(x(t), t) is the brightness of point x at time t.

∇I = (I_x, I_y)^T is the spatial gradient of the brightness.

Page 15: Motion   Segmentation

15

Optic Flow Estimation- the Lucas-Kanade method

Brightness Consistency Assumption:

I(x(t), t) = const  for any t

Meaning: the brightness of point x(t) is constant, so its total time derivative must be 0:

d/dt I(x(t), t) = ∇I^T (dx/dt) + ∂I/∂t = 0

We define the velocity V = dx/dt; this is the part of the equation we want to recover.

Page 16: Motion   Segmentation

16

Optic Flow Estimation- the Lucas-Kanade method

But why is the Intensity assumption not enough?

Let’s look at the following example and try to determine the optical flow:

Page 17: Motion   Segmentation

17

Optic Flow Estimation- the Lucas-Kanade method

It looks like the grid is moving down and to the right

But it can actually be one of the following:

Page 18: Motion   Segmentation

18

Optic Flow Estimation- the Lucas-Kanade method

Since our window of observation is too small, we can’t infer the actual motion taking place.

This is called the Aperture Problem

And this is why we need the 2nd constraint

Page 19: Motion   Segmentation

19

Optic Flow Estimation- the Lucas-Kanade method

Constant Motion In a Neighborhood:

We assume the velocity is the same in our entire window of observation:

∇I(x', t)^T V(x, t) + I_t(x', t) = 0   for all x' ∈ W(x)

where I_t = ∂I/∂t, and W(x) is the window, or neighborhood, of x.

Page 20: Motion   Segmentation

20

Optic Flow Estimation- the Lucas-Kanade method

There is a trade-off here:

The larger the window, the less accurately a single velocity represents the motion inside it (since we assume the velocity is constant there).

And in the other direction: the smaller the window, the more likely we are to run into the aperture problem.

Page 21: Motion   Segmentation

21

Optic Flow Estimation- the Lucas-Kanade method

Sadly, since there are always some changes in intensity (due to environment changes or even sensor noise), the derivative will never actually be 0.

So, we take the least-squares error:

E(V) = Σ_{x' ∈ W(x)} | ∇I(x', t)^T V + I_t(x', t) |²

Page 22: Motion   Segmentation

22

Optic Flow Estimation- the Lucas-Kanade method

The minimum occurs where the derivative of E with respect to V is 0:

dE/dV = 2MV + 2q = 0

where:

M = Σ_{W(x)} ∇I ∇I^T      q = Σ_{W(x)} I_t ∇I

So V is:

V = −M⁻¹ q
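As a sanity check of the formula, here is a minimal NumPy sketch that solves V = −M⁻¹q for a single window, assuming the derivatives Ix, Iy, It have already been computed (e.g. by finite differences). The conditioning guard is an illustrative choice, anticipating the degenerate cases discussed next; it is not part of the original method.

```python
import numpy as np

def lk_window_velocity(Ix, Iy, It):
    """Solve V = -M^{-1} q for one window.
    Ix, Iy, It: spatial and temporal derivatives sampled inside the window."""
    grad = np.stack([Ix.ravel(), Iy.ravel()])     # 2 x N stack of gradients
    M = grad @ grad.T                             # sum of grad grad^T  (2 x 2)
    q = grad @ It.ravel()                         # sum of I_t * grad   (2,)
    if np.linalg.cond(M) > 1e6:                   # flat region or edge: V not well defined
        return None
    return -np.linalg.solve(M, q)                 # velocity (Vx, Vy)
```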

Page 23: Motion   Segmentation

23

Optic Flow Estimation- the Lucas-Kanade method

A few notes regarding M: M is a 2x2 matrix, made up of the gradient times its transpose, summed over the window:

M = Σ_{W(x)} [ I_x²      I_x·I_y
               I_x·I_y   I_y²    ]

We can divide M into 3 cases (this is going to be very similar to the Harris corner detection):

Page 24: Motion   Segmentation

24

Optic Flow Estimation- the Lucas-Kanade method

Case 1: If the gradient is 0 then M = 0, both eigenvalues are 0, and V is unconstrained (it can take any value).

This occurs when our window is at a uniform, textureless region:

Page 25: Motion   Segmentation

25

Optic Flow Estimation- the Lucas-Kanade method

Case 2: If the gradient is constant (all gradients point in the same direction), M is not 0 but it has only one nonzero eigenvalue, so V is only determined along the gradient direction.

This occurs when our window is at an edge:

Page 26: Motion   Segmentation

26

Optic Flow Estimation- the Lucas-Kanade method

Case 3: If M is invertible (det ≠ 0, two nonzero eigenvalues), we can solve for V directly.

This occurs when our window is at a corner:

Page 27: Motion   Segmentation

27

Optic Flow Estimation- the Lucas-Kanade method

After we find V for every window, we get a velocity vector map, or the Optical Flow.
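In practice one rarely codes this from scratch; OpenCV, for example, ships a pyramidal Lucas-Kanade tracker. A minimal usage sketch (the frame file names are placeholders):

```python
import cv2

prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Pick well-textured points (corners), i.e. windows where M is invertible:
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade: returns tracked positions and a per-point status flag.
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
flow_vectors = (new_pts - pts)[status.ravel() == 1]
```

Note this gives sparse flow at corner points; the segmentation algorithm later in the deck wants a dense field, which can be obtained by running the window solve at every pixel or by using a dense flow method.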

Page 28: Motion   Segmentation

28

Session Map

•Building Blocks:
▫Layered Image Representation
▫Optic Flow Estimation
▫Affine Motion Estimation

•Algorithm Walkthrough
•Examples

Page 29: Motion   Segmentation

29

Affine Estimation

•In Affine Estimation, we assume our motions can be described by affine transformations.

•This includes:
▫Translations
▫Rotations
▫Zoom
▫Shear

And this does cover a lot of the motions we encounter in the real world.

Page 30: Motion   Segmentation

30

Affine Estimation

The idea behind Affine Estimation is quite simple-

Find the affine transformation between the 2 images that yields the minimal difference.

Page 31: Motion   Segmentation

31

Affine Estimation

Quick reminder:

Page 32: Motion   Segmentation

32

Affine Estimation

Page 33: Motion   Segmentation

33

Affine Estimation

Page 34: Motion   Segmentation

34

Affine Estimation

There are several ways to do this, most commonly by matching feature points between the 2 images and calculating the affine transformation matrix (remember?).

What we’ll use won’t be based on feature points, but on the Velocity vector calculated from the Optical Flow.

We’ll get to that later though, so for now- no formulas!
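For contrast with the velocity-based fit used here, this is roughly what the feature-point route mentioned above looks like in OpenCV; the point coordinates below are made-up placeholders standing in for real detector/matcher output.

```python
import cv2
import numpy as np

# Matched feature points in two frames (placeholder data; in practice these come
# from a detector + matcher such as ORB + BFMatcher).
src_pts = np.float32([[10, 12], [40, 15], [22, 60], [55, 48]])
dst_pts = np.float32([[12, 13], [42, 16], [24, 61], [57, 49]])

# Least-squares / robust fit of a 2x3 affine matrix [a b c; d e f]:
A, inliers = cv2.estimateAffine2D(src_pts, dst_pts)
print(A)   # 2x3 affine transformation matrix
```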

Page 35: Motion   Segmentation

35

Session Map

•Building Blocks:
▫Layered Image Representation
▫Optic Flow Estimation
▫Affine Motion Estimation

•Algorithm Walkthrough
•Examples

Page 36: Motion   Segmentation

36

Part 2 - The Algorithm Walkthrough

•So how can we combine all the information we gathered so far into creating our 3 maps for every frame?

Page 37: Motion   Segmentation

37

The Algorithm Walkthrough

•Here’s the basic idea:

Page 38: Motion   Segmentation

38

The Algorithm Walkthrough

•Here’s the basic idea:

First, we calculate the Optical Flow - this gives us the Warp map.

But since it only looks for one overall motion per window, it may disregard object boundaries, and several differently moving objects can end up merged into a single motion.

Page 39: Motion   Segmentation

39

Optical Flow Estimator

Page 40: Motion   Segmentation

40

The Algorithm Walkthrough

•Here’s the basic idea:

Then, we divide the image(s) into arbitrary sub-regions, and use Affine Estimation, which helps us find the local motions within every sub-region.

Page 41: Motion   Segmentation

41

Affine Regression and Clustering

Page 42: Motion   Segmentation

42

The Algorithm Walkthrough

•Here’s the basic idea:

Then we check the difference between our initial guess and the observed movement, and reassign the sub-regions to minimize the error.

Page 43: Motion   Segmentation

43

Hypothesis Testing: comparing our estimation using an affine transformation against the actual change.

Page 44: Motion   Segmentation

44

The Algorithm Walkthrough

•Here’s the basic idea:

We repeat the cycle iteratively, constantly refining the motion estimation.

Convergence is achieved when either:
1. Only a few points are reassigned in each iteration
2. The max number of iterations is reached
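Putting the cycle into pseudocode-style Python, the outer loop might look like the sketch below. Every helper name here (estimate_flow, initial_grid_regions, fit_affine_hypotheses, cluster_hypotheses, assign_regions) is a placeholder for a step detailed in the following slides, not a function from the paper, so this is a structural sketch rather than runnable code.

```python
def segment_motion(frame0, frame1, n_init_regions=20, max_iter=20, min_reassigned=50):
    """High-level structure of the iterative layered motion segmentation (sketch)."""
    flow = estimate_flow(frame0, frame1)                      # optical flow -> warp map
    regions = initial_grid_regions(frame0.shape, n_init_regions)
    for _ in range(max_iter):                                 # convergence criterion 2
        hypotheses = fit_affine_hypotheses(flow, regions)     # one affine model per region
        hypotheses = cluster_hypotheses(hypotheses)           # merge similar motions
        regions, n_reassigned = assign_regions(flow, hypotheses)
        if n_reassigned < min_reassigned:                     # convergence criterion 1
            break
    return regions, hypotheses                                # support maps + motion models
```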

Page 45: Motion   Segmentation

45

Region reassignments- in each iteration we refine our estimation results

This segmentation is what provides us with the Opacity Map

Page 46: Motion   Segmentation

46

The Algorithm Walkthrough

Reminder- This is an Affine Transformation matrix:

(x', y', 1)^T = [ a b c ; d e f ; 0 0 1 ] · (x, y, 1)^T

Made up of 6 variables, to cover the rotation, translation, zoom and shear operations.

Page 47: Motion   Segmentation

47

The Algorithm Walkthrough- definitions

Let V be our Velocity (obtained by the Optical Flow estimation).

We would like to use the velocity to represent the affine transformation:

V(x, y) = [ a b c ; d e f ] · (x, y, 1)^T

But how can we work with V in such a way? We break V into Vx and Vy, the two components representing the velocity in the x and y directions respectively.

Page 48: Motion   Segmentation

48

The Algorithm Walkthrough- definitions

Vx(x, y) = a·x + b·y + c
Vy(x, y) = d·x + e·y + f

Where a,b,c,d,e,f are the variables of the affine transformation
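As a quick check of this parameterization, one can evaluate such an affine flow field on a pixel grid; the parameter values below are made up purely for illustration.

```python
import numpy as np

def affine_flow(params, height, width):
    """Evaluate Vx = a*x + b*y + c, Vy = d*x + e*y + f at every pixel."""
    a, b, c, d, e, f = params
    ys, xs = np.mgrid[0:height, 0:width]          # pixel coordinates
    vx = a * xs + b * ys + c
    vy = d * xs + e * ys + f
    return vx, vy

# Example: a small rotation about the origin plus a translation of (2, 1) pixels.
vx, vy = affine_flow((0.0, 0.01, 2.0, -0.01, 0.0, 1.0), 240, 320)
```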

Page 49: Motion   Segmentation

49

The Algorithm Walkthrough- definitions

•Let H_i be the i-th hypothesis vector.

Meaning: H_i holds the affine transformation parameters we believe would best represent the i-th region's movement.

•We would like to break H_i into its x and y parts as well:

H_i = [H_xi, H_yi]

H_xi = [a_xi, b_xi, c_xi]^T
H_yi = [d_yi, e_yi, f_yi]^T

Page 50: Motion   Segmentation

50

The Algorithm Walkthrough- definitions

And last but not least, we define:

Φ = [x, y, 1]^T

*That’s our original coordinates vector

Page 51: Motion   Segmentation

51

The Algorithm Walkthrough

So basically, so far we have the following parameterization:

V = H_i^T Φ

which is just the affine transformation [ a b c ; d e f ] applied to Φ = (x, y, 1)^T.

Page 52: Motion   Segmentation

52

The Algorithm Walkthrough

Then we can define our affine equations like this:

Vx(x, y) = H_xi^T Φ
Vy(x, y) = H_yi^T Φ

Page 53: Motion   Segmentation

53

The Algorithm Walkthrough

And we can calculate H_i from V using the following formula:

[H_xi, H_yi] = ( Σ_{P_i} Φ Φ^T )⁻¹ · Σ_{P_i} Φ [Vx(x, y), Vy(x, y)]

The sums run over all the pixels in the region P_i; [Vx, Vy] are the velocity's x and y components, and ( Σ Φ Φ^T )⁻¹ Σ Φ (·) is the pseudo-inverse (least-squares) solution.
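A NumPy sketch of that least-squares fit, assuming vx and vy hold the estimated flow components and mask marks the pixels of region P_i; np.linalg.lstsq plays the role of the pseudo-inverse.

```python
import numpy as np

def fit_affine_hypothesis(vx, vy, mask):
    """Fit H_i = (Sum Phi Phi^T)^{-1} Sum Phi [Vx, Vy] over one region.
    vx, vy: H x W flow components;  mask: H x W boolean region support."""
    ys, xs = np.nonzero(mask)
    phi = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)   # N x 3 rows [x, y, 1]
    targets = np.stack([vx[ys, xs], vy[ys, xs]], axis=1)               # N x 2 rows [Vx, Vy]
    H, *_ = np.linalg.lstsq(phi, targets, rcond=None)                  # 3 x 2: columns H_xi, H_yi
    return H    # H[:, 0] = (a, b, c) for Vx,  H[:, 1] = (d, e, f) for Vy
```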

Page 54: Motion   Segmentation

54

The Algorithm Walkthrough

We know we can divide our region segmentations into 2 cases:

•A sub-region contains several object boundaries
(i.e. the region contains several different small movements)

•An object is covered by several sub-regions
(i.e. we need to merge regions in order to view the full movement)

Page 55: Motion   Segmentation

55

The Algorithm Walkthrough

Case 1- a sub-region contains several object boundaries

Page 56: Motion   Segmentation

56

The Algorithm Walkthrough

Case 1- a sub-region contains several object boundaries

Page 57: Motion   Segmentation

57

The Algorithm Walkthrough

In this case, since we divide our image into fairly small regions, we would like to ignore these sections.

These regions will have a large residual error, so we can identify them and remove them from our calculations
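One possible way to express that filtering in NumPy: compute the mean squared residual between a region's measured flow and its fitted affine motion (H is a 3x2 parameter matrix as in the formula a few slides back), then drop the region if the residual exceeds an empirically chosen threshold.

```python
import numpy as np

def region_residual(vx, vy, mask, H):
    """Mean squared difference between a region's measured flow and its affine fit.
    vx, vy: H x W flow components;  mask: boolean region support;  H: 3 x 2 affine params."""
    ys, xs = np.nonzero(mask)
    phi = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)   # rows [x, y, 1]
    predicted = phi @ H                                                # N x 2 affine flow
    measured = np.stack([vx[ys, xs], vy[ys, xs]], axis=1)
    return float(np.mean(np.sum((predicted - measured) ** 2, axis=1)))

# A region with a large residual straddles several motions; such regions are identified
# by comparing region_residual(...) against an empirically chosen threshold and dropped
# before the clustering step.
```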

Page 58: Motion   Segmentation

58

The Algorithm Walkthrough

Case 2- an object is covered by several sub-regions

Page 59: Motion   Segmentation

59

The Algorithm Walkthrough

In this case, we would like to merge the 2 (or more) sub-regions, so they would cover our entire object.

Since the sub-regions contain the same moving object, their movement parameters will be very similar.

So how do we do it?

Page 60: Motion   Segmentation

60

The Algorithm Walkthrough

We move our hypotheses into affine motion space, parameterizing them using the velocity rather than the spatial values.

Then we group them using K-Means Clustering (we already know how to do that!).

This merges similar hypotheses and provides us with a single representative for each motion.
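A minimal sketch of that merging step with scikit-learn's k-means, where each row of H_all is one region's six affine parameters; the number of motion clusters is a free choice in this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def merge_hypotheses(H_all, n_motions=4):
    """Cluster per-region affine hypotheses in affine-motion space.
    H_all: R x 6 array, one row (a, b, c, d, e, f) per region."""
    km = KMeans(n_clusters=n_motions, n_init=10).fit(H_all)
    # One representative hypothesis per motion, plus the region -> motion assignment.
    return km.cluster_centers_, km.labels_
```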

Page 61: Motion   Segmentation

61

The Algorithm Walkthrough

Now that we have calculated the affine transformation for each region, we would like to check how we did compared to the actual movement. For this we use a mean-square cost function:

C(i(x, y)) = Σ_{x,y} ( V(x, y) − V_Hi(x, y) )²

Where:
i(x, y) is the sub-region (hypothesis) assigned to the coordinate (x, y)
V(x, y) is the estimated motion field
V_Hi(x, y) is the affine motion field of the i-th hypothesis

Page 62: Motion   Segmentation

62

The Algorithm Walkthrough

We wish to minimize the difference between our hypotheses and the actual motion, so at each location we take the hypothesis with the minimum error:

C(i(x, y)) = Σ_{x,y} ( V(x, y) − V_Hi(x, y) )²

i0(x, y) = argmin_i [ V(x, y) − V_Hi(x, y) ]²

i0(x, y) is the minimum-cost assignment: the hypothesis whose affine motion best matches the estimated flow at (x, y).
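Concretely, the per-pixel cost and minimum-cost assignment can be computed as below, where flow is the estimated motion field and hyp_flows stacks the affine fields V_Hi of all hypotheses (names are illustrative).

```python
import numpy as np

def assign_pixels(flow, hyp_flows):
    """flow: H x W x 2 estimated motion field.
    hyp_flows: K x H x W x 2 affine motion fields, one per hypothesis.
    Returns i0 (H x W hypothesis index) and the per-pixel squared residual."""
    residuals = np.sum((hyp_flows - flow[None]) ** 2, axis=-1)   # K x H x W
    i0 = np.argmin(residuals, axis=0)                            # minimum-cost assignment
    min_cost = np.min(residuals, axis=0)
    return i0, min_cost
```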

Page 63: Motion   Segmentation

63

The Algorithm Walkthrough

Now we divide the image(s) into motion regions according to the minimum-cost assignments i0:

P_i(x, y) = 1   if i0(x, y) = i
P_i(x, y) = 0   otherwise

This gives us the opacity map!
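Given i0 and the per-pixel cost from the previous sketch, each hypothesis' support (opacity) map is just a boolean mask. The optional residual threshold is an illustrative addition that leaves poorly explained pixels unassigned, in the spirit of discarding high-residual regions earlier.

```python
import numpy as np

def support_maps(i0, min_cost, n_hypotheses, max_residual=1.0):
    """P_i(x, y) = 1 where hypothesis i wins and explains the flow well enough."""
    well_explained = min_cost < max_residual          # optional outlier rejection
    return [(i0 == i) & well_explained for i in range(n_hypotheses)]
```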

Page 64: Motion   Segmentation

64

The Algorithm Walkthrough

And then we just iterate until the regions stop changing, or until the max number of iterations is reached.

And we’re done!

Page 65: Motion   Segmentation

65

Examples!

• https://www.youtube.com/watch?v=7BtlB8rEqrY

• https://www.youtube.com/watch?v=nnp9qc8O8eE

• https://www.youtube.com/watch?v=4ny8rR1hesU

Page 66: Motion   Segmentation

66

Summary

We saw how to calculate the Optical Flow in a given video, and how to use the Optical Flow in combination with the Affine Estimation model iteratively to get a better approximation of the motion.

Page 67: Motion   Segmentation

67

Conclusions

Motion segmentation is an important part of any motion-related algorithm, and a useful and powerful tool in computer vision.

Page 68: Motion   Segmentation

68

Credits

•John Y.A. Wang & Edward H. Adelson - Layered Representation for Motion Analysis (1993)

•Edward H. Adelson - Layered Representation for Image Coding (1991)

•Lucas B. & Kanade T. - Optical flow algorithm