
Motion Segmentation

By Hadas Shahar

(based on the work of John Y. A. Wang and Edward H. Adelson, with material from Wikipedia and YouTube)


Introduction

• When given a video input, we would like to divide it into segments according to the different types of movement.

• This is useful for object tracking and video analysis.


Session Map

• Building Blocks:
  ▫ Layered Image Representation
  ▫ Optic Flow Estimation
  ▫ Affine Motion Estimation
• Algorithm Walkthrough
• Examples


Layered Image Representation

• Given a simple movement, what would be the best way to represent it?

• Which parameters would you select to represent the following scene?

Layered Image Representation

For any movement, we would like to have 3 maps:

• The Intensity Map
• The Alpha Channel (or: opacity)
• The Warp Map (or: optic flow)

For example, if we take a scene of a hand moving over a background, we would like to get these three maps for each layer (the hand and the background).

• Given these maps, it's possible to easily reproduce the occurring movement (a compositing sketch follows below).

• But how can we produce these maps?
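As referenced above, here is a minimal NumPy sketch of that reproduction (compositing) step. The nearest-neighbor warp helper and all names are illustrative assumptions, not the paper's code:

    import numpy as np

    def warp(image, flow):
        # Backward-warp an image by a per-pixel flow field (nearest neighbor).
        # flow[..., 0] is the x-displacement, flow[..., 1] the y-displacement.
        h, w = image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
        src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
        return image[src_y, src_x]

    def composite(intensity, alpha, flow, background):
        # Reproduce the next frame from one layer's three maps plus a background.
        moved_intensity = warp(intensity, flow)
        moved_alpha = warp(alpha, flow)
        return moved_alpha * moved_intensity + (1.0 - moved_alpha) * background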


Optic Flow Estimation (this is the heavy part)

• The Optical Flow is a field of vectors describing the movement in the image.

• For example: [figure: an example optical-flow vector field]


• Note! Optical flow doesn't describe the actual movement, but the movement we perceive. Look at the barber's pole, for example:


• The actual motion is to the RIGHT

• But the perceived motion (the optical flow) is UP

Optic Flow Estimation: the Lucas-Kanade method

In order to identify movements correctly, we have to work with two assumptions:

• Brightness Constancy: the movement won't affect the brightness of the object
• Constant Motion in a Neighborhood: neighboring pixels move together


Definitions:

X(t) is the point X at time t, where X = (x, y).

I(X(t), t) is the brightness of point X at time t.

\nabla I = \left[ \frac{\partial I}{\partial x}, \frac{\partial I}{\partial y} \right]^T is the gradient.


Brightness Constancy Assumption:

I(x(t), t) = \text{const} for all t

Meaning: the brightness of point x(t) is constant, so its total time derivative must be 0:

\frac{d}{dt} I(x(t), t) = \nabla I^T \frac{dx}{dt} + \frac{\partial I}{\partial t} = 0

We would like to focus on the velocity term, V = \frac{dx}{dt}.
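As a side note, the derivatives in this equation are typically estimated from two frames by simple finite differences; a hedged sketch (the helper name is mine, not from the source):

    import numpy as np

    def image_derivatives(frame0, frame1):
        # Spatial gradients I_x, I_y and temporal derivative I_t, by finite differences.
        Ix = np.gradient(frame0.astype(float), axis=1)  # d/dx (along columns)
        Iy = np.gradient(frame0.astype(float), axis=0)  # d/dy (along rows)
        It = frame1.astype(float) - frame0.astype(float)
        return Ix, Iy, It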


But why is the brightness constancy assumption not enough on its own?

Let's look at the following example and try to determine the optical flow:


It looks like the grid is moving down and to the right, but it could actually be any one of several motions:


Since our window of observation is too small, we can't infer the actual motion taking place. This is called the Aperture Problem, and it is why we need the second assumption.


Constant Motion in a Neighborhood:

We assume the velocity is the same over our entire window of observation:

\nabla I(x', t)^T V + \frac{\partial I}{\partial t}(x', t) = 0 \quad \forall x' \in W(x)

where W(x) is the window (the neighborhood) of x.


There is a trade-off here: the larger the window, the less accurately it represents the velocity (since we assume the velocity is constant throughout it); and in the other direction, the smaller the window, the more likely we are to run into the aperture problem.


Sadly, since there are always some changes in intensity (due to environment changes or even sensor noise), the derivative will never actually be 0.

So we minimize the squared error instead:

E(V) = \sum_{x' \in W(x)} \left| \nabla I(x', t)^T V + I_t(x', t) \right|^2


The minimal value occurs where the first derivative is 0:

\frac{dE}{dV} = 2MV + 2q = 0

where

M = \sum_{W(x)} \nabla I \, \nabla I^T, \qquad q = \sum_{W(x)} I_t \, \nabla I

So V is:

V = -M^{-1} q
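A hedged NumPy sketch of this solve for a single window; it assumes the image_derivatives helper from above, and is an illustration rather than the original implementation:

    import numpy as np

    def lucas_kanade_window(Ix, Iy, It, x, y, half=7):
        # Solve V = -M^{-1} q over a (2*half+1)^2 window centered at (x, y).
        wy = slice(y - half, y + half + 1)
        wx = slice(x - half, x + half + 1)
        ix, iy, it = Ix[wy, wx].ravel(), Iy[wy, wx].ravel(), It[wy, wx].ravel()
        # M = sum over the window of (grad I)(grad I)^T; q = sum of I_t * grad I
        M = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                      [np.sum(ix * iy), np.sum(iy * iy)]])
        q = np.array([np.sum(it * ix), np.sum(it * iy)])
        if abs(np.linalg.det(M)) < 1e-6:
            return None  # M is (near-)singular: a flat region or an edge (see below)
        return -np.linalg.solve(M, q)  # V = (Vx, Vy)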


A few notes regarding M: it is a 2×2 matrix, made up of the gradient times its transpose, summed over the window:

M = \sum_{W(x)} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}

We can divide M into 3 cases (this is going to be very similar to Harris corner detection).


Case 1: If the gradient is 0 throughout the window, then M = 0, both eigenvalues are 0, and V can have any value.

This occurs when our window is over a flat, textureless region:


Case 2: If the gradient is constant, M is not 0 but has only one nonzero eigenvalue, so V is determined only along the gradient direction.

This occurs when our window is on an edge:


Case 3: If M is invertible (det ≠ 0, two nonzero eigenvalues), we can find V easily.

This occurs when our window is at a corner:
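The three cases can be told apart numerically from the eigenvalues of M; a small illustrative sketch (the threshold value tau is an arbitrary assumption):

    import numpy as np

    def classify_window(M, tau=1e-3):
        # Classify a window by the eigenvalues of its 2x2 matrix M.
        lam_small, lam_big = np.linalg.eigvalsh(M)  # ascending order
        if lam_big < tau:
            return "flat region: M ~ 0, V unconstrained"          # case 1
        if lam_small < tau:
            return "edge: rank-1 M, aperture problem"             # case 2
        return "corner: M invertible, V = -M^{-1} q is reliable"  # case 3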


After we find V for every window, we get a velocity vector map: the Optical Flow.


Affine Estimation

• In Affine Estimation, we assume our motions can be described by affine transformations.

• This includes:
  ▫ Translations
  ▫ Rotations
  ▫ Zoom
  ▫ Shear

And this does cover a lot of the motions we encounter in the real world (example matrices are sketched below).
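To make this concrete, here is a sketch of 3×3 homogeneous matrices for each motion type; the parameter values are arbitrary examples:

    import numpy as np

    theta, s = np.deg2rad(10), 1.2            # example rotation angle and zoom factor
    translation = np.array([[1, 0, 5],
                            [0, 1, 3],
                            [0, 0, 1]], dtype=float)   # shift by (5, 3)
    rotation = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0,              0,             1]])
    zoom = np.diag([s, s, 1.0])
    shear = np.array([[1, 0.3, 0],
                      [0, 1,   0],
                      [0, 0,   1]], dtype=float)       # shear x by 0.3*y
    # Any product of these is again affine: the last row stays [0, 0, 1].
    combined = translation @ rotation @ zoom @ shear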


The idea behind Affine Estimation is quite simple: find the affine transformation between 2 images that yields the minimal difference.

Quick reminder: [figures illustrating the affine transformations were shown here]

There are several ways to do this, most commonly by matching feature points between the 2 images and calculating the affine transformation matrix (remember?).

What we'll use won't be based on feature points, but on the velocity vectors calculated from the optical flow.

We'll get to that later though, so for now: no formulas!


Part 2: The Algorithm Walkthrough

• So how can we combine all the information we gathered so far to create our 3 maps for every frame?

• Here's the basic idea:

First, we calculate the Optical Flow: this gives us the Warp map.

But since the optical flow only looks for one overall motion, it may disregard object boundaries, and we'll get several different objects lumped into one motion.

[Diagram stage: Optical Flow Estimator]

Then, we divide the image(s) into arbitrary sub-regions and use Affine Estimation, which helps us find the local motion within every sub-region.

[Diagram stage: Affine Regression and Clustering]

Then we check the difference between our initial guess and the observed movement, and reassign the sub-regions to minimize the error.

[Diagram stage: Hypothesis Testing, comparing our estimation (using an affine transformation) against the actual change]


We repeat the cycle iteratively, constantly refining the motion estimation.

Convergence is achieved when either:
1. Only a few points are reassigned in each iteration, or
2. The maximum number of iterations is reached.

[Figure: region reassignments; in each iteration we refine our estimation results]

This segmentation is what provides us with the Opacity Map.

Reminder: this is an affine transformation matrix, made up of 6 variables (a through f) that cover the rotation, translation, zoom and shear operations:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
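Applying such a matrix to homogeneous points is a single matrix product; a quick illustrative sketch with arbitrary example values:

    import numpy as np

    A = np.array([[1.0, 0.1,  2.0],
                  [0.0, 1.0, -1.0],
                  [0.0, 0.0,  1.0]])          # one example choice of a..f
    points = np.array([[0.0,  0.0, 1.0],
                       [10.0, 5.0, 1.0]]).T   # homogeneous columns [x, y, 1]
    transformed = A @ points                  # each column is [x', y', 1]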

Definitions

Let V be our velocity (obtained by the optical flow estimation).

We would like to use the velocity to represent the affine transformation:

V = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

But how can we work with V in such a way? We break V into V_x and V_y, two components representing the velocity in the x and y directions respectively.

V_x(x, y) = ax + by + c
V_y(x, y) = dx + ey + f

where a, b, c, d, e, f are the variables of the affine transformation.

• Let H_i be the i-th hypothesis vector. Meaning: H_i is the affine transformation we believe would best represent the i-th region's movement.

• We would like to break H into its x and y parts as well:

H_i = [H_{xi}, H_{yi}], \qquad H_{xi} = [a_{xi}, b_{xi}, c_{xi}], \qquad H_{yi} = [d_{yi}, e_{yi}, f_{yi}]

And last but not least, we define

\Phi = [x, y, 1]^T

That's our original coordinates vector.

So far we have the following parameterization:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad V = H_i \Phi

Then we can define our affine equations like this:

V_x(x, y) = H_{xi}^T \Phi
V_y(x, y) = H_{yi}^T \Phi

And we can calculate H_i from V using the following least-squares formula:

[H_{xi}, H_{yi}] = \left( \sum_{P_i} \Phi \Phi^T \right)^{-1} \sum_{P_i} \Phi \, [V_x(x, y), V_y(x, y)]

Here [V_x, V_y] are the velocity's x and y components, the inverted factor is the pseudo-inverse matrix, and both sums run over all the pixels in the region P_i.
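A hedged NumPy sketch of this regression for one region; np.linalg.lstsq computes the same pseudo-inverse solution, and all names here are my own:

    import numpy as np

    def fit_affine_hypothesis(xs, ys, Vx, Vy):
        # Fit [H_xi, H_yi] from flow samples (Vx, Vy) at region pixels (xs, ys).
        Phi = np.stack([xs, ys, np.ones_like(xs)], axis=1)  # rows are [x, y, 1]
        targets = np.stack([Vx, Vy], axis=1)                # rows are [Vx, Vy]
        # Least squares gives (sum Phi Phi^T)^-1 sum Phi [Vx, Vy] in one call
        H, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
        return H[:, 0], H[:, 1]                 # H_xi = [a, b, c], H_yi = [d, e, f]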


We can divide our region segmentation problems into 2 cases:

• A sub-region contains several object boundaries
(i.e., the region contains several small movements)

• An object is covered by several sub-regions
(i.e., we need to merge regions in order to view the full movement)

Case 1: a sub-region contains several object boundaries.

In this case, since we divide our image into fairly small regions, we would like to ignore these sections. These regions will have a large residual error, so we can identify them and remove them from our calculations.

Case 2: an object is covered by several sub-regions.

In this case, we would like to merge the 2 (or more) sub-regions so that they cover our entire object.

Since these sub-regions contain the same moving object, their movement parameters will be very similar.

So how do we do it?

We move our hypotheses into affine motion space, parameterizing them by their velocity parameters rather than spatial values.

Then we group them using K-Means clustering (we already know how to do that!).

This merges similar hypotheses and provides us with a single representative for each motion, as sketched below.
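A minimal sketch of that grouping, assuming scikit-learn's KMeans over the 6-parameter hypothesis vectors (the paper's exact clustering details may differ):

    import numpy as np
    from sklearn.cluster import KMeans

    def merge_hypotheses(H_list, n_motions=3):
        # Each row of H is one hypothesis [a, b, c, d, e, f] in affine motion space.
        H = np.array(H_list)
        km = KMeans(n_clusters=n_motions, n_init=10).fit(H)
        return km.cluster_centers_, km.labels_  # one representative per motion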

Now that we have calculated the affine transformation for each region, we would like to check how we did compared to the actual movement. For this we use a squared-error cost function:

C(i(x, y)) = \sum_{x, y} \left( V(x, y) - V_{H_i}(x, y) \right)^2

where:
i(x, y) is the sub-region assigned to the coordinate (x, y),
V(x, y) is the estimated motion field,
V_{H_i}(x, y) is the affine motion field of the i-th hypothesis.

We wish to minimize the difference between our hypothesis and the actual motion, so at each location we take the hypothesis with the minimum value:

i_0(x, y) = \arg\min_i \left[ V(x, y) - V_{H_i}(x, y) \right]^2

i_0 is the minimum cost assignment: the hypothesis with the minimal residual at that location.

Now we divide the image(s) into motion regions by thresholding on the i_0 assignments:

P_i(x, y) = \begin{cases} 1 & \text{if } i_0(x, y) = i \\ 0 & \text{otherwise} \end{cases}

This gives us the opacity map!
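A sketch of this assignment step, assuming a dense flow array and one precomputed affine flow field per hypothesis (array shapes and names are my own convention):

    import numpy as np

    def assign_regions(V, V_hyps):
        # V: (H, W, 2) estimated flow; V_hyps: list of (H, W, 2) affine flow fields.
        residuals = np.stack([np.sum((V - Vh) ** 2, axis=-1) for Vh in V_hyps])
        i0 = np.argmin(residuals, axis=0)                   # minimum-cost assignment
        opacity_maps = [(i0 == i).astype(float) for i in range(len(V_hyps))]
        return i0, opacity_maps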

And then we just iterate until the regions stop changing, or until the maximum number of iterations is reached.

And we’re done!

Examples!

• https://www.youtube.com/watch?v=7BtlB8rEqrY
• https://www.youtube.com/watch?v=nnp9qc8O8eE
• https://www.youtube.com/watch?v=4ny8rR1hesU

Summary

We saw how to calculate the optical flow of a given video, and how to use the optical flow in combination with the affine estimation model, iteratively, to get a better and better approximation of the motion.

Conclusions

Motion segmentation is an important part of any motion-related algorithm, and a useful and powerful tool in computer vision.

Credits

• Wang, J. Y. A. & Adelson, E. H., "Layered Representation for Motion Analysis" (1993)
• Adelson, E. H., "Layered Representation for Image Coding" (1991)
• Lucas, B. D. & Kanade, T., "An Iterative Image Registration Technique with an Application to Stereo Vision" (1981)