Page 1

Documenting Motion Sequences with Personalized Annotation System

Kanav Kahol, Priyamvada Tripathi, and Sethuraman Panchanathan
Arizona State University

Yu-song Syu

2006-08-22

IEEE Multimedia 2006

Page 2

Outline

- Gestures and complex motion sequences
- Steps of annotation
  - Modeling gestures anatomically
  - Motion capture
  - Gesture segmentation
  - Gesture recognition
  - Movement annotation
- Results
- Future work

Page 3

Gestures

- A gesture is a sequence of poses
- Modeled by state transitions: each state corresponds to a pose in the sequence, from a start pose to an end pose

Page 4

When it becomes complex…

- In dance, a large vocabulary of gestures is used
- A scalable gesture segmentation / recognition methodology is needed
- Hidden Markov Models (HMMs) are used for this

Page 5

HMM – Hidden Markov Model

We have:
- A set of possible symbols (observations)
- A set of possible states
- Probabilities of transitions between states
- Probabilities of emitting each symbol in every state

The symbol sequence is observed; the state sequence is hidden. A minimal decoding sketch follows.

Page 6

Modeling gestures anatomically

- The body anatomy is modeled with 23 segments and 14 joints
- A parent segment inherits the characteristics of its children (one way to realize this is sketched below)
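The slide does not spell out how a parent "inherits" its children's characteristics. Below is a hypothetical sketch, assuming a simple mass-weighted combination of child velocities; the segment names and the weighting rule are illustrative assumptions.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Segment:
    """Hypothetical node in the body hierarchy (names are illustrative)."""
    name: str
    mass: float
    velocity: np.ndarray                       # 3D velocity of this segment alone
    children: list = field(default_factory=list)

    def aggregate_velocity(self):
        """Parent 'inherits' its children: combine child motion with its own.
        Mass-weighted averaging is an assumption, not the paper's rule."""
        vs = [(self.mass, self.velocity)] + [
            (c.mass, c.aggregate_velocity()) for c in self.children
        ]
        total_mass = sum(m for m, _ in vs)
        return sum(m * v for m, v in vs) / total_mass

# Example: a forearm whose hand is a child segment
hand = Segment("hand", 0.5, np.array([0.2, 0.0, 0.1]))
forearm = Segment("forearm", 1.5, np.array([0.1, 0.0, 0.0]), children=[hand])
print(forearm.aggregate_velocity())
```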

Page 7

Modeling gestures anatomically

- Two adjacent segments can be perceived as one unit when:
  - They have similar motion vectors
  - The angle of the joint between them does not change over a time period
- This yields a dynamic body hierarchy (a coalescing check along these lines is sketched below)
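A hedged sketch of this merging rule, assuming cosine similarity of the motion vectors and a joint-angle range test; the similarity measure and both thresholds are illustrative assumptions, not values from the slide.

```python
import numpy as np

def segments_coalesce(vel_a, vel_b, joint_angles,
                      vel_tol=0.9, angle_tol_deg=5.0):
    """Decide whether two adjacent segments behave as one unit over a window.

    vel_a, vel_b : (T, 3) velocity samples of the two segments over the window
    joint_angles : (T,) joint angle samples (degrees) over the same window
    """
    # "Similar motion vectors": high mean cosine similarity over the window.
    dots = np.sum(vel_a * vel_b, axis=1)
    norms = np.linalg.norm(vel_a, axis=1) * np.linalg.norm(vel_b, axis=1) + 1e-9
    similar_motion = np.mean(dots / norms) > vel_tol

    # "Joint angle doesn't change": its range stays within a small tolerance.
    stable_joint = np.ptp(joint_angles) < angle_tol_deg

    return similar_motion and stable_joint
```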

Page 8

Dynamic body hierarchy

Segments behaving as one unit have the same color (in the figure).

Page 9

Motion capture

- 7 choreographers; each creates 3 short dance sequences
- Each sequence is repeated 3 times
- Choreographers write down:
  - An original score for every dance sequence
  - A detailed score for every gesture
  - (A score is a verbal description)

Page 10

Motion capture

Page 11

Gesture segmentation

- For every body segment, derive the spatial orientation, velocity, and acceleration, using the dynamic hierarchy
- Compute the activity measures:
  - SegmentForce = SegmentMass × SegmentAcceleration
  - SegmentMomentum = SegmentMass × SegmentVelocity
  - SegmentKE = SegmentMass × SegmentVelocity²
- Derive parent activities by vector addition over the hierarchy
- These measures drive gesture boundary determination (next slide); a sketch of the per-segment computation follows
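A short sketch of these activity computations, following the formulas as written on the slide (including KE = mass × velocity²); the array shapes, the gradient-based acceleration, and the parent aggregation rule are assumptions for illustration.

```python
import numpy as np

def segment_activity(mass, velocity, acceleration):
    """Activity measures listed on this slide, per segment and time sample.
    velocity, acceleration: (T, 3) arrays."""
    force = mass * acceleration                        # SegmentForce
    momentum = mass * velocity                         # SegmentMomentum
    kinetic = mass * np.sum(velocity ** 2, axis=1)     # SegmentKE (as written on the slide)
    return force, momentum, kinetic

def parent_activity(child_activities):
    """Parent activity via vector addition of its children's activities
    (one reading of 'derive parent activities by vector addition')."""
    return np.sum(child_activities, axis=0)

# Hypothetical usage with synthetic motion-capture samples
T = 100
vel = np.random.randn(T, 3) * 0.1
acc = np.gradient(vel, axis=0)
force, momentum, ke = segment_activity(mass=1.5, velocity=vel, acceleration=acc)
```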

Page 12

Gesture segmentation: gesture boundary determination

- Mark local minima as binary triples: when force reaches a local minimum, mark "1"; likewise for momentum and KE, e.g. (1,0,0), (0,1,1), …
- Not every local minimum is a gesture boundary
- There are 22 real-world physical configurations in which adjacent body segments could coalesce
- The 23 segment triples and the 22-element configuration vector are used to train a classifier that decides whether a local minimum is a gesture boundary (see the sketch below)
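An illustrative sketch of this boundary decision. The feature layout (23 per-segment triples plus the 22-element configuration vector) follows the slide; the use of scipy for minima detection, the logistic-regression classifier, and the synthetic training data are assumptions, since the slide does not name the classifier.

```python
import numpy as np
from scipy.signal import argrelextrema
from sklearn.linear_model import LogisticRegression  # classifier choice is an assumption

def minima_triples(force, momentum, ke, order=5):
    """Per time sample, a (3,) binary triple marking whether force, momentum,
    and kinetic energy are each at a local minimum (as on the slide)."""
    T = len(force)
    triple = np.zeros((T, 3), dtype=int)
    for j, sig in enumerate((force, momentum, ke)):
        idx = argrelextrema(np.asarray(sig), np.less, order=order)[0]
        triple[idx, j] = 1
    return triple

def boundary_features(triples_23_segments, config_22):
    """Concatenate the 23 per-segment triples (23*3 values) with the
    22-element physical-configuration vector into one feature vector."""
    return np.concatenate([triples_23_segments.ravel(), config_22])

# Hypothetical training data: each row is one boundary candidate's feature
# vector (as built by boundary_features); y marks whether a choreographer
# labeled that local minimum as a true gesture boundary.
X = np.random.rand(200, 23 * 3 + 22)
y = np.random.randint(0, 2, size=200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
is_boundary = clf.predict(X[:1])
```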

Page 13

Gesture recognition

- Find minima of the total force of the segments
- Find stabilization of joints: the change in joint angle does not exceed a threshold over a time period
- segmentHMM with 23 states; jointHMM with 14 states
- A coupled HMM (cHMM) couples the above HMMs

Page 14

cHMM

Page 15

cHMM

Θ_{c'c}: coupling weight from the jointHMM to the segmentHMM

d(t,i): distance between segment t and joint i
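The coupling equation itself appears only as a figure on this slide. A common coupled-HMM formulation (Brand-style), which the notation above suggests but which is not verified against the paper, factorizes each chain's transition over the other chains using the coupling weights:

```latex
P\!\left(s_t^{c} \mid s_{t-1}^{1}, \dots, s_{t-1}^{C}\right)
  \;\propto\; \prod_{c'} P\!\left(s_t^{c} \mid s_{t-1}^{c'}\right)^{\theta_{c'c}}
```

Here s_t^c is the state of chain c (segmentHMM or jointHMM) at time t, and θ_{c'c} is the coupling weight from chain c' to chain c, presumably computed as a function of the segment-joint distance d(t,i).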

Page 16

Movement annotation

- Movement annotation can be useful when teaching dance
- The Anvil annotation software (http://www.dfki.de/~kipp/anvil) is used during training
- Choreographers can use it to add or modify annotations and to set gesture boundaries (a sketch of exporting annotations for such a tool follows)
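A sketch of how detected gestures and their verbal scores might be written out as XML for an annotation tool. The element and attribute names below are hypothetical placeholders, not Anvil's actual annotation schema.

```python
import xml.etree.ElementTree as ET

def export_annotations(gestures, path):
    """Write detected gestures and their verbal scores to an XML file.

    gestures: list of dicts with 'start', 'end' (seconds) and 'score' (text).
    The schema here is a placeholder, not Anvil's real format.
    """
    root = ET.Element("annotation")
    track = ET.SubElement(root, "track", name="gestures")
    for g in gestures:
        el = ET.SubElement(track, "gesture",
                           start=f"{g['start']:.2f}", end=f"{g['end']:.2f}")
        el.text = g["score"]  # the choreographer's verbal description
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

# Hypothetical usage
export_annotations(
    [{"start": 0.0, "end": 2.4, "score": "slow arm sweep to the left"}],
    "sequence1_annotation.xml",
)
```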

Page 17

Anvil

Page 18

Motion annotation results

- The proposed system is simple to use: XML output and the Anvil interface
- Manual annotation of a 4-5 minute dance takes about 60 minutes; this system takes only 1 minute
- A 6-9 percent improvement in accuracy

Page 19

Future work

- Extend this system to annotate generic human movements, e.g. walking, running, and washing utensils
- Develop a common motion language with this kind of software