
Describing Common Human Visual Actions in Images

Matteo Ruggero Ronchi
http://vision.caltech.edu/~mronchi/

Pietro Perona
[email protected]

Computational Vision Lab
California Institute of Technology
Pasadena, CA, USA

Abstract

Which common human actions and interactions are recognizable in monocular still images? Which involve objects and/or other people? How many actions is a person performing at a time? We address these questions by exploring the actions and interactions that are detectable in the images of the MS COCO dataset. We make two main contributions. First, a list of 140 common ‘visual actions’, obtained by analyzing the largest on-line verb lexicon currently available for English (VerbNet) and the human sentences used to describe images in MS COCO. Second, a complete set of annotations for those ‘visual actions’, composed of subject-object pairs and the associated verbs, which we call COCO-a (a for ‘actions’). COCO-a is larger than existing action datasets in terms of the number of instances of actions, and is unique in that it is data-driven rather than experimenter-biased. Other unique features are that it is exhaustive, and that all subjects and objects are localized. A statistical analysis of the accuracy of our annotations and of each action, interaction and subject-object combination is provided.

Appendix Overview

In the appendix we provide:

(I) Statistics on the type of images in COCO-a.

(II) Complete list of adverbs and visual actions.

(III) Complete list of the objects of interactions and occurrence count.

(IV) Complete list of the visual actions and occurrence count.

(V) User interface used to collect the interactions in the COCO-a dataset.

(VI) User interface used to collect the visual actions in the COCO-a dataset.

© 2015. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.


Appendix I: Unbiased Nature of COCO-a

We show in Figure 9 the unbiased nature of the images contained in our dataset. Different actions usually occur in different environments, so in order to balance the content of our dataset we selected an approximately equal number of images of three types of scenes: sports, outdoors and indoors. We also selected images of varying complexity, containing single subjects, small groups (2-4 subjects) and crowds (>4 subjects). The image selection process consists of two steps: (1) categorize all the images containing people in MS COCO based on the types of objects they contain; (2) randomly sample images in approximately equal percentages from all categories and complexities.
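This two-step procedure lends itself to a short sketch. The following Python snippet is a minimal illustration, assuming each image record carries the set of MS COCO object categories it contains and its number of annotated people; the scene heuristics and field names are illustrative assumptions, not our actual implementation.

```python
import random
from collections import defaultdict

def scene_type(obj_categories):
    # Step 1: coarsely bucket an image by the MS COCO objects it contains.
    # The category sets used here are illustrative assumptions.
    if obj_categories & {"sports ball", "skis", "surfboard", "tennis racket"}:
        return "sport"
    if obj_categories & {"chair", "couch", "bed", "dining table", "tv"}:
        return "indoor"
    return "outdoor"

def complexity(num_people):
    # Bucket by number of people: single subject, small group (2-4), crowd (>4).
    return "single" if num_people == 1 else "group" if num_people <= 4 else "crowd"

def sample_balanced(images, per_bucket, seed=0):
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for img in images:  # img: {"categories": set_of_names, "num_people": int, ...}
        buckets[(scene_type(img["categories"]), complexity(img["num_people"]))].append(img)
    # Step 2: sample roughly equal numbers from every (scene, complexity) bucket.
    return [img for b in buckets.values()
            for img in rng.sample(b, min(per_bucket, len(b)))]
```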

Figure 9: Scene and subjects distributions. (Left) The distribution of the types of scenes contained in the dataset: sport 38.9%, indoor 26.0%, outdoor 35.1%. (Right) The distribution of the number of subjects appearing in each image: one subject 47.9%, 2-4 subjects 38.2%, more than 4 subjects 13.9%.

Appendix II: Visual Actions and Adverbs by Category

In order to reduce the chance of annotators using one term instead of another in the data collection interface, we organized the visual actions into 8 groups: ‘posture/motion’, ‘solo actions’, ‘contact actions’, ‘actions with objects’, ‘social actions’, ‘nutrition actions’, ‘communication actions’ and ‘perception actions’. The grouping is based on two simple rules: (a) actions in the same group share some important property, e.g. being performed solo, with objects, with people, or indifferently with people and objects, or being an action of posture; (b) actions in the same group tend to be mutually exclusive, e.g. at a given moment a person can be drinking or eating, but not both. Furthermore, we included in our study 3 ‘adverb’ categories: the ‘emotion’ of the subject, and the ‘location’ and ‘relative distance’ of the object with respect to the subject. Tables 2 and 3 contain a breakdown of the visual actions and adverbs into the categories that were presented to the Amazon Mechanical Turk workers.

Adverbs

Emotion (6): anger, disgust, fear, happiness, sadness, surprise.

Relative Location (6): above, behind, below, in front, left, right.

Relative Distance (4): far, full contact, light contact, near.

Table 2: Adverbs ordered by category. The complete list of high-level visual cues collected, describing the subjects (emotion) and the localization of the interaction (relative location and distance).


Visual Actions

Posture / Motion (23): balance, bend, bow, climb, crouch, fall, float, fly, hang, jump, kneel, lean, lie, perch, recline, roll, run, sit, squat, stand, straddle, swim, walk.

Communication (6): call, shout, signal, talk, whistle, wink.

Contact (22): avoid, bite, bump, caress, hit, hold, hug, kick, kiss, lick, lift, massage, pet, pinch, poke, pull, punch, push, reach, slap, squeeze, tickle.

Social (24*): accompany, be with, chase, dance, dine, dress, feed, fight, follow, give, groom, help, hunt, kill, meet, pay, play baseball, play basketball, play frisbee, play soccer, play tennis, precede, shake hands, teach.

Perception (5): listen, look, sniff, taste, touch.

Nutrition (7): chew, cook, devour, drink, eat, prepare, spread.

Solo (24*): blow, clap, cry, draw, groan, laugh, paint, photograph, play, play baseball, play basketball, play frisbee, play instrument, play soccer, play tennis, pose, sing, skate, ski, sleep, smile, snowboard, surf, write.

With objects (34): bend, break, brush, build, carry, catch, clear, cut, disassemble, drive, drop, exchange, fill, get, lay, light, mix, pour, put, read, remove, repair, ride, row, sail, separate, show, spill, spray, steal, throw, use, wash, wear.

Table 3: Visual actions ordered by category. The complete list of visual actions contained in Visual VerbNet. Visual actions in one category are usually mutually exclusive; visual actions of different categories may co-occur. (*) Five visual actions (play baseball, play basketball, play frisbee, play soccer, play tennis) are considered both ‘social’ and ‘solo’ types of actions.


Appendix III: Object Occurrences in Interactions

We show the full list of objects that people interact with in Figure 10.

Figure 10: Most frequent objects. The complete list of interacting objects obtained from the annotators, shown as a bar chart in decreasing order of occurrence (linear scale with an axis break; person is by far the most frequent object, at 9580 occurrences, with all other objects below roughly 375): person, sports ball, skis, tennis racket, handbag, dining table, skateboard, chair, cell phone, backpack, surfboard, umbrella, baseball bat, frisbee, remote, motorcycle, bench, laptop, snowboard, tie, horse, baseball glove, cake, suitcase, dog, cup, kite, bed, couch, tv, elephant, knife, bicycle, toothbrush, truck, bowl, pizza, donut, book, boat, wine glass, hot dog, banana, train, sheep, oven, orange, teddy bear, sandwich, fork, bottle, car, bus, spoon, giraffe, bird, cow, traffic light, toilet, sink, fire hydrant, parking meter, hair drier, refrigerator, cat, clock, apple, zebra, potted plant, mouse, airplane, stop sign, keyboard, carrot.

Appendix IV: Visual Action Occurrences

We show the complete list of ‘visual actions’ annotated from the images, together with their occurrence counts, in Figure 11.

Figure 11: Most frequent visual actions. The complete list of ‘visual actions’ obtained from the annotators, shown as a bar chart in decreasing order of occurrence (linear scale with an axis break; the most frequent action, be with, has 6892 occurrences): be with, touch, use, hold, smile, pose, look, accompany, play baseball, play soccer, ride, dine, carry, play frisbee, play tennis, play, talk, listen, skate, wear, ski, snowboard, surf, laugh, help, photograph, reach, eat, throw, taste, hit, avoid, prepare, hug, follow, drive, cut, pull, bite, push, signal, pet, lay, drink, blow, feed, catch, lift, kick, sleep, read, squeeze, show, get, teach, write, play instrument, meet, shake hands, sail, repair, cry, chew, row, give, chase, brush, precede, kiss, fill, sniff, play basketball, devour, caress, wash, sing, remove, bump, slap, pour, pay, fight, dance, cook, groom, draw, call, bend.

If we consider as the tail all the actions with fewer than 2000 occurrences, then 90% of the actions are in the tail, and together they cover 27% of the total number of occurrences. The distribution of the visual actions’ counts is heavy-tailed; fitting a line to it in log-log scale, as shown in Figure 12, yields a slope of α ≈ −3. This seems to indicate that the MS COCO dataset is sufficient for a thorough representation and study of about 20 to 30 visual actions; however, we are considering methods to bias our image selection process in order to obtain more samples of the actions contained in the tail.
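As a minimal sketch of this analysis, assuming `counts` holds the per-action occurrence counts (the Zipf-distributed values below are fake stand-ins for the real COCO-a data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 140 fake occurrence counts, sorted in decreasing order.
counts = np.sort(rng.zipf(2.0, 140))[::-1].astype(float) * 100

ranks = np.arange(1, len(counts) + 1)
# Fit a line in log-log space; the slope estimates the power-law exponent.
slope, intercept = np.polyfit(np.log10(ranks), np.log10(counts), deg=1)
print(f"fitted log-log slope alpha ~ {slope:.2f}")  # the paper reports alpha ~ -3

# Tail coverage: actions with fewer than 2000 occurrences.
tail = counts < 2000
print(f"{tail.mean():.0%} of actions cover {counts[tail].sum() / counts.sum():.0%} of occurrences")
```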


Figure 12: Visual actions heavy-tail analysis. The plot, in log-log scale, of the visual actions (x-axis, by rank) against their number of occurrences (y-axis).


Appendix V: Interactions User Interface

In Figure 13 we show the AMT interface developed to collect interaction annotations from images. Each worker is presented with a series of 10 images, each containing a subject highlighted in blue, and is asked to (1) flag the subject if it is mostly occluded or invisible; (2) if the subject is sufficiently visible, click on all the objects he/she is interacting with. The interface provides feedback to the annotator by highlighting in white all the annotated objects when the mouse hovers over the image, and marking objects in green once they are clicked. Annotators can remove annotations either by clicking on the object segmentation in the image a second time or by using the appropriate button in the annotation panel. We included a comments text box to obtain specific feedback from workers on each image.
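For concreteness, the outcome of one such task can be thought of as a small record along the following lines; the field names and values are purely illustrative, not the dataset’s actual schema.

```python
# Purely illustrative shape of one worker response from the interactions GUI;
# every field name and value here is a made-up example.
interaction_response = {
    "image_id": 123456,         # MS COCO image id (hypothetical value)
    "subject_id": 7890,         # annotation id of the person highlighted in blue
    "flagged_occluded": False,  # step (1): subject mostly occluded or invisible
    "object_ids": [7891, 7999], # step (2): segmentations the worker clicked
    "comments": "",             # free-form feedback text box
}
```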

Figure 13: Interactions GUI. In this image the blue subject is interacting with anotherperson, the bed and the laptop.


Appendix VI: Visual Actions User Interface

In Figures 14, 15, 16, 17 and 18 we show the sequence of steps in the AMT interface developed to collect visual action annotations. We collect visual actions for all the interactions obtained from the previously shown GUI that have an agreement of at least 3 out of 5 workers, as explained in more detail in Section 4.3 of the main paper. Each worker is presented with a single image containing a subject (highlighted in blue) and an object (highlighted in green), and is asked to go through 8 panels, one for each category of visual actions, and select all the visual actions that apply to the visualized interaction. Annotators can skip a category if no visual action applies (e.g. nutrition visual actions apply only to food items). As they proceed through the 8 panels, workers can see all the annotations provided so far for the specific interaction, which helps avoid ambiguous annotations. Depending on the object involved in the interaction, some panels might not be shown (e.g. the communication panel is not shown when the object of the interaction is inanimate, and the nutrition panel is not shown when the object is another person).
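A minimal sketch of this agreement rule, assuming each worker’s response is reduced to the list of object ids they clicked (function and variable names are illustrative):

```python
from collections import Counter

def consensus_interactions(responses, min_agreement=3):
    """Keep an object only if at least `min_agreement` workers clicked it.

    responses: one list of clicked object ids per worker (5 workers per image).
    """
    votes = Counter(obj for worker in responses for obj in set(worker))
    return [obj for obj, n in votes.items() if n >= min_agreement]

# Objects 17 and 42 reach 3-of-5 agreement; object 99 does not.
print(consensus_interactions([[17, 42], [17, 42, 99], [17], [42, 17], [42]]))  # -> [17, 42]
```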

Step 1: Flag the interaction if the subject is occluded

Figure 14: Visual Actions GUI. (I/V)


Step 2: Provide Relative Location

Step 3: Provide Distance of Interaction

Figure 15: Visual Actions GUI. (II/V)


Step 4: Provide Senses used in Interaction

Step 5: Provide Nutrition Visual Actions (none in this case)

Figure 16: Visual Actions GUI. (III/V)


Step 6: Provide Contact Visual Actions (free-typing is allowed)

Figure 17: Visual Actions GUI. (IV/V)


Step 7: Provide Object Visual Actions (free-typing is allowed)

Figure 18: Visual Actions GUI. (V/V)