
MediaEval 2016 - Context of Experience Task


Page 1

MediaEval task: Context of Experience

Michael Riegler, Martha Larson, Concetto Spampinato, Pål Halvorsen, Carsten Griwodz

Recommending videos that suit a watching situation

Page 2

How to make a cool task…

Let's mix… movies, situation, and recommendations

Page 3

Goals of the task

In this case, airplanes:

small screens, distractions…

People want to be entertained, to make time “fly”

Classify movies as “good” or “bad” to watch on an airplane → give better recommendations
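As a toy illustration of that goal, a recommender could simply filter its candidate list by the predicted label; the movie titles and predictions below are made up:

    # Toy sketch: keep only movies classified as "good" to watch on a plane,
    # then let a regular recommender rank the remaining candidates.
    predicted = {"Movie A": "good", "Movie B": "bad", "Movie C": "good"}  # hypothetical classifier output
    plane_friendly = [title for title, label in predicted.items() if label == "good"]
    print(plane_friendly)  # ['Movie A', 'Movie C']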

Page 4

What is it like to watch a movie on a plane?

Engine noise

Announcements

Turbulence

Narrow spaces

Small, glary screens

Fellow passengers, kids

Page 5

Dataset

Movies collected from KLM over 3 months (February – April 2015)

318 movies

Metadata (names, ratings…)

Audio features

Visual features

Links to videos

Posters

Split into test and train sets (70/30)
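To make the setup concrete, here is a minimal sketch of the task as a binary classification problem, assuming a per-movie feature matrix X and labels y; the random placeholder data and the logistic-regression model are purely illustrative, not part of the released dataset or any participant's approach:

    # Minimal sketch: 318 movies, a 70/30 split, and a binary
    # "good"/"bad for a plane" classifier on placeholder features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(318, 64))    # placeholder feature vectors (metadata/audio/visual)
    y = rng.integers(0, 2, size=318)  # placeholder binary labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)  # 70/30 split as in the task

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))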

Page 6

Dataset crowdsourcing

Crowdsourced user opinions

“Good” on a plane … or not

Only workers who have experience

548 different workers, 1644 judgments

Ranking of videos to get consensus
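A minimal sketch of how multiple judgments per movie could be collapsed into a single label by majority vote; the data structure is an assumption, not the exact procedure used to build the ground truth:

    # Sketch: aggregate worker votes per movie by majority
    # (ties go to whichever label was seen first).
    from collections import Counter

    judgments = {                                  # hypothetical worker votes
        "Movie A": ["good", "good", "bad"],
        "Movie B": ["bad", "bad", "good", "bad"],
    }
    labels = {title: Counter(votes).most_common(1)[0][0]
              for title, votes in judgments.items()}
    print(labels)  # {'Movie A': 'good', 'Movie B': 'bad'}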

Page 7

Possible runs

Use all information available (a simple fusion of the modalities is sketched below):

Content (visual, audio…)

Metadata (ratings, comments…)

More…
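One straightforward way to combine these modalities in a run is early fusion, i.e. concatenating the per-movie feature vectors before training a single classifier; the feature dimensions below are made up:

    # Sketch of early fusion: concatenate visual, audio and metadata
    # features per movie into one vector for a single model.
    import numpy as np

    visual = np.random.rand(318, 128)   # placeholder visual features
    audio = np.random.rand(318, 40)     # placeholder audio features
    metadata = np.random.rand(318, 10)  # placeholder metadata features (e.g. ratings)

    fused = np.concatenate([visual, audio, metadata], axis=1)
    print(fused.shape)  # (318, 178)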

Page 8

The teams

TUD-MMC (Bo Wang, Cynthia C. S. Liem): 5 runs; multimodal classifier stacking
ITEC – AAU (Polyxeni Sgouroglou, Tarek Markus Abdel Aziz, Mathias Lux): 4 runs; deep learning, text-based naive Bayes, SVMs, ...
Simula (Konstantin Pogorelov, Michael Riegler, Pål Halvorsen, Carsten Griwodz): 3 runs; PART classifier, global features

Page 9

TUD-MMC

Run                      Precision  Recall  F1-score
User rating              0.371      0.609   0.461
Visual                   0.447      0.476   0.458
Metadata                 0.524      0.516   0.519
Metadata + user rating   0.581      0.600   0.583
Metadata + visual        0.584      0.600   0.586
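For reference, the F1-score in these tables is presumably the usual harmonic mean of precision P and recall R; the exact averaging over classes or runs is not stated here, so reported values need not match the formula applied to the rounded P and R:

    F_1 = \frac{2 \cdot P \cdot R}{P + R}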

Page 10

ITEC / AAU

Used data           Precision  Recall  F1-score
Visual (posters)    0.625      0.639   0.632
Visual (trailers)   0.605      0.676   0.638
Text                0.625      0.676   0.650
Text + keywords     0.619      0.647   0.633

Page 11

Simula

Used data            Precision  Recall  F1-score
Metadata + visual    0.608      0.742   0.668
Metadata             0.604      0.933   0.734
Visual information   0.633      0.977   0.768

Page 12

Team ranking

Rank  Team                      Precision  Recall  F1-score
1     AAU / ITEC (Text)         0.625      0.676   0.650
2     TUD-MMC (Meta + Visual)   0.584      0.600   0.586
-     Simula (Visual)           0.633      0.977   0.768

Page 13

Interesting insights

Visual features perform best!?

Text achieves better results than metadata

No obvious correlation between popular rating sites (IMDb, Metacritic…) and what people choose

Some genres are more popular (comedies, family movies leading)

Page 14

Possible improvements

Always get more data:

longer periods

other airlines

videos in general

Add more features

Collect data from people who are actually flying (maybe researchers would be a good choice)

Page 15

Learn more!

Friday afternoon session: 16:30 – 17:00

• AAU: Mathias Lux

• TUD: Bo Wang

• Simula: Konstantin Pogorelov

Page 16

See you again next year?