
Verifying Multimedia Use at MediaEval 2016



Page 1: Verifying Multimedia Use at MediaEval 2016

Verifying Multimedia Use at MediaEval 2016

Christina Boididou [1], Stuart E. Middleton [5], Symeon Papadopoulos [1], Duc-Tien Dang-Nguyen [2,3], Giulia Boato [2], Michael Riegler [4] & Yiannis Kompatsiaris [1]

[1] Information Technologies Institute (ITI), CERTH, Greece
[2] University of Trento, Italy
[3] Insight Centre for Data Analytics at Dublin City University, Ireland
[4] Simula Research Lab, Norway
[5] University of Southampton IT Innovation Centre, UK

Page 2: Verifying Multimedia Use at MediaEval 2016

1. REAL OR FAKE: The verification problem

Page 3: Verifying Multimedia Use at MediaEval 2016

Real photo: captured in Dublin’s Olympia Theatre

A photo of Eagles of Death Metal in concert

But: mislabeled on social media as showing the crowd at the Bataclan theatre just before gunmen began firing.

Page 4: Verifying Multimedia Use at MediaEval 2016

A TYPOLOGY OF FAKE: REPOSTING OF REAL

Photos from past events reposted as if associated with the current event

‘Eiffel Tower lights up in solidarity with Pakistan’

‘Syrian refugee girl selling gum in Jordan’

Page 5: Verifying Multimedia Use at MediaEval 2016

A TYPOLOGY OF FAKE: PHOTOSHOPPING

Digitally manipulated (tampered) photos

‘Sharks in New York during Hurricane Sandy’

‘Sikh man is a suspect of Paris attacks’

Page 6: Verifying Multimedia Use at MediaEval 2016

2. TASK DEFINITION

Page 7: Verifying Multimedia Use at MediaEval 2016

MAIN TASK

[Diagram: a POST (IMAGE + AUTHOR PROFILE metadata) is fed to the MEDIAEVAL SYSTEM, which outputs FAKE or REAL]

‘Given a post (image + metadata), return a decision (fake, real, unknown) on whether the information presented by the post reflects reality’
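The task interface above can be sketched as a function over a post object. This is a toy illustration only: the field names (`post_id`, `author_followers`, etc.) and the heuristic rules are my assumptions, not part of the task or any participant system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A social media post as defined by the task: image + metadata."""
    post_id: str
    text: str
    image_url: str
    author_followers: int  # hypothetical profile feature

def verify(post: Post) -> str:
    """Toy baseline returning 'fake', 'real', or 'unknown'.
    Real systems combine text, user, and image-forensics features."""
    suspicious = ("hoax", "shocking", "unbelievable")
    if any(w in post.text.lower() for w in suspicious):
        return "fake"
    if post.author_followers > 100_000:
        return "real"
    return "unknown"
```

The "unknown" option matters for evaluation: declining to answer lowers recall but protects precision.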

Page 8: Verifying Multimedia Use at MediaEval 2016

SUB-TASK

Given an image, return a decision (tampered, non-tampered, unknown) on whether the image has been digitally modified or not.

[Diagram: an IMAGE is fed to the MEDIAEVAL SYSTEM, which outputs TAMPERED or NON-TAMPERED]

Page 9: Verifying Multimedia Use at MediaEval 2016

3. VERIFICATION CORPUS

Page 10: Verifying Multimedia Use at MediaEval 2016

GROUND TRUTH GENERATION

Multimedia cases were labeled as fake/real after consulting online reports (articles, blogs)

Posts associated with these cases were collected using Topsy (for historical events) or via streaming and search APIs (for real-time events)

Post set expansion: near-duplicate image search, journalist debunking reports, and human inspection were used to increase the number of associated posts

A crowdsourcing campaign was carried out on the Microworkers platform; each worker was asked to provide three cases of multimedia misuse
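The near-duplicate image search mentioned above can be illustrated with a minimal perceptual-hashing sketch. This is a generic average-hash technique, not the organizers' actual pipeline; the threshold value is an arbitrary assumption.

```python
def average_hash(pixels):
    """Hash a 2-D grayscale array (list of rows of 0-255 ints),
    e.g. an 8x8 downscaled image: 1 bit per pixel vs. the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def near_duplicate(img1, img2, threshold=5):
    """Images whose hashes differ in few bits are likely near-duplicates."""
    return hamming(average_hash(img1), average_hash(img2)) <= threshold
```

A query image's hash is compared against hashes of already-labeled fake/real images; matches inherit the case label, subject to human inspection.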

Page 11: Verifying Multimedia Use at MediaEval 2016

DEVELOPMENT SET (MM = distinct multimedia items)

Event                   | Fake MM | Fake Posts | Real MM | Real Posts
Hurricane Sandy         |      62 |      5,559 |     148 |      4,664
Boston Marathon bombing |      35 |        189 |      28 |        344
Sochi Olympics          |      26 |        274 |       - |          -
MH370 Flight            |      29 |        501 |       - |          -
Bring Back Our Girls    |       7 |        131 |       - |          -
Columbian Chemicals     |      15 |        185 |       - |          -
Passport hoax           |       2 |         44 |       - |          -
Rock Elephant           |       1 |         13 |       - |          -
Underwater bedroom      |       3 |        113 |       - |          -
Livr mobile app         |       4 |          9 |       - |          -
Pig fish                |       1 |         14 |       - |          -
Solar Eclipse           |       6 |        137 |       4 |        140
Samurai with girl       |       4 |        218 |       - |          -
Nepal Earthquake        |      21 |        356 |      11 |      1,004
Garissa Attack          |       2 |          6 |       2 |         73
Syrian boy              |       1 |      1,786 |       - |          -
Varoufakis              |       1 |         61 |       - |          -
Total                   |     220 |      9,596 |     193 |      6,225

Page 12: Verifying Multimedia Use at MediaEval 2016

TEST SET (MM = distinct multimedia items)

Event                   | Fake MM | Fake Posts | Real MM | Real Posts
American Soldier Quran  |       1 |         17 |       - |          -
Airstrikes              |       1 |         24 |       - |          -
Attacks in Paris        |       3 |         44 |      22 |        536
Ankara Explosions       |       - |          - |       3 |         19
Bush book               |       1 |         27 |       - |          -
Black Lion              |       1 |          7 |       - |          -
Boko Haram              |       1 |         31 |       - |          -
Bowie David             |       2 |         24 |       4 |         48
Brussels Car Metro      |       3 |         41 |       - |          -
Brussels Explosions     |       3 |         69 |       1 |          9
Burst in KFC            |       1 |         25 |       - |          -
Convoy Explosion Turkey |       - |          - |       3 |         13
Donald Trump Attacker   |       1 |         25 |       - |          -
Eagle Kid               |       1 |        334 |       - |          -
Five Headed Snake       |       5 |          6 |       - |          -
Fuji Lenticular Clouds  |       1 |        123 |       1 |         53
Gandhi Dancing          |       1 |         29 |       - |          -
Half of Everything      |       9 |         39 |       - |          -
Hubble Telescope        |       1 |         18 |       - |          -
Immigrants’ fear        |       5 |         33 |       3 |         18
ISIS children           |       2 |          3 |       - |          -
John Guevara            |       1 |         33 |       - |          -
Mc Donalds’ Fee         |       1 |          6 |       - |          -
Nazi Submarine          |       2 |         11 |       - |          -
North Korea             |       2 |         10 |       - |          -
Not Afraid              |       2 |         32 |       3 |         35
Pakistan Explosion      |       1 |         53 |       - |          -
Pope Francis            |       1 |         29 |       - |          -
Protest                 |       1 |         30 |      10 |         34
Refugees                |       4 |         35 |      13 |         33
Rio Moon                |       1 |         33 |       - |          -
Snowboard Girl          |       2 |         14 |       - |          -
Soldier Stealing        |       1 |          1 |       - |          -
Syrian Children         |       1 |         12 |       1 |        200
Ukrainian Nazi          |       1 |          1 |       - |          -
Woman 14 children       |       2 |         11 |       - |          -
Total                   |      66 |      1,230 |      64 |        998

Page 13: Verifying Multimedia Use at MediaEval 2016

4. EVALUATION & RESULTS

Page 14: Verifying Multimedia Use at MediaEval 2016

TASK EVALUATION

Main task: target class "Fake"
Sub-task: target class "Tampered"

- Classic IR metrics: Precision, Recall, F1-score (main evaluation metric)
- Participants were allowed to mark a case as "unknown" (expected to result in reduced recall)
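The scoring described above can be sketched as follows. This is my reconstruction of the standard per-class IR metrics, not the official evaluation script; in this sketch an "unknown" prediction on a target-class item counts as a false negative (hurting recall) but never as a false positive.

```python
def prf(gold, pred, target="fake"):
    """Precision, recall, F1 for the target class over paired label lists."""
    tp = sum(g == target and p == target for g, p in zip(gold, pred))
    fp = sum(g != target and p == target for g, p in zip(gold, pred))
    fn = sum(g == target and p != target for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For the sub-task the same function applies with `target="tampered"`.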

Page 15: Verifying Multimedia Use at MediaEval 2016

TASK SUBMISSIONS

10 submissions for the main task

2 submissions for the sub-task (just one team)

3 teams submitted (+1 the organizers)

Page 16: Verifying Multimedia Use at MediaEval 2016

TRENDS IN APPROACHES

Features being used:
- Text features (most common)
- Post and user metadata
- Image forensics
- Video quality metadata
- Topics of post
- Text similarity of posts (per image case)
- Trusted sources attributed in text
- Mentioned online external sources
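The text features at the top of the list can be illustrated with a small extractor. The specific feature set below is a generic sketch of features common in tweet-verification work, not the exact features used by any listed team.

```python
import re

def text_features(text):
    """Extract a few simple verification-oriented text features
    from a post's text (illustrative selection)."""
    words = text.split()
    return {
        "num_words": len(words),
        "num_urls": len(re.findall(r"https?://\S+", text)),
        "num_hashtags": text.count("#"),
        "num_mentions": text.count("@"),
        "has_question": "?" in text,
        "has_exclamation": "!" in text,
        "uppercase_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
    }
```

Such per-post vectors are typically fed, together with user-metadata and forensics features, to a supervised classifier trained on the development set.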

Page 17: Verifying Multimedia Use at MediaEval 2016

RESULTS: MAIN TASK

Team       | Run         | Recall | Precision | F1-score
Linkmedia  | run1TextKnn | 0.9227 | 0.6397    | 0.7556
Linkmedia  | run2CBIR1   | 0.3406 | 0.4917    | 0.4024
Linkmedia  | run3Sources | 0.9463 | 0.9030    | 0.9241
Linkmedia  | run4Fusion  | 0.9121 | 0.7525    | 0.8246
MMLAB@DISI | RUN1        | 0.5487 | 0.7060    | 0.6175
MMLAB@DISI | RUN2        | 0.9365 | 0.8135    | 0.8707
MMLAB@DISI | RUN3        | 0.9398 | 0.7405    | 0.8283
MCGICT     | hybrid      | 0.6097 | 0.7637    | 0.6781
MCGICT     | image       | 0.5138 | 0.6975    | 0.5917
MCGICT     | text        | 0.6292 | 0.7471    | 0.6831
VMU        | Run1        | 0.8512 | 0.9812    | 0.9116
VMU        | Run2        | 0.9056 | 0.7709    | 0.8328
VMU        | Run3        | 0.8869 | 0.9882    | 0.9348
VMU        | Run4        | 0.8739 | 0.9799    | 0.9239
VMU        | Run5        | 0.9951 | 0.5873    | 0.7386

[Bar chart: F1-score per run, participants and organizers (y-axis 0–1)]
[Bar chart: F1-score per run, participants only (y-axis 0–1)]


Page 22: Verifying Multimedia Use at MediaEval 2016

RESULTS: SUB-TASK

Team       | Run  | Recall | Precision | F1-score
MMLAB@DISI | RUN1 | 0.5    | 0.4827    | 0.4912
MMLAB@DISI | RUN2 | 0.9285 | 0.4906    | 0.6420

[Bar chart: F1-score of RUN1 and RUN2 (y-axis 0–0.7)]

Page 23: Verifying Multimedia Use at MediaEval 2016

FUTURE PLANS

Reconsider the fake/real distinction

Think about different evaluation metrics

Use posts from other social media

Page 24: Verifying Multimedia Use at MediaEval 2016

Thanks for your attention!

ANY QUESTIONS?
Get in touch: [email protected] (@cmpoi), [email protected] (@sympap)

www.revealproject.eu
@RevealEU