Page 1: CROWDSOURCING (ANAPHORIC) ANNOTATION

CROWDSOURCING (ANAPHORIC) ANNOTATION

Massimo Poesio, University of Essex

Part 1: Intro, Microtask crowdsourcing

Page 2: CROWDSOURCING (ANAPHORIC) ANNOTATION

ANNOTATED CORPORA: AN ERA OF PLENTY?

• With the release of the OntoNotes and ANC corpora for English, and of a number of corpora for other languages (the Prague Dependency Treebank especially), one might think that no more annotation will be needed for a while

• But this is not the case:
  – Size is still an issue
  – Problems
  – Novel tasks

Page 3: CROWDSOURCING (ANAPHORIC) ANNOTATION

THE CASE OF ANAPHORIC ANNOTATION

• After many years of scarcity, we now live in an era of abundance for the study of anaphora

• The release after 2008 of a number of anaphoric corpora annotated according to more linguistically inspired guidelines has enabled computational linguists to start working on a version of the problem that more closely resembles the way linguists look at it
  – One also hears less of coreference and more of anaphora

• Recent competitions all use corpora of this type:
  – SEMEVAL 2010, CONLL 2011 / 2012, EVALITA 2011, …

Page 4: CROWDSOURCING (ANAPHORIC) ANNOTATION

BUT …

• The larger corpora:
  – Not that large: still only 1M words at the most
    • See the problems of overfitting with the Penn Treebank
  – Cover only a few genres (mostly news)
  – Annotation schemes still pretty basic

Page 5: CROWDSOURCING (ANAPHORIC) ANNOTATION

ONTONOTES: OUT OF SCOPE

There was not a moment to be lost: away went Alice like the wind, and was just in time to hear it say, as it turned a corner, 'Oh my ears and whiskers, how late it's getting!' She was close behind it when she turned the corner, but the Rabbit was no longer to be seen: she found herself in a long, low hall, which was lit up by a row of lamps hanging from the roof.

There were doors all round the hall, but they were all locked; and when Alice had been all the way down one side and up the other, trying every door, she walked sadly down the middle, wondering how she was ever to get out again.

Page 7: CROWDSOURCING (ANAPHORIC) ANNOTATION

ONTONOTES: OUT OF SCOPE

There was nothing so VERY remarkable in that; nor did Alice think it so VERY much out of the way to hear the Rabbit say to itself, 'Oh dear! Oh dear! I shall be late!' (when she thought it over afterwards, it occurred to her that she ought to have wondered at this, but at the time it all seemed quite natural); but when the Rabbit actually TOOK A WATCH OUT OF ITS WAISTCOAT-POCKET, and looked at it, and then hurried on, Alice started to her feet, for it flashed across her mind that she had never before seen a rabbit with either a waistcoat-pocket, or a watch to take out of it, and burning with curiosity, she ran across the field after it, and fortunately was just in time to see it pop down a large rabbit-hole under the hedge.

Page 8: CROWDSOURCING (ANAPHORIC) ANNOTATION

ALSO …

• In many respects the doubts concerning the empirical approach to anaphora raised by Zaenen in her 2006 CL squib ("Mark-up barking up the wrong tree") still apply

• “anaphora is not like syntax – we don’t understand the phenomenon quite as well”

• Even the simpler types of anaphoric annotation are still problematic
  – If not for linguists, for computational types

• And we have only started grappling with the more complex types of anaphora

Page 9: CROWDSOURCING (ANAPHORIC) ANNOTATION

PLURALS WITH COORDINATED ANTECEDENTS: SPLIT ANTECEDENTS (GNOME, ARRAU, SERENGETI, TüBa-D/Z?)

She could hear the rattle of the teacups as [[the March Hare] and [his friends]] shared their never-ending meal

'In THAT direction,' the Cat said, waving its right paw round, 'lives [a Hatter]: and in THAT direction,' waving the other paw, 'lives [a March Hare]. Visit either you like: they're both mad.'

Alice had no idea what [Latitude] was, or [Longitude] either, but thought they were nice grand words to say.

Page 10: CROWDSOURCING (ANAPHORIC) ANNOTATION

AMBIGUITY: REFERENT

15.12 M: we're gonna take the engine E3
15.13 : and shove it over to Corning
15.14 : hook [it] up to [the tanker car]
15.15 : _and_
15.16 : send it back to Elmira

(from the TRAINS-91 dialogues collected at the University of Rochester)

Page 11: CROWDSOURCING (ANAPHORIC) ANNOTATION

AMBIGUITY: REFERENT

About 160 workers at a factory that made paper for the Kent filters were exposed to asbestos in the 1950s.

Areas of the factory were particularly dusty where the crocidolite was used.

Workers dumped large burlap sacks of the imported material into a huge bin, poured in cotton and acetate fibers and mechanically mixed the dry fibers in a process used to make filters.

Workers described "clouds of blue dust" that hung over parts of the factory, even though exhaust fans ventilated the area.

Page 12: CROWDSOURCING (ANAPHORIC) ANNOTATION

AMBIGUITY: EXPLETIVES

'I beg your pardon!' said the Mouse, frowning, but very politely: 'Did you speak?'

'Not I!' said the Lory hastily.

'I thought you did,' said the Mouse. '--I proceed. "Edwin and Morcar, the earls of Mercia and Northumbria, declared for him: and even Stigand, the patriotic archbishop of Canterbury, found it advisable--"'

'Found WHAT?' said the Duck.

'Found IT,' the Mouse replied rather crossly: 'of course you know what "it" means.'

Page 13: CROWDSOURCING (ANAPHORIC) ANNOTATION

AND ANYWAY …

• A great deal, if not most, research in (computational) linguistics requires annotation of new types:
  – A new syntactic phenomenon
  – Named entities in a new domain
  – A type of anaphoric reference not yet annotated (or annotated badly)

Page 14: CROWDSOURCING (ANAPHORIC) ANNOTATION

OUR CONTENTION

• CROWDSOURCING can be (part of) the solution to these problems
  – Microtask crowdsourcing for small-to-medium scale annotation projects
    • Including the majority of annotation carried out by linguists / psycholinguists (see e.g. Munro et al 2010)
  – Games-with-a-purpose for larger scale projects
  – Either way, to gather more evidence about the linguistic phenomena

Page 15: CROWDSOURCING (ANAPHORIC) ANNOTATION

OUTLINE OF THE LECTURES

• Crowdsourcing in AI and NLP (today)
  – What is crowdsourcing; applications to NLP
• Games with a purpose
  – ESP, Verbosity
• Phrase Detectives
• Analyzing crowdsourced data

Page 16: CROWDSOURCING (ANAPHORIC) ANNOTATION

CROWDSOURCING

Page 17: CROWDSOURCING (ANAPHORIC) ANNOTATION

THE PROBLEM

• Many AI problems require knowledge on a massive scale
  – Commonsense knowledge (700M facts?)
  – Vision

• Previous common wisdom:
  – Impossible to codify such knowledge by hand (witness CYC)
  – Need to learn it all from scratch

• New common wisdom: "Given the advent of the World Wide Web, AI projects now have access to the minds of millions" (Singh 2002)

Page 18: CROWDSOURCING (ANAPHORIC) ANNOTATION

CROWDSOURCING

• Take a task traditionally performed by one or a few agents

• Outsource it to the crowd on the web

Page 19: CROWDSOURCING (ANAPHORIC) ANNOTATION

THE WISDOM OF CROWDS

• By rights, using the 'crowd' should lead to poor quality

• But in fact it often turns out that the judgment of the crowd is as good as, or better than, that of the experts

Page 20: CROWDSOURCING (ANAPHORIC) ANNOTATION

WIKIPEDIA

Page 21: CROWDSOURCING (ANAPHORIC) ANNOTATION

WIKIPEDIA AND CROWDSOURCING

• Wikipedia, although not an AI project, is perhaps the best illustration of the power of crowdsourcing: putting together many minds can result in output of remarkable quality

• It is also a great illustration of what works and what doesn't:
  – E.g., editorial control must be exercised by the web collaborators themselves

Page 22: CROWDSOURCING (ANAPHORIC) ANNOTATION

OTHER WEB COLLABORATION PROJECTS

• The OPEN DIRECTORY PROJECT
• Crater mapping (results) – Kanefsky
• Citizen Science
• Cognition and Language Laboratory
• Web Experiments
• Galaxy Zoo – Oxford University

Page 23: CROWDSOURCING (ANAPHORIC) ANNOTATION

Galaxy Zoo

Page 24: CROWDSOURCING (ANAPHORIC) ANNOTATION

Galaxy Zoo

• Launched in July 2007
• 1M galaxies imaged
• 50M classifications in the first year, from 150,000 visitors

Page 25: CROWDSOURCING (ANAPHORIC) ANNOTATION

COLLECTIVE RESOURCE CREATION FOR AI: OPEN MIND COMMONSENSE

Page 26: CROWDSOURCING (ANAPHORIC) ANNOTATION

WEB COLLABORATION IN AI: OPEN MIND COMMONSENSE

• Singh 2002: "Every ordinary person has common sense of the kind we want to give to our machines"

Page 27: CROWDSOURCING (ANAPHORIC) ANNOTATION

OPEN MIND COMMONSENSE

Page 28: CROWDSOURCING (ANAPHORIC) ANNOTATION

OPEN MIND COMMONSENSE

• A project started in 1999 (Chklovski, 1999) to collect commonsense knowledge from NETIZENS

• Around 30 ACTIVITIES organized to collect knowledge
  – About taxonomies (CATS ARE MAMMALS)
  – About the uses of objects (CHAIRS ARE FOR SITTING ON)

Page 29: CROWDSOURCING (ANAPHORIC) ANNOTATION

WHAT’S IN OPEN MIND COMMONSENSE: CAR

Page 30: CROWDSOURCING (ANAPHORIC) ANNOTATION

Twenty Semantic Relation Types in ConceptNet (Liu and Singh, 2004)

THINGS (52,000 assertions)
  – IsA: (IsA "apple" "fruit")
  – PartOf: (PartOf "CPU" "computer")
  – PropertyOf: (PropertyOf "coffee" "wet")
  – MadeOf: (MadeOf "bread" "flour")
  – DefinedAs: (DefinedAs "meat" "flesh of animal")

EVENTS (38,000 assertions)
  – PrerequisiteEventOf: (PrerequisiteEventOf "read letter" "open envelope")
  – SubeventOf: (SubeventOf "play sport" "score goal")
  – FirstSubeventOf: (FirstSubeventOf "start fire" "light match")
  – LastSubeventOf: (LastSubeventOf "attend classical concert" "applaud")

AGENTS (104,000 assertions)
  – CapableOf: (CapableOf "dentist" "pull tooth")

SPATIAL (36,000 assertions)
  – LocationOf: (LocationOf "army" "in war")

TEMPORAL (time & sequence)

CAUSAL (17,000 assertions)
  – EffectOf: (EffectOf "view video" "entertainment")
  – DesirousEffectOf: (DesirousEffectOf "sweat" "take shower")

AFFECTIONAL (mood, feeling, emotions) (34,000 assertions)
  – DesireOf: (DesireOf "person" "not be depressed")
  – MotivationOf: (MotivationOf "play game" "compete")

FUNCTIONAL (115,000 assertions)
  – UsedFor: (UsedFor "fireplace" "burn wood")
  – CapableOfReceivingAction: (CapableOfReceivingAction "drink" "serve")

ASSOCIATION K-LINES (1.25 million assertions)
  – SuperThematicKLine: (SuperThematicKLine "western civilization" "civilization")
  – ThematicKLine: (ThematicKLine "wedding dress" "veil")
  – ConceptuallyRelatedTo: (ConceptuallyRelatedTo "bad breath" "mint")
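As a minimal illustration (not part of the original slides) of how such assertions can be represented and queried, here is a sketch using plain relation/argument triples; the class and function names are invented, and the handful of assertions are copied from the list above.

```python
from collections import namedtuple

# Illustrative triple representation of ConceptNet-style assertions;
# the names Assertion / by_relation are invented for this sketch.
Assertion = namedtuple("Assertion", ["relation", "arg1", "arg2"])

kb = [
    Assertion("IsA", "apple", "fruit"),
    Assertion("PartOf", "CPU", "computer"),
    Assertion("UsedFor", "fireplace", "burn wood"),
    Assertion("CapableOf", "dentist", "pull tooth"),
]

def by_relation(kb, relation):
    """Return all assertions with the given relation type."""
    return [a for a in kb if a.relation == relation]

print(by_relation(kb, "IsA"))
# -> [Assertion(relation='IsA', arg1='apple', arg2='fruit')]
```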

Page 31: CROWDSOURCING (ANAPHORIC) ANNOTATION

COLLECTING COMMONSENSE KNOWLEDGE

• Originally:
  – Using TEMPLATES
  – Asking people to write stories

• Now: just templates?

Page 32: CROWDSOURCING (ANAPHORIC) ANNOTATION

OPEN MIND COMMONSENSE: ADDING KNOWLEDGE

Page 33: CROWDSOURCING (ANAPHORIC) ANNOTATION

TEMPLATES FOR ADDING KNOWLEDGE

Page 34: CROWDSOURCING (ANAPHORIC) ANNOTATION

OPEN MIND COMMONSENSE: CHECKING KNOWLEDGE

Page 35: CROWDSOURCING (ANAPHORIC) ANNOTATION

FROM OPENMIND COMMONSENSE TO CONCEPT NET

• ConceptNet (Havasi et al, 2009) is a semantic network extracted from OpenMind Commonsense assertions using simple heuristics

Page 36: CROWDSOURCING (ANAPHORIC) ANNOTATION
Page 37: CROWDSOURCING (ANAPHORIC) ANNOTATION

CONCEPT NET

Page 38: CROWDSOURCING (ANAPHORIC) ANNOTATION

FROM OPENMIND COMMONSENSE FACTS TO CONCEPTNET

A lime is a very sour fruit

isa(lime,fruit)

property_of(lime,very_sour)
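A toy version of such a heuristic can be written as a single template. The regular expression below is invented for illustration and only covers this one sentence pattern; the real OMCS-to-ConceptNet extraction rules are far more extensive.

```python
import re

# Toy template in the spirit of the OMCS -> ConceptNet heuristics described
# above; this single pattern only handles sentences like
# "A lime is a very sour fruit".
PATTERN = re.compile(r"^An? (?P<np>\w+) is an? (?:(?P<mod>(?:very )?\w+) )?(?P<head>\w+)$")

def extract(sentence):
    m = PATTERN.match(sentence.strip().rstrip("."))
    if not m:
        return []
    facts = [("isa", m.group("np"), m.group("head"))]
    if m.group("mod"):
        facts.append(("property_of", m.group("np"), m.group("mod").replace(" ", "_")))
    return facts

print(extract("A lime is a very sour fruit"))
# -> [('isa', 'lime', 'fruit'), ('property_of', 'lime', 'very_sour')]
```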

Page 39: CROWDSOURCING (ANAPHORIC) ANNOTATION

OTHER USES OF WEB COLLABORATION IN AI

• Learner / Learner2 / 1001 Paraphrases – Chklovski
• FACTory – CyCORP
• Hot or Not – 8 Days
• Semantic Wikis: www.semwiki.org

Page 40: CROWDSOURCING (ANAPHORIC) ANNOTATION

CROWDSOURCING: INCENTIVES

Page 41: CROWDSOURCING (ANAPHORIC) ANNOTATION

What motivated thousands or millions of people to collaborate on the web?

• Shared intent
  – Wikipedia
  – Citizen Science

• Financial incentives
  – Microtask crowdsourcing

• Enjoyment
  – Games-with-a-purpose

Page 42: CROWDSOURCING (ANAPHORIC) ANNOTATION

MICROTASK CROWDSOURCING

Page 43: CROWDSOURCING (ANAPHORIC) ANNOTATION

THE FINANCIAL INCENTIVE: MECHANICAL TURK

• Wikipedia and OpenMind Commonsense both rely on the voluntary effort of web users

• Mechanical Turk was developed by Amazon to take advantage of the willingness of large numbers of web users to do some work for very little pay

Page 44: CROWDSOURCING (ANAPHORIC) ANNOTATION

THE MECHANICAL TURK

Page 45: CROWDSOURCING (ANAPHORIC) ANNOTATION

AMAZON MECHANICAL TURK

Page 46: CROWDSOURCING (ANAPHORIC) ANNOTATION

HITs

• On the Mechanical Turk site, a REQUESTER creates a HUMAN INTELLIGENCE TASK (HIT) and specifies how much they are willing to pay TURKERS to complete it
  – Typically, the payment is of the order of 1 to 10 cents per task
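As a concrete (and hedged) illustration of what creating a HIT looks like programmatically, the sketch below uses the current boto3 MTurk client, which postdates these slides; it assumes AWS credentials are configured, targets the requester sandbox, and the task text and parameter values are invented.

```python
import boto3

# Hedged sketch: create one HIT via the boto3 MTurk API (sandbox endpoint).
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <p>Does "it" in the sentence below refer to "the engine" or "the tanker car"?</p>
    <!-- in a real HIT, a form with radio buttons and the standard MTurk
         submit handling (assignmentId field) would go here -->
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Resolve a pronoun in a short text",
    Description="Pick the phrase the highlighted pronoun refers to",
    Reward="0.05",                      # 5 cents, within the 1-10 cent range above
    MaxAssignments=5,                   # number of Turkers per item
    LifetimeInSeconds=7 * 24 * 3600,
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```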

Page 47: CROWDSOURCING (ANAPHORIC) ANNOTATION

A TYPICAL HIT

Page 48: CROWDSOURCING (ANAPHORIC) ANNOTATION

CREATING A HIT

• Design
• Publish
• Manage

Page 49: CROWDSOURCING (ANAPHORIC) ANNOTATION

RESOURCE CENTER

Page 50: CROWDSOURCING (ANAPHORIC) ANNOTATION

DESIGN

Page 51: CROWDSOURCING (ANAPHORIC) ANNOTATION

EXAMPLE: CATEGORIZATION

Page 52: CROWDSOURCING (ANAPHORIC) ANNOTATION

USING AMT IS CHEAP

Page 53: CROWDSOURCING (ANAPHORIC) ANNOTATION

… AND FAST

Page 54: CROWDSOURCING (ANAPHORIC) ANNOTATION

USING MICROTASK CROWDSOURCING FOR NLP

• Su et al. (2007): name resolution, attribute extraction
• Nakov (2008): paraphrasing noun compounds
• Kaisser and Lowe (2008): sentence-level QA annotation
• Zaenen (2008): evaluating RTE agreement
• Snow et al. (2008): using AMT for a variety of NLP annotation purposes
• Callison-Burch (2009): using AMT to evaluate MT

Page 55: CROWDSOURCING (ANAPHORIC) ANNOTATION

EXAMPLE: DIALECT IDENTIFICATION

Page 56: CROWDSOURCING (ANAPHORIC) ANNOTATION

EXAMPLE: SPELLING CORRECTION

Page 57: CROWDSOURCING (ANAPHORIC) ANNOTATION

USING MICROTASK CROWDSOURCING FOR NLP

• Su et al. (2007): name resolution, attribute extraction
• Nakov (2008): paraphrasing noun compounds
• Kaisser and Lowe (2008): sentence-level QA annotation
• Zaenen (2008): evaluating RTE agreement
• Snow et al. (2008): using AMT for a variety of NLP annotation purposes, to test its quality
• Callison-Burch (2009): using AMT to evaluate MT

Page 58: CROWDSOURCING (ANAPHORIC) ANNOTATION

SNOW ET AL, 2008

• Objective: assess the quality of AMT annotation using a variety of NLP annotation tasks

• Quality assessment: quantitative (inter-annotator agreement) and qualitative (error analysis)

Page 59: CROWDSOURCING (ANAPHORIC) ANNOTATION

SNOW ET AL: THE TASKS

Page 60: CROWDSOURCING (ANAPHORIC) ANNOTATION

AFFECT RECOGNITION

Page 61: CROWDSOURCING (ANAPHORIC) ANNOTATION

SNOW ET AL: QUALITY

• Compare ITA of turkers with ITA of experts

Page 62: CROWDSOURCING (ANAPHORIC) ANNOTATION

ITA: EXPERTS

• 6 experts in total
• An expert's ITA is calculated as the Pearson correlation between that annotator and the average of the other 5 annotators; the expert ITA is the average of these correlations

Page 63: CROWDSOURCING (ANAPHORIC) ANNOTATION

ITA: TURKERS

• Average over k annotations to create a single “proto-labeler”

• Plot the ITA of this proto-labeler for up to 10 annotations and compare to the average single expert ITA.
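The two ITA computations above can be summarised in a short sketch. The array shapes, the toy judgments, and the choice to correlate the proto-labeler with the expert average are assumptions made to keep the example self-contained; Snow et al.'s exact setup differs in detail per task.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two vectors of numeric judgments."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def expert_ita(expert_scores):
    """Average correlation of each expert with the mean of the other experts.
    expert_scores: (n_experts, n_items) array-like."""
    E = np.asarray(expert_scores, float)
    return float(np.mean([
        pearson(E[i], np.delete(E, i, axis=0).mean(axis=0))
        for i in range(E.shape[0])
    ]))

def protolabeler_ita(turker_scores, reference, k):
    """Correlate the average of the first k non-expert annotations (the
    'proto-labeler') with a reference labelling (here: the expert average)."""
    proto = np.asarray(turker_scores, float)[:k].mean(axis=0)
    return pearson(proto, reference)

# Toy data: 6 experts and 3 Turkers judging 4 items on a numeric scale.
experts = np.array([[1, 2, 3, 4], [1, 3, 3, 5], [2, 2, 4, 4],
                    [1, 2, 4, 5], [1, 1, 3, 4], [2, 3, 3, 5]], float)
turkers = np.array([[1, 3, 3, 5], [2, 2, 4, 4], [3, 1, 4, 5]], float)

print(round(expert_ita(experts), 2))
print(round(protolabeler_ita(turkers, experts.mean(axis=0), k=3), 2))
```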

Page 64: CROWDSOURCING (ANAPHORIC) ANNOTATION

ITA ACROSS TASKS

Page 65: CROWDSOURCING (ANAPHORIC) ANNOTATION

SNOW ET AL: COST

Page 66: CROWDSOURCING (ANAPHORIC) ANNOTATION

CALLISON-BURCH: USING MECHANICAL TURK INSTEAD OF AUTOMATIC METRICS FOR MT

• It is generally thought that human evaluation of translation quality is too expensive
• So automatic metrics like BLEU are used instead
• Callison-Burch:
  – Turkers produce judgments very similar to those of experts, and far better correlated with them than BLEU
  – Mechanical Turk can be used for a variety of tasks, including creating reference translations and evaluation through reading comprehension

Page 67: CROWDSOURCING (ANAPHORIC) ANNOTATION

Evaluation at the Workshop on Statistical MT 2008

• 11 systems
• Their output on test sentences was ranked by experts

Page 68: CROWDSOURCING (ANAPHORIC) ANNOTATION

Using Mechanical Turk

• Each Turker was presented with the original sentence and the outputs of 5 systems, with radio buttons to rank them
• Required 975 HITs
• Total cost: $9.75 (i.e. $0.01 per HIT)

Page 69: CROWDSOURCING (ANAPHORIC) ANNOTATION

AGREEMENT WITH EXPERTS

Page 70: CROWDSOURCING (ANAPHORIC) ANNOTATION

CORRELATION BETWEEN RANKINGS

Page 71: CROWDSOURCING (ANAPHORIC) ANNOTATION

CROWDSOURCING PRACTICE

Page 72: CROWDSOURCING (ANAPHORIC) ANNOTATION

WHAT YOU HAVE TO DO

• Export your linguistic data as a CSV file and load it into Amazon or CrowdFlower (a minimal sketch follows this list)
• Create the instructions as HTML
• Customise the annotation UI (e.g. you may need JavaScript for markable selection)
• Select how many judgments you want per micro-task and any restrictions on the annotators (e.g. country of origin)
• Test it and revisit any of the above, as needed
• Launch it and collect the data
• Download the results and put together the corpus
• Adjudicate
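To make the first step concrete, here is a minimal sketch of exporting annotation units as a CSV file of the kind CrowdFlower or AMT templates consume. The column names (text, markable, options) and the single item are invented for illustration; use whatever fields your task template expects.

```python
import csv

# Minimal sketch of the "export as CSV" step; column names are illustrative.
units = [
    {"text": "hook it up to the tanker car and send it back to Elmira",
     "markable": "it (second occurrence)",
     "options": "the engine E3|the tanker car"},
]

with open("units.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "markable", "options"])
    writer.writeheader()
    writer.writerows(units)
```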

Page 73: CROWDSOURCING (ANAPHORIC) ANNOTATION

Example: CrowdFlower Instructions

Page 74: CROWDSOURCING (ANAPHORIC) ANNOTATION

Example: Marking Locations in tweets

Page 75: CROWDSOURCING (ANAPHORIC) ANNOTATION

Example: Locations selected

Page 76: CROWDSOURCING (ANAPHORIC) ANNOTATION

Example 2: Entity Linking Annotation in Crowdflower

Page 77: CROWDSOURCING (ANAPHORIC) ANNOTATION

QUALITY CONTROL

• Have each task performed by multiple Turkers
• Turkers may have to meet qualifications, e.g. a minimum approval rate
  – Each Turker has a 'reliability score', as on eBay
  – P. Ipeirotis: Be a Top Mechanical Turk Worker: You Need $5 and 5 Minutes
• Honey pots (trap questions with known answers)
• Defense against spammers
• May reject work / block a worker
  – But: serious consequences
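The honey-pot idea can be made concrete in a few lines. The data layout below (dicts mapping question id to answer) and the 0.8 accuracy threshold are assumptions for this illustration, not prescribed by the slides.

```python
# Sketch of honey-pot filtering: discard workers whose accuracy on trap
# questions with known answers falls below a threshold.
gold = {"q1": "A", "q7": "B"}          # trap questions -> known answers

def trustworthy(worker_answers, gold, min_accuracy=0.8):
    """worker_answers: dict mapping question id -> the worker's answer."""
    trapped = [q for q in gold if q in worker_answers]
    if not trapped:
        return True                     # no trap questions seen: no evidence
    correct = sum(worker_answers[q] == gold[q] for q in trapped)
    return correct / len(trapped) >= min_accuracy

workers = {
    "w1": {"q1": "A", "q2": "C", "q7": "B"},
    "w2": {"q1": "C", "q2": "C", "q7": "A"},   # fails both traps
}
kept = {w: a for w, a in workers.items() if trustworthy(a, gold)}
print(sorted(kept))                     # -> ['w1']
```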

Page 78: CROWDSOURCING (ANAPHORIC) ANNOTATION

Dealing with bad workers

• Pay for "bad" work instead of rejecting it?
  – Pro: preserves your reputation; admits that poor task design may be at fault
  – Con: promotes fraud; undermines the approval rating system
• Use bonuses as an incentive
  – Pay the minimum $0.01 plus a $0.01 bonus: better than rejecting a $0.02 task
• Detect and block spammers

Page 79: CROWDSOURCING (ANAPHORIC) ANNOTATION

CROWDSOURCING WITH GATE

Page 80: CROWDSOURCING (ANAPHORIC) ANNOTATION

Crowdsourcing with GATE

• Download and use the GATE Crowdsourcing plugin

• https://gate.ac.uk/wiki/crowdsourcing.html

• Automatically transforms texts with GATE annotations into CrowdFlower jobs

• Generates the CF User Interface (based on templates)

• Researcher then checks and runs the project in CF

• On completion, the plugin automatically imports the results back into GATE, aligning them to sentences and representing the multiple annotators

Page 81: CROWDSOURCING (ANAPHORIC) ANNOTATION

GATE Crowdsourcing Overview (1)

• Choose a job builder

– Classification

– Sequence Selection

• Configure the corresponding user interface and provide the task instructions

Page 82: CROWDSOURCING (ANAPHORIC) ANNOTATION

GATE Crowdsourcing Overview (2)

• Pre-process the corpus with TwitIE/ANNIE, e.g.

– Tokenisation

– POS tagging

– Sentence splitting

– NE recognition

• Automatically create the target annotations and any dynamic values required for classification

• Execute the job builder to upload units to CF automatically

Page 83: CROWDSOURCING (ANAPHORIC) ANNOTATION

Configure and execute the job in CF

Gold data units can also be uploaded from GATE, so that CF can use them for quality control

Page 84: CROWDSOURCING (ANAPHORIC) ANNOTATION

Automatic CF Import into GATE

• Each CF judgement is imported back as a separate annotation with some metadata

• Adjudication can happen automatically (e.g. majority vote and/or trust-based) or manually (Annotation Stack editor)

• The resulting corpus is ready to use for experiments or can be exported out of GATE as XML/XCES
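As an illustration of the automatic adjudication step, here is a minimal majority-vote sketch. It is a stand-alone approximation of the idea, not the GATE plugin's own mechanism, and the example labels are invented.

```python
from collections import Counter

# Stand-alone sketch of majority-vote adjudication over imported judgements.
def adjudicate(judgements):
    """judgements: labels assigned to one unit by different annotators.
    Returns the majority label, or None on a tie (left for manual review)."""
    counts = Counter(judgements).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

print(adjudicate(["PER", "PER", "ORG"]))   # -> PER
print(adjudicate(["PER", "ORG"]))          # -> None (manual adjudication)
```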

Page 85: CROWDSOURCING (ANAPHORIC) ANNOTATION

CONCLUSIONS

• Microtask crowdsourcing is here to stay, especially for small to medium scale annotation

• For evaluation, a viable alternative to expert human judgment, and a much better option than ranking by BLEU

Page 86: CROWDSOURCING (ANAPHORIC) ANNOTATION

READINGS

• Push Singh (2002). The Public Acquisition of Commonsense Knowledge. In Proc. of the AAAI Spring Symposium on Acquiring Linguistic and World Knowledge for Information Access.
• Snow et al. (2008). Cheap and fast: but is it good? Proc. of EMNLP.
• Chris Callison-Burch (2009). Fast, Cheap and Creative: Evaluating Translation Quality Using Amazon's Mechanical Turk. Proc. of EMNLP.
• Poesio, Chamberlain & Kruschwitz (forthcoming). Crowdsourcing. In N. Ide & J. Pustejovsky (eds.), Handbook of Linguistic Annotation.

Page 87: CROWDSOURCING (ANAPHORIC) ANNOTATION

ACKNOWLEDGMENTS

• A number of slides were borrowed from:
  – Snow et al.'s presentation at EMNLP 2008
  – Matt Lease, University of Texas at Austin
  – Kalina Bontcheva and Leon Derczynski's EACL 2014 tutorial on NLP for Social Media