Basic Text Processing


  • Basic Text Processing: Regular Expressions

    JHU CS 600.465/ Intro to NLP/Jan Hajic

    Dan Jurafsky

    Regular expressions: a formal language for specifying text strings.
    How can we search for any of these?
    - woodchuck, woodchucks, Woodchuck, Woodchucks

    Regular Expressions: Disjunctions
    Letters inside square brackets []; ranges like [A-Z].

    Pattern         Matches
    [wW]oodchuck    Woodchuck, woodchuck
    [1234567890]    any digit

    Pattern   Matches                   Example
    [A-Z]     an upper-case letter      "Drenched Blossoms"
    [a-z]     a lower-case letter       "my beans were impatient"
    [0-9]     a single digit            "Chapter 1: Down the Rabbit Hole"
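    A minimal illustration of bracket disjunctions and ranges using Python's re
    module; the test strings are taken from the tables above, the code itself is
    an added sketch, not part of the original slides.

        import re

        print(re.findall(r"[wW]oodchuck", "Woodchuck and woodchuck"))   # ['Woodchuck', 'woodchuck']
        print(re.findall(r"[0-9]", "Chapter 1: Down the Rabbit Hole"))  # ['1']
        print(re.findall(r"[A-Z]", "Drenched Blossoms"))                # ['D', 'B']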

    Regular Expressions: Negation in Disjunction
    Negations: [^Ss]. The caret means negation only when it is first in [].

    Pattern   Matches                    Example
    [^A-Z]    not an upper-case letter   "Oyfn pripetchik"
    [^Ss]     neither S nor s            "I have no exquisite reason"
    [^e^]     neither e nor ^            "Look here"
    a^b       the pattern a^b            "Look up a^b now"

    Regular Expressions: More Disjunction
    Woodchuck is another name for groundhog! The pipe | expresses disjunction.

    Pattern                      Matches
    groundhog|woodchuck
    yours|mine                   yours, mine
    a|b|c                        the same as [abc]
    [gG]roundhog|[Ww]oodchuck

    Regular Expressions: ? * + .
    (Named for Stephen C. Kleene: Kleene *, Kleene +.)

    Pattern   Meaning                         Matches
    colou?r   optional previous char          color, colour
    oo*h!     0 or more of previous char      oh!, ooh!, oooh!, ooooh!
    o+h!      1 or more of previous char      oh!, ooh!, oooh!, ooooh!
    baa+                                      baa, baaa, baaaa, baaaaa
    beg.n                                     begin, begun, began, beg3n

    Regular Expressions: Anchors ^ $

    Pattern       Matches
    ^[A-Z]        Palo Alto
    ^[^A-Za-z]    1   "Hello"
    \.$           The end.
    .$            The end?  The end!
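    A short demonstration of ?, +, . and the ^ / $ anchors with Python's re
    module; an added sketch, with illustrative test strings.

        import re

        print(re.findall(r"colou?r", "color colour"))          # ['color', 'colour']
        print(re.findall(r"o+h!", "oh! ooh! oooh!"))            # ['oh!', 'ooh!', 'oooh!']
        print(re.findall(r"beg.n", "begin begun began beg3n"))  # ['begin', 'begun', 'began', 'beg3n']
        print(re.findall(r"^[A-Z]", "Palo Alto"))               # ['P']
        print(re.findall(r"\.$", "The end."))                   # ['.']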

    Example
    Find me all instances of the word "the" in a text.
      the                          misses capitalized examples
      [tT]he                       incorrectly returns other or theology
      [^a-zA-Z][tT]he[^a-zA-Z]
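    A quick illustration of this progressive refinement in Python's re module;
    the example sentence is made up, not from the slides.

        import re

        text = "The other one saw the theology student near the lake."

        print(re.findall(r"the", text))                       # misses "The"; also hits "other" and "theology"
        print(re.findall(r"[tT]he", text))                    # still hits "other" and "theology"
        print(re.findall(r"[^a-zA-Z][tT]he[^a-zA-Z]", text))  # only whole-word "the" (but would still miss "The" at the very start)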

    Errors
    The process we just went through was based on fixing two kinds of errors:
    - matching strings that we should not have matched (there, then, other):
      false positives (Type I)
    - not matching things that we should have matched (The):
      false negatives (Type II)

    Errors (cont.)
    In NLP we are always dealing with these kinds of errors. Reducing the error
    rate for an application often involves two antagonistic efforts:
    - increasing accuracy or precision (minimizing false positives)
    - increasing coverage or recall (minimizing false negatives)

    Summary
    - Regular expressions play a surprisingly large role: sophisticated sequences
      of regular expressions are often the first model for any text processing task.
    - For many hard tasks, we use machine learning classifiers, but regular
      expressions are used as features in the classifiers.
    - They can be very useful in capturing generalizations.

  • Basic Text Processing: Word tokenization

    Text Normalization
    Every NLP task needs to do text normalization:
    - segmenting/tokenizing words in running text
    - normalizing word formats
    - segmenting sentences in running text

    How many words?
    "I do uh main- mainly business data processing"
    - fragments, filled pauses
    "Seuss's cat in the hat is different from other cats!"
    - Lemma: same stem, part of speech, rough word sense
      (cat and cats = same lemma)
    - Wordform: the full inflected surface form
      (cat and cats = different wordforms)

    How many words?
    "they lay back on the San Francisco grass and looked at the stars and their"
    - Type: an element of the vocabulary.
    - Token: an instance of that type in running text.
    How many?
    - 15 tokens (or 14)
    - 13 types (or 12) (or 11?)

    How many words?
    N = number of tokens; V = vocabulary = set of types; |V| is the size of the vocabulary.
    Church and Gale (1990): |V| > O(N^1/2)

                                       Tokens = N     Types = |V|
    Switchboard phone conversations    2.4 million    20 thousand
    Shakespeare                        884,000        31 thousand
    Google N-grams                     1 trillion     13 million
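    A tiny sketch of counting tokens (N) and types (|V|) for the sentence above,
    assuming a naive whitespace tokenizer; an added illustration, not from the slides.

        text = "they lay back on the San Francisco grass and looked at the stars and their"
        tokens = text.split()
        types = set(tokens)
        print(len(tokens), len(types))   # 15 tokens, 13 types ("the" and "and" each repeat)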

    Simple Tokenization in UNIX
    (Inspired by Ken Church's "UNIX for Poets".)
    Given a text file, output the word tokens and their frequencies:

        tr -sc 'A-Za-z' '\n' < shakes.txt | sort | uniq -c

        1945 A
          72 AARON
          19 ABBESS
           5 ABBOT
         ...
          25 Aaron
           6 Abate
           1 Abates
           5 Abbess
           6 Abbey
           3 Abbot
         ...

    The tr step changes all non-alphabetic characters to newlines, sort puts the
    tokens in alphabetical order, and uniq -c merges and counts each type.

    The first step: tokenizing

        tr -sc 'A-Za-z' '\n' < shakes.txt | head

        THE
        SONNETS
        by
        William
        Shakespeare
        From
        fairest
        creatures
        We
        ...

    The second step: sorting

        tr -sc 'A-Za-z' '\n' < shakes.txt | sort | head

        A
        A
        A
        A
        A
        A
        A
        A
        A
        ...

    More counting
    Merging upper and lower case:

        tr 'A-Z' 'a-z' < shakes.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c

    Sorting the counts:

        tr 'A-Z' 'a-z' < shakes.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r

        23243 the
        22225 i
        18618 and
        16339 to
        15687 of
        12780 a
        12163 you
        10839 my
        10005 in
         8954 d

    What happened here?

    Issues in Tokenization
    - Finland's capital       ->  Finland  Finlands  Finland's ?
    - what're, I'm, isn't     ->  What are, I am, is not
    - Hewlett-Packard         ->  Hewlett Packard ?
    - state-of-the-art        ->  state of the art ?
    - Lowercase               ->  lower-case  lowercase  lower case ?
    - San Francisco           ->  one token or two?
    - m.p.h., PhD.            ->  ??

    Tokenization: language issues
    French: "L'ensemble", one token or two?
    - L ? L' ? Le ?
    - We want l'ensemble to match with un ensemble.

    German noun compounds are not segmented:
    - Lebensversicherungsgesellschaftsangestellter
      'life insurance company employee'
    German information retrieval needs a compound splitter.

    Tokenization: language issues
    Chinese and Japanese have no spaces between words
    (e.g. "Sharapova now lives in US southeastern Florida" written with no word breaks).
    Further complicated in Japanese, with multiple alphabets intermingled:
    - dates/amounts appear in multiple formats (e.g. 500, $500K, 6,000)
    - end-users can express a query entirely in hiragana!

    Word Tokenization in Chinese
    Also called Word Segmentation.
    - Chinese words are composed of characters.
    - Characters are generally 1 syllable and 1 morpheme.
    - The average word is 2.4 characters long.
    Standard baseline segmentation algorithm: Maximum Matching (also called Greedy).

    Maximum Matching Word Segmentation Algorithm
    Given a wordlist of Chinese and a string:
    1. Start a pointer at the beginning of the string.
    2. Find the longest word in the dictionary that matches the string starting at the pointer.
    3. Move the pointer over that word in the string.
    4. Go to step 2.
    (A code sketch of this algorithm appears after the illustration below.)

    Max-match segmentation illustration
    - "thecatinthehat"      ->  the cat in the hat
    - "thetabledownthere"   ->  theta bled own there   (not: the table down there)
    It doesn't generally work in English!
    But it works astonishingly well in Chinese, and modern probabilistic
    segmentation algorithms do even better.
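    A minimal sketch of greedy maximum matching, assuming a Python set `wordlist`
    of known words; the function name and word list are illustrative, not from
    the slides.

        def max_match(text, wordlist):
            """Greedily segment text using the longest dictionary match at each point."""
            tokens = []
            i = 0
            while i < len(text):
                # Try the longest candidate first, falling back to shorter ones;
                # a single character is the last-resort fallback so we always advance.
                for j in range(len(text), i, -1):
                    if text[i:j] in wordlist or j == i + 1:
                        tokens.append(text[i:j])
                        i = j
                        break
            return tokens

        words = {"the", "cat", "in", "hat", "table", "down", "there", "theta", "bled", "own"}
        print(max_match("thecatinthehat", words))     # ['the', 'cat', 'in', 'the', 'hat']
        print(max_match("thetabledownthere", words))  # ['theta', 'bled', 'own', 'there']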

  • Basic Text Processing

    Word Normalization and Stemming


    Normalization
    We need to normalize terms. In Information Retrieval, indexed text and query
    terms must have the same form: we want to match U.S.A. and USA.
    - We implicitly define equivalence classes of terms, e.g. by deleting periods in a term.
    - Alternative: asymmetric expansion:
        Enter: window    Search: window, windows
        Enter: windows   Search: Windows, windows, window
        Enter: Windows   Search: Windows
      Potentially more powerful, but less efficient.

    Case folding
    Applications like IR: reduce all letters to lower case, since users tend to use lower case.
    - Possible exception: upper case in mid-sentence?
      e.g. General Motors; Fed vs. fed; SAIL vs. sail
    For sentiment analysis, MT, and information extraction, case is helpful
    (US versus us is important).

    Lemmatization
    Reduce inflections or variant forms to the base form:
    - am, are, is              ->  be
    - car, cars, car's, cars'  ->  car
    - "the boy's cars are different colors"  ->  "the boy car be different color"
    Lemmatization has to find the correct dictionary headword form.
    Machine translation: Spanish quiero ('I want') and quieres ('you want') have
    the same lemma as querer 'want'.

    Morphology
    Morphemes: the small meaningful units that make up words.
    - Stems: the core meaning-bearing units.
    - Affixes: bits and pieces that adhere to stems, often with grammatical functions.

    Stemming
    Reduce terms to their stems (used in information retrieval).
    Stemming is crude chopping of affixes, and is language dependent.
    e.g. automate(s), automatic, automation are all reduced to automat.
    Example text:    "for example compressed and compression are both accepted as equivalent to compress."
    Stemmed output:  "for exampl compress and compress ar both accept as equival to compress"

    Porter's algorithm
    The most common English stemmer.

    Step 1a
      sses -> ss      caresses  -> caress
      ies  -> i       ponies    -> poni
      ss   -> ss      caress    -> caress
      s    -> (null)  cats      -> cat
    Step 1b
      (*v*)ing -> (null)  walking   -> walk,  sing -> sing
      (*v*)ed  -> (null)  plastered -> plaster
    Step 2 (for long stems)
      ational -> ate    relational -> relate
      izer    -> ize    digitizer  -> digitize
      ator    -> ate    operator   -> operate
    Step 3 (for longer stems)
      al   -> (null)    revival    -> reviv
      able -> (null)    adjustable -> adjust
      ate  -> (null)    activate   -> activ
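    A quick way to try a Porter-style stemmer, assuming the NLTK library is
    available (pip install nltk); an added illustration, not part of the slides.

        from nltk.stem.porter import PorterStemmer

        stemmer = PorterStemmer()
        for w in ["caresses", "ponies", "walking", "plastered", "compression"]:
            print(w, "->", stemmer.stem(w))
        # caresses -> caress, ponies -> poni, walking -> walk,
        # plastered -> plaster, compression -> compress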

    Viewing morphology in a corpus
    Why only strip -ing if there is a vowel?
      (*v*)ing -> (null)   walking -> walk,  sing -> sing

        tr -sc 'A-Za-z' '\n' < shakes.txt | grep 'ing$' | sort | uniq -c | sort -n -r

        1312 King
         548 being
         541 nothing
         388 king
         375 bring
         358 thing
         307 ring
         152 something
         145 coming
         130 morning

        tr -sc 'A-Za-z' '\n' < shakes.txt | grep '[aeiou].*ing$' | sort | uniq -c | sort -n -r

         548 being
         541 nothing
         152 something
         145 coming
         130 morning
         122 having
         120 living
         117 loving
         116 Being
         102 going

    Dealing with complex morphology is sometimes necessary
    Some languages require complex morpheme segmentation, e.g. Turkish:
      Uygarlastiramadiklarimizdanmissinizcasina
      '(behaving) as if you are among those whom we could not civilize'
      Uygar 'civilized' + las 'become' + tir 'cause' + ama 'not able' + dik 'past'
      + lar 'plural' + imiz 'p1pl' + dan 'abl' + mis 'past' + siniz '2pl' + casina 'as if'

  • Basic Text Processing

    Sentence Segmentation and Decision Trees


    Sentence Segmentation
    "!" and "?" are relatively unambiguous, but the period "." is quite ambiguous:
    - sentence boundary
    - abbreviations like Inc. or Dr.
    - numbers like .02% or 4.3
    Build a binary classifier that looks at a "." and decides
    EndOfSentence/NotEndOfSentence.
    Classifiers: hand-written rules, regular expressions, or machine learning.

    Determining if a word is end-of-sentence: a Decision Tree


    More sophisticated decision tree features
    - Case of the word with ".": Upper, Lower, Cap, Number
    - Case of the word after ".": Upper, Lower, Cap, Number
    Numeric features:
    - Length of the word with "."
    - Probability(word with "." occurs at end-of-sentence)
    - Probability(word after "." occurs at beginning-of-sentence)

    Implementing Decision Trees
    A decision tree is just an if-then-else statement.
    The interesting research is in choosing the features.
    Setting up the structure is often too hard to do by hand:
    - hand-building is only possible for very simple features and domains
    - for numeric features, it's too hard to pick each threshold
    Instead, the structure is usually learned by machine learning from a training corpus.
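    A minimal hand-written "decision tree" for end-of-sentence detection, written
    as a plain if-then-else function; the features, thresholds, and abbreviation
    list are illustrative assumptions, not the ones in the slides' figure.

        ABBREVIATIONS = {"dr", "inc", "mr", "mrs", "prof", "etc"}

        def is_end_of_sentence(word_with_dot, next_word):
            """Decide EndOfSentence / NotEndOfSentence for a period after word_with_dot."""
            stem = word_with_dot.rstrip(".").lower()
            if stem in ABBREVIATIONS:                 # "Dr." is usually not a boundary
                return False
            if any(ch.isdigit() for ch in stem):      # "4.3", ".02" are numbers, not boundaries
                return False
            if next_word and next_word[0].isupper():  # capitalized next word: likely a boundary
                return True
            return False

        print(is_end_of_sentence("cats.", "The"))   # True
        print(is_end_of_sentence("Dr.", "Smith"))   # False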

    Decision Trees and other classifiers
    We can think of the questions in a decision tree as features that could be
    exploited by any kind of classifier:
    - logistic regression
    - SVMs
    - neural nets
    - etc.

  • Introduction to Natural Language Processing (600.465)

    Words and the Company They Keep
    Dr. Jan Hajic, CS Dept., Johns Hopkins University

  • Motivation
    Environment: mostly not a full analysis (sentence/text parsing).
    Tasks where words and their company are important:
    - word sense disambiguation (MT, IR, TD, IE)
    - lexical entries: subdivision & definitions (lexicography)
    - language modeling (generalization, [kind of] smoothing)
    - word/phrase/term translation (MT, multilingual IR)
    - NL generation (natural phrases) (generation, MT)
    - parsing (lexically-based selectional preferences)

  • Collocations
    Firth: a word is characterized by the company it keeps; collocations of a
    given word are statements of the habitual or customary places of that word.
    - non-compositionality of meaning: cannot be derived directly from its parts (heavy rain)
    - non-substitutability in context: for parts (red light)
    - non-modifiability (& non-transformability): kick the yellow bucket; take exceptions to

  • Association and Co-occurrence; Terms
    Does not fall under collocation, but is interesting just because the words
    often [or rarely] appear together or in the same (or similar) context:
    (doctors, nurses), (hardware, software), (gas, fuel), (hammer, nail),
    (communism, free speech)
    Terms: need not be > 1 word (notebook, washer).

  • Collocations of Special Interest
    - Idioms: really fixed phrases
      kick the bucket, birds-of-a-feather, run for office
    - Proper names: difficult to recognize even with lists
      Tuesday (person's name), May, Winston Churchill, IBM, Inc.
    - Numerical expressions containing ordinary words
      Monday Oct 04 1999, two thousand seven hundred fifty
    - Phrasal verbs with separable parts: look up, take off

  • Further Notions
    - Synonymy: different form/word, same meaning: notebook / laptop
    - Antonymy: opposite meaning: new/old, black/white, start/stop
    - Homonymy: same form/word, different meaning:
      true (random, unrelated): can (aux. verb / can of Coke)
      related: polysemy; notebook, shift, grade, ...
    - Other:
      Hyperonymy/Hyponymy: general vs. special: vehicle/car
      Meronymy/Holonymy: whole vs. part: body/leg

  • How to Find Collocations?
    - Frequency: plain, filtered
    - Hypothesis testing: t test, chi-square test
    - Pointwise ("poor man's") Mutual Information
    - (Average) Mutual Information

  • Frequency
    Simple: count n-grams; high-frequency n-grams are candidates:
    - mostly function words
    - frequent names
    Filtered:
    - Stop list: words/forms which (we think) cannot be part of a collocation
      (a, the, and, or, but, not, ...)
    - Part of speech (possible collocation patterns): A+N, N+N, N+of+N, ...

  • Hypothesis Testing
    A hypothesis is something we test (against). Most often we compare a possibly
    interesting thing vs. random chance.
    - Null hypothesis: something occurs by chance (that's what we suppose).
    - Assuming this, show that the probability of the observed data is then too
      low (typically < 0.05, also 0.005, 0.001) ... and therefore reject the null
      hypothesis (thus confirming that interesting things are happening!).
    - Otherwise, it is possible there is nothing interesting.

  • t test (Student's t test)
    Significance of a difference: compute the test statistic against the normal
    distribution (mean m), using real-world data (x = real-data mean, s2 = variance,
    N = sample size):

        t = (x - m) / sqrt(s2 / N)

    Find the critical value in tables (see MS, p. 609):
    - d.f. = degrees of freedom (parameters which are not determined by other parameters)
    - percentile level p = 0.05 (or better) (90% confidence; double tail)
    The bigger t, the better the chances that there is the interesting feature we
    hope for (i.e. we can reject the null hypothesis); t must be at least the
    value from the table(s).

  • t test on words
    Null hypothesis: independence, p(w1,w2) = p(w1)p(w2); mean m = p(w1)p(w2).
    Data estimates: x = MLE of the joint probability from data; s2 is p(1-p),
    i.e. almost p for small p; N is the data size.
    Example (d.f. ~ sample size): "general term" (in corpus):
    - c(general) = 108, c(term) = 40, c(general, term) = 2
    - expected p(general)p(term) = 8.8E-8
    - t = (9.0E-6 - 8.8E-8) / sqrt(9.0E-6 / 221097) = 1.40, not > 2.576,
      thus "general term" is not a collocation with confidence 0.005 (99%),
      not even with 0.05 (90%).
    "true species" (counts 84 / 1779 / 9): t = 2.774 > 2.576 !!
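    A small check of the slide's "general term" numbers, assuming the corpus size
    N = 221097 used on the slide; an added sketch, not part of the original.

        from math import sqrt

        N = 221097
        c_general, c_term, c_pair = 108, 40, 2

        x = c_pair / N                         # MLE of the joint probability
        m = (c_general / N) * (c_term / N)     # expected probability under independence
        s2 = x * (1 - x)                       # ~ x for small probabilities

        t = (x - m) / sqrt(s2 / N)
        print(round(t, 2))   # ~1.40, below the 2.576 critical value, so not a collocation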

  • Pearson's Chi-square test
    The chi-square (χ²) test, general formula:

        χ² = Σ over i,j of (O_ij - E_ij)² / E_ij

    where O_ij / E_ij is the observed / expected count of events i, j.
    For two-outcomes-only events (2x2 table; formula in the MS book):

        χ² = 221097 * (219243*9 - 75*1770)² / (1779 * 84 * 221013 * 219318)
           = 103.39 > 7.88 at .005, thus we can reject the independence
             assumption (OK with even .001).

    w_right \ w_left     = true      ≠ true
    = species                9        1,770
    ≠ species               75      219,243
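    Recomputing the slide's chi-square value for this 2x2 table with the
    closed-form two-outcome formula; an added sketch, not from the slides.

        a, b = 9, 1770        # (= true, = species), (≠ true, = species)
        c, d = 75, 219243     # (= true, ≠ species), (≠ true, ≠ species)
        N = a + b + c + d     # 221097 bigrams

        chi2 = N * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
        print(round(chi2, 2))   # ~103.39 > 7.88, so independence is rejected at the .005 level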

  • Pointwise Mutual Information
    This is NOT the MI as defined in Information Theory (IT: the average of the
    following, not single values) ... but it might be useful:

        I(a,b) = log2( p(a,b) / (p(a)p(b)) ) = log2( p(a|b) / p(a) )

    Example (same data):
    - I(true, species) = log2( 4.1e-5 / (3.8e-4 * 8.0e-3) ) = 3.74
    - I(general, term) = log2( 9.0e-6 / (1.8e-4 * 4.9e-4) ) = 6.68
    Measured in bits, but it is difficult to give it an interpretation; used for
    ranking (like the null-hypothesis tests).
    No good for sparse data (good for independence, but not for dependence).
    Dunning's likelihood ratio is better, especially for sparse data (see p. 172
    in the MS book): a log likelihood ratio between two hypotheses.
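    Checking the slide's pointwise mutual information values from the quoted
    probabilities; an added sketch (small differences from the slide come from
    the rounded inputs).

        from math import log2

        def pmi(p_ab, p_a, p_b):
            return log2(p_ab / (p_a * p_b))

        print(round(pmi(4.1e-5, 3.8e-4, 8.0e-3), 2))   # ~3.75 for (true, species)
        print(round(pmi(9.0e-6, 1.8e-4, 4.9e-4), 2))   # ~6.67 for (general, term)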

  • Minimum Edit Distance

    Definition of Minimum Edit Distance


    How similar are two strings?
    Spell correction: the user typed "graffe". Which is closest?
    - graf, graft, grail, giraffe
    Computational biology: align two sequences of nucleotides:
      AGGCTATCACCTGACCTCCAGGCCGATGCCC
      TAGCTATCACGACCGCGGTCGATTTGCCCGAC
    Resulting alignment:
      -AGGCTATCACCTGACCTCCAGGCCGA--TGCCC---
      TAG-CTATCAC--GACCGC--GGTCGATTTGCCCGAC
    Also used for Machine Translation, Information Extraction, Speech Recognition.

    Edit Distance
    The minimum edit distance between two strings is the minimum number of
    editing operations
    - insertion
    - deletion
    - substitution
    needed to transform one into the other.

    Minimum Edit Distance
    Two strings and their alignment
    (figure: "intention" aligned with "execution", with the edit operations marked).

    Minimum Edit Distance
    If each operation has a cost of 1, the distance between these is 5.
    If substitutions cost 2 (Levenshtein), the distance between them is 8.

    Alignment in Computational Biology
    Given two sequences of bases:
      AGGCTATCACCTGACCTCCAGGCCGATGCCC
      TAGCTATCACGACCGCGGTCGATTTGCCCGAC
    An alignment (align each letter to a letter or to a gap):
      -AGGCTATCACCTGACCTCCAGGCCGA--TGCCC---
      TAG-CTATCAC--GACCGC--GGTCGATTTGCCCGAC

    Other uses of Edit Distance in NLP
    Evaluating Machine Translation and speech recognition:
      R: Spokesman confirms senior government adviser was shot
      H: Spokesman said the senior adviser was shot dead
      (edits: S = confirms/said, I = the, D = government, I = dead)
    Named Entity Extraction and Entity Coreference:
    - "IBM Inc. announced today" ... "IBM profits"
    - "Stanford President John Hennessy announced yesterday"
      ... "for Stanford University President John Hennessy"

    How to find the Min Edit Distance?
    Searching for a path (sequence of edits) from the start string to the final string:
    - Initial state: the word we're transforming
    - Operators: insert, delete, substitute
    - Goal state: the word we're trying to get to
    - Path cost: what we want to minimize: the number of edits

    Minimum Edit as Search
    But the space of all edit sequences is huge!
    - We can't afford to navigate naively.
    - Lots of distinct paths wind up at the same state.
    - We don't have to keep track of all of them, just the shortest path to each
      of those revisited states.

    Defining Min Edit Distance
    For two strings X of length n and Y of length m, we define D(i,j) as the edit
    distance between X[1..i] and Y[1..j], i.e. the first i characters of X and
    the first j characters of Y.
    The edit distance between X and Y is thus D(n,m).

  • Minimum Edit Distance

    Computing Minimum Edit Distance


    Dynamic Programming for Minimum Edit Distance
    Dynamic programming: a tabular computation of D(n,m), solving problems by
    combining solutions to subproblems.
    Bottom-up:
    - we compute D(i,j) for small i, j
    - and compute larger D(i,j) based on previously computed smaller values
    - i.e., compute D(i,j) for all i (0 ≤ i ≤ n) and j (0 ≤ j ≤ m)

    Defining Min Edit Distance (Levenshtein)
    Initialization:
      D(i,0) = i
      D(0,j) = j
    Recurrence relation:
      for each i = 1..n
        for each j = 1..m
                         D(i-1,j) + 1
          D(i,j) = min   D(i,j-1) + 1
                         D(i-1,j-1) + (2 if X(i) ≠ Y(j), else 0)
    Termination:
      D(n,m) is the distance
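    A direct transcription of the recurrence above (substitution cost 2), as a
    sketch; for "intention" vs. "execution" it returns the distance 8.

        def min_edit(x, y):
            n, m = len(x), len(y)
            # D[i][j] = edit distance between x[:i] and y[:j]
            D = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                D[i][0] = i                      # deletions
            for j in range(m + 1):
                D[0][j] = j                      # insertions
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    sub = 0 if x[i - 1] == y[j - 1] else 2
                    D[i][j] = min(D[i - 1][j] + 1,          # deletion
                                  D[i][j - 1] + 1,          # insertion
                                  D[i - 1][j - 1] + sub)    # substitution / match
            return D[n][m]

        print(min_edit("intention", "execution"))   # 8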

    The Edit Distance Table (initialization)

      N   9
      O   8
      I   7
      T   6
      N   5
      E   4
      T   3
      N   2
      I   1
      #   0   1   2   3   4   5   6   7   8   9
          #   E   X   E   C   U   T   I   O   N

    The Edit Distance Table (filled in)

      N   9   8   9  10  11  12  11  10   9   8
      O   8   7   8   9  10  11  10   9   8   9
      I   7   6   7   8   9  10   9   8   9  10
      T   6   5   6   7   8   9   8   9  10  11
      N   5   4   5   6   7   8   9  10  11  10
      E   4   3   4   5   6   7   8   9  10   9
      T   3   4   5   6   7   8   7   8   9   8
      N   2   3   4   5   6   7   8   7   8   7
      I   1   2   3   4   5   6   7   6   7   8
      #   0   1   2   3   4   5   6   7   8   9
          #   E   X   E   C   U   T   I   O   N

  • Minimum Edit Distance

    Backtrace for Computing Alignments


    Computing alignments
    Edit distance isn't sufficient: we often need to align each character of the
    two strings to each other.
    We do this by keeping a backtrace:
    - every time we enter a cell, remember where we came from
    - when we reach the end, trace back the path from the upper right corner to
      read off the alignment


    MinEdit with Backtrace
    (figure: the edit distance table with a backpointer recorded in each cell)

    Adding Backtrace to Minimum Edit Distance
    Base conditions:  D(i,0) = i,  D(0,j) = j      Termination: D(n,m) is the distance
    Recurrence relation:
      for each i = 1..n
        for each j = 1..m
                         D(i-1,j) + 1                              (deletion)
          D(i,j) = min   D(i,j-1) + 1                              (insertion)
                         D(i-1,j-1) + (2 if X(i) ≠ Y(j), else 0)   (substitution)
                          LEFT   (insertion)
          ptr(i,j) =      DOWN   (deletion)
                          DIAG   (substitution)
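    A sketch of the same recurrence with backpointers, so that an alignment can
    be read off by tracing from cell (n, m) back to (0, 0); "*" marks a gap.

        def min_edit_with_backtrace(x, y):
            n, m = len(x), len(y)
            D = [[0] * (m + 1) for _ in range(n + 1)]
            ptr = [[None] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                D[i][0], ptr[i][0] = i, "DOWN"    # deletions along the first column
            for j in range(1, m + 1):
                D[0][j], ptr[0][j] = j, "LEFT"    # insertions along the first row
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    sub = 0 if x[i - 1] == y[j - 1] else 2
                    choices = [(D[i - 1][j] + 1, "DOWN"),        # deletion
                               (D[i][j - 1] + 1, "LEFT"),        # insertion
                               (D[i - 1][j - 1] + sub, "DIAG")]  # substitution / match
                    D[i][j], ptr[i][j] = min(choices)
            # Trace back from (n, m) to read off one optimal alignment.
            top, bottom, i, j = [], [], n, m
            while i > 0 or j > 0:
                if ptr[i][j] == "DIAG":
                    top.append(x[i - 1]); bottom.append(y[j - 1]); i, j = i - 1, j - 1
                elif ptr[i][j] == "DOWN":
                    top.append(x[i - 1]); bottom.append("*"); i -= 1
                else:  # LEFT
                    top.append("*"); bottom.append(y[j - 1]); j -= 1
            return D[n][m], "".join(reversed(top)), "".join(reversed(bottom))

        # Prints the distance (8) and one optimal alignment of the two strings.
        print(min_edit_with_backtrace("intention", "execution"))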

    The Distance Matrix
    (Slide adapted from Serafim Batzoglou; x = x0...xN on one axis, y = y0...yM on the other.)
    - Every non-decreasing path from (0,0) to (M,N) corresponds to an alignment
      of the two sequences.
    - An optimal alignment is composed of optimal subalignments.

    Result of BacktraceTwo strings and their alignment:


    Performance
    - Time: O(nm)
    - Space: O(nm)
    - Backtrace: O(n+m)

  • Minimum Edit Distance

    Weighted Minimum Edit Distance


    Weighted Edit Distance
    Why would we add weights to the computation?
    - Spell correction: some letters are more likely to be mistyped than others.
    - Biology: certain kinds of deletions or insertions are more likely than others.

    Confusion matrix for spelling errors


    Weighted Min Edit Distance
    Initialization:
      D(0,0) = 0
      D(i,0) = D(i-1,0) + del[x(i)];  1 ≤ i ≤ N
      D(0,j) = D(0,j-1) + ins[y(j)];  1 ≤ j ≤ M
    Recurrence relation:
                     D(i-1,j)   + del[x(i)]
      D(i,j) = min   D(i,j-1)   + ins[y(j)]
                     D(i-1,j-1) + sub[x(i), y(j)]
    Termination:
      D(N,M) is the distance

    Where did the name "dynamic programming" come from?

    "The 1950s were not good years for mathematical research. [The] Secretary of
    Defense had a pathological fear and hatred of the word 'research'.

    I decided therefore to use the word 'programming'.

    I wanted to get across the idea that this was dynamic, this was multistage.
    I thought, let's take a word that has an absolutely precise meaning, namely
    'dynamic'. It's impossible to use the word 'dynamic' in a pejorative sense.
    Try thinking of some combination that will possibly give it a pejorative
    meaning. It's impossible.

    Thus, I thought dynamic programming was a good name. It was something not
    even a Congressman could object to."

    Richard Bellman, Eye of the Hurricane: An Autobiography, 1984.

  • Minimum Edit Distance

    Minimum Edit Distance in Computational Biology


    Sequence Alignment
      -AGGCTATCACCTGACCTCCAGGCCGA--TGCCC---
      TAG-CTATCAC--GACCGC--GGTCGATTTGCCCGAC
    (unaligned sequences: AGGCTATCACCTGACCTCCAGGCCGATGCCC and
     TAGCTATCACGACCGCGGTCGATTTGCCCGAC)

    Why sequence alignment?
    - Comparing genes or regions from different species:
      to find important regions, determine function, uncover evolutionary forces
    - Assembling fragments to sequence DNA
    - Comparing individuals to look for mutations

    Alignments in two fields
    - In Natural Language Processing we generally talk about distance (minimized) and weights.
    - In Computational Biology we generally talk about similarity (maximized) and scores.

    The Needleman-Wunsch Algorithm
    Initialization:
      D(i,0) = -i * d
      D(0,j) = -j * d
    Recurrence relation:
                     D(i-1,j)   - d
      D(i,j) = max   D(i,j-1)   - d
                     D(i-1,j-1) + s[x(i), y(j)]
    Termination:
      D(N,M) is the optimal alignment score

    The Needleman-Wunsch Matrix
    (Slide adapted from Serafim Batzoglou; x1...xM along one axis, y1...yN along
    the other. Note that the origin is at the upper left.)

    A variant of the basic algorithm
    Maybe it is OK to have an unlimited number of gaps at the beginning and end:
      ----------CTATCACCTGACCTCCAGGCCGATGCCC
      CTTCCGGCGCGAGTTCATCTATCAC--GACCGC--GGTCG--------------
    If so, we don't want to penalize gaps at the ends.
    (Slide from Serafim Batzoglou.)

    Different types of overlaps
    - Example: 2 overlapping "reads" from a sequencing project
    - Example: search for a mouse gene within a human chromosome
    (Slide from Serafim Batzoglou.)

    The Overlap Detection variant
    Changes:
    - Initialization: for all i, j:  F(i,0) = 0,  F(0,j) = 0
    - Termination:
                          max_i F(i, N)
        F_OPT = max
                          max_j F(M, j)
    (Slide from Serafim Batzoglou; x1...xM, y1...yN.)

    The Local Alignment Problem
    Given two strings x = x1...xM and y = y1...yN, find substrings x', y' whose
    similarity (optimal global alignment value) is maximum.
    Example:
      x = aaaacccccggggtta
      y = ttcccgggaaccaacc
    (Slide from Serafim Batzoglou.)

    The Smith-Waterman algorithm
    Idea: ignore badly aligning regions.
    Modifications to Needleman-Wunsch:
    - Initialization: F(0,j) = 0,  F(i,0) = 0
    - Iteration:
                          0
        F(i,j) = max      F(i-1,j) - d
                          F(i,j-1) - d
                          F(i-1,j-1) + s(x_i, y_j)
    (Slide from Serafim Batzoglou.)

    The Smith-Waterman algorithm
    Termination:
    - If we want the best local alignment:
        F_OPT = max over i,j of F(i,j); find F_OPT and trace back.
    - If we want all local alignments scoring > t:
        for all i, j find F(i,j) > t and trace back?
        Complicated by overlapping local alignments.
    (Slide from Serafim Batzoglou.)

    Local alignment example (initialization)
    X = ATCAT, Y = ATTATC
    Let m = 1 (1 point for a match) and d = 1 (-1 point for del/ins/sub).

            A   T   T   A   T   C
        0   0   0   0   0   0   0
    A   0
    T   0
    C   0
    A   0
    T   0

    Local alignment example (filled in)
    X = ATCAT, Y = ATTATC

            A   T   T   A   T   C
        0   0   0   0   0   0   0
    A   0   1   0   0   1   0   0
    T   0   0   2   1   0   2   0
    C   0   0   1   1   0   1   3
    A   0   1   0   0   2   1   2
    T   0   0   2   0   1   3   2

    The best local alignment ends at the highest-scoring cell (score 3); trace
    back from there to read it off.
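    A sketch of Smith-Waterman local alignment with the same scoring (match +1,
    mismatch/gap -1), recomputing the best local score for this example; the
    function and variable names are illustrative, not from the slides.

        def smith_waterman(x, y, match=1, penalty=1):
            n, m = len(x), len(y)
            F = [[0] * (m + 1) for _ in range(n + 1)]
            best, best_cell = 0, (0, 0)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if x[i - 1] == y[j - 1] else -penalty
                    F[i][j] = max(0,                       # local: never go below zero
                                  F[i - 1][j] - penalty,   # gap in y
                                  F[i][j - 1] - penalty,   # gap in x
                                  F[i - 1][j - 1] + s)     # match / mismatch
                    if F[i][j] > best:
                        best, best_cell = F[i][j], (i, j)
            return best, best_cell

        print(smith_waterman("ATCAT", "ATTATC"))   # (3, (3, 6)): the local match ATC / ATC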
