CS 416 Artificial Intelligence, Lecture 18: Reasoning over Time (Chapter 15)


Page 1: CS 416 Artificial Intelligence

CS 416 Artificial Intelligence

Lecture 18

Reasoning over Time

Chapter 15

Page 2: CS 416 Artificial Intelligence

Final Exam

December 17th (Friday) in the evening time slot (7:00)

• This is the same slot used by introductory foreign languages

Conflicts? Email me

Page 3: CS 416 Artificial Intelligence

Cluster Analysis

Automatic classification of data

• What are important similarities?

• What are important distinctions?

• What are important correlations?

Page 4: CS 416 Artificial Intelligence

Hidden Markov Models (HMMs)

Represent the state of the world with a single discrete variable

• If your state has multiple variables, form one variable whose value takes on all possible tuples of the multiple variables

– A two-variable system (heads/tails and red/green/blue) becomes a single-variable system with six values (heads/red, tails/red, …)
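The tuple construction above can be sketched in a few lines; the coin/color names are the slide's own example.

```python
# Collapse a two-variable state (coin x color) into a single discrete
# HMM state variable whose values are all possible tuples.
from itertools import product

coin = ["heads", "tails"]
color = ["red", "green", "blue"]

# Each combined state is one (coin, color) tuple; 2 x 3 = 6 values total.
states = list(product(coin, color))
print(len(states))   # 6
print(states[0])     # ('heads', 'red')
```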

Page 5: CS 416 Artificial Intelligence

HMMs

• Let the number of states be S

– Transition model T is an SxS matrix filled by P( Xt | Xt-1 )

Probability of transitioning from any state to another

– Consider obtaining evidence et at each timestep

Construct an SxS matrix O consisting of P( et | Xt = i ) along the diagonal and zeros elsewhere
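A minimal sketch of these two matrices with plain Python lists; the numbers are made up for illustration (S = 2).

```python
# Transition model: T[i][j] = P(Xt = j | Xt-1 = i); each row sums to 1.
T = [[0.7, 0.3],
     [0.4, 0.6]]

# Observation model for one evidence value et: P(et | Xt = i) on the
# diagonal, zeros elsewhere.
def observation_matrix(likelihoods):
    S = len(likelihoods)
    return [[likelihoods[i] if i == j else 0.0 for j in range(S)]
            for i in range(S)]

O = observation_matrix([0.9, 0.2])   # P(et | Xt = 0) = 0.9, etc.
```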

Page 6: CS 416 Artificial Intelligence

HMMs

Rewriting the FORWARD algorithm

• Constructing the predicted sequence of states from 0 to t+1 given e0 … et+1

– Technically, f1:t+1 = FORWARD( f1:t, et+1 )
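One FORWARD step in the matrix form described here is f1:t+1 = alpha * O(t+1) * transpose(T) * f1:t; a sketch with made-up numbers for a 2-state model:

```python
def forward(f, T, O_diag):
    """One FORWARD step: predict through T, then weight by evidence."""
    S = len(f)
    # Predict: (T^T f)[j] = sum_i T[i][j] * f[i]
    predicted = [sum(T[i][j] * f[i] for i in range(S)) for j in range(S)]
    # Update with evidence likelihoods (the diagonal of O at t+1).
    unnorm = [O_diag[j] * predicted[j] for j in range(S)]
    z = sum(unnorm)                      # alpha = 1/z normalizes
    return [u / z for u in unnorm]

T = [[0.7, 0.3], [0.4, 0.6]]
f = [0.5, 0.5]                           # prior belief over 2 states
f = forward(f, T, [0.9, 0.2])            # fold in evidence et+1
print(f)
```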

Page 7: CS 416 Artificial Intelligence

HMMs

Optimizations

• FORWARD and BACKWARD can be written in matrix form

• Matrix forms permit reinspection for speedups

– Consult book if interested in these for assignment
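The matrix form of BACKWARD mentioned above is b(k+1:t) = T * O(k+1) * b(k+2:t); a sketch with the same made-up 2-state numbers as before:

```python
def backward(b, T, O_diag):
    """One BACKWARD step, folding in the evidence likelihoods O_diag."""
    S = len(b)
    return [sum(T[i][j] * O_diag[j] * b[j] for j in range(S))
            for i in range(S)]

T = [[0.7, 0.3], [0.4, 0.6]]
b = [1.0, 1.0]                   # backward message starts as all ones
b = backward(b, T, [0.9, 0.2])
print(b)
```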

Page 8: CS 416 Artificial Intelligence

Kalman Filters

Gauss invented least-squares estimation and important parts of statistics in 1795

• When he was 18 and trying to understand the revolution of heavenly bodies (by collecting data from telescopes)

Invented by Kalman in 1960

• A means to update predictions of continuous variables given observations (fast and discrete for computer programs)

– Critical for getting Apollo spacecraft to insert into orbit around the Moon
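A minimal 1-D sketch of the update idea (not from the slides): the belief is a Gaussian (mu, var), and each step predicts with process noise q, then blends in a measurement z with sensor noise r via the Kalman gain.

```python
def kalman_step(mu, var, z, q=1.0, r=2.0):
    # Predict: the state persists, uncertainty grows by process noise q.
    mu_pred, var_pred = mu, var + q
    # Update: blend prediction with measurement z via the Kalman gain k.
    k = var_pred / (var_pred + r)
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1 - k) * var_pred
    return mu_new, var_new

mu, var = 0.0, 10.0              # vague prior belief
for z in [1.0, 1.2, 0.9]:        # noisy observations near 1.0
    mu, var = kalman_step(mu, var, z)
print(mu, var)                   # estimate moves toward 1, variance shrinks
```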

Page 9: CS 416 Artificial Intelligence

Speech recognition vs. Speech understanding

Recognition

• Convert acoustic signal into words

– P (words | signal) ∝ P (signal | words) P (words)

Understanding

• Recognizing the context and semantics of the words

We have a model of P (signal | words), and a model of P (words) too

Page 10: CS 416 Artificial Intelligence

Applications

• NaturallySpeaking (interesting story from Wired), Viavoice…

– 90% hit rate is 10% error rate

– want 98% or 99% success rate

• Dictation

– Cheaper to play doctor’s audio tapes into telephone so someone in India can type the text and email it back

• User-control of devices

– “Call home”

Page 11: CS 416 Artificial Intelligence

Spectrum of choices

                      Constrained Domain         Unconstrained Domain

Speaker Dependent     Voice tags (e.g. phone)    Trained Dictation (Viavoice)

Speaker Independent   Galaxy (we are here)       What everyone wants

Page 12: CS 416 Artificial Intelligence

Waveform to phonemes

• 40 – 50 phones in all human languages

• 48 phonemes in English (according to ARPAbet)

– Ceiling = [s iy l ih ng], [s iy l ix ng], [s iy l en]

Nothing is precise here, so use an HMM with state variable Xt corresponding to the phone uttered at time t

• P (Et | Xt): given a phoneme, what is its waveform?

– Must have models that adjust for pitch, speed, volume…

Page 13: CS 416 Artificial Intelligence

Analog to digital (A to D)

• Diaphragm of microphone is displaced by movement of air

• Analog to digital converter samples the signal at discrete time intervals (8 – 16 kHz, 8-bit for speech)

Page 14: CS 416 Artificial Intelligence

Data compression

• 8 kHz at 8 bits is 0.5 MB for one minute of speech

– Too much information for constructing P( Xt+1 | Xt ) tables

– Reduce signal to overlapping frames (10 msecs)

– Frames have features that are evaluated based on the signal
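The framing step above can be sketched directly; the 8 kHz rate and 10 ms frame length come from the slides, while the 5 ms hop (the overlap amount) is an assumed choice.

```python
def frames(signal, rate=8000, frame_ms=10, hop_ms=5):
    """Slice a sampled signal into overlapping fixed-length frames."""
    n = int(rate * frame_ms / 1000)      # samples per frame (80)
    hop = int(rate * hop_ms / 1000)      # samples between frame starts (40)
    return [signal[i:i + n]
            for i in range(0, len(signal) - n + 1, hop)]

signal = list(range(8000))               # one second of fake samples
fs = frames(signal)
print(len(fs), len(fs[0]))               # 199 frames of 80 samples each
```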

Page 15: CS 416 Artificial Intelligence

More data compression

Features are still too big

• Consider n features with 256 values each

– 256^n possible frames

• A table of P (features | phones) would be too large

• Cluster!

– Reduce the number of options from 256^n to something manageable
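One common way to do this clustering is k-means (vector quantization); a hedged 1-D sketch with made-up feature values and k = 2, not a production implementation:

```python
import random

def kmeans_1d(xs, k=2, iters=20, seed=0):
    """Tiny 1-D k-means: assign points to nearest center, recompute means."""
    random.seed(seed)
    centers = random.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)

xs = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]     # two obvious clumps
print(kmeans_1d(xs))                     # two centers near 0.15 and 5.03
```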

Page 16: CS 416 Artificial Intelligence

Phone subdivision

Phones last 5 – 10 frames

• Possible to subdivide a phone into three parts

– Onset, mid, end

– [t] = [silent beginning, small explosion, hissing end]

• The sound of a phone changes based on surrounding phones

– The brain coordinates the ending of one phone and the beginning of upcoming ones (coarticulation)

– Sweet vs. stop

• State space is increased, but accuracy improves

Page 17: CS 416 Artificial Intelligence

Words

You say [t ow m ey t ow]

• P (t ow m ey t ow | “tomato”)

I say [t ow m aa t ow]

Page 18: CS 416 Artificial Intelligence

Words - coarticulation

The first syllable changes based on dialect

There are four ways to say “tomato” and we would store P( [pronunciation] | “tomato”) for each

• Remember the diagram would have three stages per phone

Page 19: CS 416 Artificial Intelligence

Words - segmentation

“Hearing” words in sentences seems easy to us

• Waveforms are fuzzy

• There are no clear gaps to designate word boundaries

• One must work the probabilities to decide whether the current word is continuing with another syllable or whether it is more likely that another word is starting

Page 20: CS 416 Artificial Intelligence

Sentences

Bigram Model

• P ( wi | w1:i-1 ) has a lot of values to determine

• P ( wi | wi-1 ) is much more manageable

– We make a first-order Markov assumption about word sequences

– Easy to train this through text files

• Much more complicated models are possible that take syntax and semantics into account
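Training a bigram model from text is just counting word pairs; a sketch on a made-up toy corpus (a real system would read the counts from large text files):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each previous word.
pair_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    pair_counts[prev][cur] += 1

def bigram_prob(prev, cur):
    """Maximum-likelihood estimate of P(cur | prev)."""
    total = sum(pair_counts[prev].values())
    return pair_counts[prev][cur] / total if total else 0.0

print(bigram_prob("the", "cat"))   # 2 of the 3 words after "the" are "cat"
```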

Page 21: CS 416 Artificial Intelligence

Bringing it together

Each transformation is pretty inaccurate

• Lots of choices

• User “error” – stutters, bad grammar

• Subsequent steps can rule out choices from previous steps

– Disambiguation

Page 22: CS 416 Artificial Intelligence

Bringing it together

Continuous speech

• Words composed of p 3-state phones

• W words in vocabulary

• 3pW states in HMM

– 10 words, 4 phones each, 3 states per phone = 120 states

• Compute likelihood of all words in sequence

– Viterbi algorithm from 15.2
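The Viterbi algorithm referenced above can be sketched for a tiny 2-state HMM; the transition and emission numbers are made up for illustration.

```python
def viterbi(prior, T, E, obs):
    """Most likely state sequence. prior[i], T[i][j] = P(j | i),
    E[i][o] = P(o | state i), obs = list of observations."""
    S = len(prior)
    m = [[prior[i] * E[i][obs[0]] for i in range(S)]]   # best path probs
    back = []                                           # backpointers
    for o in obs[1:]:
        row, ptr = [], []
        for j in range(S):
            best_i = max(range(S), key=lambda i: m[-1][i] * T[i][j])
            row.append(m[-1][best_i] * T[best_i][j] * E[j][o])
            ptr.append(best_i)
        m.append(row)
        back.append(ptr)
    # Trace the best final state back through the stored pointers.
    state = max(range(S), key=lambda j: m[-1][j])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

prior = [0.5, 0.5]
T = [[0.9, 0.1], [0.1, 0.9]]              # states tend to persist
E = [{"a": 0.8, "b": 0.2}, {"a": 0.2, "b": 0.8}]
print(viterbi(prior, T, E, ["a", "a", "b", "b"]))   # [0, 0, 1, 1]
```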

Page 23: CS 416 Artificial Intelligence

A final note

Where do all the transition tables come from?

• Word probabilities from text analysis

• Pronunciation models have been manually constructed for many hours of speaking

– Some have multiple-state phones identified

• Because this annotation is so expensive to perform, can we annotate or label the waveforms automatically?

Page 24: CS 416 Artificial Intelligence

Expectation Maximization (EM)

Learn HMM transition and sensor models sans labeled data

• Initialize models with hand-labeled data

• Use these models to predict states at multiple times t

• Use these predictions as if they were “fact” and update the HMM transition table and sensor models

• Repeat
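A heavily simplified "hard EM" sketch of the loop above (an assumed toy, not full Baum-Welch): decode states greedily with the current model, re-count transitions as if the decoded states were ground truth, and repeat. The sensor model E is held fixed here.

```python
def decode(obs, T, E, start=0):
    """Greedy decoding: at each step pick the state maximizing
    transition probability times emission likelihood (a simplification;
    real systems would use Viterbi here)."""
    states, s = [start], start
    for o in obs[1:]:
        s = max(range(len(T)), key=lambda j: T[s][j] * E[j][o])
        states.append(s)
    return states

def reestimate(states, S, smooth=1.0):
    """Re-count transitions among decoded states, with Laplace smoothing."""
    counts = [[smooth] * S for _ in range(S)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

E = [{"a": 0.9, "b": 0.1}, {"a": 0.1, "b": 0.9}]  # fixed sensor model
T = [[0.5, 0.5], [0.5, 0.5]]                      # uninformative start
obs = list("aaabbbaaabbb")
for _ in range(5):                                # EM-style alternation
    states = decode(obs, T, E)
    T = reestimate(states, 2)
print(T)                       # learned self-transitions dominate
```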