
Discrete Math Project


Application of Viterbi's Algorithm

The Viterbi algorithm has its roots in graph theory; its application to graphs is made possible with the help of Hidden Markov Models (HMMs). The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states, called the Viterbi path.

The Algorithm

Originally proposed by Andrew Viterbi in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links, "Viterbi path" has since become a term for the application of this algorithm to maximization problems that involve probabilities.

The algorithm is defined as follows. Suppose we are given a Hidden Markov Model (HMM) with state space S, initial probabilities pi_k of being in state k, and transition probabilities a_{x,k} of transitioning from state x to state k. Say we observe outputs y_1, ..., y_T. The most likely state sequence x_1, ..., x_T that produces the observations is given by the recurrence relations

    V_{1,k} = P(y_1 | k) * pi_k
    V_{t,k} = P(y_t | k) * max_{x in S} ( a_{x,k} * V_{t-1,x} )

Here V_{t,k} is the probability of the most probable state sequence responsible for the first t observations that has k as its final state. The Viterbi path can be retrieved by saving back pointers that remember which state x was used in the second equation. Let Ptr(k, t) be the function that returns the value of x used to compute V_{t,k} if t > 1, or k if t = 1. Then

    x_T = argmax_{x in S} ( V_{T,x} )
    x_{t-1} = Ptr(x_t, t)

The standard definition of argmax is used. The complexity of this algorithm is O(T * |S|^2).

Viterbi Algorithm for Speech and Handwriting Recognition

The Viterbi Algorithm (VA) has a wide range of applications. One area where the VA can be applied is character and word recognition of printed and handwritten text. This has proved useful for many purposes, such as postcode and address recognition, document analysis, car license plate recognition, and even direct input into a computer using a pen. The idea of using the VA for Optical Character Recognition (OCR) was suggested by G. David Forney, Jr. in his IEEE paper "The Viterbi Algorithm". In the paper, Forney mentioned that the algorithm may be a useful adjunct to sophisticated character-recognition systems for resolving ambiguities when confidence levels for different characters are available. One of the main advantages of this model is that if the word produced by the VA is not in the system dictionary, the VA can produce other, less likely sequences of letters along with their metrics, so that a higher syntactic/semantic model can determine the intended word. A similar method applies to determining the sequence of letters of a printed word, as in OCR; in fact, the VA can be used to recognize the individual characters or letters that make up a word. This is dealt with in "Optical Chinese Character Recognition with a Hidden Markov Model Classifier - A Novel Approach" by Jeng, B.-S., Chang, M.W., Sun, S.W., Shih, C.H., and Wu, T.M. for Chinese character recognition, though a similar method could be used to recognize English letters.

Connected-Word (Continuous) Speech Recognition

In connected-word (continuous) speech recognition, utterance boundaries are unknown, the number of words spoken in the audio is unknown, and the exact positions of word boundaries are often unclear and difficult to determine. An exhaustive search over all possibilities is infeasible: with M words in the vocabulary and an utterance V words long, there are M^V possible word sequences.

It was noted by the authors of "Optical Chinese Character Recognition with a Hidden Markov Model Classifier - A Novel Approach" that the use of the VA and HMMs in character recognition has not been widely investigated, making this an interesting area for further research.

Another recognition problem is speech recognition, where, unlike in character and word recognition, the VA together with HMMs has been used widely. The VA has also been used with neural networks to recognize continuous speech.

Speech Recognition Architecture

Depth First Search (DFS)

The source code here breaks off in the middle of main's input loop; the version below restores the missing headers and completes the program in the spirit of the declared globals (everything after the node-label loop is a reconstruction).

#include <stdio.h>

char stack[20];      /* declared in the source; unused in this reconstruction */
int top = -1, n;     /* n: number of nodes */
char arr[20];        /* node labels */
char ajMat[20][20];  /* adjacency matrix (0/1 entries) */
char b[20];          /* visited flags */
int p = 0;           /* number of nodes visited so far */

void dfs(int v);

int main() {
    printf("Enter the number of nodes in the graph: ");
    scanf("%d", &n);
    printf("Enter the label of each node: ");
    for (int i = 0; i < n; i++)
        scanf(" %c", &arr[i]);
    /* The source text is truncated at this point. */
    printf("Enter the adjacency matrix (0/1 entries):\n");
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            int e;
            scanf("%d", &e);
            ajMat[i][j] = (char)e;
        }
    dfs(0); /* traverse starting from the first node */
    printf("\n");
    return 0;
}

/* Recursive depth-first search: mark v visited, print its label,
   then recurse on each unvisited neighbour. */
void dfs(int v) {
    b[v] = 1;
    printf("%c ", arr[v]);
    p++;
    for (int j = 0; j < n; j++)
        if (ajMat[v][j] && !b[j])
            dfs(j);
}