
Chapter 14 Speaker Recognition

• 14.1 Introduction to speaker recognition

• 14.2 The basic problems for speaker recognition

• 14.3 Approaches and systems

• 14.4 Language Identification


14.1 Introduction to speaker recognition (1)

• Speaker recognition extracts the speaker-specific (personal) factors of speech, whereas speech recognition extracts the factors common to all speakers.

• Speaker verification: decide whether the speaker is who he or she claims to be.

• Speaker identification: determine which speaker in a known list produced the utterance.


Introduction to speaker recognition (2)

• Text-dependent and text-independent speaker recognition

• Applications of speaker recognition: business systems, legal systems, military systems, security systems

• Hard problem: which features are effective and reliable?


14.2 The basic problems of speaker recognition (1)

• System diagram

• Training: collect utterances and estimate model parameters (reference templates) for every speaker.

• Verification: compare the extracted parameters with those of the claimed speaker. If the difference is below a threshold the speaker is accepted; otherwise the claim is rejected.


The basic problems of speaker recognition (2)

• Recognition (identification): compare the extracted parameters with the reference parameters of all speakers and choose the speaker with the minimum distance.

• Three basic problems: selecting the parameters; specifying a similarity measure that keeps the computation simple and reliable; updating the reference parameters to adapt to the users.
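• To make these two decision rules concrete, here is a minimal sketch in Python; the Euclidean distance, the threshold value, and the toy feature vectors are illustrative assumptions, standing in for whatever parameters and similarity measure a real system would use:

```python
import numpy as np

def verify(test_vec, claimed_ref, threshold):
    """Speaker verification: accept if the distance to the claimed
    speaker's reference parameters is below the threshold."""
    distance = np.linalg.norm(test_vec - claimed_ref)
    return distance < threshold

def identify(test_vec, references):
    """Speaker identification: choose the enrolled speaker whose
    reference parameters are closest to the test parameters."""
    distances = {name: np.linalg.norm(test_vec - ref)
                 for name, ref in references.items()}
    return min(distances, key=distances.get)

# Toy example with made-up 3-dimensional "feature" vectors.
refs = {"alice": np.array([1.0, 0.2, 0.5]),
        "bob":   np.array([0.1, 0.9, 0.4])}
test = np.array([0.9, 0.3, 0.5])

print(verify(test, refs["alice"], threshold=0.5))  # True  -> accepted
print(identify(test, refs))                        # 'alice'
```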


The basic problems of speaker recognition (3)

• Design trade-offs

• For a speaker verification system, two important measures are the False Rejection rate (FR) and the False Acceptance rate (FA). Both depend on the acceptance threshold, and their relative importance differs between applications.

• Performance vs number of speakers
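• The FR/FA trade-off can be illustrated by sweeping the acceptance threshold over a set of trial scores; the scores below are made-up distances, not measurements from any real system:

```python
import numpy as np

# Hypothetical distance scores: lower means "closer to the claimed speaker".
genuine_scores  = np.array([0.2, 0.3, 0.25, 0.4, 0.35])   # true claimants
impostor_scores = np.array([0.7, 0.5, 0.9, 0.45, 0.6])    # impostors

for threshold in np.linspace(0.2, 0.8, 7):
    # False rejection: a genuine speaker whose distance exceeds the threshold.
    fr = np.mean(genuine_scores >= threshold)
    # False acceptance: an impostor whose distance falls below the threshold.
    fa = np.mean(impostor_scores < threshold)
    print(f"threshold={threshold:.2f}  FR={fr:.2f}  FA={fa:.2f}")

# Raising the threshold lowers FR but raises FA, and vice versa;
# the operating point where the two curves cross is the equal error rate (EER).
```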


The basic problems of speaker recognition (4)

• Updating of Reference Templates

• Performance Evaluation

• Basic Characteristics of Speakers

• Ideally these features should effectively distinguish different speakers, remain relatively stable as the speech varies, be easy to extract, and be hard to mimic.


The basic problems of speaker recognition (5)

• Approaches to evaluating the effectiveness of a parameter

• F = <(μ_i − μ)²>_i / <(x_k^(i) − μ_i)²>_{k,i}

• where x_k^(i) is the parameter of the k-th utterance of the i-th speaker; <·>_i denotes averaging over speakers; <·>_k denotes averaging over the different utterances of one speaker; μ_i = <x_k^(i)>_k is the estimated mean of the i-th speaker; and μ = <μ_i>_i is the overall mean.

• For multi-dimensional parameters, the between-speaker covariance is B = <(μ_i − μ)(μ_i − μ)^T>_i

• and the within-speaker covariance is W = <(x_k^(i) − μ_i)(x_k^(i) − μ_i)^T>_{k,i}

• Divergence: D = <(μ_i − μ_j)^T W⁻¹ (μ_i − μ_j)>_{i,j} = Tr(W⁻¹B)
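• As a rough illustration, the sketch below computes the scalar F-ratio and the divergence Tr(W⁻¹B) from an array of per-utterance parameters; the array layout, the random toy data, and NumPy itself are assumptions made for the example:

```python
import numpy as np

def scatter_matrices(x):
    """x[i, k, :] = parameter vector of the k-th utterance of the i-th speaker."""
    mu_i = x.mean(axis=1)              # per-speaker means, shape (S, n)
    mu = mu_i.mean(axis=0)             # overall mean, shape (n,)
    d_b = mu_i - mu                    # between-speaker deviations
    B = np.einsum('id,ie->de', d_b, d_b) / len(mu_i)
    d_w = x - mu_i[:, None, :]         # within-speaker deviations
    W = np.einsum('ikd,ike->de', d_w, d_w) / (x.shape[0] * x.shape[1])
    return B, W

def f_ratio(x_1d):
    """Scalar F-ratio for a one-dimensional parameter, x_1d shape (S, K)."""
    mu_i = x_1d.mean(axis=1)
    between = np.mean((mu_i - mu_i.mean()) ** 2)
    within = np.mean((x_1d - mu_i[:, None]) ** 2)
    return between / within

rng = np.random.default_rng(0)
# Toy data: 5 speakers, 20 utterances each, 4-dimensional parameters,
# with a random per-speaker offset so speakers differ from each other.
x = rng.normal(size=(5, 20, 4)) + 2.0 * rng.normal(size=(5, 1, 4))
B, W = scatter_matrices(x)
print("divergence D =", np.trace(np.linalg.solve(W, B)))   # Tr(W^-1 B)
print("F-ratio of 1st dimension =", f_ratio(x[:, :, 0]))
```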


The basic problems of speaker recognition (6)

• Feature Examples

• (1) LPC and its derived parameters

• (2) Parameters derived from the speech spectrum

• (3) Mixture parameters

• Approaches to Speaker Recognition

• (1) Template Matching Method


The basic problems of speaker recognition (7)

• (2) Probability Model Method

• (3) Text-independent speaker recognition based on VQ (vector quantization); see the sketch after this list

• (4) Neural Network

• We have built a text-dependent speaker recognition system using a BP (back-propagation) neural network.
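• As a sketch of approach (3), the example below builds one VQ codebook per speaker with k-means (scikit-learn's KMeans is an assumed convenience, and the "MFCC-like" frames are random toy data) and identifies a test utterance by the lowest average quantization distortion:

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(frames, codebook_size=16):
    """Cluster a speaker's training frames into a small VQ codebook."""
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(frames)
    return km.cluster_centers_

def distortion(frames, codebook):
    """Average distance from each frame to its nearest codeword."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def identify(test_frames, codebooks):
    """Pick the speaker whose codebook quantizes the test frames best."""
    return min(codebooks, key=lambda spk: distortion(test_frames, codebooks[spk]))

# Toy data standing in for MFCC frames (rows = frames, columns = coefficients).
rng = np.random.default_rng(1)
train = {"spk_a": rng.normal(0.0, 1.0, size=(400, 12)),
         "spk_b": rng.normal(1.5, 1.0, size=(400, 12))}
codebooks = {spk: train_codebook(f) for spk, f in train.items()}
test = rng.normal(1.5, 1.0, size=(150, 12))        # frames "spoken" by spk_b
print(identify(test, codebooks))                    # expected: 'spk_b'
```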


14.3 Approaches and systems (1)

• GMM (Gaussian Mixture Model)

• It is a kind of probability model: every speaker is represented by one GMM.

• P(x|λ) = Σ_{i=1~M} P_i b_i(x)

• That is, P(x|λ) is a weighted sum of M normal densities b_i, where x is an n-dimensional observation vector, the P_i are the mixture weights, and the b_i are n-dimensional Gaussian densities.


Approaches and systems (2)

• b_i(x) = (1 / ((2π)^(n/2) |C_i|^(1/2))) · exp{ −(x − μ_i)^T C_i^(−1) (x − μ_i) / 2 }, where μ_i is the mean vector and C_i is the covariance matrix.

• λ = {P_i, μ_i, C_i}, i = 1~M

• MLE of the GMM parameters

• Assume X = {x_t}, t = 1~T, are the training feature vectors. The likelihood of model λ is

• P(X|λ) = Π_{t=1~T} P(x_t|λ)
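• A minimal sketch of evaluating P(x|λ) and the log-likelihood log P(X|λ); it assumes diagonal covariance matrices C_i (a common simplification not stated in the slides) and uses made-up parameters and observations:

```python
import numpy as np

def gmm_log_likelihood(X, weights, means, variances):
    """log P(X | lambda) = sum_t log sum_i P_i * b_i(x_t)
    for a GMM with diagonal covariances.
    X: (T, n) observations; weights: (M,); means, variances: (M, n)."""
    n = X.shape[1]
    diff = X[:, None, :] - means[None, :, :]                  # (T, M, n)
    # log b_i(x_t) for a diagonal-covariance Gaussian
    log_b = -0.5 * (np.sum(diff ** 2 / variances, axis=2)
                    + np.sum(np.log(variances), axis=1)
                    + n * np.log(2 * np.pi))                  # (T, M)
    weighted = log_b + np.log(weights)                        # log(P_i b_i(x_t))
    # log-sum-exp over the M components, then sum over the T frames
    m = weighted.max(axis=1, keepdims=True)
    log_p_xt = m[:, 0] + np.log(np.exp(weighted - m).sum(axis=1))
    return log_p_xt.sum()

# Toy 2-component model and made-up observations.
w   = np.array([0.6, 0.4])
mu  = np.array([[0.0, 0.0], [3.0, 3.0]])
var = np.ones((2, 2))
X = np.vstack([np.zeros((5, 2)), 3 * np.ones((5, 2))])
print(gmm_log_likelihood(X, w, mu, var))
```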


Approaches and systems (3)

• The goal of training is to find a λ₀ that maximizes P(X|λ):

• λ₀ = argmax_λ P(X|λ)

• P(X|λ) is a nonlinear function of λ, so the EM algorithm is used to find the optimal λ.

• Define Q(λ, λ′) = Σ_{i=1~M} P(X, i|λ) log P(X, i|λ′), where i is the index of the Gaussian components.

• Q(λ, λ′) = Σ_{i=1~M} Σ_{t=1~T} γ_t(i) log P′_i b′_i(x_t)


Approaches and systems (4)

• where γ_t(i) = P(x_t|λ) P(i_t = i | x_t, λ)

• P(i_t = i | x_t, λ) = P_i b_i(x_t) / Σ_{m=1~M} P_m b_m(x_t)

• Setting the partial derivatives of Q with respect to P_i, μ_i, C_i (i = 1~M) to zero gives the following re-estimation formulas:

• P′_i = (1/T) Σ_{t=1~T} P(i_t = i | x_t, λ)

• μ′_i = Σ_{t=1~T} P(i_t = i | x_t, λ) x_t / Σ_{t=1~T} P(i_t = i | x_t, λ), i = 1~M


Approaches and systems (5)

• And the (diagonal) variance update is σ′_i² = Σ_{t=1~T} P(i_t = i | x_t, λ) x_t² / Σ_{t=1~T} P(i_t = i | x_t, λ) − μ′_i², i = 1~M
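• The E-step and M-step above can be sketched as one re-estimation pass; diagonal covariances are again assumed, and the initialization and data are toy values:

```python
import numpy as np

def em_step(X, weights, means, variances):
    """One EM iteration for a diagonal-covariance GMM.
    X: (T, n); weights: (M,); means, variances: (M, n)."""
    T, n = X.shape
    diff = X[:, None, :] - means[None, :, :]                       # (T, M, n)
    log_b = -0.5 * (np.sum(diff ** 2 / variances, axis=2)
                    + np.sum(np.log(variances), axis=1)
                    + n * np.log(2 * np.pi))
    log_joint = log_b + np.log(weights)                            # log(P_i b_i(x_t))
    # E-step: responsibilities P(i_t = i | x_t, lambda)
    log_norm = np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
    resp = np.exp(log_joint - log_norm)                            # (T, M)
    # M-step: re-estimate weights, means and (diagonal) variances
    Ni = resp.sum(axis=0)                                          # (M,)
    new_weights = Ni / T
    new_means = resp.T @ X / Ni[:, None]
    new_vars = resp.T @ (X ** 2) / Ni[:, None] - new_means ** 2    # E[x^2] - mu'^2
    return new_weights, new_means, np.maximum(new_vars, 1e-6)      # floor variances

# Toy run: two clusters of points and a deliberately poor initialization.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
w, mu, var = np.array([0.5, 0.5]), np.array([[0.0, 0.0], [1.0, 1.0]]), np.ones((2, 2))
for _ in range(20):
    w, mu, var = em_step(X, w, mu, var)
print(np.round(mu, 2))   # the two means should move toward roughly (0,0) and (4,4)
```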

• Recognition Algorithm

• Once models have been trained for all speakers, we calculate the maximum a posteriori probability to find the speaker:

• Ŝ = argmax_k Σ_{t=1~T} log P(x_t | λ_k), where k ranges over the enrolled speakers
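• Given one trained GMM per enrolled speaker, identification is exactly this argmax. A compact sketch, assuming SciPy's multivariate normal for the component densities and single-component toy "GMMs" (all names and parameters are made up):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(X, weights, means, covs):
    """sum_t log sum_i P_i b_i(x_t), with b_i full-covariance Gaussians."""
    per_comp = np.stack([np.log(w) + multivariate_normal.logpdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)], axis=1)  # (T, M)
    m = per_comp.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(per_comp - m).sum(axis=1))).sum())

def identify(X, speaker_models):
    """S_hat = argmax_k sum_t log P(x_t | lambda_k)."""
    return max(speaker_models,
               key=lambda spk: gmm_log_likelihood(X, *speaker_models[spk]))

# Two toy single-component "GMMs" and a test utterance near speaker B's mean.
models = {
    "speaker_a": (np.array([1.0]), [np.zeros(2)], [np.eye(2)]),
    "speaker_b": (np.array([1.0]), [np.full(2, 3.0)], [np.eye(2)]),
}
X_test = np.full((10, 2), 3.1)
print(identify(X_test, models))   # expected: 'speaker_b'
```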


14.4 Language Identification (1)

• Principles

• Different levels of information can be recognized and utilized: phonemes, syllable structure, prosodic features, lexical categories, syntax, and semantic networks.

• Structure of a language identification system

• Different systems: HMM-based, phoneme-based.


Language Identification (2)

• Our experimental system

• OGI corpus

• The system currently contains four languages: English, Chinese, Spanish, Japanese.

• It is phoneme-based: every language has a set of HMM models for its phonemes. The models are connected so that every phoneme can be followed by any phoneme, and they are trained on the corpus using its label files.


Language Identification (3)

• The system structure is similar to the one described above: every language has a network of HMM models.

• The incoming utterance (of arbitrary length) is fed into every language network, and each network outputs a probability value for the utterance. By comparing these values the decision is made and the language is identified. In our experiments the accuracy exceeded 95%, because the only task is to decide


Language Identification (4)

• (continued) which language it is. So if two or three utterances are used for testing, a correct answer is obtained with very high probability.

• This approach is simpler than a large-vocabulary HMM-based word recognition system.
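• A minimal sketch of the decision stage described above, assuming the per-language HMM networks have already produced one log-probability score per test utterance (the numbers are placeholders, not experimental results); summing the scores of two or three utterances implements the multi-utterance decision mentioned above:

```python
import numpy as np

# Hypothetical per-utterance log-probability scores from each language's
# phoneme-HMM network (stand-ins for the real network outputs).
scores = {
    "English":  [-512.3, -498.7, -505.1],
    "Chinese":  [-490.2, -486.9, -479.4],
    "Spanish":  [-530.8, -515.2, -522.6],
    "Japanese": [-508.9, -501.3, -497.0],
}

def identify_language(scores_per_language):
    """Sum the log scores over the test utterances and pick the best language;
    using two or three utterances instead of one makes the decision more reliable."""
    totals = {lang: float(np.sum(s)) for lang, s in scores_per_language.items()}
    return max(totals, key=totals.get)

print(identify_language(scores))   # 'Chinese' for these made-up numbers
```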