A SURVEY: SPEECH EMOTION IDENTIFICATION Sejal Patel1, Salman Bombaywala2
M.E. Student, Department Of EC, SNPIT & RC, Umrakh, Gujarat, India1
Assistant Professor, Department Of EC, SNPIT & RC, Umrakh, Gujarat, India2
Abstract: In recent years a great deal of research has been done to automatically recognize
emotions from human speech. Emotion is a symbol to express one's feelings, and identifying
the emotion of the user is important for understanding the feeling behind an utterance.
There are three main goals in speech emotion detection. The first is to select appropriate
features, such as energy, pitch, LPC, LPCC, voice quality, MFCC, and delta MFCC, for
emotional speech recognition. The second is to provide a database that covers different
languages, numbers of speakers, and numbers of emotions. The third is to classify speech
into emotional states, utilizing Hidden Markov Models (HMM), Artificial Neural Networks
(ANN), Support Vector Machines (SVM), correlation, and Dynamic Time Warping (DTW).
This paper surveys different techniques developed with the above goals in mind. The
emotions considered in this study are anger, fear, happy, neutral, and sad. Speech emotion
recognition is useful for applications which require man-machine interaction, such as web
movies and computer tutorials, where the response of the system to the user depends on the
detected emotion. It is also useful for in-car board systems, where information about the
mental state of the driver may be provided to the system to ensure his/her safety.
Key Words: ANN, correlation, DTW, End point, MFCC, Start point.
I. INTRODUCTION
Speech is the communication or expression of thoughts in spoken words. Humans also
express their emotions via written and spoken language. Emotion is a symbol to express
one's feelings. We are still far from having a natural interaction between man and machine
because the machine does not understand the emotional state of the speaker. There are
several methods of feature extraction, such as MFCC, energy, pitch, LPC (Linear Predictive
Coding), and voice quality, and several classification techniques, such as ANN, HMM,
SVM, correlation, and DTW, which can be effectively utilized for the analysis of a voice
signal. The acoustic variability introduced by the existence of different sentences, speakers,
speaking styles, and speaking rates adds another obstacle, because these properties directly
affect most of the commonly extracted speech features such as frequency, pitch, and energy.
First, the human voice is converted into digital form, producing digital data representing the
level of the signal at every discrete time step. The digitized speech samples are then
processed using features such as the start point and end point to produce voice features.
These voice features are then used to select the pattern that matches the database, finding
the correlation between each reference database entry and the test input file in order to
minimize the resulting error between them.
Speech emotion recognition is useful for in-car board systems, which obtain information
about the mental state of the driver and provide it to the system to ensure his/her safety. It is
also useful for natural man-machine interaction, such as computer tutorial applications
where the response of the system to the user depends on the detected emotion. It may also
be useful in call center and mobile communication applications [8], as well as in storytelling,
interactive movies, checking a person's behavior, e-tutoring, and call analysis in emergency
services such as ambulance and fire brigade.
II. BLOCK DIAGRAM
The basic block diagram of speech emotion detection is shown in Figure 1, which depicts
the overall process of a speech emotion recognition system [4]:
Figure 1 Architecture of speech emotion detection [4]
A. Input Speech: Speech contains information about the textual message, speaker identity,
and intended emotion. Speech is a complex signal produced by a time-varying vocal tract
system excited by a time-varying excitation source. It is the fastest and most efficient
method of interaction between humans, being the communication or expression of thoughts
in spoken words.
B. Preprocessing: Pre-processing involves sampling, framing, windowing, and start-point/
end-point detection. These operations adapt the input signals to the speaker recognition
system. It can be seen as a data compression step, in which the useful information is
selected from the available signal data. An energy threshold is used to remove low-energy
frames, which usually correspond to the silence periods of the speech signal; silence frames
would otherwise influence the feature values in the feature extraction process. Results
obtained with different energy thresholds can then be compared.
1. Energy of a signal
The energy of a set of samples is the sum of the squares of the samples. It provides
information about the time-dependent properties of the speech signal and is very useful for
selecting the threshold for start-point and end-point detection. To calculate the short-time
energy (STE), the speech signal is sampled using a rectangular window function of width ω
samples. Within each window, the energy is computed as follows [6]:
e = Σᵢ xᵢ², i = 1, …, ω  (1)
where e = energy of a particular window and xᵢ = the i-th sample.
2. Start and End Point Detection
Start and end point detection is beneficial because it removes background noise and yields a
speech signal better than the original one. The start point of a voice signal gives the exact
starting location of the voice sample based on the STE values, so that all preceding
unwanted samples are removed and a new voice signal is created [6]. The same process is
also applied to detect the end point of a speech signal [9].
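Continuing the NumPy sketch above, start and end points can be located by thresholding the STE values; the decision rule below is an illustrative assumption, and the threshold itself would be tuned experimentally:

```python
def detect_endpoints(energies, threshold):
    """Return (start, end) window indices of the voiced region: the first
    and last windows whose short-time energy exceeds the threshold.
    Windows outside this range are treated as silence and discarded."""
    active = np.where(energies > threshold)[0]
    if active.size == 0:
        return None  # no window exceeded the threshold
    return active[0], active[-1]
```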
C. Feature Extraction Technique: Feature extraction converts the speech signal into a
sequence of feature vectors. The important part of feature extraction is to extract
characteristics from the signal that are unique to each individual. Different feature
extraction techniques, such as pitch, energy, vocal tract cross-section area, speech rate,
formants, Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients
(LPC), and Linear Prediction Cepstral Coefficients (LPCC), are used for emotion-based
speaker recognition systems.
1. MFCC
MFCC is the most popular feature extraction technique for speech recognition.
Figure 2 Block diagram of MFCC [3]
Step 1: Pre-Emphasis: Pre-emphasis is a technique used in speech processing to enhance
the high frequencies of the signal. It is done using an FIR high-pass filter [4], for two
reasons (a sketch follows the list below):
1. The speech signal generally contains more speaker-specific information in the high
frequencies than in the low frequencies.
2. Pre-emphasis removes some of the glottal effects from the vocal tract parameters.
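A sketch of the filter, assuming the common first-order form y[n] = x[n] − α·x[n−1]; the coefficient α = 0.97 is a conventional value, not one specified in this survey:

```python
def pre_emphasize(signal, alpha=0.97):
    """First-order FIR high-pass filter: y[n] = x[n] - alpha * x[n-1].
    Boosts the high frequencies that carry speaker-specific information."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])
```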
Step 2: Framing: The continuous voice signal is divided into frames of N samples, with
adjacent frames separated by M samples (M < N) [3]. Generally the frame size is a power of
two in order to facilitate the use of the FFT. Successive frames thus overlap each other by
N − M samples; overlapping maintains continuity between frames and ensures high
correlation between the coefficients of successive frames [6].
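A minimal framing sketch, continuing the NumPy examples; the frame size N = 256 (a power of two, to ease the FFT) and the frame shift M = 100 are assumed placeholder values:

```python
def frame_signal(signal, frame_size=256, hop=100):
    """Split the signal into overlapping frames of `frame_size` (N) samples;
    consecutive frames start `hop` (M) samples apart, overlapping by N - M.
    Assumes the signal is at least one frame long."""
    n_frames = 1 + (len(signal) - frame_size) // hop
    return np.stack([signal[i * hop:i * hop + frame_size]
                     for i in range(n_frames)])
```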
Step 3: Windowing: Each frame is multiplied by a window function, chosen by considering
the next block in the feature extraction processing chain, so as to integrate all the closest
frequency lines [6]. Windowing each individual frame minimizes the signal discontinuities
at the beginning and end of the frame; spectral distortion is minimized by using the window
to taper the signal to zero at the frame boundaries. If the window is defined as
W(n), 0 ≤ n ≤ N − 1  (2)
where N = number of samples in each frame and W(n) is the Hamming window, then the
result of windowing the signal is
Y[n] = X[n] × W[n], 0 ≤ n ≤ N − 1  (3)
where
W(n) = 0.54 − 0.46 cos(2πn / (N − 1)), 0 ≤ n ≤ N − 1  (4)
     = 0, otherwise
and Y[n] = output signal, X[n] = input signal, N = number of samples per frame.
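The windowing step then reduces to building the Hamming window of Eq. (4) and multiplying it into each frame; this sketch builds the window explicitly (NumPy's built-in np.hamming would serve equally well):

```python
def apply_hamming(frames):
    """Multiply each frame by the Hamming window of Eq. (4), tapering the
    samples toward zero at both frame edges to reduce discontinuities."""
    n = frames.shape[1]  # N, samples per frame
    window = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(n) / (n - 1))
    return frames * window  # broadcasts across all frames
```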
Step 4: Fast Fourier Transform (FFT): The FFT converts each frame of N samples from
the time domain into the frequency domain. It reduces the computation time required to
compute a DFT and improves performance. The Fourier transform converts the convolution
of the glottal pulse x(t) and the vocal tract impulse response h(t) in the time domain into a
multiplication in the frequency domain, as the equation below shows:
Y(w) = FFT[h(t) ∗ x(t)] = H(w) · X(w)  (5)
where H(w), X(w), and Y(w) are the Fourier transforms of h(t), x(t), and y(t) = h(t) ∗ x(t),
respectively [6].
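A sketch of this step: each windowed frame is transformed and its power taken, keeping only the non-redundant half of the spectrum of a real signal; the FFT length of 512 is an assumed choice:

```python
def power_spectrum(frames, n_fft=512):
    """Magnitude-squared FFT of each frame (rows), keeping the first
    n_fft // 2 + 1 bins, since a real signal's spectrum is symmetric."""
    spectrum = np.fft.rfft(frames, n=n_fft)  # FFT along each row
    return (np.abs(spectrum) ** 2) / n_fft
```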
Step 5: Mel Filter Bank: The frequency range of the FFT spectrum is very wide, and the
voice signal does not follow a linear scale. Thus for each tone with an actual frequency f,
measured in Hz, a subjective pitch is measured on a scale called the 'mel' scale [3]. A set of
triangular filters is used to compute a weighted sum of filter spectral components so that the
output of the process approximates a mel scale. The following approximate formula
computes the mel value for a given frequency f in Hz [3]:
mel(f) = 2595 · log₁₀(1 + f / 700)  (6)
Figure 3 Mel scale filter bank [3][6]
The figure above depicts the set of triangular filters used to compute the weighted sum of
filter spectral components. Each filter's magnitude frequency response is triangular in
shape, equal to unity at its center frequency and declining linearly to zero at the center
frequencies of the two adjacent filters. Each filter output is then the sum of its filtered
spectral components.
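The filter bank of Figure 3 can be built directly from Eq. (6) and its inverse. The sketch below assumes a 16 kHz sampling rate, 26 filters, and a 512-point FFT, all illustrative choices:

```python
def hz_to_mel(f):
    """Eq. (6): mel(f) = 2595 * log10(1 + f / 700)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of Eq. (6)."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank(n_filters=26, n_fft=512, sample_rate=16000):
    """Triangular filters spaced uniformly on the mel scale: each rises
    from zero at the previous center to unity at its own center, then
    falls to zero at the next center, as in Figure 3."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                             n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):            # rising slope
            fbank[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):           # falling slope
            fbank[i - 1, k] = (right - k) / (right - center)
    return fbank
```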
Step 6: Log and Discrete Cosine Transform (DCT): The output of the mel filter bank
consists of spectral components, which are passed to the log block. The log mel spectrum is
then converted back into the time domain using the Discrete Cosine Transform (DCT). The
result of this reconstruction is the set of Mel-Frequency Cepstral Coefficients, and the
collection of coefficients is called an acoustic vector. Each input utterance is thus
transformed into a sequence of acoustic vectors [6].
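This step then reduces to a log followed by a DCT over the filter-bank outputs; the sketch below uses SciPy's DCT and keeps the first 13 coefficients, a conventional but assumed choice:

```python
from scipy.fftpack import dct

def mfcc_from_power(power_frames, fbank, n_coeffs=13):
    """Log mel energies followed by a DCT: each row of the result is one
    acoustic vector, so an utterance becomes a sequence of such vectors."""
    mel_energies = power_frames @ fbank.T     # weighted sums per filter
    log_mel = np.log(mel_energies + 1e-10)    # small offset avoids log(0)
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```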
2. LPC:
LPC coefficients closely approximate the current speech sample as a linear combination of
past samples, but they cannot totally represent it: the human speech production system is
not linear while the LPC model is, so the LPC coefficients only capture the linear property
and lose the nonlinear part. LPCC coefficients were therefore proposed to represent the
nonlinear property.
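For illustration, the LPC coefficients of a frame can be estimated by the standard autocorrelation method, solving the Toeplitz normal equations; this is a textbook formulation, not necessarily the exact procedure of the surveyed papers:

```python
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, order=12):
    """Autocorrelation-method LPC: solve R a = r so that the current sample
    is predicted as a linear combination of the `order` previous samples.
    Assumes a non-silent frame (nonzero autocorrelation)."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    return solve_toeplitz(r[:order], r[1:order + 1])
```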
D. Classifier: Different types of classifiers are available, such as polynomial classifiers,
Hidden Markov Models (HMM), Gaussian Mixture Models (GMM), correlation, Artificial
Neural Networks (ANN), Support Vector Machines (SVM), and Dynamic Time Warping
(DTW). Here, MFCC is used as the basic feature extraction technique with an Artificial
Neural Network (ANN) classifier.
1. Correlation:
The most familiar measure of dependence between two quantities is the Pearson product-
moment correlation coefficient, or "Pearson's correlation coefficient", commonly called
simply "the correlation coefficient". It is obtained by dividing the covariance of the two
variables by the product of their standard deviations. Karl Pearson developed the
coefficient. The Pearson correlation coefficient is symmetric:
corr(X, Y) = corr(Y, X)  (7)
The Pearson correlation is +1 in the case of a perfect direct (increasing) linear relationship
(correlation), −1 in the case of a perfect decreasing (inverse) linear relationship (anti-
correlation), and some value between −1 and +1 in all other cases, indicating the degree of
linear dependence between the variables. As it approaches zero there is less of a
relationship; the closer the coefficient is to either −1 or +1, the stronger the correlation
between the variables. [12]
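A direct transcription of this definition, with hypothetical inputs x and y (for instance a test feature vector and a reference template from the database):

```python
def pearson_corr(x, y):
    """Pearson's r, Eq. (7): covariance divided by the product of the
    standard deviations; symmetric in its two arguments."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())
```

In the matching scheme described earlier, the test utterance's features would be correlated against each reference template and the emotion of the highest-scoring template selected.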
2. ANN:
An Artificial Neural Network (ANN) is a system which operates in a manner very similar to
the human brain and solves problems by self-learning [1]. A neural network refers to a
network of biological neurons that process and transmit information. An ANN is configured
for a specific application, such as pattern recognition or data classification, through a
learning process; learning in biological systems involves adjustments to the synaptic
connections that exist between the neurons. ANNs possess excellent discriminative power
and learning capabilities and represent implicit knowledge, which is why they are used in
emotion recognition. Artificial neural networks are the result of academic investigation that
uses mathematical formulations to model nervous system operations, and the resulting
techniques are being successfully applied in a variety of everyday business applications.
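As a hedged sketch of such a classifier, the snippet below trains a small multilayer perceptron with scikit-learn on stand-in random data; in practice the MFCC-based acoustic vectors described above, summarized to a fixed length per utterance and paired with emotion labels, would replace the synthetic features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in data: 100 utterances, each summarized by 13 averaged MFCCs,
# labeled with one of the five emotions considered in this survey.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 13))
labels = rng.choice(["anger", "fear", "happy", "neutral", "sad"], size=100)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(features, labels)         # learning adjusts the connection weights
print(clf.predict(features[:3]))  # predicted emotions for three utterances
```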
3. Dynamic Time Warping (DTW):
The DTW algorithm is based on measuring the similarity between two time series which
may vary in time or speed. The similarity is evaluated in terms of the alignment between the
two time series when one of them is "warped" nonlinearly by stretching or shrinking it
along its time axis [6][9]. This warping can then be used to find analogous regions between
the two time series or to determine their similarity. Mathematically, DTW contrasts two
dynamic patterns and evaluates their similarity by calculating a minimum distance between
them. To realize this, consider two time series Q and C, of length n and m respectively, where
Q = q1, q2, ..., qi, ..., qn
C = c1, c2, ..., cj, ..., cm
To align the two sequences using DTW, an n-by-m matrix is established, where the (i, j)-th
element of the matrix contains the distance d(qi, cj) between the two points qi and cj. The
distance between the values of the two sequences is calculated using the (squared)
Euclidean distance [4]:
d(qi, cj) = (qi − cj)²  (9)
Each matrix element (i, j) corresponds to the alignment between the points qi and cj. The
accumulated distance is then obtained by:
D(i, j) = min(D(i − 1, j − 1), D(i − 1, j), D(i, j − 1)) + d(i, j)
This is computed as follows [6]:
1. Start with the calculation of g(1, 1) = d(1, 1), then calculate the first row:
g(i, 1) = g(i − 1, 1) + d(i, 1).
2. Calculate the first column: g(1, j) = g(1, j − 1) + d(1, j).
3. Move to the second row: g(i, 2) = min(g(i, 1), g(i − 1, 1), g(i − 1, 2)) + d(i, 2). Bookkeep,
for each cell, the index of the neighboring cell which contributes the minimum score.
4. Carry on from left to right and from bottom to top for the rest of the grid:
g(i, j) = min(g(i, j − 1), g(i − 1, j − 1), g(i − 1, j)) + d(i, j).
5. Trace back the best path through the grid, starting from g(n, m) and moving towards g(1, 1).
The path which gives the minimum distance after testing against the feature vectors stored
in the database identifies the speaker's emotion.
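Putting the recursion together, a minimal sketch of the accumulated-distance computation (illustrative: real systems compare sequences of feature vectors rather than scalars, with a vector norm for d):

```python
def dtw_distance(q, c):
    """Accumulated DTW distance between 1-D sequences q and c, using the
    grid recursion above with local distance d(i, j) = (q[i] - c[j])**2."""
    n, m = len(q), len(c)
    g = np.empty((n, m))
    g[0, 0] = (q[0] - c[0]) ** 2
    for i in range(1, n):                     # first column of the grid
        g[i, 0] = g[i - 1, 0] + (q[i] - c[0]) ** 2
    for j in range(1, m):                     # first row of the grid
        g[0, j] = g[0, j - 1] + (q[0] - c[j]) ** 2
    for i in range(1, n):                     # rest of the grid
        for j in range(1, m):
            g[i, j] = min(g[i - 1, j - 1], g[i - 1, j], g[i, j - 1]) \
                      + (q[i] - c[j]) ** 2
    return g[n - 1, m - 1]
```

The reference whose template yields the smallest accumulated distance identifies the emotion, as described above.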
COMPARISON TABLE OF SURVEYED PAPERS

Sr. No. | Features                          | Classifier         | Remarks
1       | MFCC, DWT, LPC, f0, voice energy  | ANN                | ANN is better but it is language dependent
2       | MFCC, formants                    | BP neural network  | Low accuracy
3       | MFCC                              | -                  | MFCC gives better results
4       | MFCC, pitch, energy               | GMM, HMM           | GMM gives better performance for anger and surprise; HMM for disgust, fear and neutral
5       | MFCC, STE, ZCR                    | Correlation, DTW   | DTW and correlation, finding the best match, prove to be an effective method for speech recognition
III. CONCLUDING REMARKS
Speech emotion recognition systems based on several classifiers have been illustrated. The
important components of a speech emotion recognition system are the signal processing
unit, in which appropriate features are extracted from the available speech signal, and the
classifier, which recognizes emotions from the speech signal. Different types of features and
classification techniques have been studied. From the analysis it can be said that MFCC is
the best feature extraction technique for emotion recognition. There are many applications,
such as checking a person's behavior, e-tutoring, call analysis in emergency services like
ambulance and fire brigade, emotion recognition in call centers, storytelling, in-car board
systems, computer games, and robots.
REFERENCES
[01] Firoz Shah. A, Raji Sukumar. A, Babu Anto. P, "Automatic Emotion Recognition from
Speech Using Artificial Neural Networks with Gender-Dependent Databases",
International Conference on Advances in Computing, Control, and Telecommunication
Technologies, IEEE, 2009
[02] Ying Shi and Weihua Song, "Speech Emotion Recognition Based on Data Mining
Technology", International Conference on Natural Computation (ICNC 2010), IEEE,
2010
[03] Anurag Jain, Nupur Prakash, “Evaluation of MFCC for Emotion Identification in Hindi
Speech”, IEEE, 2011
[04] Tsang-Long Pao, Chun-Hsiang Wang, and Yu-Ji Li, “A Study on the Search of the Most
Discriminative Speech Features in the Speaker Dependent Speech Emotion
Recognition”, International Symposium on Parallel Architectures, Algorithms and
Programming, IEEE, 2012
[05] Manav Bhaykar, Jainath Yadav, and K. Sreenivasa Rao, “Speaker Dependent, Speaker
Independent and Cross Language Emotion Recognition from Speech Using GMM and
HMM”, IEEE, 2013
[06] Nidhi Desai, Kinnal Dhameliya and Vijayendra Desai, “Recognizing voice commands
for robot using MFCC and DTW”, International Journal of Advanced Research in
Computer and Communication Engineering, May 2014
[07] Vinay, Shilpi Gupta, Anu Mehra, "Gender Specific Emotion Recognition Through
Speech Signals", International Conference on Signal Processing and Integrated Networks
(SPIN), 2014
[08] Tatjana Liogienė, Gintautas Tamulevičius, "Minimal Cross-correlation Criterion for
Speech Emotion Multi-level Feature Selection", IEEE, 2015
[09] S. Lalitha, Anoop Mudupu, Bala Visali Nandyala, Renuka Munagala, "Speech Emotion
Recognition using DWT", IEEE, 2015
[10] Ritu D. Shah, Dr. Anil C. Suthar, "Speech Emotion Recognition Based on SVM Using
MATLAB", International Journal of Innovative Research in Computer and
Communication Engineering, 2016
[11] Lawrence R. Rabiner and Ronald W. Schafer, Digital Processing of Speech Signals; 9th
Edn.; Dorling Kindersley Pvt. Ltd., licensees of Pearson Education in South Asia, 2012
[12] https://statistics.laerd.com/statistical-guides/pearson-correlation-coefficient-statistical-guide.php (accessed on: 24 March 2017)