
Page 1: HUMAN AND SYSTEMS ENGINEERING:

Joseph Picone, PhD
Human and Systems Engineering

Professor, Electrical and Computer Engineering

URL: http://www.isip.msstate.edu/publications/seminars/external/2004/dod/

Bridging the Gap in Human and Machine Performance

HUMAN AND SYSTEMS ENGINEERING:

Page 2: HUMAN AND SYSTEMS ENGINEERING:

Evolution For Better Infrastructure

• 10 years at MS State

• Public Domain Speech Recognition

• Jumpstarted in 1997 by a DoD grant

• Center for Advanced Vehicular Systems

• State funded to support Nissan

• Three Complementary Thrusts

• Extension center colocated with Nissan in Canton, Mississippi

• Statewide economic development

• Assist first-tier suppliers

Introduction to Human and Systems Engineering Page 2 of 7

Page 3: HUMAN AND SYSTEMS ENGINEERING:

A Virtual Tour of CAVS at Mississippi State University

Introduction to Human and Systems Engineering Page 3 of 7

Page 4: HUMAN AND SYSTEMS ENGINEERING:

Intelligent Electronic Systems At A Glance

Computer Networking:
• Wireless Communications
• Intelligent Sensors
• Collaborative Vehicles

Intelligent Systems:
• Speech Processing
• Machine Learning
• Dialog Systems
• Human Factors and Ergonomics

Integrative Activities:
• Challenge X
• Capstone Design Experiences

Introduction to Human and Systems Engineering Page 4 of 7

Page 5: HUMAN AND SYSTEMS ENGINEERING:

Phase I Testbed: Campus Bus Networking

• Instrument the campus bus system to collect real-time data

• Modular architecture to support a variety of sensors and high speed data communications

Introduction to Human and Systems Engineering Page 5 of 7

Page 6: HUMAN AND SYSTEMS ENGINEERING:

Dialog Systems Applications in Automotive

• Noise robustness in both the in-vehicle and manufacturing environments to improve recognition performance

• Advanced statistical models and machine learning technology

• In-vehicle dialog systems improve information access.

• Advanced user interfaces enhance workforce training and increase manufacturing efficiency.

Introduction to Human and Systems Engineering Page 6 of 7

Page 7: HUMAN AND SYSTEMS ENGINEERING:

Speaker Verification Via Metadata Extraction

• Recognition of emotion, stress, fatigue, and other voice qualities is possible from enhanced descriptions of the speech signal

• Fundamentally the same statistical modeling problem as other speech applications

• Fatigue analysis from voice is under development under an SBIR (from Shriberg, et al., IEEE Spectrum, April 2003)

Introduction to Human and Systems Engineering Page 7 of 7

Page 8: HUMAN AND SYSTEMS ENGINEERING:

The Challenge X Program

• Competition created by automotive industry, government, and academic partners

• Challenges university-level engineering students to decrease total energy consumption and emissions

• Maintain or exceed vehicle utility and performance

• Cooperative venture between industry and universities

• Faculty Advisor: G. Marshall Molen

Introduction to Human and Systems Engineering Page 8 of 7

Page 9: HUMAN AND SYSTEMS ENGINEERING:

APPLICATIONS OF RISK MINIMIZATION TO SPEECH RECOGNITION

Jon Hamaker, Aravind Ganapathiraju and Joseph Picone
Intelligent Electronic Systems

Human and Systems Engineering

URL: http://www.isip.msstate.edu/publications/seminars/external/2004/dod/

Page 10: HUMAN AND SYSTEMS ENGINEERING:

ABSTRACT: Statistical techniques based on Hidden Markov models (HMMs) with Gaussian emission densities have dominated the signal processing and pattern recognition literature for the past 20 years. However, HMMs suffer from an inability to learn discriminative information and are prone to overfitting and over‑parameterization. In this presentation, we will review our attempts to apply notions of risk minimization to pattern recognition problems such as speech recognition. New approaches based on probabilistic Bayesian learning are shown to provide an order of magnitude reduction in complexity over comparable approaches based on HMMs and Support Vector Machines.

BIOGRAPHY: Joseph Picone is currently a Professor in the Department of Electrical and Computer Engineering at Mississippi State University and an Academic Thrust Leader at the Center for Advanced Vehicular Systems. For the past 15 years he has been promoting open source speech technology. He has previously been employed by Texas Instruments and AT&T Bell Laboratories. Dr. Picone received his Ph.D. in Electrical Engineering from Illinois Institute of Technology in 1983. He is a Senior Member of the IEEE and a registered Professional Engineer.

Abstract and Biography

Applications of Risk Minimization Page 1 of 10

Page 11: HUMAN AND SYSTEMS ENGINEERING:

Generalization and Risk

• Optimal decision surface is obviously a line

• Introduce two more data points

• How much can we trust isolated data points?

• Can we integrate prior knowledge about data, confidence, or willingness to take risk?

• Depending on the answer, the optimal decision surface is either still a line (good generalization) or changes abruptly (see the sketch below)

Applications of Risk Minimization Page 2 of 10
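The trade-off described on this slide can be reproduced with a soft-margin SVM, where the regularization parameter C encodes how much we are willing to trust isolated points. A minimal sketch follows; scikit-learn and the toy coordinates are illustrative assumptions, not part of the original slide.

```python
# Sketch: how a couple of added outliers move a max-margin boundary, and how
# the regularization parameter C (our "willingness to take risk") controls it.
import numpy as np
from sklearn.svm import SVC

# Two well-separated classes; the optimal boundary is "obviously a line".
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
y = np.array([0, 0, 1, 1])

# Introduce two more data points deep inside the other class.
X2 = np.vstack([X, [[2.7, 0.2], [2.7, 0.8]]])
y2 = np.append(y, [0, 0])

for C in (1e6, 0.1):  # near-hard margin vs. soft margin
    clf = SVC(kernel="linear", C=C).fit(X2, y2)
    w, b = clf.coef_[0], clf.intercept_[0]
    print(f"C={C:g}: decision boundary near x = {-b / w[0]:.2f}")
# With large C the boundary chases the outliers (abrupt change); with small C
# it stays close to the original line (good generalization).
```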

Page 12: HUMAN AND SYSTEMS ENGINEERING:

• Deterding Vowel Data: 11 vowels spoken in “h*d” context; 10 log area parameters; 528 train, 462 SI test

Approach                    % Error    # Parameters
SVM: Polynomial Kernels       49%
K-Nearest Neighbor            44%
Gaussian Node Network         44%
SVM: RBF Kernels              35%      83 SVs
Separable Mixture Models      30%
RVM: RBF Kernels              30%      13 RVs

Static Pattern Classification With SVMs

Applications of Risk Minimization Page 3 of 10
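For concreteness, the "SVM: RBF Kernels" row of the table would be produced along these lines. The sketch below assumes scikit-learn and substitutes random arrays for the Deterding log-area features (528 train / 462 test vectors, 10 dimensions, 11 vowel classes); the kernel hyperparameters are illustrative, not the ones used in the study.

```python
# Sketch of an RBF-kernel SVM on 10-dimensional vowel features (11 classes).
# Random placeholders stand in for the Deterding log-area parameters.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((528, 10)), rng.integers(0, 11, 528)
X_test, y_test = rng.standard_normal((462, 10)), rng.integers(0, 11, 462)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
error = 1.0 - clf.score(X_test, y_test)
n_sv = clf.support_vectors_.shape[0]  # the "# Parameters" column counts these
print(f"error = {error:.1%}, support vectors = {n_sv}")
```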

Page 13: HUMAN AND SYSTEMS ENGINEERING:

Applications of SVMs to Conversational Speech

Information Source                   HMM               Hybrid
Transcription    Segmentation       AD     SWB        AD     SWB
N-Best           Hypothesis        11.9    41.6      11.0    40.6
N-Best           N-Best            12.0    42.3      11.8    42.1
N-Best + Ref.    Reference          6.6     —         3.3     5.8
N-Best + Ref.    N-Best + Ref.     11.9    38.6       9.1    38.1

Notes:

• SVMs not exposed to alternative segmentations during training (closed-loop)

• SVM performance is high when there is no mismatch between the training and evaluation conditions

• Complexity (parameter count) approaches HMMs

Applications of Risk Minimization Page 4 of 10

Page 14: HUMAN AND SYSTEMS ENGINEERING:

• A kernel-based learning machine

• Incorporates an automatic relevance determination (ARD) prior over each weight (MacKay)

• A flat (non-informative) prior over the hyperparameters α completes the Bayesian specification

$$y(x; w) = w_0 + \sum_{i=1}^{N} w_i\, K(x, x_i)$$

$$P(t = 1 \mid x; w) = \frac{1}{1 + e^{-y(x; w)}}$$

$$p(w \mid \alpha) = \prod_{i=0}^{N} \mathcal{N}\!\left(w_i \mid 0, \alpha_i^{-1}\right)$$

Relevance Vector Machines

Applications of Risk Minimization Page 5 of 10
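A NumPy rendering of the three equations above may help make the model concrete. The RBF kernel choice, the toy data, and the binary-classification framing are assumptions for illustration; only the functional forms come from the slide.

```python
# Sketch of the RVM building blocks: the kernel expansion y(x; w), the
# sigmoid link P(t=1|x; w), and the ARD Gaussian prior p(w | alpha).
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, gamma=1.0):
    # K(a_j, b_i) = exp(-gamma * ||a_j - b_i||^2)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def y(x, w, X_train):
    # y(x; w) = w_0 + sum_{i=1}^{N} w_i K(x, x_i)
    return w[0] + rbf_kernel(x, X_train) @ w[1:]

def p_t_given_x(x, w, X_train):
    # P(t = 1 | x; w) = 1 / (1 + exp(-y(x; w)))
    return 1.0 / (1.0 + np.exp(-y(x, w, X_train)))

def log_prior(w, alpha):
    # log p(w | alpha) = sum_i log N(w_i | 0, alpha_i^{-1})   (ARD prior)
    return 0.5 * np.sum(np.log(alpha) - np.log(2 * np.pi) - alpha * w ** 2)

X_train = rng.standard_normal((5, 2))   # toy "relevance vector" candidates
w = rng.standard_normal(6)              # w_0 plus one weight per training point
alpha = np.ones(6)                      # one ARD hyperparameter per weight
print(p_t_given_x(np.zeros((1, 2)), w, X_train), log_prior(w, alpha))
```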

Page 15: HUMAN AND SYSTEMS ENGINEERING:

• The goal in training becomes finding:

$$\hat{w}, \hat{\alpha} = \operatorname*{argmax}_{w,\,\alpha}\; p(w, \alpha \mid t, X), \qquad \text{where} \quad p(w, \alpha \mid t, X) = \frac{p(t \mid w, \alpha, X)\; p(w, \alpha \mid X)}{p(t \mid X)}$$

• Estimation of the “sparsity” parameters is inherent in the optimization – no need for a held-out set!

• A closed-form solution to this maximization problem is not available, so $\hat{w}$ and $\hat{\alpha}$ are reestimated iteratively (see the sketch below)

Iterative Reestimation of Hyperparameters

Applications of Risk Minimization Page 6 of 10
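One way to make the re-estimation loop concrete is Tipping's type-II maximum-likelihood update for the regression form of the RVM; the classification form used for speech wraps these updates in a Laplace approximation, which is omitted here. The design matrix, the fixed noise precision, and the toy data are illustrative assumptions.

```python
# Sketch of iterative hyperparameter re-estimation for an RVM (regression form).
import numpy as np

def rvm_fit(Phi, t, beta=100.0, n_iter=50, prune=1e6):
    """Phi: (N, M) design matrix, t: (N,) targets.
    beta is the noise precision, held fixed here (it could also be re-estimated)."""
    M = Phi.shape[1]
    alpha = np.ones(M)                                 # one sparsity hyperparameter per weight
    for _ in range(n_iter):
        A = np.diag(alpha)
        Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)  # posterior covariance of the weights
        mu = beta * Sigma @ Phi.T @ t                  # posterior mean (MAP weights)
        gamma = 1.0 - alpha * np.diag(Sigma)           # how well-determined each weight is
        alpha = gamma / (mu ** 2 + 1e-12)              # re-estimate the sparsity parameters
    return mu, alpha < prune                           # weights with huge alpha get pruned

# Toy usage: 30 candidate basis functions, only the first three carry signal,
# so most alphas should diverge and their weights be pruned away.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 30))
t = Phi[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)
mu, keep = rvm_fit(Phi, t)
print("surviving basis functions:", np.flatnonzero(keep))
```

No held-out set is needed because the sparsity parameters are estimated inside this same loop, which is the point made on the slide above.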

Page 16: HUMAN AND SYSTEMS ENGINEERING:

• Deterding Vowel Data: 11 vowels spoken in “h*d” context; 10 log area parameters; 528 train, 462 SI test

Approach                    % Error    # Parameters
SVM: Polynomial Kernels       49%
K-Nearest Neighbor            44%
Gaussian Node Network         44%
SVM: RBF Kernels              35%      83 SVs
Separable Mixture Models      30%
RVM: RBF Kernels              30%      13 RVs

RVM and SVM Comparison — Static Patterns

Applications of Risk Minimization Page 7 of 10

Page 17: HUMAN AND SYSTEMS ENGINEERING:

• RVMs yield a large reduction in the parameter count while attaining superior performance

• Computational cost for RVMs is mainly in training, but it is still prohibitive for larger sets – O(N³), vs. O(N²) for SVMs and O(N) for HMMs

Approach    Error Rate    Avg. # Parameters    Training Time    Testing Time
SVM           16.4%             257              0.5 hours        30 mins
RVM           16.2%              12              30 days           1 min

RVM and SVM Comparison — Alphadigits

Applications of Risk Minimization Page 8 of 10

Page 18: HUMAN AND SYSTEMS ENGINEERING:

Approach              Error Rate    Avg. # Parameters    Training Time    Testing Time
SVM                     15.5%            994                3 hours         1.5 hours
RVM (Constructive)      14.8%             72                5 days           5 mins
RVM (Reduction)         14.8%             74                6 days           5 mins

• Data increased to 10,000 training vectors

• The reduction method has been trained on up to 100k vectors (on a toy task); this is not possible for the constructive method

Preliminary Results on Learning

Applications of Risk Minimization Page 9 of 10

Page 19: HUMAN AND SYSTEMS ENGINEERING:

• Reduction of complexity at the same level of performance is interesting:

• Results hold across tasks

• RVMs have been trained on 100,000 vectors

• Results suggest integrated training is critical

• Risk minimization provides a family of solutions:

• Is there a better solution than minimum risk?

• What is the impact on complexity and robustness?

• Applications to other problems?

• Speech/Non-speech classification?

• Speaker adaptation?

• Language modeling?

Summary — Practical Risk Minimization?

Applications of Risk Minimization Page 10 of 10

Page 20: HUMAN AND SYSTEMS ENGINEERING:

Applications to Speech Recognition:

1. J. Hamaker and J. Picone, “Advances in Speech Recognition Using Sparse Bayesian Methods,” submitted to the IEEE Transactions on Speech and Audio Processing, January 2003 (in revision).

2. A. Ganapathiraju, J. Hamaker and J. Picone, “Applications of Risk Minimization to Speech Recognition,” to appear in the IEEE Transactions on Signal Processing, August 2004.

3. J. Hamaker, J. Picone, and A. Ganapathiraju, “A Sparse Modeling Approach to Speech Recognition Based on Relevance Vector Machines,” Proceedings of the International Conference on Spoken Language Processing, vol. 2, pp. 1001-1004, Denver, Colorado, USA, September 2002.

4. J. Hamaker, Sparse Bayesian Methods for Continuous Speech Recognition, Ph.D. Dissertation, Department of Electrical and Computer Engineering, Mississippi State University, December 2003.

5. A. Ganapathiraju, Support Vector Machines for Speech Recognition, Ph.D. Dissertation, Department of Electrical and Computer Engineering, Mississippi State University, January 2002.

Influential work:

6. M. Tipping, “Sparse Bayesian Learning and the Relevance Vector Machine,” Journal of Machine Learning Research, vol. 1, pp. 211-244, June 2001.

7. D. J. C. MacKay, “Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks,” Network: Computation in Neural Systems, vol. 6, pp. 469-505, 1995.

8. D. J. C. MacKay, Bayesian Methods for Adaptive Models, Ph.D. Dissertation, California Institute of Technology, Pasadena, California, USA, 1991.

9. E. T. Jaynes, “Bayesian Methods: General Background,” Maximum Entropy and Bayesian Methods in Applied Statistics, J. H. Justice, ed., pp. 1-25, Cambridge Univ. Press, Cambridge, UK, 1986.

10. V.N. Vapnik, Statistical Learning Theory, John Wiley, New York, NY, USA, 1998.

11. V.N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, NY, USA, 1995.

12. C.J.C. Burges, “A Tutorial on Support Vector Machines for Pattern Recognition,” AT&T Bell Laboratories, November 1999.

Applications of Risk Minimization

Brief Bibliography

Page 21: HUMAN AND SYSTEMS ENGINEERING:

Ram Sundaram, BBN Technologies, Cambridge, Massachusetts
Joseph Picone, Mississippi State University, Mississippi State, Mississippi

URL: http://www.isip.msstate.edu/publications/seminars/external/2004/dod/

Effects of Transcriptions Errors on Supervised Learning in Speech Recognition

Page 22: HUMAN AND SYSTEMS ENGINEERING:

ABSTRACT: Hidden Markov model-based speech recognition systems use supervised learning to train acoustic models. On difficult tasks such as conversational speech there has been concern over the impact erroneous transcriptions have on the parameter estimation process. This work analyzes the effects of mislabeled data on recognition accuracy. Training is performed using manually corrupted transcriptions, and results are presented on three tasks: TIDigits, Alphadigits and Switchboard. For Alphadigits, with 16% of the training data mislabeled, the performance of the system degrades by 12% relative to the baseline. On Switchboard, at 16% mislabeled training data, the performance of the system degrades by 8.5% relative to the baseline. An analysis of these results revealed that the Gaussian mixture model contributes significantly to the robustness of the supervised learning training process.

MOTIVATION: Recover an investment of three and a half long years spent retranscribing and resegmenting Switchboard

Abstract and Motivation

Transcription Errors Page 1 of 5

Page 23: HUMAN AND SYSTEMS ENGINEERING:

Robustness to Transcription Errors — TIDigits

• Introduced random transcription word errors in a controlled fashion on TIDigits (a minimal sketch of such a corruption procedure follows this slide)

• Observed no significant degradation in performance until the transcription error rate (TER) was artificially high (16%).

• What makes an HMM-based speech recognition system so robust to such errors?

Transcription Errors Page 2 of 5
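The corruption procedure is easy to sketch. The snippet below randomly substitutes a controlled fraction of words in each reference transcription; the digit vocabulary and the 16% rate mirror the slide, while the substitution-only corruption model is an assumption (the study may also have used insertions and deletions).

```python
# Sketch: introduce transcription word errors at a controlled rate.
import random

random.seed(0)
VOCAB = ["zero", "oh", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]

def corrupt(transcription, error_rate=0.16):
    """Replace roughly error_rate of the words with a different random word."""
    out = []
    for word in transcription.split():
        if random.random() < error_rate:
            out.append(random.choice([v for v in VOCAB if v != word]))
        else:
            out.append(word)
    return " ".join(out)

print(corrupt("one three five five nine oh two"))
```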

Page 24: HUMAN AND SYSTEMS ENGINEERING:

Robustness to Transcription Errors — Comparison

• No significant degradation with transcription errors (including Switchboard!)

• Context-dependent phone models are more robust than word models

Transcription Errors Page 3 of 5

Page 25: HUMAN AND SYSTEMS ENGINEERING:

Analyze State Occupancies Through Training

• Study maximum likelihood estimates of the mean and variance for a Gaussian estimator

• Analyze how much an incorrect model learns from the erroneous data by examining state occupancies

• Analyze how much the correct model is influenced by the erroneous transcriptions

Transcription Errors Page 4 of 5
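A toy version of the first analysis above: compare the maximum-likelihood mean and variance of a single Gaussian trained on clean frames against one trained with 16% of its frames coming from the wrong model. The synthetic feature distributions are assumptions; only the 16% figure comes from the slides.

```python
# Sketch: how mislabeled frames shift the ML mean/variance of one state's Gaussian.
import numpy as np

rng = np.random.default_rng(0)
correct = rng.normal(0.0, 1.0, size=8400)  # frames that truly belong to this state
wrong = rng.normal(3.0, 1.0, size=1600)    # frames pulled in by mislabeled transcriptions (16%)

for name, frames in [("clean", correct),
                     ("16% mislabeled", np.concatenate([correct, wrong]))]:
    # ML estimates for a single Gaussian are just the sample mean and variance.
    print(f"{name:>15}: mean = {frames.mean():+.3f}, var = {frames.var():.3f}")
# In a full system, state-tying and mixture training further limit how far these
# estimates drift, which is the robustness argument made on the summary slide.
```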

Page 26: HUMAN AND SYSTEMS ENGINEERING:

• Transcription errors do not corrupt the acoustic models significantly

• Alphadigits — at 16% TER, WER degrades only by 12%

• SWB — at 16% TER, WER degrades only by 8.5%

• Robustness to erroneous data mainly due to Gaussian distribution

• State-tying helps in decreasing the TER during the context-dependent modeling stage

• Mixture training adds more robustness by modeling other variations in the correct portion of the data

Summary

Transcription Errors Page 5 of 5

Page 27: HUMAN AND SYSTEMS ENGINEERING:

1. R. Sundaram and J. Picone, “Effects of Transcription Errors on Supervised Learning in Speech Recognition,” submitted to the International Conference on Acoustics, Speech, and Signal Processing, Montreal, Quebec, Canada, May 2004.

2. R. Sundaram, Effects of Transcription Errors on Supervised Learning in Speech Recognition, M.S. Thesis, Department of Electrical and Computer Engineering, Mississippi State University, August 2003.

3. R. Sundaram and J. Picone, “The Effects of Transcription Errors,” Proceedings of the Speech Transcription Workshop, Linthicum Heights, Maryland, USA, May 2001.

4. L. Lamel, J. L. Gauvain, G. Adda, “Lightly Supervised Acoustic Model Training,” Proceedings of the ISCA ITRW ASR2000, Paris, France, September 2000.

5. G. Zavaliagkos, T. Colthurst, “Utilizing Untranscribed Training Data to Improve Performance,” Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, Lansdowne, Virginia, February 1998.

6. P. Placeway, J. Lafferty, “Cheating with Imperfect Transcripts,” Proceedings of the International Conference on Spoken Language Processing, Philadelphia, Pennsylvania, USA, pp. 2115-2118, September 1996.

7. T. Kemp, A. Waibel, “Unsupervised Training of a Speech Recognizer: Recent Experiments,” Proceedings of ESCA Eurospeech’99, pp. 2725-2728, Budapest, Hungary, September 1999.

Brief Bibliography

Transcription Errors