Speech Perception 2
DAY 17 – Oct 4, 2013
Brain & Language
LING 4110-4890-5110-7960 / NSCI 4110-4891-6110
Harry Howard, Tulane University


Course organization
The syllabus, these slides and my recordings are available at http://www.tulane.edu/~howard/LING4110/.
If you want to learn more about EEG and neurolinguistics, you are welcome to participate in my lab. This is also a good way to get started on an honors thesis.
The grades are posted to Blackboard.

Review
The quiz was the review.

Linguistic model, Fig. 2.1 p. 37
[Figure: the linguistic model, from the discourse model through semantics (sentence level, word level), syntax, sentence prosody, morphology, and word prosody down to segmental phonology, with acoustic phonetics and feature extraction on the perception side and articulatory phonetics and speech motor control on the production side.]

Categorical perception


Chinchillas do this too!

The Clinton-Kennedy continuum:
http://www.sciencedirect.com.libproxy.tulane.edu:2048/science/article/pii/S001002770200104X
http://infolific.com/pets/chinchillas/chinchilla-basics/

Speech perception (Ingram 6)

Category boundary shifts
Does this mean that the phonetic feature detectors must compensate for the context because they "know" how speech is produced?

But Japanese quail do this too.

The shift in VOT is from bin to pin. (A student pointed out the phonotactic constraint: spin vs. *sbin.)
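To make "categorical" concrete, here is a minimal sketch (not from the lecture; the boundary and slope values are invented) that models identification along a 9-step VOT continuum with a logistic function: percepts flip abruptly from /b/ to /p/ at a category boundary, and a context effect like the bin-to-pin shift can be modeled simply by moving that boundary.

```python
# A minimal sketch of categorical identification along a VOT continuum.
# The boundary and slope are illustrative assumptions, not measured data.
import numpy as np

vot_ms = np.linspace(0, 60, 9)   # a 9-step voice onset time continuum (ms)
boundary_ms = 25.0               # assumed /b/-/p/ boundary; shifting this value
                                 # models a context-driven boundary shift
slope = 0.5                      # a steep slope yields abrupt, categorical flips

# Logistic identification function: probability of reporting /p/ at each step.
p_p = 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary_ms)))

for vot, p in zip(vot_ms, p_p):
    print(f"VOT {vot:4.1f} ms -> P(/p/) = {p:.2f}, heard as /{'p' if p > 0.5 else 'b'}/")
```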

Duplex speech (or perception)

A and B refer to either ear; B is also called the base.
http://sail.usc.edu/~lgoldste/ArtPhon/Motor_Theory/Duplex_Perception.html

Results
Listeners hear a syllable in the ear that gets the base (B), but it is not ambiguous: its identification is determined by which of the nine F3 transitions is presented to the other ear (A). Listeners also hear a non-speech "chirp" in the ear that gets the isolated transition (A).

Implications
The fact that the same stimulus is simultaneously part of two quite distinct types of percepts argues that the percepts are produced by separate mechanisms that are both sensitive to the same range of stimuli.
Discrimination of the isolated "chirp" and discrimination of the speech percept are quite different, despite the fact that the acoustic event responsible for both is the same: the speech percept exhibits categorical perception, while the chirp percept exhibits continuous perception.
If the intensity of the isolated transition is lowered below the threshold of hearing, so that listeners cannot reliably tell whether it is present on a given trial, it is still capable of disambiguating the speech percept. [HH: hold that thought]

Subsequent research
Later studies tried to control for the potential temporal delay of dichotic listening by manipulating the intensity (loudness) of the chirp with respect to the base.
Only if the chirp and the base have the same intensity are they perceived as a single speech sound.
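To make the dichotic setup concrete, here is a minimal sketch of how a duplex-style stimulus can be assembled: a steady "base" in one channel (ear B) and an isolated F3 transition, the "chirp", in the other (ear A). The frequencies, durations, and amplitudes are invented placeholders, not the original stimuli; the chirp's gain is the knob that the intensity-matching studies just mentioned turned.

```python
# A minimal sketch of a duplex-perception stimulus: base in the left channel
# (ear B), isolated F3 transition ("chirp") in the right channel (ear A).
# All frequencies, durations, and amplitudes are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

fs = 16000
n_chirp = int(fs * 0.05)   # 50 ms F3 transition
n_base = int(fs * 0.25)    # 250 ms base

# Isolated F3 transition: a falling frequency glide; its phase is the running
# integral of the instantaneous frequency.
f3 = np.linspace(2700, 2400, n_chirp)
chirp_gain = 0.4           # vary this relative to the base to mimic the
                           # intensity manipulation described above
chirp = chirp_gain * np.sin(2 * np.pi * np.cumsum(f3) / fs)

# Base: steady F1 + F2, crudely approximated as two sinusoids.
t = np.arange(n_base) / fs
base = 0.4 * np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 1500 * t)

# Pad the chirp to the base's length and write a stereo file: one signal per ear.
chirp = np.pad(chirp, (0, n_base - n_chirp))
stereo = np.stack([base, chirp], axis=1).astype(np.float32)
wavfile.write("duplex_stimulus.wav", fs, stereo)
```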

Gokcen & Fox (2001)

Discussion
Even if the explanation for the latency differences is simply that linguistic and nonlinguistic components have two different areas in the brain to which they must go for processing, and coordinating these two processing sources in order to identify a stimulus takes longer, the data would still be consistent with the contention of separate modules for phonetic and auditory stimuli.
We would argue that these data do not support the claim that there is only a single unified cognitive module that processes all auditory information, because the speech-only and duplex stimuli contained identical components and were equal in complexity.

Back to sine-wave speech
What is this?

It is this.


http://www.columbia.edu/~remez/Site/Musical%20Sinewave%20Speech.html

Dehaene-Lambertz et al. (2005) used ERP and fMRI to investigate sine-wave [ba]-[da] sounds.
For the EEG, the subjects had to be trained to hear the sound as speech.
In the MRI, most subjects heard the sound as speech immediately.
Switching to the speech mode significantly enhanced activation in the posterior parts of the left superior temporal sulcus.
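For a sense of what sine-wave speech is made of, the sketch below follows the Remez-style recipe of replacing the first three formants with time-varying sinusoids. The formant tracks here are invented placeholders for a [ba]-like transition; real tracks would come from a formant tracker such as Praat.

```python
# A minimal sketch of sine-wave speech synthesis: three pure tones that follow
# (here, invented) formant tracks stand in for the original formants.
import numpy as np
from scipy.io import wavfile

fs = 16000
n = int(fs * 0.3)          # 300 ms utterance

# Placeholder formant tracks (Hz) for a [ba]-like transition.
f1 = np.linspace(300, 700, n)    # F1 rises out of the labial closure
f2 = np.linspace(1000, 1200, n)
f3 = np.linspace(2200, 2500, n)

def tone(track):
    # Each sinusoid's phase is the running integral of its frequency track.
    return np.sin(2 * np.pi * np.cumsum(track) / fs)

sws = 0.4 * tone(f1) + 0.3 * tone(f2) + 0.2 * tone(f3)
wavfile.write("sinewave_ba.wav", fs, sws.astype(np.float32))
```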

http://www.nofearmri.com/mri%20sound%20sample.html (a sample of scanner noise: diffusion weighting, Siemens Symphony 1.5 Tesla brain MRI, 2008)

Summary

Methodology             Supports the strong SMH (speech mode hypothesis)?
dichotic listening      yes, but Morse code shows the same response (p. 127)
categorical perception  no, because animals show the same response
duplex perception       no, because animals show the same response
sine-wave speech        yes

NEXT TIME
P5. Finish Ingram 6; start Ingram 7. Go over the questions at the end of the chapter.