Bayesian integration of visual and auditory signals for spatial localization
Authors: Peter W. Battaglia, Robert A. Jacobs, and Richard N. Aslin
COGS 272, Spring 2010
Instructor: Prof. Angela Yu
Presenter: Vikram Gupta

Outline
- Introduction
- Background
- Methods
- Procedure
- Results
- Discussion

Integration of multiple sensory and motor signals
- Sensory: binaural time, phase, and intensity differences
- Motor: orientation of the head

Typically, we receive consistent spatial cues. What if this is not true?
- Example: a movie theater or television, where sound and image come from different locations
- Visual capture: vision dominates over a conflicting auditory cue
- Example: recalibration in the juvenile owl
- Is this optimal?

Two candidate models:
- Winner take all (e.g., visual capture): the dominant signal exclusively decides
- Blend information from the sensory sources

Is blending statistically optimal?
- Example: maximum likelihood estimation (MLE)
- Assumptions: independent sensory signals, normally distributed

Impact of reliability on MLE estimate
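The effect of reliability on the MLE estimate can be sketched numerically: each cue is weighted by its inverse variance, and the fused variance is always below either input variance. A minimal sketch; the variance values below are illustrative, not from the paper.

```python
def mle_fuse(mu_v, var_v, mu_a, var_a):
    """Fuse visual and auditory location estimates by reliability
    (inverse-variance) weighting, as in the MLE model."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    mu = w_v * mu_v + w_a * mu_a
    var = (var_v * var_a) / (var_v + var_a)  # smaller than either input variance
    return mu, var, w_v, w_a

# Reliable vision (small variance) pulls the fused estimate toward the visual cue:
mu, var, w_v, w_a = mle_fuse(mu_v=-1.5, var_v=0.25, mu_a=1.5, var_a=1.0)
```

With these numbers the visual weight is 0.8, so the fused mean lands at −0.9, much closer to the visual cue; the fused variance (0.2) is below both inputs.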

- Is a normal distribution a good model of the neural coding of sensory input?
- Does this integration always occur, or are there qualifying conditions?
- Does it make sense to integrate if the location estimates L*_v and L*_a are far apart, or if the visual and auditory signals are temporally separated?

Ernst, 2006 (MLE integration for haptic and visual input)

Does visual capture or MLE match the empirical data?

Method summary:
- Noise is produced at 1 of 7 locations, 1.5° apart
- The visual stimulus has noise at 5 levels: 10%, 23%, 36%, 49%, 62%
- Single-sensory-modality trials (audio / noisy visual)
- MLE parameters predict performance for audio + noisy visual; compare with the empirical data

Single-modality:
- Standard stimulus (S) followed by a comparison (C)
- Is C to the left or right of S?

Bimodal:
- The standard stimulus has audio and visual components offset from center
- The audio and visual comparison stimuli are co-located
- Only 1 subject was aware of the spatial discrepancy in the SSC

- Cumulative normal distributions are fit to the data
- The fitted mean and variance are used for the MLE model
- w_v receives a high value when visual noise is low
- w_a receives a high value when visual noise is high
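Fitting a cumulative normal to choice proportions can be sketched as below; a simple grid search stands in for the authors' actual fitting procedure, and the data are synthetic.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative normal distribution function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fit_psychometric(offsets, p_right):
    """Grid-search fit of a cumulative-normal psychometric function.
    Returns the (mu, sigma) minimizing squared error between predicted
    and observed proportions of 'rightward' responses."""
    best_mu, best_sigma, best_err = None, None, np.inf
    for mu in np.linspace(-3.0, 3.0, 121):
        for sigma in np.linspace(0.1, 5.0, 99):
            pred = [norm_cdf(x, mu, sigma) for x in offsets]
            err = sum((p - q) ** 2 for p, q in zip(p_right, pred))
            if err < best_err:
                best_mu, best_sigma, best_err = mu, sigma, err
    return best_mu, best_sigma

offsets = [-3, -2, -1, 0, 1, 2, 3]                     # comparison offsets (degrees)
p_right = [norm_cdf(x, 0.5, 1.0) for x in offsets]     # noiseless synthetic data
mu_hat, sigma_hat = fit_psychometric(offsets, p_right)
```

The fitted mean gives the point of subjective equality and the fitted sigma the discrimination noise, which are exactly the quantities the MLE model consumes.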

- r_t = 1 if the comparison is judged to the right of the standard
- p_t = probability of r_t, given the mean and variance
- R = the set of responses to the T independent trials

Assuming a normal distribution, the MLE estimates of the mean and variance parameters are:

mu_ML = (1/T) * sum_t r_t
sigma^2_ML = (1/T) * sum_t (r_t − mu_ML)^2
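The two ML formulas can be checked directly; the response values below are made up for illustration.

```python
import numpy as np

responses = np.array([1.2, -0.3, 0.8, 0.1, 0.5])  # illustrative trial responses r_t
T = len(responses)
mu_ml = responses.sum() / T                       # mu_ML = (1/T) * sum_t r_t
var_ml = ((responses - mu_ml) ** 2).sum() / T     # sigma^2_ML = (1/T) * sum_t (r_t - mu_ML)^2
```

These match NumPy's sample mean and (biased, ddof=0) variance, which is what the 1/T normalization implies.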

- The mean is calculated according to the weighted average above
- The variance of the combined estimate is smaller than that of either P(L|v) or P(L|a)

- MLE estimates for w_v and w_a are found by maximizing the RHS of (3) and using (6)
- tau is a scale parameter (the slope)

Standard stimulus: visual at −1.5°, audio at +1.5°

Point of subjective equality (PSE):
- −1.1° for low visual noise
- 0.1° for high visual noise

Visual input dominates at low noise; the cues receive roughly equal weight at high noise.
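Under the linear weighting model, the PSE implies a visual weight via PSE = w_v·s_v + (1 − w_v)·s_a. Solving with the slide's numbers is a quick arithmetic check (this mapping is an assumption of the weighted-average model, not a formula stated on the slide):

```python
def visual_weight_from_pse(pse, s_v=-1.5, s_a=1.5):
    """Solve PSE = w_v * s_v + (1 - w_v) * s_a for w_v,
    given the standard's visual and auditory offsets in degrees."""
    return (pse - s_a) / (s_v - s_a)

w_low = visual_weight_from_pse(-1.1)   # low visual noise: w_v ~ 0.87
w_high = visual_weight_from_pse(0.1)   # high visual noise: w_v ~ 0.47
```

A PSE of −1.1° implies vision carries most of the weight, while 0.1° puts the weights near parity, matching the slide's reading.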

- MLE estimates for the visual weight are significantly lower than the empirical results.
- A Bayesian model with a prior that reduces variance in visual-only trials provides a good regression fit to the data.

For visual-only trials, instead of using the MLE for the mean and variance, we multiply the likelihood (the RHS above) by a prior over the parameters of the normal distribution:
- The mean is assumed to have a uniform distribution.
- The variance is assumed to have an inverse-gamma distribution with parameters biased toward small variance.
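A sketch of the resulting MAP estimate: with a uniform prior on the mean, the MAP mean stays at the sample mean, while an inverse-gamma(alpha, beta) prior shifts the variance estimate toward the prior. The hyperparameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def map_normal_invgamma(r, alpha=3.0, beta=0.01):
    """MAP estimate of (mu, sigma^2) for normal data, with a uniform prior
    on the mean and an inverse-gamma(alpha, beta) prior on the variance.
    Maximizing the posterior in sigma^2 gives the closed form below.
    Hyperparameters here are illustrative: with small beta and larger
    alpha, the MAP variance is pulled below the MLE variance."""
    r = np.asarray(r, dtype=float)
    T = len(r)
    mu_map = r.mean()                       # uniform prior leaves the mean at the MLE
    S = ((r - mu_map) ** 2).sum()
    var_map = (beta + S / 2.0) / (alpha + T / 2.0 + 1.0)
    return mu_map, var_map

mu, var = map_normal_invgamma([0.2, -0.1, 0.4, 0.0, 0.1])
```

For this illustrative data, the MAP variance comes out smaller than the MLE variance, which is the direction of bias the slide describes for visual-only trials.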

The Bayesian approach is a hybrid of the MLE and visual capture models.

Open questions:
- How are variances encoded?
- How are priors encoded?
- How does temporal separation of cues impact sensory integration?
- What is the biological basis for Bayesian cue integration?