The BCM theory of synaptic plasticity


  • The BCM theory of synaptic plasticity

    The BCM theory of cortical plasticity

    BCM stands for Bienenstock, Cooper, and Munro; it dates back to 1982. It was designed to account for experiments demonstrating that the development of orientation-selective cells depends on rearing in a patterned environment.

  • BCM Theory (Bienenstock, Cooper, Munro 1982; Intrator, Cooper 1992)

    1) The learning rule:

    $$\frac{dw_j}{dt} = x_j\,\phi(y,\theta_m)$$

    For simplicity, a linear neuron:

    $$y = \sum_j w_j x_j$$

    [Figure: the modification function $\phi(y,\theta_m)$ is negative (LTD) for $0 < y < \theta_m$ and positive (LTP) for $y > \theta_m$; schematic of a neuron with inputs $x_1,\dots,x_5$ and weights $w_1,\dots,w_5$.]

    (A minimal code sketch of this rule follows below.)
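    A minimal sketch of the rule in code (added for illustration, not from the original slides; the quadratic form phi(y, theta) = y(y - theta) matches the quadratic BCM form used later in this deck, and the learning rate eta is an assumed parameter):

```python
import numpy as np

def phi(y, theta):
    """Quadratic BCM modification function: negative (LTD) for 0 < y < theta, positive (LTP) for y > theta."""
    return y * (y - theta)

def bcm_step(w, x, theta, eta=0.01):
    """One presentation: y = w . x (linear neuron), then dw_j = eta * x_j * phi(y, theta)."""
    y = float(np.dot(w, x))
    return w + eta * x * phi(y, theta), y

# Example with 5 synapses (w1..w5) and one input pattern (x1..x5).
w = np.array([0.1, 0.2, 0.1, 0.0, 0.3])
x = np.array([1.0, 0.5, 0.0, 0.0, 0.2])
w, y = bcm_step(w, x, theta=1.0)
```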

  • BCM Theory

    Requires:

    Bidirectional synaptic modification (LTP/LTD).

    A sliding modification threshold.

    The fixed points depend on the environment, and in a patterned environment only selective fixed points are stable.

    1) The learning rule:

    $$\frac{dw_j}{dt} = x_j\,\phi(y,\theta_m)$$

    2) The sliding threshold:

    $$\theta_m = E\left[y^2\right] = \frac{1}{\tau}\int_{-\infty}^{t} y^2(t')\,e^{-(t-t')/\tau}\,dt'$$

    [Figure: $\phi(y,\theta_m)$, with LTD below the threshold $\theta_m$ and LTP above it.]

  • The integral form of the average,

    $$\theta_m = E\left[y^2\right] = \frac{1}{\tau}\int_{-\infty}^{t} y^2(t')\,e^{-(t-t')/\tau}\,dt',$$

    is equivalent to this differential form:

    $$\frac{d\theta_m}{dt} = \frac{1}{\tau}\left(y^2 - \theta_m\right)$$

    Note, it is essential that $\theta_m$ is a superlinear function of the history of $c$ (here $c = y$), that is:

    $$\frac{d\theta_m}{dt} = \frac{1}{\tau}\left(y^p - \theta_m\right), \qquad p > 1$$

    Note also that the original BCM formulation (1982) used $\theta_m \propto \left(E[y]\right)^2$ rather than $\theta_m = E\left[y^2\right]$.
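    To make the equivalence concrete, here is a small numerical sketch (added for illustration; the activity trace y(t) and the parameter values are assumptions): the Euler-integrated differential form tracks the exponentially weighted average of y^2 with time constant tau.

```python
import numpy as np

tau, dt = 50.0, 1.0
t = np.arange(0.0, 2000.0, dt)
y = 1.0 + 0.3 * np.sin(2 * np.pi * t / 400.0)   # an arbitrary activity trace

# Differential form, Euler-integrated: d(theta)/dt = (y^2 - theta) / tau
theta_ode = np.zeros_like(t)
for i in range(1, len(t)):
    theta_ode[i] = theta_ode[i - 1] + dt * (y[i - 1] ** 2 - theta_ode[i - 1]) / tau

# Integral form: exponentially weighted average of y^2 with time constant tau
theta_int = np.array([
    np.sum(y[: i + 1] ** 2 * np.exp(-(t[i] - t[: i + 1]) / tau)) * dt / tau
    for i in range(len(t))
])

# After the initial transient the two forms should agree up to discretization error.
print(np.max(np.abs(theta_ode[500:] - theta_int[500:])))
```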

  • What is the outcome of the BCM theory?

    Assume a neuron with N inputs (N synapses) and an environment composed of N different input vectors.

    An N = 2 example:

    $$x^1 = \begin{pmatrix} 1.0 \\ 0.2 \end{pmatrix}, \qquad x^2 = \begin{pmatrix} 0.1 \\ 0.9 \end{pmatrix}$$

    What are the stable fixed points of w in this case?

  • (Notation: $y^i = w^T x^i$)

    What are the fixed points? What are the stable fixed points?

    Note: every time a new input is presented, w changes, and so does $\theta_m$.

    (Show Matlab demo; a Python sketch of this simulation follows below.)
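    In place of the Matlab demo, a minimal Python sketch (added for illustration, not the original code): a single BCM neuron with the quadratic modification function and sliding threshold, trained on the two inputs above. The learning rate eta, the time constant tau, and the initial conditions are assumptions; with these values the weights typically converge to a selective fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)

# The two input patterns from the N = 2 example, presented with equal probability.
X = np.array([[1.0, 0.2],
              [0.1, 0.9]])

eta, tau = 0.001, 50.0                 # assumed learning rate and threshold time constant
w = rng.uniform(0.1, 0.3, size=2)      # small random initial weights
theta = 0.1                            # initial sliding threshold

for step in range(200_000):
    x = X[rng.integers(2)]             # present one pattern at random each step
    y = w @ x                          # linear neuron: y = w . x
    w += eta * x * y * (y - theta)     # dw/dt = eta * x * phi(y, theta)
    theta += (y**2 - theta) / tau      # d(theta)/dt = (y^2 - theta) / tau

print("responses to the two patterns:", X @ w)
print("threshold:", theta)
# At a selective fixed point one response is close to theta and the other close to 0.
```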

  • Two examples with N = 5

    Note: the stable fixed point is such that for one pattern $y^i = w^T x^i = \theta_m$, while for the other patterns $y^{j} = 0$, $j \neq i$.

    (Note: here c = y.)

    (Show movie.)

  • BCM Theory Stability

    One dimension; quadratic form:

    $$\frac{dw}{dt} = y\,(y - \theta_M)\,x$$

    In the instantaneous limit $\theta_M = y^2$:

    $$\frac{dw}{dt} = y\,(y - y^2)\,x = y^2(1 - y)\,x$$
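    A short worked step (added for clarity), assuming the instantaneous limit above and a constant input x = 1 so that y = w:

```latex
% 1D instantaneous limit with x = 1 (so y = w):
\frac{dw}{dt} = w^2(1 - w)
% Fixed points: w = 0 and w = 1.
% Linearization: \frac{d}{dw}\left[w^2(1 - w)\right] = 2w - 3w^2,
% which is -1 < 0 at w = 1 (stable), and 0 at w = 0 (marginal at linear order;
% any small w > 0 grows, since w^2(1 - w) > 0 for 0 < w < 1).
```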

  • BCM Theory: Selectivity

    Two dimensions; two patterns; quadratic form; averaged threshold:

    $$\frac{dw}{dt} = y^k\,(y^k - \theta_M)\,x^k$$

    $$\theta_M = E\left[y^2\right]_{\text{patterns}} = \sum_{k=1}^{2} p_k\,(y^k)^2$$

    Fixed points:

  • BCM Theory: Selectivity

    Learning equation:

    $$\frac{dw}{dt} = y^k\,(y^k - \theta_M)\,x^k$$

    Four possible fixed points, in terms of $(y^1, y^2)$: $(0, 0)$ (unselective), $(\theta_M, \theta_M)$ (unselective), $(\theta_M, 0)$ (selective), and $(0, \theta_M)$ (selective).
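    A short derivation of the threshold value at each type of fixed point (added for clarity, using the averaged threshold defined above):

```latex
% Each pattern k must satisfy y^k (y^k - \theta_M) = 0, i.e. y^k = 0 or y^k = \theta_M.
% Selective fixed point, e.g. (y^1, y^2) = (\theta_M, 0):
\theta_M = p_1 (y^1)^2 = p_1 \theta_M^2 \;\Rightarrow\; \theta_M = \tfrac{1}{p_1}
% Unselective fixed point (y^1, y^2) = (\theta_M, \theta_M):
\theta_M = (p_1 + p_2)\,\theta_M^2 = \theta_M^2 \;\Rightarrow\; \theta_M = 1
% With equal probabilities p_1 = p_2 = 1/2, the selective fixed points have \theta_M = 2.
```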

  • Stability of Fixed point

    Assume $w^*$ is a fixed point of $\dfrac{dw}{dt} = F(w)$.

    Let $\Delta w$ be the deviation from this fixed point: $w = w^* + \Delta w$. Then:

    $$\frac{dw}{dt} = \frac{dw^*}{dt} + \frac{d\Delta w}{dt} = \frac{d\Delta w}{dt}$$

    $$\frac{d\Delta w}{dt} = F(w^* + \Delta w) \approx F(w^*) + \Delta w \cdot \left.\frac{\partial F}{\partial w}\right|_{w^*}$$

    If $F(w^*) = 0$, then

    $$\frac{d\Delta w}{dt} \approx \Delta w \cdot \left.\frac{\partial F}{\partial w}\right|_{w^*}$$

  • Consider a selective F.P. ($w_1$) where:

    $$w_1 \cdot x^1 = \theta_m^* \qquad \text{and} \qquad w_1 \cdot x^2 = 0,$$

    so that

    $$\theta_m^* = E\left[y^2\right] = \tfrac{1}{2}\left[(w_1 \cdot x^1)^2 + (w_1 \cdot x^2)^2\right] = \tfrac{1}{2}\left[\theta_m^*\right]^2$$

    (hence $\theta_m^* = 2$).

    For a small perturbation from the F.P. such that $w = w_1 + \Delta w$, the two inputs result in:

    $$y^1 = w \cdot x^1 = \theta_m^* + \Delta w \cdot x^1, \qquad y^2 = w \cdot x^2 = \Delta w \cdot x^2$$

  • So that:

    $$\theta_m = \tfrac{1}{2}\left\{(w_1 \cdot x^1 + \Delta w \cdot x^1)^2 + (w_1 \cdot x^2 + \Delta w \cdot x^2)^2\right\}$$

    $$\approx \theta_m^*\,(1 + \Delta w \cdot x^1) + O(\Delta w^2) = \theta_m^* + 2\,\Delta w \cdot x^1$$

  • At $y \approx 0$ and at $y \approx \theta_m$ we make a linear approximation of $\phi$:

    $$\phi(y,\theta_m) \approx -\epsilon_2\,y \quad (y \approx 0), \qquad \phi(y,\theta_m) \approx \epsilon_1\,(y - \theta_m) \quad (y \approx \theta_m)$$

    In order to examine whether a fixed point is stable, we examine whether the average norm of the perturbation $\|\Delta w\|$ increases or decreases:

    decrease → stable; increase → unstable.

  • Remember:

    $$\theta_m = \theta_m^* + 2\,\Delta w \cdot x^1$$

    For the preferred input $x^1$:

    $$\frac{d\Delta w}{dt} = \epsilon_1\,(y - \theta_m)\,x^1 = \epsilon_1\,(\theta_m^* + \Delta w \cdot x^1 - \theta_m)\,x^1$$

    $$= \epsilon_1\,(\theta_m^* + \Delta w \cdot x^1 - \theta_m^* - 2\,\Delta w \cdot x^1)\,x^1 = -\epsilon_1\,(\Delta w \cdot x^1)\,x^1$$

    Similarly, for the non-preferred input $x^2$:

    $$\frac{d\Delta w}{dt} = -\epsilon_2\,y\,x^2 = -\epsilon_2\,(\Delta w \cdot x^2)\,x^2$$

  • Use the trick:

    $$\frac{d}{dt}\left\|\Delta w\right\|^2 = \frac{d}{dt}\left(\Delta w \cdot \Delta w\right) = 2\,\Delta w \cdot \frac{d\Delta w}{dt}$$

    and average over the two input patterns:

    $$E\left[\frac{d}{dt}\left\|\Delta w\right\|^2\right] = \tfrac{1}{2}\cdot 2\,\Delta w \cdot \left.\frac{d\Delta w}{dt}\right|_{x^1} + \tfrac{1}{2}\cdot 2\,\Delta w \cdot \left.\frac{d\Delta w}{dt}\right|_{x^2}$$

    Insert the previous result to show that, at the selective fixed point:

    $$E\left[\frac{d}{dt}\left\|\Delta w\right\|^2\right] = -\left[\epsilon_1\,(\Delta w \cdot x^1)^2 + \epsilon_2\,(\Delta w \cdot x^2)^2\right] < 0$$

    For the non-selective F.P. we get:

    $$E\left[\frac{d}{dt}\left\|\Delta w\right\|^2\right] \geq 0$$
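    As a numerical sanity check on the selective case (added for illustration; the perturbation size and the use of the full quadratic rule in place of the linearization are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# The two patterns from the N = 2 example, presented with equal probability.
X = np.array([[1.0, 0.2],
              [0.1, 0.9]])

# Selective fixed point: w*.x1 = theta* = 2 and w*.x2 = 0 (equal probabilities, quadratic rule).
w_star = np.linalg.solve(X, np.array([2.0, 0.0]))

def avg_norm_growth(dw):
    """Average over patterns of d/dt ||dw||^2 = 2 dw . (d dw / dt), full quadratic rule."""
    w = w_star + dw
    y = X @ w                                  # responses to the two patterns
    theta = np.mean(y ** 2)                    # averaged threshold E[y^2]
    d_dw_dt = np.mean((y * (y - theta))[:, None] * X, axis=0)   # E[ x phi(y, theta) ]
    return 2 * dw @ d_dw_dt

for _ in range(5):
    dw = 0.01 * rng.standard_normal(2)         # a small random perturbation
    print(avg_norm_growth(dw))                 # expected to be negative near the selective FP
```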

  • Phase plane analysis of BCM in 1D

    The previous analysis assumed that $\theta_m = E\left[y^2\right]$ exactly. If we use instead the dynamical equation

    $$\frac{d\theta_m}{dt} = \frac{1}{\tau}\left(y^2 - \theta_m\right),$$

    will the stability be altered? Look at a 1D example.

  • Phase plane analysis of BCM in 1D

    Assume $x = 1$ and therefore $y = w$. We get the two BCM equations:

    $$\frac{dy}{dt} = y\,(y - \theta_m)$$

    $$\frac{d\theta_m}{dt} = \frac{1}{\tau}\left(y^2 - \theta_m\right)$$

    There are two fixed points: $(y, \theta_m) = (0, 0)$ and $(y, \theta_m) = (1, 1)$. The previous analysis shows that the second one is stable; what would be the case here? How can we do this? (Supplementary homework problem; a simulation sketch follows below.)

    [Figure: phase plane with $\theta_m$ on the horizontal axis and $y$ on the vertical axis, both running from 0 to 1.]
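    A minimal simulation sketch for exploring this question (added for illustration; the value of tau, the step size, and the initial conditions are assumptions to be varied):

```python
import numpy as np

def simulate(y0, theta0, tau=0.5, dt=0.01, steps=20_000):
    """Euler-integrate dy/dt = y (y - theta), d(theta)/dt = (y^2 - theta) / tau."""
    y, theta = y0, theta0
    traj = np.empty((steps, 2))
    for i in range(steps):
        traj[i] = theta, y               # store (theta, y) for phase-plane plotting
        dy = y * (y - theta)
        dtheta = (y ** 2 - theta) / tau
        y += dt * dy
        theta += dt * dtheta
    return traj

# A few starting points near the two fixed points (0, 0) and (1, 1); try other tau values too.
for y0, theta0 in [(0.2, 0.1), (0.8, 0.6), (1.1, 1.2), (0.05, 0.2)]:
    traj = simulate(y0, theta0)
    print((y0, theta0), "->", tuple(np.round(traj[-1][::-1], 3)))   # final (y, theta)
```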

  • Linear stability analysis:

  • Summary

    The BCM rule is based on two differential equations; what are they?

    When there are two linearly independent inputs, what will the BCM stable fixed points be? What will $\theta_m$ be?

    When there are K independent inputs, what are the stable fixed points? What will $\theta_m$ be?

  • Homework 2: due on Feb 1

    1. Code a single BCM neuron; apply it to the case of 2 linearly independent inputs presented with equal probability.

    2. Apply it to 2 inputs with different probabilities; what is different?

    3. Apply it to 4 linearly independent inputs with the same probability.

    Extra credit (25 pt):

    4. a. Analyze the fixed points in the 1D case; what are the stable fixed points as a function of the system's parameters?

    b. Use simulations to plot the dynamics of $y(t)$, $\theta_m(t)$ and their trajectories in the $y$-$\theta_m$ plane for different parameters. Compare stability to the analytical results. (Key parameters: the learning rate and $\tau$.)

  • Natural Images, Noise, and Learning

    [Figure: a natural image is converted to retinal activity; patches are presented and the weights are updated. Patches are drawn either from the retinal-activity image or from noise. Pathway: Retina → LGN → Cortex.]

  • [Figure: resulting receptive fields and the corresponding tuning curves (polar representation).]

  • BCM neurons can develop both orientation selectivity and varying degrees of Ocular Dominance

    Shouval et al., Neural Computation, 1996

    [Figure: left-eye and right-eye synaptic weights, and a histogram of the number of cells per ocular-dominance bin (bins 1-5).]

  • The distinction between homosynaptic and heterosynaptic models

    Examples:

    BCM: homosynaptic

    $$\frac{dw_j}{dt} = x_j\,\phi(y,\theta_m)$$

    PCA (Oja): heterosynaptic
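    To make the distinction concrete (this side-by-side is added here; the Oja rule is quoted in its standard form): in the BCM rule the change at synapse $j$ is gated by that synapse's own presynaptic activity $x_j$, whereas in the Oja/PCA rule the decay term modifies synapse $j$ even when $x_j = 0$.

```latex
% Homosynaptic (BCM): no change at synapse j when x_j = 0.
\frac{dw_j}{dt} = x_j\,\phi(y,\theta_m)
% Heterosynaptic (Oja / PCA, standard form): the decay term -y^2 w_j acts on every
% synapse regardless of its own input x_j.
\frac{dw_j}{dt} = \eta\left(y\,x_j - y^2\,w_j\right)
```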

  • Monocular Deprivation: homosynaptic model (BCM)

    [Figure: simulations with high noise and with low noise.]

  • Monocular Deprivation: heterosynaptic model (K2)

    [Figure: simulations with high noise and with low noise.]

  • Noise Dependence of MD: two families of synaptic plasticity rules

    Blais, Shouval, Cooper. PNAS, 1999

    [Figure: normalized time vs. noise std for homosynaptic rules (QBCM, K1, S1) and heterosynaptic rules (S2, K2, PCA).]

  • Intraocular injection of TTX reduces activity of the "deprived-eye" LGN inputs to cortex

    [Figure: LGN firing rate (spikes/sec) vs. time (sec), with the eye open, with the lid closed, and after intraocular TTX injection.]

  • Experiment design

    Blind injection of TTX and lid suture (P49-61)

    Dark rearing to allow TTX to wear off

    Quantitative measurements of ocular dominance

  • Rittenhouse, Shouval, Paradiso, Bear - Nature 1999

    MS = monocular lid suture; MI = monocular inactivation (TTX)

    [Figure: cumulative distribution of ocular dominance; response rate (spikes/sec).]

  • Why is Binocular Deprivation slower than Monocular Deprivation?

    [Figure: Monocular Deprivation vs. Binocular Deprivation.]

  • What did we learn up to here?
