
Page 1: Daniel Tranchina, Felix Apfaltrer & Cheng Ly New York University

Try to save computation time in large-scale neural network

modeling with population density

methods, or just fuhgeddaboudit?

Daniel Tranchina, Felix Apfaltrer & Cheng Ly

New York University

Courant Institute of Mathematical Sciences

Department of Biology

Center for Neural Science

Supported by NSF grant BNS0090159

Page 2:

SUMMARY

• Can the population density function (PDF) method be made into a practical time-saving computational tool in large-scale neural network modeling?

• Motivation for thinking about PDF methods

• General theoretical and practical issues

• The dimension problem in realistic single-neuron models

• Two dimension reduction methods

1) Moving eigenfunction basis (Knight, 2000)

only good news

2) Moment closure method (Cai et al., 2004)

good news; bad news; worse news; good news

• Example of a model neuron with a 2-D state space

• Future directions

Page 3:

Why Consider PDF Methods in Network Modeling?

• Synaptic noise makes neurons behave stochastically: synaptic failure; random sizes of unitary events; random synaptic delay.

• Important physiological role: mechanism for modulation of gain/kinetics of population responses to synaptic input; noise prevents artificial synchrony in spiking neuron models, as it does in the brain.

• Important to capture somehow the properties of noise in realistic models.

• Large number of neurons required for modeling physiological phenomena of interest, e.g. working memory; orientation tuning in primary visual cortex.

Page 4:

• Number of neurons is determined by the functional subunit; e.g. an orientation hypercolumn in V1:

• TYPICAL MODELS: ~ (1000 neurons)/(orientation hypercolumn) for input layer V1 (0.5 × 0.5 mm², roughly 0.25 × 0.25 deg²).

• REALITY: ~ 34,000 neurons, 75 million synapses

• Many hypercolumns are required to study some problems, e.g. dependence of spatial integration area on stimulus contrast.

Why Consider PDF Methods (continued)

Page 5:

Why PDF? (continued)

•Tracking the activity of ~10³–10⁴ neurons and ~10⁴–10⁶ synapses by direct computation taxes computational resources: time and memory.

e.g. 8 X 8 hypercolumn-model for V1 with 64,000 neurons (Jim Wielaard and collaborators, Columbia):

1 day of CPU time to simulate 4 seconds of real time.

•But stunning recent progress by Adi Rangan & David Cai

What to do?

Quest for the Holy Grail: a low-dimensional system of equations that approximates the behavior of a truly high-dimensional system.

Firing rate model (Dayan & Abbott, 2001): system of ODEs, or

PDF model (system of PDEs or integro-PDEs)?

Page 6:

The PDF Approach

•Large number of interacting stochastic units suggests a statistical mechanical approach.

•Lump similar neurons into discrete groups.

•Example: V1 hypercolumn. Coarse graining over: position; orientation preference; receptive-field structure (spatial phase); simple vs. complex; E vs. I may give ~ 50 neurons/population (~ tens OK for PDF methods).

•Each neuron has a set of dynamical variables that determines its state; e.g.

X ≡ (V, Gₑ, Gᵢ), for a leaky I&F neuron.

•Track the distribution of neurons over state space and firing rate for each population.

Page 7:

Rich History of PDF Methods in Computational Neuroscience

• Wilbur & Rinzel, 1983
• Kuramoto, 1991
• Abbott & van Vreeswijk, 1993
• Gerstner, 1995
• Knight et al., 1996
• Omurtag et al., 2000
• Sirovich et al., 2000
• Casti et al., 2002
• Cai et al., 2004
• Huertas & Smith, 2006

PDF Methods Recently Espoused and Tested as a Faster Alternative to Monte Carlo Simulations

Page 8:

PDF Theory

• Xₖ ≡ state variables (a vector) for population k

• ρ(x₁, x₂, ..., xₙ, t) ≡ joint probability density function for the state variables of populations 1, 2, ..., n:

∫_Ω ρ(x₁, x₂, ..., xₙ, t) dx₁ ··· dxₙ = Pr{ (X₁, X₂, ..., Xₙ) ∈ Ω at time t }

• If populations are not too densely coupled, ρ(x₁, x₂, ..., xₙ, t) factors into ρ₁(x, t) ρ₂(x, t) ··· ρₙ(x, t).

• Evolution equations for the individual population density functions,

∂ρₖ/∂t (x, t) = −∇ · Jₖ(x, t),

are coupled via population firing rates.

• rₖ(t) = total flux of probability across a surface in phase space, i.e. a surface integral of probability flux.

• Can only capture behavior characterized by population firing rates.

Page 9:

•Most applications of PDF methods as a computational tool have involved single-neuron models with a 1-D state space: instantaneous synaptic kinetics; V jumps abruptly up/down with each unitary excitatory/inhibitory synaptic input event.

•Synaptic kinetics play an enormously important role in determining neural network dynamics.

•Bite the bullet and include realistic synaptic kinetics.

•Problem with PDF methods: as underlying neuron model is made more realistic, dimension of the state space increases, so does the computation time to solve the PDF equations.

•Time saving advantage of PDF over (direct) MC vanishes.

•Minimal I&F model with synaptic kinetics has 3 state variables:

X ≡ (V, Gₑ, Gᵢ): voltage, excitatory and inhibitory conductances.

Minimal I&F Model: How Many State Variables?

Page 10:

Take Baby Steps: Introduce Dimensions One at a Time and See What We Can Do

• Consider an integrate & fire (I&F) neuron receiving excitatory

synaptic input only.

• Unitary excitatory postsynaptic conductance event γₑ(t) has a single-exponential time course:

γₑ(t) = (A/τₑ) exp(−t/τₑ), for t ≥ 0.
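Since the prefactor is A/τₑ, the area under one unitary event is exactly A, i.e. A is the total conductance delivered by a single synaptic event. A quick numerical sketch of that fact, using the μ_A value quoted on the parameter slide for A:

```python
import numpy as np

# Unitary event gamma_e(t) = (A / tau_e) * exp(-t / tau_e) for t >= 0.
# A = 2.44e-4 s is the mean unitary-event area from the parameter slide.
A = 2.44e-4
tau_e = 5e-3                                   # 5 ms decay time constant

t = np.linspace(0.0, 20.0 * tau_e, 200_001)    # integrate well into the tail
gamma_e = (A / tau_e) * np.exp(-t / tau_e)

# Trapezoid rule; since the integral of exp(-t/tau)/tau is 1, area ~ A.
dt = t[1] - t[0]
area = dt * (gamma_e.sum() - 0.5 * (gamma_e[0] + gamma_e[-1]))
print(area)
```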

Page 11:

dV/dt = −(1/τₘ)[(V − εᵣ) + Gₑ(t)(V − εₑ)]

dGₑ/dt = −(1/τₑ)Gₑ + Σₖ (Aₖ/τₑ) δ(t − Tₖ)

[Figure: sample trajectory in the (V, Gₑ) phase plane, with the V nullcline.]
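These two ODEs are what a direct Monte Carlo simulation integrates, one neuron at a time. A minimal Euler sketch for a single neuron driven by Poisson input (εₑ, νₑ, dt and T are illustrative assumptions; the other parameters follow the deck's parameter slide):

```python
import numpy as np

# Euler Monte Carlo for one conductance-based I&F neuron with Poisson
# excitatory input. eps_e and nu_e are assumed values, not from the slides.
rng = np.random.default_rng(0)

eps_r, v_th = -65.0, -55.0      # mV: reset/rest and threshold voltages
eps_e = 0.0                     # mV: excitatory reversal potential (assumed)
tau_m, tau_e = 20e-3, 5e-3      # s: membrane and synaptic time constants
mu_A = 2.44e-4                  # s: mean area of a unitary conductance event
nu_e = 3000.0                   # Hz: total excitatory input rate (assumed)
dt, T = 1e-4, 1.0               # s: time step and simulated duration

v, g = eps_r, 0.0
spikes = 0
for _ in range(int(T / dt)):
    n_ev = rng.poisson(nu_e * dt)               # synaptic arrivals this step
    g += -(g / tau_e) * dt + n_ev * (mu_A / tau_e)
    v += -((v - eps_r) + g * (v - eps_e)) * dt / tau_m
    if v >= v_th:                               # threshold crossing = spike
        spikes += 1
        v = eps_r                               # instantaneous reset
rate = spikes / T
print(rate)
```

With this drive the mean conductance is νₑ·μ_A ≈ 0.73 (in units of the leak conductance), well above threshold, so the neuron fires steadily.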

Page 12:

PDF EQUATIONS

v ≡ membrane voltage
g ≡ excitatory postsynaptic conductance
t ≡ time
νₑ(t) ≡ total rate of synaptic input (from same and/or other populations)

∂ρ/∂t (v, g, t) = −∇ · J(v, g, t)

J(v, g, t) = (J^V(v, g, t), J^G(v, g, t))

J^V(v, g, t) = −(1/τₘ)[(v − εᵣ) + g(v − εₑ)] ρ(v, g, t)

J^G(v, g, t) = −(1/τₑ) g ρ(v, g, t) + νₑ(t) ∫₀^g F̃_A(τₑ(g − g′)) ρ(v, g′, t) dg′,

where F̃_A is the complementary distribution function of the unitary event magnitude A.

r(t) = ∫₀^∞ J^V(v_th, g, t) dg.  Boundary condition: J^V(εᵣ, g, t) = J^V(v_th, g, t)

dρ⃗/dt = L(νₑ(t)) ρ⃗, for the discretized problem.

Page 13:

PDF vs. MC and Mean-Field for 2-D Problem.

PDF CPU time ≈ that of ~400 single uncoupled MC neurons.

τₑ = 5 ms; τₘ = 20 ms; μ_EPSP = 0.3 mV; εᵣ = −65 mV; v_th = −55 mV. CPU time: 0.8 s for PDF; 2 s per 1000 neurons for MC.

[Figure: firing-rate traces comparing PDF, mean-field, and MC with 1,000 and 100,000 neurons.]

Page 14:

Computation Time Comparison: PDF vs. Monte Carlo (MC):

PDF grows linearly; MC grows quadratically

50 neurons per population; 1 run; 25% connectivity
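The linear-vs-quadratic scaling follows from simple bookkeeping: with the connection probability fixed, synapse count grows as N², while the PDF method does one discretized-operator application per population, independent of population size. A sketch of that arithmetic (the cost constants and 625-point grid are illustrative assumptions):

```python
# Back-of-envelope cost model behind "PDF grows linearly; MC grows quadratically".

def mc_cost(n_neurons, p_connect=0.25):
    # Direct MC work per time step: update every neuron and process every
    # synapse; with connection probability p_connect, synapses ~ p * N^2.
    return n_neurons + p_connect * n_neurons ** 2

def pdf_cost(n_populations, grid_points=625):
    # PDF work per time step: one discretized-operator apply per population;
    # 625 = 25 x 25 grid points, an assumed 2-D discretization.
    return n_populations * grid_points

# Doubling the network (at 50 neurons/population) doubles the population
# count for PDF, but roughly quadruples the synapse count for MC.
mc_ratio = mc_cost(2000) / mc_cost(1000)
pdf_ratio = pdf_cost(40) / pdf_cost(20)
print(mc_ratio, pdf_ratio)
```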

Page 15:

•PDF Method is plenty fast for model neurons with a 2-D state space.

•More realistic models (e.g. with E and I input) require additional state variables.

•Explore dimension reduction methods.

•Use the 2-D problem as a test problem.

Page 16:

Dimension Reduction by Moving Eigenvector Basis: Bowdlerization of Bruce Knight's (2000) Idea

1) Discretize state-space variables: dρ⃗/dt = L(νₑ(t)) ρ⃗.

2) Approximate νₑ(t) as piecewise constant: tₖ ≡ k·Δt, but t is still continuous; νₑᵏ ≡ νₑ(t_{k+1/2});

dρ⃗/dt = L(νₑᵏ) ρ⃗, for tₖ < t < tₖ₊₁.

3) For each fixed νₑᵏ and tₖ < t < tₖ₊₁, use eigenvectors ψ⃗ⱼ of L as a basis set. Eigenvectors of L and Lᵀ are bi-orthogonal.

4) Equations for the coefficients aⱼ(t) of ψ⃗ⱼ are uncoupled first-order ODEs, with analytic exponential solutions.

5) Compute only the first few coefficients, corresponding to eigenvalues whose real parts are smallest in magnitude (the slowest-decaying modes); throw the rest away.
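The steps above can be sketched on a toy problem. The matrix below is a random generator matrix standing in for the discretized operator L at fixed νₑ (not a real PDF discretization); it shows the eigen-expansion, the analytic exponential evolution of the coefficients, and the effect of truncation:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 12

# Stand-in for the discretized operator L(nu): a random generator matrix
# (nonnegative off-diagonal rates, columns summing to zero, so total
# probability is conserved under d(rho)/dt = L rho).
L = rng.random((n, n))
np.fill_diagonal(L, 0.0)
L -= np.diag(L.sum(axis=0))

rho0 = rng.random(n)
rho0 /= rho0.sum()                     # initial probability vector

lam, Psi = np.linalg.eig(L)            # right eigenvectors: columns of Psi
Phi = np.linalg.inv(Psi)               # rows of Phi: bi-orthogonal left basis
a0 = Phi @ rho0                        # expansion coefficients at t = 0

def evolve(t, n_modes):
    # Keep the n_modes eigenvalues with largest real part (slowest decay);
    # each coefficient evolves analytically as a0_j * exp(lam_j * t).
    idx = np.argsort(lam.real)[::-1][:n_modes]
    return np.real(Psi[:, idx] @ (np.exp(lam[idx] * t) * a0[idx]))

t = 0.5
rho_full = expm(L * t) @ rho0          # reference: full matrix exponential
rho_all = evolve(t, n)                 # all modes: matches the reference
rho_few = evolve(t, 3)                 # truncated: a few slowest modes
err_all = np.abs(rho_all - rho_full).max()
err_few = np.abs(rho_few - rho_full).max()
print(err_all, err_few)
```

With all modes retained the spectral reconstruction reproduces the matrix exponential to round-off; the truncated version is the time saver, and its error shrinks as the discarded modes decay.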

Page 17:

Dimension Reduction by Moving Eigenvector Basis

Example with 1-D state space, instantaneous synaptic kinetics

•Only 3 eigenvectors for low, and 7 for high synaptic input rates.

•Large time steps

•Eigen-method is 60 times faster than full 1-D solution

Suggested by Knight, 2000.

Page 18:

Dimension Reduction by Moving Eigenvector Basis

Example with 2-D state space: state variables V & Ge

•Out of 625 eigenvectors: 10 for high, 30 for medium, and 60 for low synaptic input rates.

•Large time steps

•Eigen-method is 60 times faster than the full 2-D solution

Page 19:

Dimension Reduction by Moment Closure

• r(t) = ∫₀^∞ J^V(v_th, g, t) dg;  J^V(v, g, t) = −(1/τₘ)[(v − εᵣ) + g·(v − εₑ)] ρ(v, g, t)

• r(t) = −(1/τₘ)[(v_th − εᵣ) + μ_{G|V}(v_th, t)·(v_th − εₑ)] f_V⁽⁰⁾(v_th, t)

• The firing rate is determined by two functions of t and v only, evaluated at threshold voltage: f_V⁽⁰⁾(v, t) & μ_{G|V}(v, t).

• No need to know the full PDF, ρ(v, g, t).

• ∫₀^∞ (∂ρ/∂t)(v, g, t) dg → r.h.s. involving f_V⁽⁰⁾(v, t) & μ_{G|V}(v, t)

• ∫₀^∞ g (∂ρ/∂t)(v, g, t) dg → r.h.s. involving f_V⁽⁰⁾(v, t), μ_{G|V}(v, t) & μ_{G²|V}(v, t)

• Close the system by ansatz: σ²_{G|V}(v, t) = σ²_G(t)

⇒ μ_{G²|V}(v, t) = σ²_G(t) + μ²_{G|V}(v, t)
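The ansatz says the conditional variance of G given V does not depend on v. It holds exactly for jointly Gaussian (V, G), which gives a quick sanity check; the distribution below is hypothetical, not the actual I&F population density:

```python
import numpy as np

# Check the closure ansatz on a case where it is exact: for jointly
# Gaussian (V, G), var(G | V = v) is independent of v. All numbers here
# are illustrative assumptions.
rng = np.random.default_rng(2)
n = 2_000_000

v = rng.normal(-60.0, 2.0, n)                          # "voltage" samples, mV
g = 0.5 + 0.05 * (v + 60.0) + rng.normal(0.0, 0.1, n)  # correlated "conductance"

# Bin by v; every bin's conditional variance should be ~ 0.1**2, so
# mu_{G^2|V}(v) = sigma_G^2 + mu_{G|V}(v)^2 with a v-independent sigma_G^2.
edges = np.linspace(-63.0, -57.0, 7)
cond_vars = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (v >= lo) & (v < hi)
    mu = g[sel].mean()
    mu2 = (g[sel] ** 2).mean()
    cond_vars.append(mu2 - mu ** 2)    # var(G | v in bin)
print(cond_vars)
```

For the actual I&F density the conditional variance does vary with v, which is exactly where the ansatz becomes an approximation.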

Page 20:

Dimension Reduction by Moment Closure:

2nd Moment

Stimulus Firing Rate Response

Near-perfect agreement between results from dimension reduction by moment closure and the full 2-D PDF method.

τₑ = 5 ms; τₘ = 20 ms; μ_EPSP = 0.5 mV; εᵣ = −65 mV; v_th = −55 mV.

Page 21:

Dimension Reduction by Moment Closure:

3rd Moment

Response to Square-Wave Modulation of Synaptic Input Rate

3rd-moment closure performs better than 2nd at high input rates.

τₑ = 5 ms; τₘ = 20 ms; μ_EPSP = 0.5 mV; εᵣ = −65 mV; v_th = −55 mV.

Page 22:

•Dynamical solutions break down when synaptic input rates drop below ~ 1240 Hz, where the actual firing rate (determined by MC and the full 2-D solution) is ~ 60 spikes/s.

•Numerical problem or theoretical problem?

• Is moment closure problem ill-posed for some physiological parameters?

•Examine the more tractable steady-state problem.

Trouble with Moment Closure and Troubleshooting

Page 23:

Steady-State Moment Closure Problem: Existence Study

• A coupled pair of ODEs for f_V⁽⁰⁾(v) and μ_{G|V}(v) can be reduced to a single ODE with a boundary condition (B.C.).

• Firing rate r > 0 ⇒ −(1/τₘ)[(v − εᵣ) + μ_{G|V}(v)·(v − εₑ)] f_V⁽⁰⁾(v) > 0

⇒ μ_{G|V}(v) > (v − εᵣ)/(εₑ − v)

• q(v) ≡ μ_{G|V}(v) − (v − εᵣ)/(εₑ − v) > 0;  f_V⁽⁰⁾(v) = τₘ r / (q(v)·(εₑ − v))

dq/dv = −q (τₘ/τₑ) { q[1 + (τₑ/τₘ)(εₑ − εᵣ)/(εₑ − v)] − [μ_G − (v − εᵣ)/(εₑ − v)] } / [(εₑ − v)(q² − σ²_G)]

+ B.C.

Page 24:

Phase Plane Analysis of Steady-State Moment Closure Problem to Study Existence/Nonexistence of Solutions

• Introduce an auxiliary variable, s, and study the dynamical system:

dq/ds = −q (τₘ/τₑ) (1/(εₑ − v)) { q[1 + (τₑ/τₘ)(εₑ − εᵣ)/(εₑ − v)] − [μ_G − (v − εᵣ)/(εₑ − v)] }

dv/ds = q² − σ²_G

• Requirements for a (v(s), q(s)) trajectory to be a solution:

1) q ≠ σ_G at any point along the trajectory.

2) q = μ_G − (v − εᵣ)/(εₑ − v) at least once.

3) q(v) satisfies its boundary condition.
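The auxiliary (q, v) system can be integrated directly, which is how candidate trajectories like those on the next slides are generated. A minimal RK4 sketch; μ_G, σ_G, εₑ, the starting point, and the step size are assumed values, not the slides' data:

```python
import numpy as np

# RK4 integration of the auxiliary (q, v) phase-plane system.
tau_m, tau_e = 20e-3, 5e-3      # s: membrane and synaptic time constants
eps_r, eps_e = -65.0, 0.0       # mV: reset and excitatory reversal (assumed)
mu_G, sigma_G = 0.7, 0.2        # mean and s.d. of G (assumed)

def rhs(y):
    q, v = y
    brack = q * (1.0 + (tau_e / tau_m) * (eps_e - eps_r) / (eps_e - v)) \
            - (mu_G - (v - eps_r) / (eps_e - v))
    dq = -q * (tau_m / tau_e) / (eps_e - v) * brack
    dv = q * q - sigma_G ** 2
    return np.array([dq, dv])

def rk4(y, ds, n_steps):
    traj = [y.copy()]
    for _ in range(n_steps):
        k1 = rhs(y); k2 = rhs(y + 0.5 * ds * k1)
        k3 = rhs(y + 0.5 * ds * k2); k4 = rhs(y + ds * k3)
        y = y + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(y.copy())
    return np.array(traj)

# Start with q above sigma_G: while the trajectory stays there,
# dv/ds = q^2 - sigma_G^2 > 0, so v must increase monotonically.
traj = rk4(np.array([0.5, eps_r]), ds=1e-4, n_steps=200)
q_vals, v_vals = traj[:, 0], traj[:, 1]
print(v_vals[0], v_vals[-1])
```

The qualitative checks (v monotone while q > σ_G, and q avoiding the line q = σ_G) are exactly the existence requirements listed above.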

Page 25:

Phase Plane and Solution at High Synaptic Input Rate

νₑ = 1341 Hz; τₑ = 5 ms; τₘ = 20 ms; μ_EPSP = 0.5 mV

[Figure: phase plane showing the solution trajectory, the curve it must intersect, and the curve (q = σ_G) it must not intersect.]

Page 26:

[Figure: two candidate trajectories; each either misses the curve it must intersect or crosses the curve it must not intersect.]

Steady-State Solution Doesn’t Exist for Low Synaptic Input Rate

Page 27:

Promise of a New Reduced Kinetic Theory with Wider Applicability, Using Moment Closure

A numerical method on a fixed voltage grid, which introduces a boundary layer with numerical diffusion, finds solutions in good agreement with direct simulations (Cai, Tao, Shelley & McLaughlin, 2004).

Page 28:

SUMMARY

• PDF methods show promise

• Small population size OK, but connectivity cannot be dense

• Realistic synaptic kinetics introduce state-space variables

• Time saving benefit lost when state space dimension is high

• Dimension reduction methods could maintain efficiency:

• Moving eigenvector basis speeds up 2-D PDF method 60 X

• Moment closure method (unmodified) has existence problems

• Numerical implementations suggest moment closure can work well

• Challenge is to find methods that work in ≥ 3 dimensions

Page 29:

THANKS

• Bruce Knight
• Charles Peskin
• David McLaughlin
• David Cai
• Adi Rangan
• Louis Tao
• E. Shea-Brown
• B. Doiron
• Larry Sirovich

Page 30:

Edges of parameter space: εᵣ = −70 mV, μ_EPSP = 0.5 mV.

Minimal input rate νₑ for which a solution exists, as a function of τₑ:

τₑ      | minimal νₑ   | r from 2-D PDM
5 ms    | 1804.11 Hz   | 51.35 Hz
2 ms    | 2377.84 Hz   | 74.22 Hz
1 ms    | 2584.88 Hz   | 85.82 Hz
0.5 ms  | 2760.47 Hz   | 109.16 Hz
0.2 ms  | 539.05 Hz    | <0.01 Hz
0.1 ms  | 535.19 Hz    | <0.01 Hz

Minimal EPSP with fixed mean conductance μ_G = νₑ μ_A: with εᵣ = −70 mV, fix μ_G at the mean-field threshold and increase the EPSP until a solution exists:

τₑ    | min EPSP | r from 2-D PDM
5 ms  | 7.47 mV  | 61.15 Hz
2 ms  | 4.89 mV  | 65.70 Hz
1 ms  | 3.40 mV  | 69.24 Hz

Page 31:
Page 32:
Page 33:

Parameter Values

• Voltage values: v ∈ [εᵣ, v_th].

• With enough excitatory input, a neuron's voltage will cross the threshold voltage, v_th.

• Upon crossing threshold, a spike is said to occur, and the membrane voltage is reset (instantaneously, for the sake of simplicity) to εᵣ.

• Typical parameter values (our values):

εᵣ = −65 mV
v_th = −55 mV
τₘ = 20 ms
τₑ = 5 ms
μ_A = 2.44 × 10⁻⁴ s, giving μ_EPSP = 0.5 mV

Page 34:
Page 35:

f_V⁽ᵏ⁾(v, t) ≡ ∫₀^∞ gᵏ ρ(v, g, t) dg = ∫₀^∞ gᵏ f_{G|V}(v, g, t) f_V⁽⁰⁾(v, t) dg = μ_{Gᵏ|V}(v, t) f_V⁽⁰⁾(v, t)
(0)(v,t)