

Course G: Basic neurosciences 2

Computational neuroscience

Tutorial
Emil Ratko‐Dehnert
Summer term 2011

About me

• Studied Mathematics and Psychology (LMU)

• Doctoral student in experimental psychology 

since 2009, working for

• Research Interests:

– Visual attention and memory

– Formal modelling and systems theory

– Philosophy of mind, Epistemology



Organisational Infos

• Room: CIP‐Pool (Leopoldstr. 11b)

• Dates: 01.06., 15.06. and 22.06. (WED)

• Time: 16:00 – 18:00

• Slides, materials and further information on my homepage (google „ratko‐dehnert“)


Tutorial concept and aims

• Introducing some basic mathematical concepts relevant for computational neuroscience

• Trying to take away the terror of mathematical formalizations and notations

• Gaining hands‐on experience with Simbrain, an elaborate tool for simulating neural networks and their dynamics


Outline

01.06.: ODEs and Matrix Calculus

15.06.: Introducing Simbrain, labs on propagation, 

node rules and vectors in neural networks

22.06.: Pattern association, labs on Hebbian Learning 

and Sensori‐motor control; Evaluation;


Grading of tutorial

• 1st Session

– Handout with exercises to be solved in pairs

– Will be collected by the start of the 2nd Session

• 2nd and 3rd Session

– Solving Simbrain labs in pairs; commenting on handout in pairs



Literature

1. Essential Mathematics for Neuroscience (Fabian Sinz)

2. Mathematics for Computational Neuroscience & 

Imaging (John Porrill)

3. Simbrain: A visual framework for neural network 

analysis and education (Jeff Yoshimi)


Session 1.1

DIFFERENTIAL EQUATIONS



Calculus

• In Calculus one is interested in

– solutions of functions

– derivatives of functions

– min/ max points

– limits of series

– integrals of functions, etc.




Differential Equations

• A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders, e.g.

du/dx + c·u = 2x²,   u(x) = ?

Applications

• Which function's slope matches the direction of the flow field (defined by the ODE) at every point?

• Has furthered most parts of physics

– classical mechanics

– statistical mechanics

– dynamical systems

• But also applied fields like financial, biological and neuro sciences


Classification

• Ordinary Differential Equations (ODEs)

– Growth processes (linear, 1st order, constant coefficients)

– Newton's 2nd law of motion (linear, 2nd order, constant coeff.)

• Partial Differential Equations (PDEs)

– Heat Equation (linear, 2nd order):  ∂u/∂t = α · ∂²u/∂x²

– Wave Equation (linear, 2nd order)

• Stochastic Differential Equations (SDEs)

– Black‐Scholes Formula (linear and non‐linear, 2nd order)


ODEs vs. PDEs vs. SDEs

                      ODEs    PDEs    SDEs
Solutions?
Unique?
Analytic?/Explicit?
Stability?
Linearity?

THE HODGKIN‐HUXLEY MODEL



Hodgkin‐Huxley's model of a neuron

(circuit model with membrane potential u(t))

du/dt = −k · u(t) + m · I(t)

du/dt = −(1/(RC)) · u(t) + (1/C) · I(t)

Deriving the model

• This is well‐described by a very simple differential equation called the leaky integrator equation

• Because every system obeying this equation can be seen as a model for it, one can use a more concrete example to illustrate the derivation and system behaviour...


Filling a bath, when the plughole is open

Flow in

• u(t) = volume of water at time t

• I(t) = rate of flow in (volume/time)



Flow out

• Now more physics know‐how is needed

outflow ∝ pressure ∝ depth ∝ volume

• flow out = k · u(t)

• k: constant scalar parameter

• k > 0: water flows out (not in)

Evaluate change rate

du/dt = inflow − outflow = I(t) − k · u(t)

Now we know:

• Dynamical system (evolves over time t)

• Differential equation (relates du/dt to u)

• Linear in u

• First order (highest derivative is du/dt)

• Constant coefficients (1, k)
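As a sketch of how such an equation behaves, the leaky integrator can be integrated numerically with the forward Euler method (the function name and all parameter values below are illustrative choices, not from the tutorial):

```python
def simulate_leaky_integrator(inflow, k, u0, dt, steps):
    """Forward-Euler integration of du/dt = inflow(t) - k*u(t)."""
    u = u0
    for n in range(steps):
        t = n * dt
        u = u + dt * (inflow(t) - k * u)
    return u

# Constant inflow with leakage: u settles at the steady state I/k,
# where inflow and outflow cancel (du/dt = 0).
u_end = simulate_leaky_integrator(inflow=lambda t: 2.0, k=0.5,
                                  u0=0.0, dt=0.01, steps=2000)
print(u_end)  # close to 2.0 / 0.5 = 4.0
```

Note how the leakage term −k·u produces saturation instead of unbounded growth, which is exactly what distinguishes this system from the exact integrator below.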


Going back to Hodgkin‐Huxley

du/dt = −k · u(t) + m · I(t),  with k = 1/(RC) and m = 1/C:

du/dt = −(1/(RC)) · u(t) + (1/C) · I(t)

Exact integrator

• Special case: k = 0 (i.e. no leakage)

du/dt = I(t)   ⇒ (FTC)   u(t) = ∫₀ᵗ I(τ) dτ + u0

• For constant input I(t) = Imax:

u(t) = u0 + [Imax · τ]₀ᵗ = u0 + Imax · t − Imax · 0 = u0 + Imax · t


Sanity checks

• When will the bath be full?

u0 + Imax · t = umax   ⇒   t_full = (umax − u0) / Imax
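The filling-time formula can be checked directly (the litres-per-minute numbers are made up for illustration):

```python
def t_full(u0, u_max, I_max):
    """Time until the bath is full, solving u0 + I_max*t = u_max for t."""
    return (u_max - u0) / I_max

# e.g. an empty 150-litre tub filling at 10 litres per minute:
print(t_full(u0=0.0, u_max=150.0, I_max=10.0))  # 15.0 (minutes)
```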

Transient behaviour

• Special case: setting I(t) = 0

du/dt = −k · u(t)   ⇒   u(t) = A · exp(−k·t)
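The exponential-decay solution can be verified against the Euler scheme used earlier; a minimal sketch (the chosen u0, k, t and step size are illustrative):

```python
import math

def euler_decay(u0, k, dt, steps):
    """Forward-Euler solution of du/dt = -k*u after `steps` steps."""
    u = u0
    for _ in range(steps):
        u = u + dt * (-k * u)
    return u

# Compare with the analytic solution u(t) = A*exp(-k*t), here A = u0:
u0, k, t, dt = 1.0, 0.5, 4.0, 1e-4
numeric = euler_decay(u0, k, dt, steps=round(t / dt))
analytic = u0 * math.exp(-k * t)
print(numeric, analytic)  # both close to exp(-2)
```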


Neo: What is the Matrix?

Trinity: The answer is out there, Neo, and it's looking for you, and it will find you if you want it to.

(The Matrix; 1999)

Session 1.2

MATRIX CALCULUS



What will we do

• Examples where vector and matrix calculus plays a role

• Reprise matrices and their operations

• Apply matrix notation to neural networks

McCulloch‐Pitts neuron

y(t+1) = 1,  if Σ_i wi · xi(t) ≥ θ
y(t+1) = 0,  if Σ_i wi · xi(t) < θ
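The threshold rule above is simple enough to implement in a few lines; a sketch (the factory-function style and the AND-gate example are my own illustrative choices):

```python
def mcculloch_pitts(weights, theta):
    """Return a McCulloch-Pitts neuron: it fires (1) iff the weighted
    input sum reaches the threshold theta, otherwise it stays silent (0)."""
    def neuron(x):
        s = sum(w * xi for w, xi in zip(weights, x))
        return 1 if s >= theta else 0
    return neuron

# A two-input neuron with unit weights and theta = 2 computes logical AND:
and_gate = mcculloch_pitts([1, 1], theta=2)
print([and_gate([a, b]) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```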


Inner product of vectors

⟨x, w⟩ = x · w = x1·w1 + x2·w2 + … + xn·wn = Σ_i xi · wi

with x = (x1, x2, …, xn)ᵀ and w = (w1, w2, …, wn)ᵀ
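In code the sum Σ_i xi·wi is a one-liner; a quick sketch with illustrative numbers (not from the slides):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# Inner product: sum of element-wise products
by_hand = sum(xi * wi for xi, wi in zip(x, w))
print(by_hand, np.dot(x, w))  # 4.5 4.5
```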

Linear associator

• Here one would have to compute inner products for n·m pairs of inputs and outputs

(network with inputs u1, u2, …, um, outputs v1, v2, …, vn and weights w11, w12, …)

• Alternatively ...


Matrix interpretation

• Regard the weights as a coefficient matrix W and the inputs and outputs as vectors

u = W · v

• But what does this mean and how does one compute that?

What is a matrix?

• Def: A matrix A = (a_{i,j}) is an array of numbers or variables

• It has m rows and n columns (dim = (m, n))

A = [ a_{1,1}  a_{1,2}  …  a_{1,n} ]
    [ a_{2,1}  a_{2,2}  …  a_{2,n} ]
    [   ⋮        ⋮             ⋮   ]
    [ a_{m,1}  a_{m,2}  …  a_{m,n} ]


Connection to vectors

• Trivially, every vector can be interpreted as a matrix

• e.g. x = (x1, x2, …, xn)ᵀ is an (n, 1)‐matrix

• Thus all the following statements hold true for vectors as well

Equality of matrices

• Two matrices are equal, if and only if (iff) they have the same dimension (m, n) and all the elements are identical:

A = B  ⇔  A_{i,j} = B_{i,j}  for all i, j


How are matrices added up?

• Addition of two (2, 2)‐matrices A, B is performed component‐wise:

(A + B)_{i,j} = A_{i,j} + B_{i,j}

• Note that „+“ is commutative, i.e. A + B = B + A
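Component-wise addition and its commutativity can be checked in a couple of lines (the matrix entries below are illustrative values, not the ones from the slide):

```python
import numpy as np

A = np.array([[1, 4],
              [0, 2]])
B = np.array([[2, -1],
              [1,  1]])

# Component-wise addition of two (2, 2)-matrices
print(A + B)                          # [[3 3] [1 3]]
print(np.array_equal(A + B, B + A))   # True: "+" is commutative
```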

Scalar Multiplication

• Scalar multiplication of a (2, 2)‐matrix A with a scalar c is also component‐wise, e.g.

2 · [ 1   4 ]   =   [ 2   8 ]
    [ 0  −2 ]       [ 0  −4 ]

• Again commutativity, i.e. c·A = A·c
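The same scalar-multiplication example, sketched in code:

```python
import numpy as np

A = np.array([[1, 4],
              [0, -2]])

# Scalar multiplication scales every entry of the matrix
print(2 * A)                         # [[ 2  8] [ 0 -4]]
print(np.array_equal(2 * A, A * 2))  # True: c*A = A*c
```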


Matrix multiplication

• Matrix multiplication of matrices C (2‐by‐3) and D (3‐by‐2) yields E (2‐by‐2):

E_{i,j} = Σ_k C_{i,k} · D_{k,j}

• e.g. for the first row (1, 0, 2) of C and the first column (3, 2, 5) of D:

E_{1,1} = 1·3 + 0·2 + 2·5 = 13
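A sketch of the (2, 3) × (3, 2) product in code; the second row of C and the second column of D are filled with illustrative values, since only the first row/column appear in the example above:

```python
import numpy as np

C = np.array([[1, 0, 2],
              [3, 1, 4]])
D = np.array([[3, 1],
              [2, 0],
              [5, 1]])

E = C @ D          # (2, 3) @ (3, 2) -> (2, 2)
print(E.shape)     # (2, 2)
print(E[0, 0])     # 1*3 + 0*2 + 2*5 = 13
```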

Falk scheme

For A · B = C, write B above and to the right of A; each entry c_{i,j} then appears at the intersection of row i of A and column j of B:

             b11 b12 b13
             b21 b22 b23
             b31 b32 b33

a11 a12 a13  c11 c12 c13
a21 a22 a23  c21 c22 c23
a31 a32 a33  c31 c32 c33


Matrix specifics

!Warning!

One can only multiply matrices if their dimensions correspond, i.e. (m, n) · (n, k) → (m, k)

• And generally: if A·B exists, B·A need not

• Furthermore: if both A·B and B·A exist, they need not be equal!
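Non-commutativity is easy to demonstrate with two square matrices (the entries below are an illustrative choice):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

# Both products exist (square matrices), but they differ:
print(A @ B)                          # [[2 1] [1 1]]
print(B @ A)                          # [[1 1] [1 2]]
print(np.array_equal(A @ B, B @ A))   # False
```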

Transposition

• Transposition of a 2‐by‐3 matrix A → Aᵀ swaps rows and columns, e.g.

A = [ 1  2  4 ]      Aᵀ = [ 1  0 ]
    [ 0  6  9 ]           [ 2  6 ]
                          [ 4  9 ]

• It holds that (Aᵀ)ᵀ = A.
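The transpose and the rule (Aᵀ)ᵀ = A, sketched in code with the matrix from the example:

```python
import numpy as np

A = np.array([[1, 2, 4],
              [0, 6, 9]])

print(A.T)                        # rows become columns: [[1 0] [2 6] [4 9]]
print(A.T.shape)                  # (3, 2) -- dimensions are swapped
print(np.array_equal(A.T.T, A))   # True: transposing twice gives A back
```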


Matrix inversion

Square case, i.e. dim = (n, n)

• If A is regular (determinant is non‐zero), then A⁻¹ exists, with A · A⁻¹ = A⁻¹ · A = I_n

Non‐square matrices (dim = (n, m))

• A with dim = (n, m) has a right inverse B with dim = (m, n) if the rank of A is n, and a left inverse B′ with dim = (m, n) if the rank of A is m.

• It holds that A·B = I_n and B′·A = I_m

Methods of inversion

• Gauss‐Jordan elimination algorithm

• Cramer's Rule

• For dim = (2, 2) there exists a short and explicit formula:

A = [ a  b ]   ⇒   A⁻¹ = 1/(ad − bc) · [  d  −b ]
    [ c  d ]                           [ −c   a ]

where ad − bc = det(A)
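The explicit 2×2 formula translates directly into code; a sketch (the function name and the test matrix are my own choices):

```python
def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the explicit 2x2 formula.
    Raises ValueError if the matrix is singular (det = 0)."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# [[2, 1], [1, 1]] has det = 2*1 - 1*1 = 1, so its inverse is [[1, -1], [-1, 2]]
print(inv2x2(2, 1, 1, 1))  # [[1.0, -1.0], [-1.0, 2.0]]
```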


Significance of matrices

• Matrix calculus is relevant for

– Algebra: solving systems of linear equations (Ax = b)

– Statistics: LLS, covariance matrices of random variables

– Calculus: differentiation of multidimensional functions

– Physics: mechanics, linear combinations of quantum states

and many more

Back to linear associators

u = W · v

(network with inputs u1, u2, …, um, outputs v1, v2, …, vn and weights w11, w12, …)


Back to linear associators

• We need different operations to address issues like

– What will be the output u of a given input v, when we know the configuration of the weights W?  --> u = W·v

– For a given output u and weight matrix W, what was the input v?  --> W⁻¹·u = v

– Compute the weight matrix W for desired input/output associations v/u.  --> u·v⁻¹ = W
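The first two operations can be sketched with a small square example; the weight matrix below is an illustrative choice (chosen invertible so the input can be recovered), not one from the tutorial:

```python
import numpy as np

# Illustrative invertible weight matrix and input vector
W = np.array([[2.0, 0.0],
              [1.0, 1.0]])
v = np.array([1.0, 3.0])

u = W @ v                        # forward pass: output from input
v_back = np.linalg.inv(W) @ u    # recover the input from the output
print(u, v_back)                 # [2. 4.] [1. 3.]
```

For non-square or singular W, the inverse does not exist and one would have to fall back on the one-sided inverses from the matrix-inversion slide.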

Thank you!
