

Extending the Neural Engineering Framework for Nonideal Silicon Synapses

Aaron R. Voelker∗, Ben V. Benjamin†, Terrence C. Stewart∗, Kwabena Boahen† and Chris Eliasmith∗
{arvoelke, tcstewar, celiasmith}@uwaterloo.ca {benvb, boahen}@stanford.edu

∗Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada.
†Bioengineering and Electrical Engineering, Stanford University, Stanford, CA, U.S.A.

Abstract—The Neural Engineering Framework (NEF) is a theory for mapping computations onto biologically plausible networks of spiking neurons. This theory has been applied to a number of neuromorphic chips. However, within both silicon and real biological systems, synapses exhibit higher-order dynamics and heterogeneity. To date, the NEF has not explicitly addressed how to account for either feature. Here, we analytically extend the NEF to directly harness the dynamics provided by heterogeneous mixed-analog-digital synapses. This theory is successfully validated by simulating two fundamental dynamical systems in Nengo using circuit models validated in SPICE. Thus, our work reveals the potential to engineer robust neuromorphic systems with well-defined high-level behaviour that harness the low-level heterogeneous properties of their physical primitives with millisecond resolution.

I. THE NEURAL ENGINEERING FRAMEWORK

The field of neuromorphic engineering is concerned with building specialized hardware to emulate the functioning of the nervous system [1]. The Neural Engineering Framework (NEF; [2]) complements this goal with a theory for “compiling” dynamical systems onto spiking neural networks, and has been used to develop the largest functioning model of the human brain, capable of performing various perceptual, cognitive, and motor tasks [3]. This theory allows one to map an algorithm, expressed in software [4], onto some neural substrate realized in silicon [5]. The NEF has been applied to neuromorphic chips including Neurogrid [5], [6] and a VLSI prototype from ETH Zurich [7].

However, the NEF assumes that the postsynaptic current (PSC) induced by a presynaptic spike is modelled by a first-order lowpass filter (LPF); that is, the PSC is obtained by convolving an impulse representing the incoming spike with an exponentially decaying impulse-response. Furthermore, the exponential time-constant is assumed to be the same for all synapses within the same population. In silicon, synapses are neither first-order nor homogeneous, and spikes are not represented by impulses.¹ Synapse circuits have parasitic elements that result in higher-order dynamics, transistor mismatch introduces variability from circuit to circuit, and spikes are represented by pulses with finite width and height. Previously, these features restricted the overall accuracy of the NEF within neuromorphic hardware (e.g., in [5], [6], [7]).

The silicon synapses that we study here are mixed-analog-digital designs that implement a pulse-extender [9] and a first-order LPF [10], modelled as a second-order LPF to account

¹ These statements also hold for real biological systems [8].

for parasitic capacitances. We also account for the variability (i.e., heterogeneity) introduced by transistor mismatch in the extended pulse’s width and height, and in the LPF’s two time-constants. In §II, we demonstrate how to extend the NEF to directly harness these features for system-level computation. This extension is tested by software simulation in §IV using the circuit models described in §III.

II. EXTENDING THE NEURAL ENGINEERING FRAMEWORK

The NEF consists of three principles for describing neural computation: representation, transformation, and dynamics [2]. This framework enables the mapping of dynamical systems onto recurrently connected networks of spiking neurons. We begin by providing a self-contained overview of these three principles using an ideal first-order LPF. We then extend these principles to the heterogeneous pulse-extended second-order LPF, and show how this maps onto a target neuromorphic architecture.

A. Principle 1 – Representation

The first NEF principle states that a vector x(t) ∈ R^k may be encoded into the spike-trains δ_i of n neurons with rates:

r_i(x) = G_i [α_i e_i · x(t) + β_i],  i = 1 … n,  (1)

where G_i is a neuron model whose input current to the soma is the linear encoding α_i e_i · x(t) + β_i, with gain α_i > 0, unit-length encoding vector e_i (the rows of E ∈ R^{n×k}), and bias current β_i. The state x(t) is typically decoded from spike-trains (see Principle 2) by convolving them with a first-order LPF that models the PSC triggered by spikes arriving at the synaptic cleft. We denote this filter as h(t) in the time-domain and as H(s) in the Laplace domain:

h(t) = (1/τ) e^{−t/τ}  ⇐⇒  H(s) = 1/(τs + 1).  (2)

Traditionally, the same time-constant is used for all synapses projecting to a given population.²
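Principle 1 can be sketched in a few lines of NumPy. The rectified-linear rate model standing in for G_i, and the parameter ranges below, are illustrative assumptions only (the paper itself uses silicon-neuron models):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n, k = 64, 2  # number of neurons, represented dimensions

# Unit-length encoders e_i (rows of E), gains alpha_i > 0, and biases beta_i.
E = rng.standard_normal((n, k))
E /= np.linalg.norm(E, axis=1, keepdims=True)
alpha = rng.uniform(50.0, 150.0, size=n)
beta = rng.uniform(-20.0, 20.0, size=n)

def rates(x, G=lambda J: np.maximum(J, 0.0)):
    """r_i(x) = G_i[alpha_i e_i . x + beta_i], Eq. (1)."""
    J = alpha * (E @ np.asarray(x, dtype=float)) + beta  # somatic input currents
    return G(J)

r = rates([0.5, -0.3])
```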

B. Principle 2 – Transformation

The second principle is concerned with decoding some desired vector function f : S → R^k of the represented vector. Here, S is the domain of the vector x(t) represented via Principle 1—typically the unit k-cube or the unit k-ball. Let r_i(x) denote the expected firing-rate of the i-th neuron in

² See [11] for a recent exception.


Fig. 1. Standard Principle 3 (see (6)) mapped onto an ideal architecture to implement a general nonlinear dynamical system (see (5)). The state-vector x is encoded in a population of neurons via Principle 1. The required signal w is approximated by τu plus the recurrent decoders for τf(x) + x applied to δ, such that the first-order LPF correctly outputs x. The output vector y is approximated using the decoders D^{g(x)}.

response to a constant input x encoded via (1). To account for noise from spiking and extrinsic sources of variability, we introduce the noise term η ∼ N(0, σ²). Then the matrix D^{f(x)} ∈ R^{n×k} that optimally decodes f(x) from the spike-trains δ encoding x is obtained by solving the following problem (via regularized least-squares):

D^{f(x)} = argmin_{D ∈ R^{n×k}} ∫_S ‖f(x) − Σ_{i=1}^{n} (r_i(x) + η) d_i‖² d^k x  (3)

=⇒ Σ_{i=1}^{n} (δ_i ∗ h)(t) d_i^{f(x)} ≈ (f(x) ∗ h)(t).  (4)

The quantity in (4) may then be encoded via Principle 1 to complete the connection between two populations of neurons.³
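In discrete form, (3) is ordinary ridge regression over sampled evaluation points. The sketch below assumes rectified-linear tuning curves and a particular noise scale σ; both are illustrative stand-ins for the paper's silicon-neuron rates:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200                                   # neurons
xs = np.linspace(-1, 1, 501)[:, None]     # sample points over S = [-1, 1]

# Rectified-linear tuning curves with random signs, intercepts, and gains --
# a stand-in for the expected rates r_i(x) of Principle 1.
e = rng.choice([-1.0, 1.0], size=n)       # 1-D encoders
c = rng.uniform(-0.9, 0.9, size=n)        # intercepts
gain = rng.uniform(100.0, 200.0, size=n)
A = np.maximum(0.0, gain * (xs * e - c))  # rate matrix, shape (samples, n)

f_target = xs                             # decode the identity f(x) = x

# Discrete, regularized form of (3): D = (A^T A + lambda I)^{-1} A^T f.
sigma = 0.01 * A.max()                    # assumed noise level (a free choice)
lam = len(xs) * sigma ** 2
D = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f_target)

rmse = np.sqrt(np.mean((A @ D - f_target) ** 2))
```

With enough neurons covering S, the decoded estimate A @ D tracks f(x) closely; the regularization trades decoding accuracy against robustness to the noise term η.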

C. Principle 3 – Dynamics

The third principle addresses the problem of implementing the following nonlinear dynamical system:

ẋ = f(x) + u,  u(t) ∈ R^k
y = g(x).  (5)

Since we take the synapse (2) to be the dominant source of dynamics for the represented vector [2, p. 327], we must essentially “convert” (5) into an equivalent system where the integrator is replaced by a first-order LPF. This transformation is accomplished by driving the filter h(t) with:

w := τẋ + x = (τf(x) + x) + (τu)  (6)
=⇒ (w ∗ h)(t) = x(t),  (7)

so that convolution with h(t) achieves the desired integration. Therefore, the problem reduces to representing x(t) in a population of neurons using Principle 1, while recurrently decoding w(t) using the methods of Principle 2 (Fig. 1).
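As a minimal numerical illustration of (6)–(7): for the pure integrator (f(x) = 0, with ideal decoding and no spiking), driving a discretized first-order LPF with w = τu + x reproduces the running integral of u under Euler integration:

```python
import numpy as np

dt, tau, T = 1e-4, 0.1, 2.0
t = np.arange(0, T, dt)
u = np.sin(2 * np.pi * t)        # arbitrary input signal

a = 0.0                          # LPF state; plays the role of the decoded x
xs = np.empty_like(t)
for i, ui in enumerate(u):
    w = a + tau * ui             # Eq. (6) with f(x) = 0: w = tau*u + x
    a += dt * (w - a) / tau      # first-order LPF: tau * da/dt = w - a
    xs[i] = a

ref = np.cumsum(u) * dt          # true integral of u under the same scheme
```

The filter's leak is exactly cancelled by feeding its own output back, leaving pure integration of u.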

D. Extensions to Silicon Synapses

Consider an array of m heterogeneous pulse-extended second-order LPFs (in the Laplace domain):

H_j(s) = γ_j (1 − e^{−ε_j s}) s^{−1} / ((τ_{j,1} s + 1)(τ_{j,2} s + 1)),  j = 1 … m,  (8)

where ε_j is the width of the extended pulse, γ_j is the height of the extended pulse, and τ_{j,1}, τ_{j,2} are the two time-constants of the LPF. H_j(s), whose circuit is described in

³ The effective weight-matrix in this case is W = E (D^{f(x)})^T.

Fig. 2. Using extended Principle 3 (see (11)) to implement a general nonlinear dynamical system (see (5)) on a neuromorphic architecture. The matrix representation Φ is linearly transformed by Γ_j to drive the j-th synapse. Dashed lines surround the silicon-neuron array that filters and encodes x into spike-trains (see Fig. 3).

§III, is an extended pulse γ_j (1 − e^{−ε_j s}) s^{−1} convolved with a second-order LPF ((τ_{j,1} s + 1)(τ_{j,2} s + 1))^{−1}. These higher-order effects result in incorrect dynamics when the NEF is applied using the standard Principle 3 (e.g., in [5], [6], [7]), as shown in §IV.
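The synapse model (8) can be simulated directly in the time-domain: a pulse-extender that turns each unit of input into a rectangle of width ε and height γ, followed by two cascaded first-order LPFs. The sketch below (using the Fig. 4 mean parameters; the Euler scheme is an assumption) checks that the impulse-response area equals εγ, the DC gain of H_j(s):

```python
import numpy as np

dt = 1e-5
eps, gam = 0.4e-3, 1.0e3         # pulse width 0.4 ms, height 1.0/ms (Fig. 4 means)
tau1, tau2 = 31e-3, 0.8e-3       # LPF time-constants (Fig. 4 means)
steps = int(0.3 / dt)

spike = np.zeros(steps)
spike[0] = 1.0 / dt              # unit-area impulse at t = 0

# Pulse-extender: moving integral over the last eps seconds, scaled by gam.
m = int(round(eps / dt))
cs = np.concatenate([np.zeros(m), np.cumsum(spike) * dt])
pe = gam * (cs[m:] - cs[:-m])    # rectangle of width eps, height gam

# Two cascaded first-order LPFs (forward Euler).
y1 = y2 = 0.0
area = 0.0
for i in range(steps):
    y1 += dt * (pe[i] - y1) / tau1
    y2 += dt * (y1 - y2) / tau2
    area += y2 * dt              # accumulate the impulse-response area
```

Since H_j(0) = εγ, the total area of the response should come out near εγ = 0.4, unlike the ideal first-order LPF (2), whose DC gain is 1.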

From (7), observe that we must drive the j-th synapse with some signal w_j(t) that satisfies the following (W(s) denotes the Laplace transform of w(t), and we omit j for clarity):

X(s)/W(s) = H(s)
⇐⇒ W(s) (1 − e^{−εs}) s^{−1} = γ^{−1}(1 + (τ_1 + τ_2)s + τ_1 τ_2 s²) X(s).  (9)

To solve for w in practice, we first substitute 1 − e^{−εs} = εs − (ε²s²)/2 + O(ε³s³), and then convert back to the time-domain to obtain the following approximation:

w = (εγ)^{−1}(x + (τ_1 + τ_2)ẋ + τ_1 τ_2 ẍ) + (ε/2)ẇ.  (10)

Next, we differentiate both sides of (10):

ẇ = (εγ)^{−1}(ẋ + (τ_1 + τ_2)ẍ + τ_1 τ_2 x⃛) + (ε/2)ẅ,

and substitute this ẇ back into (10) to obtain:

w = (εγ)^{−1}(x + (τ_1 + τ_2 + ε/2)ẋ + (τ_1 τ_2 + (ε/2)(τ_1 + τ_2))ẍ) + (εγ)^{−1}(ε/2) τ_1 τ_2 x⃛ + (ε²/4)ẅ.

Finally, we make the approximation (εγ)^{−1}(ε/2) τ_1 τ_2 x⃛ + (ε²/4)ẅ ≪ w, which yields the following solution to (9):

w_j = Φ Γ_j,  (11)
Φ := [x  ẋ  ẍ],
Γ_j := (ε_j γ_j)^{−1} [1,  τ_{j,1} + τ_{j,2} + ε_j/2,  τ_{j,1} τ_{j,2} + (ε_j/2)(τ_{j,1} + τ_{j,2})]^T,

where ε_j γ_j is the area of the extended pulse.⁴ We compute the time-varying matrix representation Φ in the recurrent connection (plus an input transformation) via Principle 2 (similar

⁴ This form for (6) is Φ = [x  ẋ] and Γ_j = [1, τ]^T. Further generalizations are explored in [12].


Fig. 3. Silicon synapse and soma. The synapse consists of a pulse-extender (M_{P1,2}) and a LPF (M_{L1–6}). The pulse-extender converts subnanosecond digital-pulses—representing input spikes—into submillisecond current-pulses (I_PE). Then the LPF filters these current-pulses to produce the synapse’s output (I_syn). The soma integrates this output current on a capacitor (C_S) and generates a subnanosecond digital-pulse—representing an output spike—through positive-feedback (M_{S2–5}). This pulse is followed by a reset pulse, which discharges the capacitor. This schematic has been simplified for clarity.

to Fig. 1). We then drive the j-th synapse with its time-invariant linear transformation Γ_j (Fig. 2).
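As a concrete sketch of (11), each Γ_j can be computed directly from the four synapse parameters; the numeric values below use the Fig. 4 means and are otherwise illustrative:

```python
import numpy as np

def gamma_j(eps, gam, tau1, tau2):
    """Per-synapse transform Gamma_j of Eq. (11), mapping the matrix
    representation Phi = [x, xdot, xddot] to the drive w_j = Phi @ Gamma_j."""
    return np.array([
        1.0,
        tau1 + tau2 + eps / 2,
        tau1 * tau2 + (eps / 2) * (tau1 + tau2),
    ]) / (eps * gam)

# Nominal parameters (Fig. 4 means): eps = 0.4 ms, gam = 1.0/ms,
# tau1 = 31 ms, tau2 = 0.8 ms.
G = gamma_j(0.4e-3, 1.0e3, 31e-3, 0.8e-3)
```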

A more refined solution may be found by expanding 1 − e^{−εs} to the third order, which adds ε²/12 to the third coordinate of Γ_j.⁵ However, this refinement does not improve our results. Some remaining details are addressed in §IV.
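The truncation behind (10)–(11) amounts to expanding εs/(1 − e^{−εs}) around s = 0; a quick symbolic check (a SymPy sketch, not part of the original derivation) recovers the coefficients 1, ε/2, and ε²/12 cited above:

```python
import sympy as sp

s, eps = sp.symbols('s epsilon', positive=True)

# Inverting the pulse-extender factor (1 - e^{-eps*s})/s in (9) amounts to
# multiplying by eps*s / (1 - exp(-eps*s)); expand it around s = 0.
series = sp.series(eps * s / (1 - sp.exp(-eps * s)), s, 0, 3).removeO()
c0 = series.coeff(s, 0)   # expected: 1
c1 = series.coeff(s, 1)   # expected: eps/2
c2 = series.coeff(s, 2)   # expected: eps**2/12
```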

III. CIRCUIT DESCRIPTION

We now describe the silicon synapse and soma circuits (Fig. 3) that we use to validate our extensions to the NEF. An incoming pulse discharges C_P, which I_ε subsequently charges [9]. As a result, M_P2 turns on momentarily, producing an output current-pulse I_PE with width and height:

ε = C_P V_γ / I_ε,  γ = I_γ,

where V_γ is the gate-voltage at which M_P2 can no longer pass I_γ. This current-pulse is filtered to obtain the synapse’s output, I_syn, whose dynamics obey:

τ_1 dI_syn/dt + I_syn = A I_PE,

where τ_1 = C_L1 U_T / I_τ1, A = I_τ2 / I_τ1, and U_T is the thermal voltage. The above assumes all transistors operate in the subthreshold region and ignores all parasitic capacitances [10]. If we include the parasitic capacitance C_L2, a small-signal analysis reveals second-order dynamics:

τ_1 τ_2 d²I_syn/dt² + (τ_1 + τ_2) dI_syn/dt + I_syn = A I_PE,

where τ_2 = C_L2 U_T / (κ I_τ2), and κ is the subthreshold-slope coefficient. This second-order LPF and the aforementioned pulse-extender are modelled together in the Laplace domain by (8), after scaling by A.

The dynamics of the soma are described by:

(C_S U_T / (κ I_m)) dI_m/dt = I_m + I_syn,

where I_m is the current in the positive-feedback loop, assuming all transistors operate in the subthreshold region [13], [14].

⁵ In general, the coefficients 1, ε/2, ε²/12, and so on, correspond to the [0/q] Padé approximants of Σ_{i=0}^{q} (−ε)^i / (i+1)!.

Fig. 4. Log-normally distributed parameters for the silicon synapses. τ_1 (µ ± σ = 31 ± 6.4 ms) and τ_2 (0.8 ± 0.11 ms) are the two time-constants of the second-order LPF; ε (0.4 ± 0.06 ms) and γ (1.0 ± 0.29 ms^{−1}) are the widths and heights of the extended pulse, respectively (see (8)).

This equation may be solved to obtain the trajectory of I_m, and hence the steady-state spike-rate:

r(I_syn) = (κ I_syn / (C_S U_T)) · [ln((I_syn/I_0 + 1) / (I_syn/I_thr + 1))]^{−1},

where I_0 and I_thr are the values of I_m at reset and threshold, respectively. These values correspond to the leakage current of the transistors and the peak short-circuit current of the inverter, respectively.
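Coded directly, the steady-state rate expression looks as follows; the circuit constants below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def soma_rate(I_syn, I_0=1e-12, I_thr=1e-6, C_S=1e-13, U_T=25.6e-3, kappa=0.7):
    """Steady-state spike-rate r(I_syn); all constants are assumed placeholders.
    r = kappa*I_syn / (C_S*U_T) * [ln((I_syn/I_0 + 1)/(I_syn/I_thr + 1))]^(-1)
    """
    I_syn = np.asarray(I_syn, dtype=float)
    ratio = (I_syn / I_0 + 1.0) / (I_syn / I_thr + 1.0)
    return (kappa * I_syn) / (C_S * U_T) / np.log(ratio)

I = np.logspace(-11, -7, 50)  # sweep of synaptic currents (A)
r = soma_rate(I)
```

Since I_0 < I_thr, the log term is positive and the rate grows monotonically with I_syn, approaching a linear (current-to-frequency) relationship for large inputs.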

IV. VALIDATION WITH CIRCUIT MODELS

We proceed by validating the extended NEF neuromorphic architecture (see Fig. 2), implemented using the circuit models (see §III), on two fundamental dynamical systems: an integrator and a controlled oscillator. We use Nengo 2.3.1 [4] to simulate this neuromorphic system with a time-step of 50 µs. Test data is sampled independently of the training data used to optimize (3) via regularized least-squares. For comparison with (5)—simulated via Euler’s method—spike-trains are filtered using (2) with τ = 10 ms. For each trial, the somatic parameters and synaptic parameters (Fig. 4) are randomly sampled from distributions generated using a model of transistor mismatch (validated in SPICE). These parameters determine each linear transformation Γ_j, as defined in (11).

A. Integrator

Consider the one-dimensional integrator, ẋ = u, y = x. We represent the scalar x(t) in a population of 512 modelled silicon neurons. To compute the recurrent function, we use (11), and assume ẋ = u is given. To be specific, we optimize (3) for D^x, and then add u and u̇ to the second and third coordinates of the representation, respectively, to decode:

Φ = [x, u, u̇].

To evaluate the impact of each extension to the NEF, we simulate our network under five conditions: using standard Principle 3, accounting for second-order dynamics, accounting for the pulse-extender, accounting for the transistor mismatch in τ_1, and our full extension (Fig. 5). We find that the last reduces the error by 63%, relative to the first, across a wide range of input frequencies (5–50 Hz).
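The integrator mapping can also be sanity-checked without neurons: drive a single simulated pulse-extended second-order synapse with w_j = ΦΓ_j, where Φ = [x, u, u̇], and verify that its output tracks x = ∫u. This is an idealized sketch (perfect decoding, no spiking, Fig. 4 mean parameters, Euler integration assumed):

```python
import numpy as np

dt, T = 1e-5, 0.6
t = np.arange(0, T, dt)
eps, gam, tau1, tau2 = 0.4e-3, 1.0e3, 31e-3, 0.8e-3   # Fig. 4 means

w0 = 2 * np.pi * 20.0            # 20 Hz test input
u = np.cos(w0 * t)
x = np.sin(w0 * t) / w0          # ideal integral of u
du = -w0 * np.sin(w0 * t)        # derivative of u

# Synapse drive w_j = Phi . Gamma_j with Phi = [x, u, u-dot]  (Eq. (11)).
g = np.array([1.0,
              tau1 + tau2 + eps / 2,
              tau1 * tau2 + (eps / 2) * (tau1 + tau2)]) / (eps * gam)
w = g[0] * x + g[1] * u + g[2] * du

# Simulate the synapse itself: pulse-extender, then two first-order LPFs.
m = int(round(eps / dt))
cs = np.concatenate([np.zeros(m), np.cumsum(w) * dt])
pe = gam * (cs[m:] - cs[:-m])    # moving integral of width eps, scaled by gam

y1 = y2 = 0.0
out = np.empty_like(w)
for i in range(len(w)):
    y1 += dt * (pe[i] - y1) / tau1
    y2 += dt * (y1 - y2) / tau2
    out[i] = y2

tail = t > 0.3                   # skip the initial transient
nrmse = np.sqrt(np.mean((out[tail] - x[tail]) ** 2)) / np.std(x[tail])
```

After the transient, the synapse output follows the ideal integral to within a few percent, with the residual dominated by the O(ε³s³) truncation in (10).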

B. Controlled 2D Oscillator

Consider a controllable two-dimensional oscillator,

ẋ = f(x) + u,  y = x,
f(x_1, x_2, x_3) = [−ω x_3 x_2,  ω x_3 x_1,  0]^T,


Fig. 5. Effect of each NEF extension, applied to a simulated integrator given frequencies ranging from 5–50 Hz (mean spike-rate 143 Hz). The standard approach (Principle 3) achieves a normalized RMSE of 0.203 with 95% CI of [0.189, 0.216] compared to the ideal, while our extension (Full) achieves 0.073 with 95% CI of [0.067, 0.080]—a 63% reduction in error—averaged across 25 trials and 10 frequencies. The largest improvement comes from accounting for the second-order dynamics, while a smaller improvement comes from accounting for the pulse-extended dynamics. Accounting for the transistor mismatch on its own is counter-productive.

Fig. 6. Output of controlled 2D oscillator with ω = 5 Hz (mean spike-rate 140 Hz). The control (x_3^∗) is changed from 0.5 to −0.5 at 1 s to reverse the direction of oscillation. The standard Principle 3 (y) achieves a normalized RMSE of 0.188 with 95% CI of [0.166, 0.210] compared to the ideal (y), while our extension (y) achieves 0.050 with 95% CI of [0.040, 0.063]—a 73% reduction in error—averaged across 25 trials.

where ω is the angular frequency in radians per second, x_3 controls this frequency multiplicatively, and x_3^∗ is the fixed-point target supplied via input u_3 = x_3^∗ − x_3. The inputs u_1 and u_2 initiate the oscillation with a brief impulse. We represent the three-dimensional state-vector x(t) in a population of 2048 modelled silicon neurons.⁶ To compute the recurrent function, we again use (11). For this example, u(t) ≈ 0 for most t (apart from initial transients and changes to the target x_3^∗), and so ẍ = J_f(x) · ẋ + u̇ ≈ J_f(x) · f(x) (where J_f denotes the Jacobian of f). We then optimize (3) for D^x, D^{f(x)}, and D^{J_f(x)·f(x)}, and add u to the second column of the matrix representation to decode:

Φ = [x, f(x) + u, J_f(x) · f(x)].

We find that this solution reduces the error by 73% relative to the standard Principle 3 solution (Fig. 6).
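The matrix representation for the oscillator can be sketched directly from the definitions of f and its Jacobian, with u̇ ≈ 0 assumed; the closed form J_f(x)·f(x) = [−ω² x_3² x_1, −ω² x_3² x_2, 0]^T makes a convenient check:

```python
import numpy as np

omega = 2 * np.pi * 5.0          # angular frequency (rad/s) for a 5 Hz oscillation

def f(x):
    """f(x1, x2, x3) = [-omega*x3*x2, omega*x3*x1, 0]."""
    x1, x2, x3 = x
    return np.array([-omega * x3 * x2, omega * x3 * x1, 0.0])

def Jf(x):
    """Jacobian of f, computed analytically."""
    x1, x2, x3 = x
    return np.array([[0.0, -omega * x3, -omega * x2],
                     [omega * x3, 0.0, omega * x1],
                     [0.0, 0.0, 0.0]])

def phi(x, u):
    """Columns of Phi = [x, f(x) + u, Jf(x) . f(x)], assuming u-dot ~ 0."""
    return np.column_stack([x, f(x) + u, Jf(x) @ f(x)])

x = np.array([0.6, -0.2, 0.5])
P = phi(x, np.zeros(3))
```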

V. SUMMARY

We have provided a novel extension to the NEF that directly harnesses the dynamics of heterogeneous pulse-extended second-order LPFs. This theory is validated by software simulation of a neuromorphic system, using circuit models with

⁶ We use this many neurons in order to minimize the noise from spiking.

parameter variability validated in SPICE, for two fundamental examples: an integrator and a controlled oscillator. When compared to the previous standard approach, our extension is shown to reduce the error by 63% and 73% for the integrator and oscillator, respectively. Thus, our theory enables a more accurate mapping of nonlinear dynamical systems onto a recurrently connected neuromorphic architecture using nonideal silicon synapses. Furthermore, we derive our theory independently of the particular neuron model and encoding. This advance helps pave the way toward understanding how nonideal physical primitives may be systematically analyzed and then subsequently exploited to support useful computations in neuromorphic hardware.

ACKNOWLEDGEMENTS

This work was supported by CFI and OIT infrastructure, the Canada Research Chairs program, NSERC Discovery grant 261453, ONR grants N000141310419 and N0001415l2827, and NSERC CGS-D funding. The authors thank Wilten Nicola for inspiring (9) with a derivation for the double-exponential.

REFERENCES

[1] C. Mead, Analog VLSI and Neural Systems. Boston, MA: Addison-Wesley, 1989.

[2] C. Eliasmith and C. H. Anderson, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. MIT Press, 2003.

[3] C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen, “A large-scale model of the functioning brain,” Science, vol. 338, no. 6111, pp. 1202–1205, 2012.

[4] T. Bekolay, J. Bergstra, E. Hunsberger, T. DeWolf, T. C. Stewart, D. Rasmussen, X. Choo, A. R. Voelker, and C. Eliasmith, “Nengo: A Python tool for building large-scale functional brain models,” Frontiers in Neuroinformatics, vol. 7, 2013.

[5] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, and K. Boahen, “Silicon neurons that compute,” in International Conference on Artificial Neural Networks, vol. 7552. Springer, 2012, pp. 121–128.

[6] S. Menon, S. Fok, A. Neckar, O. Khatib, and K. Boahen, “Controlling articulated robots in task-space with spiking silicon neurons,” in 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics. IEEE, 2014, pp. 181–186.

[7] F. Corradi, C. Eliasmith, and G. Indiveri, “Mapping arbitrary mathematical functions and dynamical systems to neuromorphic VLSI circuits for spike-based neural computation,” in 2014 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2014, pp. 269–272.

[8] A. Destexhe, Z. F. Mainen, and T. J. Sejnowski, “Synaptic currents, neuromodulation and kinetic models,” The Handbook of Brain Theory and Neural Networks, vol. 66, pp. 617–648, 1995.

[9] J. V. Arthur and K. Boahen, “Recurrently connected silicon neurons with active dendrites for one-shot learning,” in International Joint Conference on Neural Networks (IJCNN), vol. 3. IEEE, 2004, pp. 1699–1704.

[10] W. Himmelbauer and A. G. Andreou, “Log-domain circuits in subthreshold MOS,” in Proceedings of the 40th Midwest Symposium on Circuits and Systems, vol. 1. IEEE, 1997, pp. 26–30.

[11] K. E. Friedl, A. R. Voelker, A. Peer, and C. Eliasmith, “Human-inspired neurorobotic system for classifying surface textures by touch,” Robotics and Automation Letters, vol. 1, no. 1, pp. 516–523, 2016.

[12] A. R. Voelker and C. Eliasmith, “Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells,” 2017, manuscript in preparation.

[13] E. Culurciello, R. Etienne-Cummings, and K. Boahen, “A biomorphic digital image sensor,” IEEE Journal of Solid-State Circuits, vol. 38, no. 2, pp. 281–294, 2003.

[14] P. Gao, B. V. Benjamin, and K. Boahen, “Dynamical system guided mapping of quantitative neuronal models onto neuromorphic hardware,” IEEE Transactions on Circuits and Systems, vol. 59, no. 10, pp. 2383–2394, 2012.