
A Model of fast local homeostatic plasticity

Hafsteinn Einarsson 1,*, Marcelo Matheus Gauy 1, Johannes Lengler 1 and Angelika Steger 1,2

1 Institute of Theoretical Computer Science, Department of Computer Science, ETHZ, Zürich, Switzerland
2 Collegium Helveticum, Zürich, Switzerland
Correspondence*: Hafsteinn Einarsson, [email protected]

ABSTRACT

Homeostasis is necessary to keep the level of activity in neural networks within a healthy regime. Synaptic scaling and other experimentally observed mechanisms apply on a time scale of hours. In contrast, learning happens on a time scale of seconds to minutes. Therefore it is conjectured that some form of fast homeostasis is necessary. We propose a novel model realising fast-acting synaptic homeostasis based on a spike-timing dependent metaplasticity rule. The model operates in the presence of oscillating input, and it effectively normalises the expected number of spikes in an up phase of the input rhythm. We present the model in two settings: an abstract one which allows rigorous analysis, and a more detailed one with leaky integrate-and-fire neurons.

Keywords: homeostasis, STDP, oscillations, synchrony, synapse memory, metaplasticity

1 INTRODUCTION

Long-term synaptic modifications are essential for learning and memory (Sjöström et al., 2008). The postulate of Donald Olding Hebb for synaptic learning (Hebb, 1949), later popularised as "cells that fire together, wire together" (Schatz, 1992), says that concurrently active neurons should strengthen their synaptic connections. The postulate is by now widely accepted, but the biological mechanisms behind it are still not completely understood.

Also, a straightforward implementation of Hebb's postulate has undesirable effects: without any negative feedback the weights continue to grow, which drives the firing rate of the postsynaptic neuron to abnormal levels (Yger and Gilson, 2015). Consequently, a homosynaptic Hebbian learning rule requires bounded synapse weights or some compensatory mechanism to stabilise neuron firing rates (Von der Malsburg, 1973; Oja, 1982; Bienenstock et al., 1982; Miller and MacKay, 1994; Abbott and Nelson, 2000; Turrigiano, 2008).

To some extent the aforementioned effect can be achieved by well-studied spike-timing dependent plasticity (STDP) mechanisms. STDP is known to sustain a stable background state in balanced networks (Rubin et al., 2001; Van Rossum et al., 2000; Song et al., 2000; Kempter et al., 2001, 1999), and it is known that STDP can regulate activity (Pfister et al., 2006). However, many of these mechanisms rely on some form of global control of the postsynaptic neuron over the short-term dynamics of its incoming synapses. For example, a postsynaptic spike triggers a small negative weight change in all the incoming synapses (Kempter et al., 1999, 2001), which effectively normalises the spiking rate.


Other proposed mechanisms require some form of fast (time scale ~100 s) synaptic scaling to achieve competition between the synapses (Van Rossum et al., 2000), which again normalises spiking rates. Another common assumption of STDP models is that each pre-post spike pair induces a weight change on the synapse which, even though it might be small, is perhaps unnecessarily unstable (Kempter et al., 1999, 2001; Song et al., 2000). Such behaviour does not match experimental results, where weight changes are not observed to be instantaneous (Yger and Gilson, 2015).

It has been observed that there is a negative correlation between the long-term potentiation (LTP) produced at individual unitary synapses and the initial synaptic strength. These observations have led to the question of whether the plasticity versus stability conundrum (Sjöström et al., 2008) could be resolved at the synapse level: "It remains an intriguing possibility, however, that metaplasticity or synaptic scaling, or both, can act locally, perhaps by normalising synapse strength in parts of the dendritic tree" (Sjöström et al., 2008). The existence of such local homeostatic mechanisms has been suggested based on experimental evidence (Turrigiano, 2008; Watt and Desai, 2010; Zenke et al., 2013).

In this paper we confirm that such a mechanism can be realised using a simple STDP rule under the assumption that the input is oscillatory. This challenges the belief that regulation of neuronal excitability is difficult if synapses are modified independently by Hebbian rules (Abbott and Nelson, 2000). Our rule differs from previous mechanisms since it does not directly normalise the rate of the neuron. Instead the rule controls when the neuron spikes within an up phase (a phase of increased input and membrane depolarisation) of an oscillatory input signal. In this way, it normalises the average number of spikes per up phase, rather than the output rate (since the rate also depends on the duration of the average up and down phases). Still, the proposed mechanism keeps both synaptic input weights and spiking rate in balance: it reliably prevents positive feedback loops. For a short summary of the mechanism see Figure 1.

We realise the normalisation property by assuming that each synapse has a memory trace which allows it to estimate how many of the most recent STDP-relevant spike pairs were pre-post events (the presynaptic spike happened before the postsynaptic spike), and how many of them were post-pre events (the postsynaptic spike happened earlier), see Figure 1 (c). We refer to these pairs as STDP events. Plasticity is modulated by the state of the memory trace and allows individual synapses to sample whether the input weight needs to increase or decrease. For a discussion of the tradeoff between plasticity and stability see Section 4.2. This mechanism can be interpreted as a form of metaplasticity. It has a similar structure to previous models which modulate plasticity using dopamine (Izhikevich, 2007) or time-averaged voltage (Clopath et al., 2010), but the resulting effects are different.

We present the mechanism in a setting with conductance-based leaky integrate-and-fire (LIF) neurons (Meffin et al., 2004). We model the input to the neurons as an inhomogeneous Poisson process where the input rate alternates between a high rate (up phase) and a low rate (down phase). Note that this is consistent with the observation that the subthreshold potential of neurons oscillates between a state close to the action potential threshold (up state) and a state close to the resting potential (down state) (Okun and Lampl, 2008; Haider et al., 2006; Wilson, 2008; Ernst and Pawelzik, 2011). This type of rhythmic input plays a key role in the communication-through-coherence hypothesis, which posits that two populations in the brain communicate when they are phase-locked (Thut et al., 2012; Fell and Axmacher, 2011). Oscillations in single neurons and neural populations have been studied extensively for neuron models based on ordinary differential equations, see (Ashwin et al., 2016) for a review. Oscillations have even been observed in the motor cortex during non-rhythmic activity (Churchland et al., 2012). See Section 4.1 for a discussion. For each up phase in which the target neuron spikes, the synapses receive a feedback signal of whether they spiked before or after the postsynaptic neuron.


The synapses store the feedback signals in a memory trace, an exponentially decaying (unspecified) substance which is updated following STDP events. The trace represents an estimate of whether the input weight needs to increase or decrease. The synapses are binary. To prevent too many synapses from changing their weights within a short time, the success of a weight change attempt is stochastic.

For a mathematical treatment, we analyse in the appendix a simplified limit case in which the up phases are very short. In this setting each input neuron can spike at most once per up phase and the membrane potential is not leaky. We prove that the input weight converges with exponential speed to a stable state. Consequently homeostasis is achieved within seconds to minutes for biologically plausible parameter regimes. We also discuss extensions which allow heterogeneous and multimodal weight distributions.

Other mechanisms for short-term homeostasis have been proposed in the literature. We discuss the alternatives and relate them to our model in Section 4.3.

2 RESULTS

The key feature of our approach is a mechanism that allows a synapse to base its behaviour (increasing or decreasing its weight) solely on the relative timing of pre- and postsynaptic spikes. To achieve this, each synapse is equipped with a memory trace, m(t), represented by a scalar. The memory is updated similarly to a standard STDP protocol: spikes of the presynaptic neuron that happen in a (short) window before the spike of the postsynaptic neuron add one to the memory; those after the postsynaptic spike subtract one. This protocol is executed only for the first spike of the postsynaptic neuron in an up phase, which the synapse can identify by not considering spikes that happen too quickly after a previous one. When the memory trace lies outside a certain target interval, a weight change of the synapse may be triggered which drives the timing of pre- and postsynaptic spikes back to the target regime. Such changes are only executed with a certain (small) probability in order to prevent too many synapses from changing their weight in a short time. The detailed description of the synapse is deferred to Section 2.1, cf. also Figure 1 (c) and (d).

We illustrate the properties of such a synapse in two scenarios. In the first one we study a single postsynaptic neuron that receives input from an input population of, say, d neurons. Synapses have binary weights (we refer to them as being either weak or strong). The synapses can quickly – in the order of seconds to minutes – normalise the number of strong synapses, cf. Figure 1 (e). In fact, the synapses actually normalise the relative spike time of the first spike of the postsynaptic neuron in an up phase. More precisely, the synapses will regulate the number of strong synapses such that the first spike of the postsynaptic neuron occurs roughly at time r · λU, where λU denotes the length of the up phase and 0 < r < 1 is determined by the parameters of the synapse. Figure 2 shows that this value r is invariant under variations of the input, in particular of the length of the up and down phases and the number of input neurons, cf. also Section 2.2. Similarly, the number of output spikes per up phase changes only very mildly even when the input varies widely. The learning paradigm is highly robust against changes of the input or of the parameters, where changing the latter leads to a different point of attraction for r and for the number of output spikes per up phase, cf. Section B.

In the second scenario we study a feed-forward setup with multiple layers (see Figure 1 (a)). The results are similar to the single neuron case, with the relative spike times having a sharper concentration for each subsequent layer (Figures 5 and 6). In particular the signal becomes more synchronised as it is propagated through the feed-forward structure. In deep layers this leads to a multimodal spike rate during each up phase, with the number of peaks being roughly 1/r. More details are given in Section 2.2.


2.1 Single neuron setup

We start by giving some intuition behind the properties and features of the synapse. This intuition comes from studying the synapse in a more abstract model which is suitable for mathematical analysis. It can be thought of as a limiting case of the main model where up phases are very short and each neuron spikes exactly once. In the abstract model the target neuron spikes after it has received exactly θ spikes over strong synapses within an up phase – weak synapses do not contribute. Each synapse perceives whether the presynaptic neuron spikes before (pre-post) or after (post-pre) the postsynaptic one within the up phase, and it keeps a memory trace which estimates the fraction of pre-post pairs.

If this memory trace is outside a target interval, [τ1, τ0] (see Figure 3 (c)), then the synapse can change its weight; otherwise the weight remains fixed. Weight changes can be attempted after observing a pre-post or a post-pre spike pair, and they are stochastic in the sense that they do not reliably lead to a weight change. Weak synapses can increase their weight with probability pw→s (if the memory trace is larger than τ0) and strong synapses can decrease their weight with probability ps→w (if the memory trace is smaller than τ1). As the postsynaptic neuron spikes after receiving input from the first θ strong synapses, roughly θd/ds spike pairs are of type pre-post (and add a +1 to the memory), while the others are post-pre (and add a −1 to the memory). After some time the memory value will thus be around 2θ/ds − 1. By choosing an appropriate target interval we can thus influence the number of strong synapses which, in turn, determines how quickly the postsynaptic neuron spikes within an up phase. Further details regarding the abstract setting are given in Section A, where we also provide a detailed quantitative analysis.
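The prediction that the memory settles around 2θ/ds − 1 can be checked with a few lines of simulation. Below is a minimal Monte Carlo sketch of the abstract volley model for one tracked strong synapse; the values of d, ds and θ are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): d inputs, d_s strong, threshold theta.
d, d_s, theta, n_phases = 200, 40, 10, 20000

signals = np.empty(n_phases)
for k in range(n_phases):
    arrival = rng.permutation(d)                        # spike order within the up phase
    t_post = np.flatnonzero(arrival < d_s)[theta - 1]   # post fires on the theta-th strong spike
    my_pos = np.flatnonzero(arrival == 0)[0]            # tracked strong synapse (input 0)
    signals[k] = 1.0 if my_pos <= t_post else -1.0      # pre-post -> +1, post-pre -> -1

print(signals.mean(), "vs", 2 * theta / d_s - 1)        # ~ -0.5 for these values
```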

For the synapse we generalise the above description so that it can also handle up phases with more than one spike. We also introduce a decay parameter γ < 1 for the memory so that it adapts more quickly to a changing number of strong synapses. An update to the memory corresponds to a decay by γ and an addition of the signal (+1 or −1). A detailed description of this synapse is given in Section 3.

The number of synapses whose memory trace lies outside the target interval [τ1, τ0], along with the weight change success parameters ps→w and pw→s, determines how many weight changes we observe in every up phase of a fixed input rate function, as well as the time to converge to a stable state. This relation, for ps→w and pw→s fixed, is shown in Figure 4 (c). It shows a tradeoff between plasticity and stability, which we discuss in more detail in Section 4.2. We define out(ds, d) to be the expected number of synapses whose memory trace lies outside the target interval [τ1, τ0] for ds strong input synapses. Figure 4 (b) shows the dependence on ds for different widths of the target interval shown in Figure 4 (a). Note that out(ds, d) decreases exponentially with increasing width of the target interval. Therefore we can easily choose parameters such that a region around the stable state induces almost no weight changes at all, thus avoiding unnecessary fluctuations of synaptic strengths. If the total input weight to the postsynaptic neuron is outside the desired steady state, the synapses ensure that in the order of seconds or minutes the total input weight moves back to the steady state, cf. Figure 1 (e). The effect of choosing ps→w and pw→s too large in combination with a small target interval can be plasticity overshooting, shown in Figure 4 (d).

2.2 Robustness against input variations

Figure 13 illustrates the robustness of our mechanism. We vary the lengths of the up and down phases λU, λD, the number d of inputs, and the input noise. In addition, we also study robustness against inputs which are not completely in phase, i.e. for each input neuron we shift the rate function by a constant x chosen uniformly at random from the interval [0, σ] for some σ > 0. The effects of parameter variations are summarised below.


The rate of the output neuron is proportional to λD and λU (see Figures 13c and 13b). The relative spike time r and the number of spikes per up phase stay invariant under changes to the input parameters, except for lower order effects from λU, d and the noise strength. These effects are discussed in Section B.2. Lastly, small random phase shifts of the rate function, independently for each input neuron, did not significantly affect the process qualitatively or quantitatively.

2.3 Feed-forward dynamics

We study properties of our synapse in a layered network by feeding the output of the first layer as input to a second layer. The setup is shown in Figure 1 (a) and we refer to the first three layers as populations I, A and B. All synapses have identical parameters. We compare convergence for two different parameter setups and two different input rates; the results are shown in Figure 5. For a fixed parameter setup the rates of populations A and B after convergence are independent of variations in the input rate. The input weights, however, do depend on the input rate in order to produce the desired output rate. The weights are quickly adjusted even for sharp changes of the input rate (see green curve in (c)).

The rate function of each layer is shown in Figure 6 for a six layer deep network. With each subsequent layer the spikes become more synchronous. Increased synchrony has, for example, been observed in in-vitro feed-forward populations (Reyes, 2003). Due to the time complexity of simulating a deep network of many layers, we instead simulated each pair of consecutive layers individually, as sketched below. The input rate function of the i-th layer was sampled using the output spike train of the (i−1)-th layer; for i = 1 we use our standard input rate function. Note that the up phase shifts to the right by a small amount for every new layer due to synaptic delays and input integration. We correct for the former term so the shift only accounts for the time until the first spike can be observed. The results for a deeper network structure are presented in Section B.4.
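A minimal sketch of this layer-by-layer procedure: the output spikes of layer i−1 are binned into a population rate function which then drives the inhomogeneous Poisson input of layer i. The helper name and the 1 ms bin width are our assumptions.

```python
import numpy as np

def rate_from_spikes(spike_times_ms, n_neurons, t_max_ms, bin_ms=1.0):
    """Estimate a population rate function (Hz per neuron) from spike times.

    Hypothetical helper mirroring the paper's procedure: the output spike
    train of layer i-1 yields the input rate function of layer i.
    """
    bins = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=bins)
    return counts / n_neurons / (bin_ms / 1000.0)   # spikes/s per neuron
```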

2.4 All-to-all rule

The spike pairs which constitute the potentiation and depression signals are defined with respect to the first postsynaptic spike within an up phase. This choice is not necessary for the mechanism to work. The principle of sampling the expected spike time of the postsynaptic neuron in an up phase also applies if we consider all pre-post and post-pre spike pairs within an up phase as potentiation and depression signals. Figure 7 compares the memory value for these two rules in a static setting where synapses have fixed weights. The figure shows that the distinguished spike rule has an advantage: it enables the memory trace to discriminate more easily between different numbers of strong input synapses.

3 MATERIAL AND METHODS

In this section we present the details of the model and its parameters. For reference, Table 1 (a) contains parameter values which are common to all sections.

3.1 Input and network structure

We connect the neurons layer by layer in a feed-forward manner, see Figure 1 (a). The bottom layer consists of the input population, denoted by I. The connections from layer I to the second layer, denoted by A, are random and their density is controlled by the parameter pIA.

The input population spikes according to an inhomogeneous Poisson process, see Figure 1 (b). The process is controlled by the input parameters H, L, λU and λD. The input rate oscillates between up phases of rate H Hz and length λU ms, and down phases of rate L Hz and length λD ms. For most figures we set H = 40 and L = 0.


In addition to the spikes from the previous layer, each neuron receives a 1000 Hz noise input with weight 1 nS.
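For concreteness, the following sketch samples one input neuron's spike train from this up/down process. H = 40 Hz and L = 0 Hz come from the text, while λU = 30 ms and λD = 70 ms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def up_down_spike_train(H, L, lam_u, lam_d, n_cycles):
    """Sample one neuron's spike times (ms) from the up/down Poisson input.

    H, L are rates in Hz; lam_u, lam_d are phase lengths in ms. A sketch of
    the paper's input model; phase lengths below are assumptions.
    """
    spikes, t = [], 0.0
    for _ in range(n_cycles):
        for rate, dur in ((H, lam_u), (L, lam_d)):
            n = rng.poisson(rate * dur / 1000.0)         # expected spikes in phase
            spikes.extend(t + np.sort(rng.uniform(0.0, dur, n)))
            t += dur
    return np.array(spikes)

train = up_down_spike_train(H=40.0, L=0.0, lam_u=30.0, lam_d=70.0, n_cycles=100)
```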

3.2 Neuron model

The neurons are conductance-based leaky integrate-and-fire (LIF) neurons (Dayan and Abbott, 2003). We use the NEST simulator (Gewaltig and Diesmann, 2007) (NEST, RRID:SCR_002963) and the neuron model iaf_cond_exp. The subthreshold membrane potential is given by the following differential equation

C dV/dt = −gL(V(t) − EL) − ge(t)(V(t) − Ee).

C = 250 pF is the membrane capacitance, EL = −70 mV is the leak reversal potential, gL = 16.67 nS is the leak conductance, Ee = 0 mV is the excitatory reversal potential and ge(t) is the excitatory synaptic conductance. The excitatory synaptic conductance is increased for every spike and decays exponentially. That is, for neuron j which receives input from the neurons in the set Ij, we have

τsyn dge/dt = −ge(t) + Σi∈Ij wij(t) · 1i(t − tdelay),

where τsyn = 0.2 ms is the synaptic time constant, wij(t) is the weight of a synapse (in nS), 1i(t) is 1 if neuron i spiked at time t and 0 otherwise, and tdelay = 1 ms is the delay of the synapse. When the membrane potential exceeds the threshold Vth = −55 mV the neuron fires, the membrane potential is reset to Vreset = −60 mV and it enters a refractory period of τref = 2.5 ms during which it ignores all incoming spikes. Note that we only use excitatory neurons and we do not account for input from inhibitory sources.
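For readers without NEST, the two equations above can be integrated directly. The following forward-Euler sketch is only an illustration (the paper itself uses NEST's iaf_cond_exp); the step size dt and the event-list interface are our assumptions.

```python
def simulate_lif(input_events, t_max_ms, dt=0.01):
    """Forward-Euler sketch of the conductance-based LIF neuron above.

    input_events: time-sorted list of (time_ms, weight_nS) presynaptic spikes.
    A plain re-implementation for illustration only.
    """
    C, g_L, E_L, E_e = 250.0, 16.67, -70.0, 0.0        # pF, nS, mV, mV
    V_th, V_reset = -55.0, -60.0                       # mV
    tau_syn, t_ref, t_delay = 0.2, 2.5, 1.0            # ms

    V, g_e, ref_until, out, k = E_L, 0.0, -1.0, [], 0
    for step in range(int(t_max_ms / dt)):
        t = step * dt
        # deliver delayed input spikes as conductance jumps;
        # spikes arriving during the refractory period are ignored
        while k < len(input_events) and input_events[k][0] + t_delay <= t:
            if t >= ref_until:
                g_e += input_events[k][1]
            k += 1
        g_e -= dt * g_e / tau_syn                       # exponential decay
        if t < ref_until:
            continue                                    # V clamped while refractory
        V += dt * (-g_L * (V - E_L) - g_e * (V - E_e)) / C
        if V >= V_th:
            out.append(t)                               # spike, then reset
            V, ref_until = V_reset, t + t_ref
    return out
```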

3.3 Synapse model

Each synapse stores a memory trace, which is the metaplastic component of the STDP rule. The synapse interprets spike events of the pre- and postsynaptic neurons as one of three types of signals: potentiation, depression or no action (see Figure 1 (c) and the next section). The trace is increased for a potentiation signal and decreased for a depression signal. These two changes of the memory trace can trigger weight changes of the synapse when the new memory trace is either too small or too large.

More formally, for a synapse we denote by m(t) the value of the memory trace at time t. In what follows, 0 < γ < 1, cearly and clate are the metaplasticity parameters of the synapse. We capture potentiation (depression) signals in the variable 1↑(t) (1↓(t)), which is 1 if there is a potentiation (depression) signal at time t and 0 otherwise. Depression signals are triggered by presynaptic spikes and lead to the following memory trace update

m(t) ← m(t) + 1↓(t) · [clate − m(t) · (1 − γ)]. (1)

Potentiation signals are triggered by postsynaptic spikes. For a potentiation signal at time t we apply the following memory trace update once for every presynaptic spike which is within a short time window of the postsynaptic spike

m(t) ← m(t) + 1↑(t) · [cearly − m(t) · (1 − γ)]. (2)


After every memory trace update the synapse attempts to update its weight if the new memory trace is too small or too large. Since synapse weights are binary in our model, the synapse can switch between two states, weak or strong. Recall from the Results section that a weight change only succeeds with some probability (ps→w for strong synapses, and pw→s for weak synapses). We refer to the parameters τ0 and τ1, which define the target memory interval, together with ps→w and pw→s, as the plasticity parameters of the model. Now, let Bernoulli(p) be a Bernoulli random variable which is 1 with probability p and 0 otherwise. For depression signals we apply

w(t) ← ww if m(t) < τ1 and Bernoulli(ps→w) = 1 (3)

and for potentiation signals we apply

w(t) ← ws if m(t) > τ0 and Bernoulli(pw→s) = 1. (4)
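Updates (1)–(4) are compact enough to state as code. The sketch below implements a single synapse; every numeric parameter value (γ, cearly, clate, τ0, τ1, the transition probabilities and the two weight values) is an illustrative assumption, not a value from the paper's Table 1. Note that clate < 0 so that depression signals decrease the trace.

```python
import numpy as np

rng = np.random.default_rng(2)

class MetaplasticSynapse:
    """Sketch of the binary synapse with a decaying memory trace (Eqs. 1-4).

    All parameter values are illustrative assumptions.
    """
    def __init__(self, gamma=0.9, c_early=1.0, c_late=-1.0,
                 tau0=0.2, tau1=-0.2, p_ws=0.02, p_sw=0.02,
                 w_weak=0.5, w_strong=3.0):
        self.gamma, self.c_early, self.c_late = gamma, c_early, c_late
        self.tau0, self.tau1, self.p_ws, self.p_sw = tau0, tau1, p_ws, p_sw
        self.w_weak, self.w_strong = w_weak, w_strong
        self.m, self.w = 0.0, w_weak

    def depression_signal(self):          # triggered by a presynaptic spike, Eq. (1)
        self.m += self.c_late - self.m * (1.0 - self.gamma)
        if self.m < self.tau1 and self.w == self.w_strong and rng.random() < self.p_sw:
            self.w = self.w_weak          # Eq. (3): stochastic strong -> weak

    def potentiation_signal(self):        # triggered by a postsynaptic spike, Eq. (2)
        self.m += self.c_early - self.m * (1.0 - self.gamma)
        if self.m > self.tau0 and self.w == self.w_weak and rng.random() < self.p_ws:
            self.w = self.w_strong        # Eq. (4): stochastic weak -> strong
```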

3.4 Potentiation and depression signals

The spike events which lead to potentiation and depression signals correspond to pairs of pre- and postsynaptic spikes which are close in time, see Figure 1 (c). In an ideal setting the postsynaptic neuron spikes once in every up phase and all presynaptic spikes can be partitioned into two groups depending on whether they spiked before or after the postsynaptic spike. However, if the postsynaptic neuron spikes more than once, some presynaptic spikes can come in between two postsynaptic spikes, in which case they contribute to both an increase and a decrease of the memory trace. Such behaviour is not problematic (see Section 2.4 and Figure 7), but the ideal setting from above is easier to understand. To recover the ideal setting we define the concept of a distinguished spike. A postsynaptic spike is distinguished if there was no distinguished spike before it within a small time window (of the same order as the up phase length). One can think of a distinguished spike as being the first one in an up phase or a burst. A spike pair triggers a potentiation or depression signal if the two spikes are close in time and the postsynaptic spike is distinguished. Tburst, Tearly and Tlate are the event parameters of the model.

We now formalise the notion of distinguished spikes. We denote by Spre(t1, t2) and Spost(t1, t2) the sets of all pre- and postsynaptic spikes, respectively, in the interval [t1, t2). The following variable is one if there was a distinguished spike in the last Tburst ms and 0 otherwise:

oburst(t) = Σt′∈Spost(t−Tburst, t) (1 − oburst(t′)). (5)

The following variable is one for Tlate ms after a distinguished spike:

olate(t) = Σt′∈Spost(t−Tlate, t) (1 − oburst(t′)). (6)


For the presynaptic neuron, the variable 1pre(t) is one if the presynaptic neuron spiked at time t and 0 otherwise (similarly, the variable 1post(t) captures postsynaptic spikes). Now, the following variable is one if the synapse receives a depression signal at time t:

1↓(t) = 1pre(t) · olate(t). (7)

Similarly, the following variable is one if the synapse receives a potentiation signal at time t:

1↑(t) = 1post(t) · (1 − oburst(t)) · min{1, |Spre(t − Tearly, t)|}. (8)

By choosing Tburst > λU we ensure that there is only one distinguished spike per up phase. The event parameters Tearly and Tlate are chosen large enough that every presynaptic spike in an up phase can contribute to a potentiation or depression signal. We also assume they are small enough that we do not have spike events spanning across two up phases.
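Definitions (5)–(8) amount to a small event classifier. Below is a sketch operating offline on recorded spike times (in ms); the function and variable names are ours.

```python
def classify_signals(pre_spikes, post_spikes, T_burst, T_early, T_late):
    """Classify STDP events per Eqs. (5)-(8); an offline sketch.

    Returns (potentiation, depression): times of potentiation signals
    (distinguished postsynaptic spikes with a presynaptic spike in the
    preceding T_early ms, Eq. (8)) and of depression signals (presynaptic
    spikes within T_late ms after a distinguished spike, Eq. (7)).
    Update (2) is then applied once per presynaptic spike in the
    T_early window of each potentiation signal.
    """
    distinguished = []
    for t in sorted(post_spikes):                 # Eq. (5): a spike is distinguished
        if not distinguished or t - distinguished[-1] > T_burst:
            distinguished.append(t)               # iff none in the last T_burst ms

    potentiation = [t for t in distinguished
                    if any(t - T_early <= s < t for s in pre_spikes)]
    depression = [s for s in sorted(pre_spikes)
                  if any(0 < s - t <= T_late for t in distinguished)]
    return potentiation, depression
```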

4 DISCUSSION

We presented a local learning rule which allows the input weight to be normalised quickly. Excessive input weights that are not needed to activate the target neuron are reduced. Moreover, the mechanism is energy-efficient in the sense that, for parameters favouring stability, it changes only a few synapses after the process has converged. We now discuss these properties and the assumptions of the model, and compare them with other previously proposed homeostatic mechanisms.

4.1 Rhythmic input

Our homeostasis mechanism is designed for neural rhythms. Oscillations are ubiquitous in brain activity (Reyes, 2003; Buzsáki and Draguhn, 2004), but their exact functional role is still not fully understood. The most prominent hypothesis for the functional role of oscillations in brain activity is the communication-through-coherence hypothesis, cf. the reviews (Fell and Axmacher, 2011; Thut et al., 2012). It posits that neural populations communicate through phase-locked oscillations. If the subthreshold potentials of the two populations are phase-locked, the source population can activate the target if it is in an up state when the signal arrives, which is considered to lead to long-term potentiation (LTP). Correspondingly, if the target population is in a down state then activation becomes harder, which is believed to lead to long-term depression (LTD). Recent evidence indicates that retrieval of information in the hippocampus is discretised with respect to slow gamma oscillations and sharp wave ripple events, which are recognised as the most synchronous patterns in the brain (Pfeiffer and Foster, 2015).

Besides the communication-through-coherence hypothesis, there are conceivably more functional roles for oscillations. As an example, certain brain rhythms are thought to be necessary for the induction of LTP (Nyhus and Curran, 2010; Buzsáki, 2006). Short-time dynamics, and hence fast rhythms, predict the spike times of neurons better. This is known from studies on the peer prediction method, where a neuron's spike train is predicted based on a subject's behaviour and the spikes of neighbouring neurons. The method predicts the spike trains significantly better than methods based solely on the subject's behaviour. For hippocampal neurons it achieves the best performance if the spikes of neighbouring neurons within a time window of 10–30 ms are used (Buzsáki, 2006).


Up phase durations in the range of 20–30 ms thus correspond to 33–50 Hz gamma oscillations (Hughes, 2008).

4.2 Stability versus plasticity

The proposed learning rule achieves normalisation quickly. Though speed is a desirable property (see (Zenke et al., 2013) for a discussion of the necessity of a fast homeostatic mechanism), it is also important to be energy efficient, that is, the total number of synaptic weight changes should be small. One way to increase stability is to choose the weight transition probabilities ps→w and pw→s small. In fact, they should be small, since otherwise too many synapses can switch their weight simultaneously, which can either lead to the input weight becoming too small to activate the target neuron or cause oscillations around the stable state. Another way to control the trade-off between plasticity and stability is through τ0 and τ1. We choose them such that in the stable state an ε fraction of the weak (strong) synapses have their memory trace outside the target interval, where ε > 0 is a parameter of the process and a measure of the stability of the system (recall that we choose ps→w and pw→s to locate the stable state, i.e., the point where the expected weight increase is the same as the expected weight decrease). For a larger ε the process converges more quickly, but at the cost of more synaptic changes in the stable state. See Figure 4 for an illustration of this effect. Note that there we fix ps→w = pw→s instead of fixing the fraction of stable synapses in the stable state. We then choose the thresholds τ0 and τ1 such that the expected weight change in the stable state is zero. Finally, stability is affected by γ, which determines how persistent the memory is. The memory trace can range from −(1 − γ)⁻¹ to (1 − γ)⁻¹, so the closer γ is to 1 the larger the range becomes. A larger γ leads to a sharper concentration of the random variable (1 − γ) · m(t), which in turn yields a better estimator for the relative spike time r, but at the cost of being less responsive.
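The concentration effect of γ is easy to see numerically: feeding i.i.d. ±1 signals with Pr[+1] = p into the decay-and-add update, the normalised trace (1 − γ)·m(t) keeps its mean 2p − 1 while its spread shrinks as γ → 1. The values of p and γ below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

p, n_steps = 0.6, 20000
for gamma in (0.8, 0.95, 0.99):
    m, samples = 0.0, []
    for step in range(n_steps):
        s = 1.0 if rng.random() < p else -1.0
        m = gamma * m + s                      # decay-and-add update, Eqs. (1)-(2)
        if step > n_steps // 2:                # discard burn-in
            samples.append((1.0 - gamma) * m)
    print(f"gamma={gamma}: mean={np.mean(samples):+.3f} "
          f"(target {2*p-1:+.3f}), std={np.std(samples):.3f}")
```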

Another option to increase stability is to allow ps→w, pw→s, τ0 and τ1 (and even γ) to be modified over a longer time scale. A synapse which does not change its weight for a long time could thus become more rigid via plasticity of these parameters.

4.3 Other mechanisms of local or short-term homeostatic plasticity

Our rule bears resemblance to classic doublet STDP rules, given that both are a form of homosynaptic plasticity (depending only on pre-post and post-pre spike pairs). One difference is that our synapse uses sampling to estimate whether a weight change is necessary, instead of changing the weight for every spike pair. It has been shown that standard doublet STDP rules stabilise output rates under some conditions (Watt and Desai, 2010; Song and Abbott, 2001; Kempter et al., 1999, 2001). However, a weight change for each spike pair is in disagreement with STDP experiments (Yger and Gilson, 2015). Further, synapse weights need to be upper bounded to prevent unlimited increase in the cases when STDP rules induce competition among synapses. To mitigate rapid incremental weight updates, some models, such as the one in (Kempter et al., 2001), discuss possible extensions to delayed weight changes which accumulate and are applied more infrequently. Lastly, many STDP rules which are stable for irregular spiking at low rates in the balanced random network model are unable to re-stabilise if a subpopulation is stimulated synchronously, which can lead to a synfire explosion (Morrison et al., 2007).

Other alternative stabilising mechanisms have been proposed which stabilise various parameters of neurons or network activity. A form of synaptic scaling which controls the average membrane potential of the postsynaptic neuron and is applied locally to dendritic compartments was suggested in (Rabinowitch and Segev, 2006). The main difference is that their result requires a longer time scale than our model.



The rate-based BCM rule (Bienenstock et al., 1982) is known to be stable. The rule has a form of memory in its activity-dependent learning threshold, which decides the sign of a weight change. Experimental evidence suggests that this threshold changes on a time scale of minutes to days (for a review see the BCM section in (Watt and Desai, 2010)). A difference between the BCM rule and ours is that the BCM learning threshold is a global varying parameter shared by all synapses and depending only on the neuron's postsynaptic rate, while the memory trace in our model is computed locally at the synapse level. Other mechanisms have achieved stabilisation through inhibitory plasticity. The work (Vogels et al., 2011) uses a doublet STDP rule for inhibitory synapses which allows for balancing excitatory and inhibitory activity and leads to asynchronous irregular network states. The main differences to our model are that we work in a synchronous setting and we modify excitatory-to-excitatory weights. Intrinsic excitability has also been considered as a means of stabilisation. A mechanism which regulates intrinsic excitability can regulate the activity of a recurrent network, although the mechanism cannot compensate if a group of synapses is potentiated too frequently (Remme and Wadman, 2012). In contrast, our rule is well suited to sense large synaptic input weights. Additionally, intrinsic excitability applies on a larger timescale and it is global in the sense that it is applied not at the synapse level but at the neuron level.

It is certainly plausible that homeostatic plasticity is achieved by more than a single mechanism (Tononi et al., 1999; Watt and Desai, 2010), and it is conceivable that our mechanism applies to only a small part of it. It is known that homeostatic plasticity is affected by a complex web of signalling processes, many of which are likely undiscovered (Pozo and Goda, 2010) and which might even be shared between, or belong to, different homeostasis mechanisms.

4.4 Conclusion

There are two forms of homeostasis which have been prominently discussed in the literature. First, there is the concept of scaling, i.e. normalising the total input weight. Second, there is the concept of rate normalisation, which has been considered more plausible since scaling requires either global control of a neuron over its synapses or the synapses decaying their weight in a weight-dependent manner (Zenke et al., 2013). In this paper we introduce a third option: a fast-acting homeostasis mechanism which normalises the expected spike time of a neuron in an up phase of an oscillatory input rate function. Our rule is simple, requiring only a metaplastic component with a few parameters. The parameters allow us to effectively control the location of the stable state and the rate of stabilisation. Two independent articles have identified the order of the time scale for fast homeostasis to be seconds (El Boustani et al., 2012; Zenke et al., 2013), which is well within our parameter regimes.

CONFLICT OF INTEREST STATEMENT

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

AUTHOR CONTRIBUTIONS

The project idea came from A.S. and the analysis was performed in collaboration by all the authors. H.E. did all simulations, prepared the figures and wrote a first draft of the manuscript. All authors helped revise the manuscript and everyone approved the final version.


FUNDING

H. Einarsson was supported by grant no. 200021 143337 of the Swiss National Science Foundation. Marcelo Matheus Gauy is supported by CNPq grant no. 248952/2013-7.

ACKNOWLEDGMENTS

The authors would like to thank Felix Weissenberger for helpful discussions.

REFERENCES

Abbott, L. F. and Nelson, S. B. (2000). Synaptic plasticity: taming the beast. Nature Neuroscience 3, 1178–1183. doi:10.1038/81453
Ashwin, P., Coombes, S., and Nicks, R. (2016). Mathematical frameworks for oscillatory network dynamics in neuroscience. The Journal of Mathematical Neuroscience 6, 1–92. doi:10.1186/s13408-015-0033-6
Barbour, B., Brunel, N., Hakim, V., and Nadal, J.-P. (2007). What can we learn from synaptic weight distributions? Trends in Neurosciences 30, 622–629. doi:10.1016/j.tins.2007.09.005
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience 2, 32–48
Branco, T. and Staras, K. (2009). The probability of neurotransmitter release: variability and feedback control at single synapses. Nature Reviews Neuroscience 10, 373–383. doi:10.1038/nrn2634
Buzsáki, G. (2006). Rhythms of the Brain (Oxford University Press)
Buzsáki, G. and Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science 304, 1926–1929. doi:10.1126/science.1099745
Chistiakova, M., Bannon, N. M., Chen, J.-Y., Bazhenov, M., and Volgushev, M. (2015). Homeostatic role of heterosynaptic plasticity: models and experiments. Frontiers in Computational Neuroscience 9. doi:10.3389/fncom.2015.00089
Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., et al. (2012). Neural population dynamics during reaching. Nature 487, 51–56. doi:10.1038/nature11129
Clopath, C., Büsing, L., Vasilaki, E., and Gerstner, W. (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13, 344–352. doi:10.1038/nn.2479
Connors, B. W. and Gutnick, M. J. (1990). Intrinsic firing patterns of diverse neocortical neurons. Trends in Neurosciences 13, 99–104. doi:10.1016/0166-2236(90)90185-D
Dayan, P. and Abbott, L. (2003). Theoretical neuroscience: computational and mathematical modeling of neural systems. Journal of Cognitive Neuroscience 15, 154–155
El Boustani, S., Yger, P., Frégnac, Y., and Destexhe, A. (2012). Stable learning in stochastic network states. The Journal of Neuroscience 32, 194–214. doi:10.1523/JNEUROSCI.2496-11.2012
Ernst, U. and Pawelzik, K. (2011). Sensible balance. Science 334, 1507–1508. doi:10.1126/science.1216483
Fell, J. and Axmacher, N. (2011). The role of phase synchronization in memory processes. Nature Reviews Neuroscience 12, 105–118. doi:10.1038/nrn2979
Gewaltig, M.-O. and Diesmann, M. (2007). NEST (neural simulation tool). Scholarpedia 2, 1430
Haider, B., Duque, A., Hasenstaub, A. R., and McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. The Journal of Neuroscience 26, 4535–4545. doi:10.1523/JNEUROSCI.5297-05.2006
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Approach (John Wiley & Sons)
Hughes, J. R. (2008). Gamma, fast, and ultrafast waves of the brain: their relationships with epilepsy and behavior. Epilepsy & Behavior 13, 25–31. doi:10.1016/j.yebeh.2008.01.011
Izhikevich, E. M. (2006). Bursting. Scholarpedia 1, 1300. revision #150167
Izhikevich, E. M. (2007). Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex 17, 2443–2452. doi:10.1093/cercor/bhl152
Johannsen, D. (2010). Random Combinatorial Structures and Randomized Search Heuristics. Ph.D. thesis, Universität des Saarlandes. Available online at http://scidok.sulb.uni-saarland.de/volltexte/2011/3529/pdf/Dissertation_3166_Joha_Dani_2010.pdf
Kempter, R., Gerstner, W., and Van Hemmen, J. L. (1999). Hebbian learning and spiking neurons. Physical Review E 59, 4498. doi:10.1103/PhysRevE.59.4498
Kempter, R., Gerstner, W., and Van Hemmen, J. L. (2001). Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural Computation 13, 2709–2741. doi:10.1162/089976601317098501
Loewenstein, Y., Kuras, A., and Rumpel, S. (2011). Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. The Journal of Neuroscience 31, 9481–9488. doi:10.1523/JNEUROSCI.6130-10.2011
Meffin, H., Burkitt, A. N., and Grayden, D. B. (2004). An analytical model for the large, fluctuating synaptic conductance state typical of neocortical neurons in vivo. Journal of Computational Neuroscience 16, 159–175. doi:10.1023/B:JCNS.0000014108.03012.81
Miller, K. D. and MacKay, D. J. (1994). The role of constraints in Hebbian learning. Neural Computation 6, 100–126. doi:10.1162/neco.1994.6.1.100
Mitzenmacher, M. and Upfal, E. (2005). Probability and Computing: Randomized Algorithms and Probabilistic Analysis (Cambridge University Press)
Morrison, A., Aertsen, A., and Diesmann, M. (2007). Spike-timing-dependent plasticity in balanced random networks. Neural Computation 19, 1437–1467. doi:10.1162/neco.2007.19.6.1437
Nyhus, E. and Curran, T. (2010). Functional role of gamma and theta oscillations in episodic memory. Neuroscience & Biobehavioral Reviews 34, 1023–1035. doi:10.1016/j.neubiorev.2009.12.014
Oja, E. (1982). Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology 15, 267–273. doi:10.1007/BF00275687
Okun, M. and Lampl, I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature Neuroscience 11, 535–537. doi:10.1038/nn.2105
Pfeiffer, B. E. and Foster, D. J. (2015). Autoassociative dynamics in the generation of sequences of hippocampal place cells. Science 349, 180–183. doi:10.1126/science.aaa9633
Pfister, J.-P., Toyoizumi, T., Barber, D., and Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation 18, 1318–1348. doi:10.1162/neco.2006.18.6.1318
Pozo, K. and Goda, Y. (2010). Unraveling mechanisms of homeostatic synaptic plasticity. Neuron 66, 337–351. doi:10.1016/j.neuron.2010.04.028
Rabinowitch, I. and Segev, I. (2006). The interplay between homeostatic synaptic plasticity and functional dendritic compartments. Journal of Neurophysiology 96, 276–283. doi:10.1152/jn.00074.2006
Remme, M. W. and Wadman, W. J. (2012). Homeostatic scaling of excitability in recurrent neural networks. PLoS Computational Biology. doi:10.1371/journal.pcbi.1002494
Reyes, A. D. (2003). Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro. Nature Neuroscience 6, 593–599. doi:10.1038/nn1056
Rubin, J., Lee, D. D., and Sompolinsky, H. (2001). Equilibrium properties of temporally asymmetric Hebbian plasticity. Physical Review Letters 86, 364. doi:10.1103/PhysRevLett.86.364
Schatz, C. J. (1992). The developing brain. Scientific American 267, 60–67
Sjöström, P. J., Rancz, E. A., Roth, A., and Häusser, M. (2008). Dendritic excitability and synaptic plasticity. Physiological Reviews 88, 769–840. doi:10.1152/physrev.00016.2007
Song, S. and Abbott, L. F. (2001). Cortical development and remapping through spike timing-dependent plasticity. Neuron 32, 339–350. doi:10.1016/S0896-6273(01)00451-2
Song, S., Miller, K. D., and Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience 3, 919–926. doi:10.1038/78829
Thut, G., Miniussi, C., and Gross, J. (2012). The functional importance of rhythmic activity in the brain. Current Biology 22, R658–R663. doi:10.1016/j.cub.2012.06.061
Tononi, G. (2009). Slow wave homeostasis and synaptic plasticity. Journal of Clinical Sleep Medicine 5, S16. doi:10.1016/j.smrv.2005.05.002
Tononi, G. and Cirelli, C. (2006). Sleep function and synaptic homeostasis. Sleep Medicine Reviews 10, 49–62
Tononi, G., Sporns, O., and Edelman, G. M. (1999). Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences 96, 3257–3262. doi:10.1073/pnas.96.6.3257
Turrigiano, G. G. (2008). The self-tuning neuron: synaptic scaling of excitatory synapses. Cell 135, 422–435. doi:10.1016/j.cell.2008.10.008
Van Rossum, M. C., Bi, G. Q., and Turrigiano, G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. The Journal of Neuroscience 20, 8812–8821
Vogels, T., Sprekeler, H., Zenke, F., Clopath, C., and Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334, 1569–1573. doi:10.1126/science.1211095
Von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14, 85–100. doi:10.1007/BF00288907
Watt, A. J. and Desai, N. S. (2010). Homeostatic plasticity and STDP: keeping a neuron's cool in a fluctuating world. Frontiers in Synaptic Neuroscience 2. doi:10.3389/fnsyn.2010.00005
Wilson, C. (2008). Up and down states. Scholarpedia 3, 1410. revision #91903
Yger, P. and Gilson, M. (2015). Models of metaplasticity: a review of concepts. Frontiers in Computational Neuroscience 9. doi:10.3389/fncom.2015.00138
Zenke, F., Hennequin, G., and Gerstner, W. (2013). Synaptic plasticity in neural networks needs homeostasis with a fast rate detector. PLoS Computational Biology. doi:10.1371/journal.pcbi.1003330

The appendix is organised into two sections. The first section presents a theoretical analysis of the simplified limit case of our process, in which up phases are very short and each input neuron spikes exactly once. The second section presents details of the main setting not discussed in the main part of the paper.

A SIMPLIFIED LIMIT CASE – VOLLEY SETTING

In this setting we present the input as volleys: short up phases in which each input neuron spikes exactly once. The neurons are integrate-and-fire neurons with a refractory period which is longer than a volley. Their potential is reset after each volley. This setting can be thought of as a limiting case of the Poisson setting with very short up phases where we ignore the leak of the postsynaptic neuron. Figure 3 contains a short explanation which summarises the principles behind the process.


For a formal definition of the neuron and synapse dynamics see the Model section A.2 of this Appendix.

A.1 Results

In the following sections we discuss the single target neuron dynamics of the volley setting. The main difference between the synapse used here and the one in the main text is the implementation of the memory trace. The memory trace in this setting refers to the fraction of the last M signals which corresponded to pre-post (as opposed to post-pre) events. This is a simplification of the memory trace in the main text, but it makes the analysis easier. Additionally, the synapses are binary as before but with weight 0 or 1, and the target neuron requires θ spikes to fire. The other parameters of the synapse are the same as in the main text, but with respect to this specific memory trace.

A.1.1 Single neuron setting

We first study the rule in the context of a single neuron v, which receives input from a set I of d input neurons (like the input layer and one neuron from the first layer in Figure 1 (a)). We denote by m(t) the memory of the synapse, which stores the last M potentiation and depression signals. We let mi(t) ∈ {0, 1} denote the i-th last signal the synapse received. The average value of the memory, the memory trace, is denoted by ⟨m(t)⟩ := M⁻¹ Σi=1,…,M mi(t), and it is the quantity we compare against the thresholds τ0 and τ1. Weight update attempts, as in the main text, are only triggered after every M-th input volley to simplify the analysis.

Since the input spikes come in a random order, we can compute the probability that a synapse receives a potentiation or a depression signal. The probability depends mildly on whether the weight of the synapse, w(t), is weak or strong, and is given by

Pr[potentiation signal] = θ/ds =: pearly(ds) if w(t) = 1, and θ/(ds + 1) =: p′early(ds) if w(t) = 0. (9)

The monotone dependency of pearly and p′early on ds forms the principle behind our mechanism, since the memory trace, during learning steps, is an unbiased estimator of pearly and p′early. Consequently θ/⟨m(t)⟩ is an estimator for ds, the input weight (note that pearly and p′early are independent of d).

For fixed synapse parameters we define the stable state to be the input weight which maximises the expected number of synapses whose memory trace lies within the target interval (see Figure 8). We denote by d∗s the number of strong synapses in the stable state. We now show that the stable state is determined by the parameters.

For a parameter δ > 0 we choose the thresholds τ1 and τ0 such that if ds = d∗s the synapses are stable with probability at least 1 − δ. More formally, we choose τ1, τ0 and M such that

if w(t) = 1: Pr[⟨m(t)⟩ ≤ τ1] = Pr[Bin(M, pearly(d∗s)) ≤ τ1M] ≤ δ, and (10)

if w(t) = 0: Pr[⟨m(t)⟩ ≥ τ0] = Pr[Bin(M, p′early(d∗s)) ≥ τ0M] ≤ δ. (11)

We fix τ1 = (1 − ε) · pearly and τ0 = (1 + ε) · p′early for ε > 0. By Theorem 1,


Pr[Bin(M, pearly) ≤ τ1M] ≤ e^(−ε²M pearly(d∗s)/3) and

Pr[Bin(M, p′early) ≥ τ0M] ≤ e^(−ε²M p′early(d∗s)/3).

Since p′early(d∗s) < pearly(d∗s) it suffices to choose M such that e−ε2Mp′early(d

∗s)/3 = δ. Solving these509

inequalities for M yields510

M ≤ 3 log(δ−1)

ε2p′early(d∗s). (12)

To see how this upper bound scales with the size of the stable region for an absolute deviation from d*_s, we set ε = x/d*_s, where x is the amount of absolute deviation. Plugging ε into Equation (12) now yields

$$M = O\!\left(\frac{(d^*_s)^3 \log(\delta^{-1})}{x^2\,\theta}\right). \tag{13}$$
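Equation (12) can be transcribed directly; the sketch below (our own, using the parameter values later employed for Figure 8 as an example) computes the resulting bound on M:

```python
import math

def memory_size_bound(eps, delta, theta, d_s_star):
    """Equation (12) with p'_early(d*_s) = theta / (d*_s + 1)."""
    p_early_weak = theta / (d_s_star + 1)
    return 3 * math.log(1 / delta) / (eps ** 2 * p_early_weak)

# The parameters used for Figure 8 (eps = 0.5, delta = 0.2, theta = 10,
# d*_s = 20) give a bound of the same order as the M = 39 used there:
print(memory_size_bound(eps=0.5, delta=0.2, theta=10, d_s_star=20))  # ~40.6
```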

The upper bound on M in (12) is independent of the values of p_{w→s} and p_{s→w}. However, in order for the weights not to oscillate between states where they are above or below the stable state, we need to upper bound p_{s→w} and p_{w→s}. It suffices to bound only p_{s→w}, since overshooting would lead to convergence to the stable state from above. Here we present one option to set both transition probabilities such that, in expectation, the number of synapses we switch in a learning phase is less than ⌈εd*_s⌉ if d_s = (1 ± 2ε)d*_s. We additionally need to bound how many synapses change their weight if the input weight is much too large, which is achieved with the bound p_{s→w} < c'θ/d for a sufficiently small constant c'; for our simulations c' = 2 sufficed. We can now set

$$p_{s\to w} = \min\!\left(\frac{c'\theta}{d},\; \frac{\varepsilon d^*_s}{(1+2\varepsilon)d^*_s \cdot \Pr_{w_{uv}=1}\big[\langle m_{uv}\rangle \le \tau_1 \,\big|\, d_s = (1+2\varepsilon)d^*_s\big]}\right) \tag{14}$$

$$p_{w\to s} = \frac{\varepsilon d^*_s}{\big(d - (1-2\varepsilon)d^*_s\big) \cdot \Pr_{w_{uv}=0}\big[\langle m_{uv}\rangle \ge \tau_0 \,\big|\, d_s = (1-2\varepsilon)d^*_s\big]}. \tag{15}$$
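A sketch of how (14) and (15) can be evaluated numerically, assuming the conditional probabilities are the binomial tail probabilities from (10) and (11); the helper names and the example value ε = 0.25 are ours:

```python
import math

def binom_pmf(M, p, i):
    return math.comb(M, i) * p ** i * (1 - p) ** (M - i)

def pr_le(M, p, x):          # Pr[Bin(M, p) <= x]
    return sum(binom_pmf(M, p, i) for i in range(0, math.floor(x) + 1))

def pr_ge(M, p, x):          # Pr[Bin(M, p) >= x]
    return sum(binom_pmf(M, p, i) for i in range(math.ceil(x), M + 1))

def transition_probs(theta, d, d_s_star, eps, M, c_prime=2.0):
    """Evaluate Equations (14)-(15), treating the conditional events
    as the binomial tails of Equations (10)-(11)."""
    tau1 = (1 - eps) * theta / d_s_star        # lower threshold (strong)
    tau0 = (1 + eps) * theta / (d_s_star + 1)  # upper threshold (weak)
    ds_hi = (1 + 2 * eps) * d_s_star           # too many strong synapses
    ds_lo = (1 - 2 * eps) * d_s_star           # too few strong synapses
    pr_weaken = pr_le(M, theta / ds_hi, tau1 * M)
    pr_strengthen = pr_ge(M, theta / (ds_lo + 1), tau0 * M)
    p_s_to_w = min(c_prime * theta / d,
                   eps * d_s_star / (ds_hi * pr_weaken))
    p_w_to_s = eps * d_s_star / ((d - ds_lo) * pr_strengthen)
    return p_s_to_w, p_w_to_s

print(transition_probs(theta=10, d=100, d_s_star=20, eps=0.25, M=39))
```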

Choosing the thresholds τ0 and τ1 as above additionally ensures that the input weight converges quickly to a weight close to the stable state. To track convergence we denote by ∆_{w→s}(d_s) (∆_{s→w}(d_s)) the number of synapses which turn from weak to strong (strong to weak) during the learning phase of the process, which is applied every M rounds. We show that, in expectation, the input weight quickly enters the interval [(1 − 2ε)·d*_s, (1 + 2ε)·d*_s], where ε is the same as in the definition of τ0 and τ1. First assume w_{uv} = 1 and d_s ≥ (1 + 2ε)·d*_s, and for convenience define µ1 = (θ/d_s)M. Then

$$\begin{aligned}
\Pr[\langle m_{uv}\rangle \le \tau_1] &\ge \Pr\Big[\mathrm{Bin}(M, \theta/d_s) \le (1-\varepsilon)\frac{\theta}{d^*_s}M\Big] \\
&= \Pr\Big[\mathrm{Bin}(M, \theta/d_s) \le (1-\varepsilon)\mu_1 \frac{d_s}{d^*_s}\Big] \\
&\overset{(1)}{\ge} \Pr\big[\mathrm{Bin}(M, \theta/d_s) \le (1+\varepsilon/2)\mu_1\big] \\
&\overset{(2)}{\ge} 1 - e^{-\varepsilon^2 \mu_1/12} =: f(\varepsilon, M),
\end{aligned}$$

where in (1) we used that ε < 1/4 and that d_s/d*_s ≥ 1 + 2ε, and in (2) we used Theorem 1. Now let µ0 = (θ/(d_s+1))M. For w_{uv} = 0 we get similarly

$$\begin{aligned}
\Pr[\langle m_{uv}\rangle \ge \tau_0] &= \Pr\Big[\mathrm{Bin}(M, \theta/(d_s+1)) \ge (1+\varepsilon)\frac{\theta}{d^*_s+1}M\Big] \\
&= \Pr\Big[\mathrm{Bin}(M, \theta/(d_s+1)) \ge (1+\varepsilon)\mu_0 \frac{d_s+1}{d^*_s+1}\Big] \\
&\overset{(1)}{\le} \Pr\big[\mathrm{Bin}(M, \theta/(d_s+1)) \ge (1+2\varepsilon)\mu_0\big] \\
&\overset{(2)}{\le} e^{-4\varepsilon^2\mu_0/3} =: g(\varepsilon, M),
\end{aligned}$$

where in (1) we used that (d_s+1)/(d*_s+1) ≥ 1 + ε, and in (2) we used Theorem 1. Now for w_{uv} = 1 we have

$$\mathbb{E}[\Delta_{s\to w}(d_s)] = \Pr[\langle m_{uv}\rangle \le \tau_1] \cdot d_s \cdot p_{s\to w}$$

and for w_{uv} = 0

$$\mathbb{E}[\Delta_{w\to s}(d_s)] = \Pr[\langle m_{uv}\rangle \ge \tau_0] \cdot (d - d_s) \cdot p_{w\to s}.$$

Thus for d_s ≥ (1 + 2ε)d*_s, by the observations above,

$$\mathbb{E}[\Delta_{s\to w}(d_s) - \Delta_{w\to s}(d_s)] \ge f(\varepsilon, M)\cdot d_s\cdot p_{s\to w} - g(\varepsilon, M)\cdot(d - d_s)\cdot p_{w\to s}.$$

For d_s ≥ d*_s we will assume that p_{w→s} is chosen such that (d − d_s)p_{w→s} = O(d_s p_{s→w}). By increasing M, f(ε, M) can be made arbitrarily close to 1 and g(ε, M) arbitrarily close to 0. Therefore, by choosing M = M(ε, (d − d*_s)p_{w→s}) large enough such that

$$f(\varepsilon, M) > 1 - \varepsilon \quad \text{and} \quad g(\varepsilon, M)\cdot(d - d^*_s)\,p_{w\to s} < \varepsilon d^*_s\, p_{s\to w},$$

we have

$$\mathbb{E}[\Delta_{s\to w}(d_s) - \Delta_{w\to s}(d_s)] \ge (1 - 2\varepsilon)\cdot p_{s\to w}\, d_s.$$

We denote by d_s(t) the number of strong synapses after t learning rounds and set d'_s(t) = max{0, d_s(t) − (1 + 2ε)d*_s}. We set T = min_t{t | d'_s(t) = 0}, which is the time needed to hit the interval around d*_s in d'_s, the shifted process. By Theorem 2, the variable drift theorem, we now have that

$$\mathbb{E}[T] \le \frac{1}{(1-2\varepsilon)\, p_{s\to w}\, d^*_s} + \int_{1}^{d'_s(0)} \frac{1}{(1-2\varepsilon)\, p_{s\to w}\, s}\,\mathrm{d}s = \frac{1}{(1-2\varepsilon)\, p_{s\to w}}\cdot\left(\frac{1}{d^*_s} + \log\big(d'_s(0)\big)\right).$$

The case d_s ≤ (1 − 2ε)d*_s follows by an analogous argument.
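The predicted fast convergence can be checked with a toy simulation of the learning phases (our own sketch; it uses the Figure 8 parameters and draws each memory trace as a fresh binomial sample, which matches the independence assumption above):

```python
import random

def learning_step(d_s, d, theta, M, tau0, tau1, p_sw, p_ws, rng):
    """One learning phase: every synapse draws a fresh memory trace
    (a binomial sample over M volleys) and flips its weight with the
    corresponding transition probability if the trace has left the
    target interval [tau1, tau0]."""
    p_strong = theta / d_s            # potentiation prob. for strong synapses
    p_weak = theta / (d_s + 1)        # potentiation prob. for weak synapses
    down = up = 0
    for _ in range(d_s):              # strong synapses
        trace = sum(rng.random() < p_strong for _ in range(M)) / M
        if trace < tau1 and rng.random() < p_sw:
            down += 1
    for _ in range(d - d_s):          # weak synapses
        trace = sum(rng.random() < p_weak for _ in range(M)) / M
        if trace > tau0 and rng.random() < p_ws:
            up += 1
    return d_s - down + up

rng = random.Random(1)
d, theta, M = 100, 10, 39
tau1, tau0 = 10 / M, 28 / M           # thresholds used for Figure 8
d_s = 80                              # start far above the stable state
for _ in range(30):
    d_s = max(theta, learning_step(d_s, d, theta, M,
                                   tau0, tau1, p_sw=0.2, p_ws=0.05, rng=rng))
print("input weight after 30 learning steps:", d_s)  # close to d*_s = 20
```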

A.1.2 Unreliable synapses

In the previous section every input neuron always participated in activating the target neuron. This is a strong assumption, given that neurotransmitter release at synapses is a stochastic process (Branco and Staras, 2009). We therefore introduce unreliable synapses into the model. The synapses now fail to transmit a spike with probability 1 − p_r, independently of other synapses. For a failed transmission we assume that the synapse does not receive any feedback signals. However, we remark that similar results would remain valid otherwise.

The previous section thus corresponds to perfect reliability. For unreliable synapses the input to the target neuron is random and follows a binomial distribution, which we denote by X ∼ Bin(d_s, p_r). We require that Pr[X ≥ θ] > 1 − ε, for some 0 < ε < 1/2, in order to activate the target neuron somewhat reliably and to make the feedback to the synapses less sporadic when their transmissions succeed. This condition implies that E[X] > θ, which leads to a lower bound condition on the stable state, namely d*_s > θ/p_r.

Recall that in the previous section we had a learning step after every M-th input volley. This makes the analysis easier, since the learning steps were independent of each other. With unreliable synapses we expect that a synapse has received M new feedback signals after M/p_r input volleys which successfully led to a target neuron spike. We therefore only apply the learning step after every M' = 2M/p_r successful volleys (Figure 9c shows how M' depends on d_s for different values of p_r). This ensures that with high probability the learning steps are independent of each other, provided that either M is large enough or the number of input neurons is not too large.

Given the condition on d*_s and the learning rate, what remains is to recompute p_early and p'_early. This leads to similar results as in the perfect reliability setting. Assuming that a synapse is reliable in a volley, we need to condition on the event that the postsynaptic neuron spikes, so that the synapse receives a feedback signal. The probabilities p_early and p'_early can then be expressed as

$$p'_{\mathrm{early}}(d_s) = \frac{1}{\Pr[X \ge \theta]}\cdot\sum_{i=\theta}^{d_s} \Pr[X = i]\cdot\frac{\theta}{i+1} \quad \text{and} \tag{16}$$

$$p_{\mathrm{early}}(d_s) = \frac{1}{\Pr[X \ge \theta]}\cdot\sum_{i=\theta}^{d_s} \Pr[X = i]\cdot\frac{\theta}{i}. \tag{17}$$

The same formulas as in Section A.1.1 apply in the setting with unreliable synapses by plugging in the values of p_early and p'_early from Equations (16) and (17). Figure 9a shows the dependency on d_s, which is monotone as in the p_r = 1 setting. Figure 9b shows that by choosing the parameters according to the equations in the previous section we can create a range for d_s in which the expected number of transitions is close to zero. Consequently the weight converges to an equilibrium as before, see Figure 9d.

Note that if the input weight is large enough then X is concentrated, with standard deviation σ = √(d_s p_r(1 − p_r)). The main contribution to the sums in Equations (16) and (17) comes from terms of the order (1 ± 1/σ)·θ/(d_s p_r). It is therefore no surprise that moving to a setting with unreliable synapses does not significantly affect the behaviour of the process.
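A direct transcription of Equations (16) and (17) (our own sketch; for p_r = 1 it reproduces θ/d_s and θ/(d_s + 1)):

```python
import math

def p_early_pair(d_s, theta, p_r):
    """Equations (16)-(17): probability that a transmitting synapse is
    early, conditioned on the target neuron spiking, when each of the
    d_s strong synapses transmits independently with probability p_r."""
    pmf = [math.comb(d_s, i) * p_r ** i * (1 - p_r) ** (d_s - i)
           for i in range(d_s + 1)]
    pr_spike = sum(pmf[theta:])                      # Pr[X >= theta]
    p_early = sum(pmf[i] * theta / i
                  for i in range(theta, d_s + 1)) / pr_spike
    p_early_weak = sum(pmf[i] * theta / (i + 1)
                       for i in range(theta, d_s + 1)) / pr_spike
    return p_early, p_early_weak

print(p_early_pair(d_s=40, theta=10, p_r=1.0))  # (0.25, 0.2439...) as expected
print(p_early_pair(d_s=40, theta=10, p_r=0.5))
```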

A.1.3 The random order assumption

So far we have made the simplifying assumption that input arrives in random order. This assumption is based on the fact that neurons fire irregularly and their synapses are unreliable (Branco and Staras, 2009). We used the assumption in order to get explicit analytic formulas for the parameters and bounds for the speed of convergence. In this section we show that the assumption is not needed for the homeostatic properties of our mechanism. The only difference is that it is no longer possible to describe the solutions of the corresponding equations by explicit formulas.

To provide some intuition, we first consider the other extreme, where the input always comes in the same fixed order. The first θ synapses will always be early, so eventually they turn strong. Once they are all strong, the remaining synapses are always late, and thus they all turn weak. Therefore, the system still converges quickly to a stable state where no further weights are changed.

We now study the region between a deterministic and a uniformly random order. For each input volley we assume that the time is reset to 0. We let the i-th input neuron spike at a normally distributed time N(i/d, σ), where the standard deviation σ ≥ 0 is a parameter. For σ = 0 the order is deterministic, and the order approaches a uniform permutation for σ → ∞. We ran simulations for different values of σ, where we set the parameters as in Figure 8. We start with each synapse strong independently with probability 4θ/d and we show 250 input volleys, which in this case brings the input weight to the stable state. The dependency on σ is shown in Figure 10 (a). We observe that for σ close to 1 the stable state is already close to the one for a uniform random order. Moreover, for σ = 2 the first 20 indices are close to a uniform permutation, since Pr[X ≤ Y] = 0.471814... for X ∼ N(0.2, 2) and Y ∼ N(0, 2). Figure 10 (b) shows how the mean value of the indices of neurons projecting strong synapses depends on σ. As expected, synapses from input neurons with a low index have a higher tendency to become strong.

In the remainder of this section we prove mathematically that the homeostasis mechanism works for any probability distribution on the set of all orderings. More precisely, if in each volley we draw an ordering from this distribution, then the input weight remains bounded. This shows that the mechanism provides negative feedback for arbitrary input distributions. Note that this is a vastly more general setting than the one used in Figure 10 (a). This result essentially follows from the fact that only a few synapses can remain strong, because only θ of them are needed to activate the target neuron.

We show that for large input we expect the number of strong synapses to decrease in each learning step. We denote a synapse between neurons u and v by (u, v) and we denote its weight by w_{uv}. In what follows d*_s denotes, as usual, the stable state for the standard process when the input order is chosen uniformly at random.

To study this setting, let E_1 := {(u, v) | w_{uv} = 1} be the set of strong synapses, and for (u, v) ∈ E_1 let X_{uv} be an indicator random variable for the event that u spikes before v in this volley (X_{uv} is 1 if the event occurs and 0 otherwise). Furthermore, let p_{uv} = Pr[X_{uv} = 1] be the success probability. Then by linearity of expectation and the fact that only θ of the strong synapses are early,

$$\sum_{(u,v)\in E_1} p_{uv} = \mathbb{E}\Big[\sum_{(u,v)\in E_1} X_{uv}\Big] = \theta. \tag{18}$$

For 0 < ε < 1/2 assume d_s > d*_s/(1 − 2ε), i.e. the input weight is too large. Strong synapses with p_{uv} > (1 − 2ε)θ/d*_s =: p* have a chance to retain their weight, while the other strong ones will have ⟨m(t)⟩ < τ1 with high probability by Theorem 1. To formalise this, denote by E_fast ⊆ E_1 the synapses which have p_{uv} > p* and by E_slow ⊆ E_1 those which have p_{uv} ≤ p*. Further, let p_fast := |E_fast|⁻¹ Σ_{(u,v)∈E_fast} p_{uv} and p_slow := |E_slow|⁻¹ Σ_{(u,v)∈E_slow} p_{uv}. Then by Equation (18)

$$|E_{\mathrm{fast}}|\, p_{\mathrm{fast}} + (d_s - |E_{\mathrm{fast}}|)\, p_{\mathrm{slow}} = \theta.$$

Using p* < p_fast we get the following upper bound on the number of fast synapses:

$$|E_{\mathrm{fast}}| < \frac{\theta - d_s\, p_{\mathrm{slow}}}{p^* - p_{\mathrm{slow}}}.$$

We now take the derivative w.r.t. p_slow to maximise the upper bound on |E_fast|, that is,

$$\frac{\mathrm{d}}{\mathrm{d}p_{\mathrm{slow}}}\, \frac{\theta - d_s\, p_{\mathrm{slow}}}{p^* - p_{\mathrm{slow}}} = \frac{\theta - d_s\, p^*}{(p^* - p_{\mathrm{slow}})^2}.$$

For d_s > θ/p* = d*_s/(1 − 2ε) we now see that the upper bound for |E_fast| is maximised for p_slow = 0. Therefore

$$|E_{\mathrm{fast}}| < \frac{d^*_s}{1 - 2\varepsilon},$$

and since d_s > d*_s/(1 − 2ε), there are with high probability at least d_s − d*_s/(1 − 2ε) slow synapses, which will turn weak with probability p_{s→w} in the next learning step. Once at most d*_s synapses (and at least θ) amongst the fastest ones are strong, the weight cannot increase further and it stays upper bounded with high probability. See Figure 10 (a) for an example of this phenomenon.

A.1.4 Heterogeneous weights

Previously we studied the ideal case where the weight of a synapse was either 0 or 1. In this section we study a generalisation where the weights are still binary, but the non-zero weight is independently sampled from a distribution. This alternative has for example been proposed in (Barbour et al., 2007). We choose the log-normal distribution where the underlying normally distributed random variable has mean µ = 1.74 and standard deviation σ = 0.1002, since this distribution has been obtained as an optimal fit for bouton sizes (Loewenstein et al., 2011). The mean of the distribution is e^{µ+σ²/2} = 5.726..., so in order to compare with previous results where we chose θ = 10, we set θ = 10·e^{µ+σ²/2}. The same parameters as in Section A.1.1 can now be used to obtain a weight normalising process. For reference see Figure 11a, where we use the same parameters (except θ) as in Figure 8. For a plot of how the weights of the selected synapses at equilibrium are distributed compared to the lognormal distribution see Figure 11b.

A.1.5 Multimodal weights

In this section we drop the assumption that synapses are binary. When the weight is potentiated it is increased by ∆_p w(t), and when it is depressed it is decreased by ∆_d w(t). This setting is a straightforward extension of the binary weight setting where, instead of using the weight update rules (23) and (22), we use

$$w(t) \leftarrow w(t) + \Delta_p w(t) \quad \text{if } \mathrm{Be}(p_{w\to s}) = 1 \text{ and } \langle m(t)\rangle > \tau_0, \text{ and} \tag{19}$$

$$w(t) \leftarrow \max\big(0,\, w(t) - \Delta_d w(t)\big) \quad \text{if } \mathrm{Be}(p_{s\to w}) = 1 \text{ and } \langle m(t)\rangle < \tau_1. \tag{20}$$

We explore three different setups. In the first setup we have ∆_p w(t) = 1 and ∆_d w(t) = 1, so the weight changes additively by ±1. In the second setup, for a parameter α > 1, we use a multiplicative update rule with ∆_p w(t) = (α − 1)w(t) and ∆_d w(t) = (1 − α⁻¹)w(t). Finally, in the third setup, with parameters c_p, c_d and ζ, we use weight updates similar to the ones used in (Van Rossum et al., 2000), where for potentiation we set ∆w(t) = c_p + νw(t) and for depression we set ∆w(t) = (−c_d + ν)w(t). Here c_p and c_d are non-negative constants and ν is a normally distributed random variable with mean 0 and standard deviation ζ > 0.

Figure 11 (c-d) compares convergence and weight distribution for the three different setups; the three update rules are sketched in code below. We use the same parameters as in Figure 8. Setup 1 converges to a bimodal weight distribution, while the other two converge to a unimodal distribution. Note additionally that for all setups it is unnecessary to bound the weights from above.
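For concreteness, the three update rules can be written as follows (a sketch; the constants for setups 2 and 3 are the ones used for Figure 11 (c-d)):

```python
import random

rng = random.Random(2)

def make_update(setup, alpha=1.2, c_p=1.0, c_d=0.2, zeta=0.15):
    """Return a pair (potentiate, depress) of functions w -> new w
    for the three setups of this section."""
    if setup == 1:   # additive: w -> w + 1 and w -> w - 1
        return (lambda w: w + 1.0,
                lambda w: max(0.0, w - 1.0))
    if setup == 2:   # multiplicative: w -> alpha * w and w -> w / alpha
        return (lambda w: w + (alpha - 1) * w,
                lambda w: max(0.0, w - (1 - 1 / alpha) * w))
    if setup == 3:   # additive potentiation, multiplicative depression,
                     # with multiplicative noise as in Van Rossum et al. (2000)
        return (lambda w: w + c_p + rng.gauss(0, zeta) * w,
                lambda w: max(0.0, w + (-c_d + rng.gauss(0, zeta)) * w))
    raise ValueError(f"unknown setup {setup}")

potentiate, depress = make_update(3)
w = 10.0
for _ in range(5):
    w = depress(w)
print(f"weight after five depressions in setup 3: {w:.2f}")
```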

A.1.6 Unilateral memory as a negative feedback mechanism

In some theories of normalisation, for example the sleep homeostasis hypothesis, only negative feedback which decreases the number of strong synapses is required (Tononi and Cirelli, 2006; Tononi, 2009).

If negative feedback is our only requirement, then it suffices to equip only the strong synapses with memory. Such a homeostasis mechanism can only decrease the input weight, but not increase it. If the input to the neuron increases too much, e.g. through short-term Hebbian learning, the negative feedback provided by the memory mechanism of the strong synapses can quickly reduce the input weight back to a stable state. As a proof of principle, we show an example in the simplified limit setting. Figure 11 (e) shows how the weights normalise with unilateral memory, where the parameters are set as in Figure 8. Observe that the negative feedback mechanism does not dramatically undershoot the stable state.


A.2 Model and methods

A.2.1 Tools

To analyse the volley setting we use the two known results below. First, we use the following version of the Chernoff bounds.

THEOREM 1 (Chernoff bounds, formulation from (Mitzenmacher and Upfal, 2005, Theorems 4.4 and 4.5)). Let X_1, ..., X_n be independent Bernoulli-distributed random variables with Pr[X_i = 1] = p_i and Pr[X_i = 0] = 1 − p_i. Then the following inequalities hold for X := Σ_{i=1}^n X_i and µ := E[X] = Σ_{i=1}^n p_i:

$$\Pr[X \ge (1+\delta)\mu] \le e^{-\mu\delta^2/3}, \quad \text{and}$$

$$\Pr[X \le (1-\delta)\mu] \le e^{-\mu\delta^2/2}.$$
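As a quick sanity check, the upper-tail bound of Theorem 1 can be compared against the exact binomial tail (our own sketch with arbitrary example values):

```python
import math

def exact_upper_tail(n, p, delta):
    """Exact Pr[Bin(n, p) >= (1 + delta) * n * p]."""
    k = math.ceil((1 + delta) * n * p)
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

n, p, delta = 39, 0.25, 0.5
mu = n * p
print("exact tail     :", exact_upper_tail(n, p, delta))
print("Chernoff bound :", math.exp(-mu * delta ** 2 / 3))  # always larger
```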

Second, to prove an upper bound on the expected time until the process becomes stable, we use the variable drift theorem.

THEOREM 2 (Variable Drift (Johannsen, 2010, Theorem 4.6)). Let (X_t)_{t≥0} be a sequence of random variables over a state space 0 ∈ S ⊆ R⁺₀ such that s_min := inf{x ∈ S | x > 0} > 0. Assume further that X_0 = s_0 for some s_0 ∈ S. Let T be the random variable that denotes the earliest point in time t ≥ 0 such that X_t = 0. Suppose there is an increasing function h : R⁺ → R⁺ such that for all x ∈ S \ {0},

$$\mathbb{E}[X_t - X_{t+1} \mid X_t = x] \ge h(x).$$

Then

$$\mathbb{E}[T] \le \frac{s_{\min}}{h(s_{\min})} + \int_{s_{\min}}^{s_0} \frac{1}{h(u)}\,\mathrm{d}u.$$

A.2.2 Input

For an input volley starting at time t and of length 1 time unit, every input neuron chooses a time uniformly at random in the interval [t, t + 1] to spike. As a result, the input neurons spike in a random order. Input volleys are spaced apart by at least 1 time unit.

A.2.3 Neuron model

We use a simplified integrate-and-fire neuron model. We denote by v(t) the potential of a neuron at time t; θ and t_ref are the neuron parameters. In the resting state v(t) = 0. A spike arriving at time t triggers the state update

$$v(t) \leftarrow v(t) + w(t),$$

where w(t) is the weight of the incoming synapse at time t. Generally, for a synapse from neuron i to neuron j we denote the weight of the synapse at time t as w_{ij}(t), and we drop the subscript if it is clear from the context. The neuron fires when its state exceeds or matches the threshold parameter, that is, when v(t) ≥ θ. After it fires, the neuron enters a refractory period for t_ref time units, during which it ignores all incoming spikes. We set t_ref = 1, so in particular the neuron can only fire once per input volley. After the refractory period we reset v(t) = 0. Consequently the neuron is in a resting state at the start of each input volley.

Every synapse which transmits a spike during the input volley receives a feedback signal. If the spike arrived at the postsynaptic neuron before it spiked, the synapse receives a potentiation signal; if the spike arrived after the postsynaptic neuron spiked, it receives a depression signal.
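The volley dynamics of this neuron model fit in a few lines; the following sketch (our own; the names are illustrative) returns, for each synapse, which feedback signal it would receive:

```python
import random

def run_volley(weights, theta, rng):
    """One input volley: every input neuron spikes exactly once, in a
    uniformly random order.  The target neuron fires as soon as the
    accumulated weight reaches theta and is refractory afterwards.
    Returns a dict synapse -> 'pre' (potentiation signal) or 'post'
    (depression signal), plus whether the target neuron fired."""
    order = list(range(len(weights)))
    rng.shuffle(order)
    v, fired, feedback = 0.0, False, {}
    for i in order:
        if fired:
            feedback[i] = "post"   # arrived after the postsynaptic spike
            continue
        v += weights[i]
        feedback[i] = "pre"        # arrived before (or triggered) the spike
        if v >= theta:
            fired = True
    return feedback, fired

rng = random.Random(3)
weights = [1] * 20 + [0] * 80      # 20 strong and 80 weak synapses
feedback, fired = run_volley(weights, theta=10, rng=rng)
print("target fired:", fired,
      "| potentiation signals:", sum(s == "pre" for s in feedback.values()))
```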

A.2.4 Synapse model

The memory of a synapse at time t is denoted by m(t). The memory captures the M latest depression and potentiation signals, where the most recent signal is denoted by m_1(t) and the oldest signal by m_M(t) (see Figure 3 (b)). For a signal at time t we update the memory as follows:

$$m_i(t) \leftarrow m_{i-1}(t) \quad \text{for } i \in \{2, \ldots, M\}, \tag{21}$$

and we set m_1(t) to 1 if it was a potentiation signal and 0 if it was a depression signal. We denote by ⟨m(t)⟩ = M⁻¹ Σ_{i=1}^{M} m_i(t) the mean value of the memory at time t. A potentiation signal is triggered for a pre-post pair within a volley and a depression signal for a post-pre pair. Signals are triggered at the time of the later spike.

We denote the synapse weights by the parameters w_w = 0 (weak) and w_s = 1 (strong). Let Be(p) be a Bernoulli random variable which is 1 with probability p and 0 otherwise. During a learning step we apply the following weight update for a strong synapse,

$$w(t) \leftarrow w_w \quad \text{if } \langle m(t)\rangle < \tau_1 \text{ and } \mathrm{Be}(p_{s\to w}) = 1, \tag{22}$$

and for a weak synapse,

$$w(t) \leftarrow w_s \quad \text{if } \langle m(t)\rangle > \tau_0 \text{ and } \mathrm{Be}(p_{w\to s}) = 1. \tag{23}$$
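Putting the memory of Equation (21) and the updates (22)-(23) together, a synapse can be sketched as follows (our own illustration; the learning-step schedule is external to the class):

```python
import random
from collections import deque

class BinarySynapse:
    """Shift-register memory of the last M signals (1 = potentiation,
    0 = depression) with the stochastic weight updates (22)-(23)."""

    def __init__(self, M, tau0, tau1, p_ws, p_sw, strong=False):
        self.m = deque(maxlen=M)   # oldest entry falls out automatically
        self.M, self.tau0, self.tau1 = M, tau0, tau1
        self.p_ws, self.p_sw = p_ws, p_sw
        self.w = 1 if strong else 0

    def signal(self, potentiation):
        self.m.appendleft(1 if potentiation else 0)   # Equation (21)

    def trace(self):
        return sum(self.m) / self.M if len(self.m) == self.M else None

    def learning_step(self, rng):
        t = self.trace()
        if t is None:                                 # memory not yet full
            return
        if self.w == 1 and t < self.tau1 and rng.random() < self.p_sw:
            self.w = 0                                # Equation (22)
        elif self.w == 0 and t > self.tau0 and rng.random() < self.p_ws:
            self.w = 1                                # Equation (23)

rng = random.Random(4)
syn = BinarySynapse(M=39, tau0=28/39, tau1=10/39, p_ws=0.05, p_sw=0.2, strong=True)
for _ in range(39):
    syn.signal(rng.random() < 10 / 80)  # early with probability theta/d_s
syn.learning_step(rng)
print("weight after one learning step:", syn.w)
```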

B NOTES REGARDING THE MAIN MODEL

B.1 Basic properties

For a setup with d_s out of d strong input synapses, p_{s→w} = p_{w→s} = 0, and constant 40 Hz input (the same as in an up phase), we let t_first := t_first(d_s, d) be the spike time of the first postsynaptic spike. Then we denote by r(d_s, d) := E[t_first | t_first < λ_U]/λ_U the expected relative spike time of the first postsynaptic spike within an up phase, given that there is one. Note that r = r(d*_s, d).

The expected value of the memory trace depends linearly on r(d_s, d). It is given by (2r(d_s, d) − 1)/(1 − γ). This can be seen by modeling the STDP events as Bernoulli random variables with success probability r(d_s, d). Let X_i denote the result of the i-th STDP event, 1 if the synapse was early and 0 otherwise. The expected value of the memory trace after T events is then given by

$$\mathbb{E}[m(t)] = \sum_{i=0}^{T-1} \mathbb{E}\big[\gamma^i(2X_i - 1)\big] = \sum_{i=0}^{T-1} \gamma^i\big(2r(d_s, d) - 1\big) \xrightarrow{T\to\infty} \frac{2r(d_s, d) - 1}{1 - \gamma}.$$

Similarly, since the trials are independent, the variance of the distribution is given by

$$\mathrm{Var}[m(t)] = \sum_{i=0}^{T-1} \mathrm{Var}\big[\gamma^i(2X_i - 1)\big] = \sum_{i=0}^{T-1} \gamma^{2i}\, 4r(d_s, d)\big(1 - r(d_s, d)\big) \xrightarrow{T\to\infty} \frac{4r(d_s, d)\big(1 - r(d_s, d)\big)}{1 - \gamma^2}.$$
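Both formulas are easy to verify empirically; a sketch (our own, with an arbitrary choice of r and the default γ = 0.95):

```python
import random

def simulate_trace(r, gamma, T, rng):
    """Memory trace after T STDP events, each early with probability r:
    m <- (+1 or -1) + gamma * m, as in the main model."""
    m = 0.0
    for _ in range(T):
        m = (1.0 if rng.random() < r else -1.0) + gamma * m
    return m

rng = random.Random(5)
r, gamma = 0.3, 0.95
samples = [simulate_trace(r, gamma, 500, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(f"empirical mean {mean:.2f} vs (2r-1)/(1-gamma) = {(2*r - 1)/(1 - gamma):.2f}")
print(f"empirical var  {var:.2f} vs 4r(1-r)/(1-gamma^2) = {4*r*(1 - r)/(1 - gamma**2):.2f}")
```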

The distribution of the memory trace at equilibrium is shown in Figure 4 (a). Figure 7 shows how the relative spike time of the target neuron, r(d_s, d), decreases monotonically. This monotone dependency forms the basis of how our mechanism works.

B.2 Weak dependency of relative spike time on input parameters

For simplicity, assume that τ0 and τ1 are chosen such that in the stable state out(d*_s, d) = εd, where in particular an ε fraction of the weak (strong) synapses have their memory trace outside the target interval. For the corresponding input weight, d*_s, we then have that p_{s→w} and p_{w→s} satisfy

$$(d - d^*_s)\,\varepsilon\, p_{w\to s} = d^*_s\,\varepsilon\, p_{s\to w}. \tag{24}$$

Note, however, that if we increase λ_U or d then d*_s will decrease to compensate (see Figure 13d). If d*_s needs to decrease by x to obtain the correct value of r, then the left-hand side of (24) increases by xεp_{w→s} while the right-hand side decreases by xεp_{s→w}. Therefore the input weight will not converge to exactly d*_s − x. Note, however, that since the tails of the memory distributions decrease exponentially, the stable state will not differ much from d*_s − x.

λ_U also has a mild effect on the number of spikes per up phase (see Figure 2). This is due to the refractory period (2.5 ms) of the postsynaptic neuron having less relative effect for longer up phases, and to the mild dependency of r on λ_U. Lastly, the noise strength effectively corresponds to modifying the input weight. The effect on r is similar to modifying λ_U, since d*_s is changed and the argument in the previous paragraph applies.

B.3 Fixed rate input in the Poisson setting

In this section we study how the synapses are affected by constant rate input. If the input rate is constant and the integral of our STDP function is 0, i.e. T_early = T_late, then the ratio of potentiation and depression signals will be one to one. Under these conditions the memory distribution resembles a normal distribution with mean 0 and standard deviation 1/√(1 − γ²). For T_early = T_late the rule is therefore only stable if the parameters are chosen such that, for oscillating input, the first postsynaptic spike arrives in expectation in the middle of an up phase, that is, r = 0.5. Figure 12 illustrates this by showing how the system responds to constant input for three different sets of parameters. For parameters which result in r > 1/2 the weights decrease to a point where the target neuron shows almost no reaction to the input. Indeed, this seems a reasonable reaction, as such an input does not carry any information. For parameters which result in r < 1/2, on the other hand, the weights and the firing rate of the target neuron grow strongly, which is undesirable. At the same time, r should be at most 1/2, since this corresponds to several spikes of a neuron in an up phase (≈ 1/r), in good agreement with experimental findings (Connors and Gutnick, 1990). For a review of bursting behaviour in neurons see (Izhikevich, 2006). The bound on r at which constant input no longer causes growth changes if T_early and T_late are not equal.

We conclude that the integral of the STDP function (which decides how the memory is updated) should not be positive (i.e. T_late ≥ T_early). To phrase it differently: for homogeneous random input and parameters chosen for r = 1/2, the synapses should not expect more potentiation signals than depression signals. This is in agreement with other results on homeostasis for homosynaptic learning rules (Chistiakova et al., 2015).

B.4 Long feed-forwarding

We present the setting of Section 2.3 with a 15-layer deep network. In Figure 14 we compare two sets of parameters (see Table 1 (c)) which correspond to two different points of attraction: r = 1/2 and r = 1/3. We observe that for r = 1/2 the spike distribution becomes bimodal and for r = 1/3 it becomes trimodal. This is in agreement with how many spikes we expect in each up phase for these sets of parameters: for r = 1/2 and r = 1/3 we expect, rounded to the nearest integer, 2 and 3 spikes respectively in every up phase.


TABLES

(a) Default parameters:

    d     γ     T_early   T_late   T_burst   w_w    w_s     H    L
    100   0.95  35 ms     35 ms    35 ms     1 nS   40 nS   40   0

(b) Parameter setups used in Figure 5:

    d*_s       15      20
    p_{w→s}    0.015   0.02
    p_{s→w}    0.1     0.1
    τ1         −2      −7
    τ0         12      7

(c) Parameter setups used in Figures 2, 6, 12, 13 and 14:

    r*         1/3      1/2     2/3
    p_{w→s}    0.05     0.05    0.012
    p_{s→w}    0.05     0.16    1.0
    τ1         −14.51   −8.42   1.72
    τ0         1.48     8.70    14.73

Table 1. Parameter values for various setups. (a) These are the default parameters used in the main body of the text, unless otherwise stated. (b) Parameter setups used in Figure 5. d*_s denotes the number of strong synapses the parameters are chosen for in the I → A synapses. (c) Parameter setups used in Figures 2, 6, 12, 13 and 14. The points of attraction, r*, were chosen for an input rate function with λ_U = 30 ms, λ_D = 50 ms, an up phase rate of 40 Hz, a down phase rate of 0 Hz, and T_burst = 70 ms.


FIGURES

[Figure 1 appears here. Panels: (a) Network model (input layer I, feed-forward connections to a neuron in layer A, second layer B with feed-forward density p_AB; binary synapses w_s (strong, solid) and w_w (weak, dashed)); (b) Input model (input rate function oscillating between a 40 Hz up phase of length λ_U and a 0 Hz down phase of length λ_D); (c) Spike events and memory (potentiation and depression signals relative to the distinguished postsynaptic spike, within T_early before and T_late after it; memory update m(t) ← 1 + γm(t) for potentiation and m(t) ← −1 + γm(t) for depression); (d) Synapse behaviour (strong synapses can turn weak below τ1, weak synapses can turn strong above τ0, no action in between); (e) Input weight convergence over time (ms).]


Figure 1 (previous page). The principles behind the homeostasis mechanism. (a) Network connectivity. Our network is connected in a feed-forward manner with an input layer, I, at the bottom. We first study the process from the perspective of a single neuron (highlighted) in the first layer. Later we consider layer-structured networks with non-trivial feed-forward density. The synapses in our model are binary and we refer to them as being either weak, with weight w_w, or strong, with weight w_s. (b) Input model. We model the input as an inhomogeneous Poisson process. The input neurons oscillate between a high-rate up phase of length λ_U and a low-rate down phase of length λ_D. The neurons in the model are leaky integrate-and-fire (LIF) neurons. (c) Spike events and memory. Each synapse has a memory trace which is modified for certain spike-pair events. The spike pairs are always with respect to the first postsynaptic spike in an up phase, which we refer to as distinguished and which can be thought of as the first spike in a burst. A distinguished spike can occur at most every T_burst ms. Presynaptic spikes at most T_early ms before the distinguished postsynaptic spike trigger potentiation signals, and presynaptic spikes at most T_late ms after it trigger depression signals. For each such signal the memory trace is updated, see Section 3.3 for details. (d) Synapse behaviour. The synapses compare their memory trace to threshold values, τ0 and τ1. If m(t) < τ1 strong synapses attempt to turn weak and, correspondingly, if m(t) > τ0 weak synapses attempt to turn strong. These changes are successful with a (low) probability. If m(t) ∈ [τ1, τ0] synapses are stable and cannot change. Weight changes are stochastic and follow updates of the memory trace. (e) Input weight convergence. We initialise the synapses in three ways using the single target neuron setup from (a): either such that the input is too large, too small, or stable.


[Figure 2 appears here. Panels: (a) r as a function of λ_U, d and λ_D for setups 1 and 2; (b) output rate; (c) spikes per up phase.]

Figure 2. Robustness against varying input parameters. We show two setups, which correspond to the synapse parameters in columns 1 and 2 of Table 1 (c). We vary three parameters of the input: the length of an up phase, λ_U, the number of input neurons, d, and the length of a down phase, λ_D. (a) The top figure shows the expected time of the first postsynaptic spike within an up phase, scaled by the up phase length and denoted by r. The parameters of the rule control r. Varying input parameters such as the length of an up phase, λ_U, the number of input neurons, d, or the length of a down phase, λ_D, has a negligible effect on r. (b) We observe that by varying the same parameters the rate varies significantly while r remains fixed. The greatest observable effect is due to varying λ_D, but note that the rate is also affected by λ_U. (c) Variations in the number of spikes per up phase. We observe that the number of spikes per up phase varies with the length of the up phase, λ_U, but the variations are small. In all figures each data point is the mean of 40 trials and the error bars represent a standard deviation estimate. The other parameters used to generate this plot and the two setups can be found in Tables 1 (a) and (c).


[Figure 3 appears here. Panels: (a) Input model (volleys of single spikes in random order); (b) Spike events and memory (six volleys with d = 8 input neurons; e.g. the memory trace of synapse 1 evolves as 0 → 1 + 0.95·0 = 1 → 1 + 0.95·1 = 1.95 → −1 + 0.95·1.95 = 0.85 → 1 + 0.95·0.85 = 1.81 → −1 + 0.95·1.81 = 0.72 → −1 + 0.95·0.72 = −0.32); (c) Synapse behaviour (each synapse computes the fraction of the time the presynaptic neuron spikes early and compares it to two reference values: strong synapses can turn weak, weak synapses can turn strong, no action in between); (d) Input weight convergence.]

Figure 3. Illustrations of input and memory traces in the simplified limit case model. (a) The input comes in volleys. In each volley each input neuron spikes once. We assume that in each volley the spikes come in a random order. Note that the time between the volleys does not need to be regular. (b) Closer look at the volleys and their effect on synaptic memory. In this figure we see six input volleys with d = 8 input neurons, where d_s = 5 of them provide strong input (darker coloured: 1, 4, 5, 6, 7) and the rest are weak (lighter coloured: 2, 3, 8). For convenience we label each input spike with the corresponding presynaptic neuron ID (1-8). For illustration purposes the threshold for the target neuron to spike is θ = 2 in this example. The synapses update their memory with a +1 if they observed a pre-post pair and a −1 if they observed a post-pre pair. For the synapse from neuron 1 we show the memory trace as in the main model and how it changes over time; γ = 0.95 is the decay parameter. Plasticity is modulated by the memory trace, i.e. the fraction of pre-post pairs, which allows individual synapses to sample whether the input weight needs to increase or decrease.


[Figure 4 appears here. Panels: (4a) Memory distribution with the thresholds τ0 and τ1; (4b) out(d1, d) for different thresholds; (4c) Plasticity-stability trade-off (number of weight changes in an up phase vs. convergence time as a function of the target interval width |τ0 − τ1|); (4d) Effect of p_{s→w} and p_{w→s} on convergence.]

Figure 4. Plasticity vs. stability. (a) The histogram is an empirical distribution of the memory trace in a static setting (p_{s→w} = p_{w→s} = 0) with d_s = 20. The mean of the distribution is −3.1 and the coloured vertical bars represent the target interval [τ1, τ0], which we use in figures (b) and (c). The histogram is generated using memory traces of synapses in 250 trials, where each had 500 up phases. (b) In a static setting as in (a), these curves represent out(d1, d)/d, i.e. the mean fraction of synapses whose memory trace lies outside the interval [τ1, τ0]. Each data point is the mean of 100 trials and the error bars represent a standard deviation estimate. (c) This figure shows the trade-off between the number of weight changes in the stable state for all input synapses (d = 100) and the convergence time to the stable state, both as a function of the width of the target interval. The ticks on the x-axis are coloured and correspond to the thresholds in (a) and the curves in (b). In this setup d_s = 20 is the stable state, which we achieve by choosing p_{s→w} and p_{w→s} such that they satisfy the equation d_s p_{s→w} = (d − d_s) p_{w→s}. The blue curve shows the mean number of weight changes per up phase over all synapses. The red curve shows the mean time for the weights to converge from d_s = 80 to d_s = 25. The error bars represent a standard deviation estimate. In this setup we chose p_{s→w} = p_{w→s} = 0.25. (d) This figure compares the effect p_{s→w} and p_{w→s} have on convergence. Here we set τ1 = −5.1 and τ0 = −1.1 and we compare three different values of p_{s→w} (0.05, 0.25 and 1.0); the corresponding p_{w→s} values are shown in the legend. The figure shows that if we allow too many synapses to change their weight simultaneously, by having a small target interval and large weight change probabilities, the weights can overshoot the stable state and oscillate around it. Each curve is the average over 200 trials with a data point every 125 ms, and the envelopes represent standard deviation.


[Figure 5 appears here. Panels: (5a) rates of populations A and B (dashed) over time, for 40 Hz and 60 Hz up phases in setups 1 and 2; (5b) I → A input weight (nS) over time; (5c) A → B input weight (nS) over time.]

Figure 5. Single-step input forwarding. In this figure we study rate and weight convergence in a feed-forward setting. (a) Rates over time in population A (solid) and B (dashed). We compare rates for two different parameter setups and two different input rates. As expected, the rates converge to similar values for both the first and second layer for the same parameter setups. The input rate in the up phase of the input population (I) is either 40 or 60 Hz. Parameters for the two different setups are shown in Table 1 (b). Each data point of the curves is the average rate over 800 ms, which spans 10 up (30 ms) and down (50 ms) phases. (b-c) Input weight convergence for layer A (fig b) and layer B (fig c). Observe that the input rate controls the point of convergence. When the rate of A is varying (see green curve) the weights to B need to compensate quickly. For all figures the curves aggregate the results of twenty trials for each setup. The curves in (b) and (c) represent the mean, and the envelopes a standard deviation estimate, over the 20 trials. The size of populations I and A is 300, and of B is 100. The feed-forward density is p_IA = p_AB = 100/300.


[Figure 6 appears here: average number of spikes within an up phase over time (ms) for layers 1-6.]

Figure 6. Input forwarding across multiple layers. The set of parameters corresponds to r = 2/3 as a point of attraction; the parameters can be found in Table 1 (c). The spike distribution becomes less random with each passing layer and approaches the simple limit case studied in this appendix. The time lag between subsequent layers is due to synaptic delay and spike integration time. We correct for the former, so in the figure the time lag is only due to spike integration. The plot was generated by estimating the output rate function of a neuron in the single neuron setting and using the estimated rate function to sample the input to the next layer. The rate function is sampled from a simulation with 100,000 up phases and a time resolution of 1 ms, and plotted with a resolution of 4 ms. The network is simulated until it has reached its point of attraction before we sample the output rate function.


[Figure 7 appears here: memory trace as a function of d_s for the distinguished rule and the all-to-all rule.]

Figure 7. This figure shows the benefits of using distinguished spikes. We see that with distinguished spikes the variance of the memory trace is considerably smaller than for an all-to-all rule, and the memory trace varies over a larger range. The curves show the mean memory trace value when varying the number of strong inputs, d_s, in a static setup where γ = 0.95 and p_{s→w} = p_{w→s} = 0. The blue curve corresponds to the rule where potentiation and depression signals are always with respect to a distinguished postsynaptic spike. The green curve corresponds to a rule where all pre-post and post-pre spike pairs within an up phase result in a potentiation and a depression event, respectively. The mean is taken after a 50 s simulation with 100 input neurons and a single target neuron over 10 trials. The envelopes represent the standard deviation of the 1000 data points for each value of d_s.


[Figure 8 appears here: expected number of weight changes (1 → 0 and 0 → 1) as a function of the input weight, with the stable state marked.]

Figure 8. The expected weight change of the synapses after sampling p_early and p'_early M times, which fills their memory. For this setup M = 39, τ1 = 10/M, τ0 = 28/M, p_{w→s} = 0.05 and p_{s→w} = 0.2. The parameters were chosen using the formulas in Section A.1.1 with ε = 0.5 and δ = 0.2.


[Figure 9 appears here. Panels: (9a) p_early(d_s) and p'_early(d_s); (9b) expected weight change (stable state, 1 → 0 and 0 → 1) as a function of the input weight; (9c) number of input volleys M' needed to fill the memory, as a function of d_s for p_r ∈ {0.4, 0.6, 0.8, 1.0}; (9d) input weight over time (multiples of M'); (9e) for fixed parameters the equilibrium of the process, d*_s, is inversely proportional to the reliability of the synapses, p_r.]


Figure 9 (previous page). Unreliable synapses in the simplified limit case model. (a) The probability of a presynaptic neuron spiking before the postsynaptic neuron (p_early and p'_early), provided that the postsynaptic neuron spikes. The dependence on d_s is monotone, as in the reliable setting. (b) Dependence of the expected weight change on d_s; the curves represent the expected number of synapses switching from weight 0 to weight 1 and from weight 1 to weight 0. The expected change close to d*_s is close to 0 and can be made arbitrarily small. (c) Unreliable synapses in our model only record synaptic events in their memory when their transmission succeeds. For a small value of d_s the target neuron cannot be reliably activated, and thus the number of volleys needed for sampling necessarily increases. The figure shows the number of input volleys, M', we need in order for each synapse to have 2M samples. (d) Fast stabilisation for different starting values of d_s. In this setup we apply the learning step after every 2M/p_r input volleys. The plot shows the mean over 100 trials and the envelopes represent a standard deviation estimate. The parameters are the same as in Figure 8, with reliability p_r = 0.5. (e) For unreliable synapses the equilibrium state scales by p_r⁻¹ when using the same parameters as in the p_r = 1 setting. For this figure we use the same thresholds and transition probabilities as in Figure 8 for different values of p_r. We set the initial value of d_s to 20/p_r for each p_r and plot the mean value of d_s · p_r, with standard deviation estimates, after 50 learning steps over 100 trials.

[Figure 10 appears here. Panels: (10a) equilibrium d*_s(σ); (10b) average index of weight-1 edges as a function of σ.]

Figure 10. We compare how the plasticity rule, designed for spikes arriving according to a uniformly random permutation, performs for other input types in the simplified limit case model. The signal on the i-th synapse arrives at time N(i/d, σ), where the standard deviation σ controls how close the input is to a fixed or a uniformly random permutation (see Section A.1.3 for details). We choose the parameters as in Figure 8. Initially each synapse is strong independently with probability 4θ/d. (a) For σ = 0 we are in the deterministic case and only the first 10 input neurons will have strong synapses to the target neuron. As σ increases, the number of strong synapses converges to d*_s = 20. For σ = 2.0 the first 20 input neurons are already close to a random permutation, since the probability of the 20th neuron being earlier than the first neuron is approximately 0.47. (b) The mechanism has a clear preference for the earliest neurons, but as σ increases they become less distinguishable from later neurons, as the plot demonstrates. For both figures each data point is the mean of 250 trials, where for each trial we do 250 learning steps; the error bars represent a standard deviation estimate.


[Figure 11 appears here. Panels: (11a) input weight convergence for d_s ∈ {20, 40, 80}; (11b) weight distribution (log of strong synapse weight); (11c) input weight convergence for setups 1-3; (11d) weight distribution for setups 1-3; (11e) input weight distribution for unilateral memory.]


Figure 11 (previous page). Results for heterogeneous and multimodal weights, and unilateral memory. (a-b) Weight normalisation for heterogeneous weights in the simplified limit case model. (a) With the same parameters (except for θ) as in Figure 8 we observe input weight convergence. The strong weights are distributed as 10^{N(µ,σ)}, where the mean of the normal is µ = 1.74 and the variance is σ = 0.1002, as was observed in (Loewenstein et al., 2011). The expected weight of a strong synapse is 71.67..., and therefore we set θ = 20 · 71.67. Each curve is the mean of 30 trials and the envelopes represent a standard deviation estimate. (b) Comparison between the aforementioned lognormal distribution of the weights and the empirical distribution of the weights of the synapses selected to be strong in the stable state. The distribution is constructed from 5000 trials, where for each one we apply 50 learning steps. (c-d) Weight convergence and weight distribution for multimodal synapse weights in the simplified limit case model. For an explanation of the three different setups see Section A.1.5. (c) shows convergence for the three different setups, where all the synapses start with weight 10 and θ = 100. The other parameters were chosen as in Figure 8. For setup 2 we set α = 1.2, and for setup 3 we choose c_p = 1, c_d = 0.2 and ζ = 0.15, such that the weight modifications are similar to the other setups. The curves represent the mean over 100 trials and the envelopes represent a standard deviation estimate. (d) We compare the weight distributions of the three setups from (c) after 250 extra learning steps, using a scaled histogram with 20 bins for setup 3 and a discrete distribution plot for setups 1 and 2. Setup 1 produces a bimodal weight distribution, while the other two setups are unimodal. Note also that none of the update rules imposes a hard bound on the weights, but all resulting weights are smaller than 10, which was the starting weight of the synapses. (e) Distribution of the input weight for unilateral memory in the simplified limit case model. The parameters are set as in Figure 10, except that only strong synapses have memory. The plot shows the histogram over 250 trials, where for each trial we run the process for 250 learning steps and record the input weight at the end of the process.


[Figure 12 appears here: weight change (d_s over time in ms) for constant input rate, for r = 0.33, 0.50 and 0.67.]


Figure 12 (previous page). Input weight over time for constant rate input in the main model. We compare three different parameter settings; for the parameters see Table 1 (c). The parameters were chosen such that for oscillating input the target neuron would spike in expectation after either one third, half or two thirds of the up phase had passed (r = 1/3, r = 1/2 or r = 2/3). For each setting the number d_s of strong synapses is initially set at the point at which it is stable for the oscillating input. The constant input rate is set to 40 Hz. For r = 1/3 all the synapses increase their weight, since the synapses are early half of the time instead of just a third of the time. For r = 1/2 the process stays stable, as expected, and for r = 2/3 the input weight decreases to a point where the target neuron cannot be activated anymore at 40 Hz. This shows that for r ≥ 1/2 neurons reduce (or keep) their reaction to input that arrives at a constant rate over a long time, while for smaller r the weights grow. Each curve is the mean of 50 trials and the envelopes represent a standard deviation estimate.


[Figure 13 appears here. Panels: (13a) the expected first spike in an up phase, r, mildly depends on the up phase length λ_U; (13b) monotone dependence of the input weight d_s on λ_U; (13c) the output rate is the only quantity which varies with the down phase length λ_D; (13d) r mildly depends on the input size d. All panels show setups 1 and 2.]


Figure 13 (previous page). Effects on the process in the main model when varying selected parameters; see Section 2.2 for an explanation of the effects. The two setups correspond to the parameters in the first two columns of Table 1 (c). (a-b) The mean relative spike time r in an up phase, and the number d_s of strong synapses, respectively, for different lengths λ_U of the up phases. As can be observed, r depends mildly on λ_U. The number of strong synapses, d_s, is proportional to r/λ_U, so up to second-order effects d_s is inversely proportional to λ_U. (c) The dependence of the output rate on λ_D. The rate is the only quantity of the process which varies with λ_D. This feature makes our rule unique, because it means we can stabilise the input weight and the number of spikes within an up phase for input arriving at different rates. (d) The mild dependency of r on d. If the weight of the weak synapses is not too large then the effect is again mild, as in (a). In all figures each data point is the mean of 10 trials and the error bars represent a standard deviation estimate. For each trial the process was simulated for a time equivalent to 200 up phases to allow it to converge, and then the data points were collected over a continued simulation of 200 up phases (r, rate) or at its end (d_s).


[Figure 14 appears here: average number of spikes within an up phase over time (ms) for 15 layers and two parameter setups.]

Figure 14. Input forwarding across multiple layers. The two parameter setups correspond to distinguished postsynaptic spikes after an expected r = 1/3 (left) and r = 1/2 (right) of the up phase has passed; the corresponding parameters can be found in Table 1 (c). We observe that the rate function converges to a bi- or trimodal burst distribution, which is in agreement with how many postsynaptic spikes we expect in each up phase. The input distribution becomes less random and becomes reminiscent of the volley setting. The time lag between subsequent layers is due to synaptic delay and spike integration time. We correct for the former, so in the figure the time lag is only due to spike integration. The plots were generated by estimating the output rate function of a neuron in the single neuron setting and using the estimated rate function to sample the input to the next layer. The rate function is sampled from a simulation with 100,000 up phases and a time resolution of 1 ms, and plotted with a resolution of 2 ms. Please note that the scale on the y-axis is different for rows 1-3, 4-12 and 13-15.
