
Long-period rhythmic synchronous firing in a scale-free network
Yuanyuan Mi^a,b, Xuhong Liao^b, Xuhui Huang^b, Lisheng Zhang^b, Weifeng Gu^b, Gang Hu^b,1, and Si Wu^a,c,d,1

^a State Key Laboratory of Cognitive Neuroscience and Learning and International Digital Group (IDG)/McGovern Institute for Brain Research, ^b Department of Physics, and ^c Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China; and ^d Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China

Edited by Robert Desimone, Massachusetts Institute of Technology, Cambridge, MA, and approved November 1, 2013 (received for review March 13, 2013)

Stimulus information is encoded in the spatial-temporal structures of external inputs to the neural system. The ability to extract the temporal information of inputs is fundamental to brain function. It has been found that the neural system can memorize temporal intervals of visual inputs on the order of seconds. Here we investigate whether the intrinsic dynamics of a large-size neural circuit alone can achieve this goal. The network models we consider have scale-free topology and the property that hub neurons are difficult to activate. The latter is implemented either by including abundant electrical synapses between neurons or by considering chemical synapses whose efficacy decreases with the connectivity of the postsynaptic neuron. We find that hub neurons trigger synchronous firing across the network, that loops formed by low-degree neurons determine the rhythm of synchronous firing, and that the difficulty of exciting hub neurons prevents epileptic firing of the network. Our model successfully reproduces the experimentally observed rhythmic synchronous firing with long periods and supports the notion that the neural system can process temporal information through the dynamics of local circuits in a distributed way.

reservoir network | temporal information processing | scaled chemical synapse

Over the last decades, our knowledge of how the neural system extracts the spatial structures of visual stimuli has advanced considerably, as is well documented by the receptive field properties of visual neurons (1). The equally important issue of how the neural system processes temporal information remains much less understood (2–5). A central issue that has been widely debated is whether timing in the brain relies on a centralized clock, such as a dedicated pacemaker that counts the lapse of time, or whether timing is achieved distributively in local circuits (2, 5, 6). Previous studies have shown that a recurrent network with random connections, diversified single-neuron dynamics, and synaptic short-term plasticity (STP) can use time-varying states to retain the memory traces of external inputs (7, 8). However, limited by the network structure, this type of model can only represent temporal information up to hundreds of milliseconds (8).

Notably, in their experimental study, Sumbre et al. (9) found that the neural circuit in the optic tectum of zebrafish can memorize temporal intervals of visual inputs on the order of seconds. In this experiment, a visual stimulation was first presented to a zebrafish periodically for 20 times. After this conditioning stimulation (CS), the neural circuit of the optic tectum of the zebrafish displayed self-sustained synchronous firing with the same rhythm as that of the CS pattern. This sustained rhythmic activity induced regular tail flipping in the zebrafish, suggesting that it may serve as a substrate for perceptual memory of rhythmic sensory experience. The longest period that the neural circuit was able to memorize was up to 20 s.

How does a neural system acquire and memorize this long-period rhythm? An easy solution is perhaps that a clock in the brain counts the time and guides the rhythmic response of the neural circuit. However, the evidence accumulated thus far tends to suggest that timing in the brain is not controlled by a centralized clock (4, 5). This prompts an important question: is the intrinsic dynamics of a neural circuit alone sufficient to generate these rhythmic synchronous firings? Computationally, the difficulty for a homogeneous neural network implementing this task lies in the fact that the time constants of single neurons and neuronal synapses are too short (on the order of 10–1,000 ms) to hold the stimulation information long enough to establish associative learning.

To tackle this challenging issue, we propose a unique mechanism relying on a special topology of the network and an appropriate interaction between neurons. In particular, we consider a network model with scale-free structure (10, 11), i.e., the fraction of neurons in the network with k connections satisfies P(k) ∼ k^{−γ}, with a constant γ. In practice, we do not require the network topology to be perfectly scale free, but rather that the network consists of a few neurons having many connections and a large number of neurons with few connections. Neurons with k > k_th are called hubs, and those with k ≤ k_th are of lower degree (depending on the parameter setting; k_th = 6 is used in this study).

The dynamics of a single neuron are given by (12)

$$\frac{du_i}{dt} = -\frac{1}{e}\,u_i(u_i - 1)\left(u_i - \frac{v_i + b}{a}\right) + \sum_{j \neq i}^{N} F_{ij}, \qquad [1]$$

$$\tau \frac{dv_i}{dt} = f(u_i) - v_i, \qquad [2]$$

where u_i and v_i are variables describing the state of the neuron: u_i is analogous to the membrane potential, and v_i to the recovery current.

Significance

Understanding how neural systems process temporal information is at the core of elucidating brain functions, such as speech recognition and music appreciation. The present study investigates a simple yet effective mechanism for a neural system to extract the rhythmic information of external inputs on the order of seconds. We propose that a large-size neural network with scale-free topology is like a repertoire, consisting of a large number of loops and chains of various sizes, and that these loops and chains serve as substrates to learn the rhythms of external inputs.

Author contributions: Y.M., G.H., and S.W. designed research; Y.M., G.H., and S.W. performed research; Y.M., X.L., X.H., L.Z., W.G., G.H., and S.W. analyzed data; and Y.M., G.H., and S.W. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

Freely available online through the PNAS open access option.

1 To whom correspondence may be addressed. E-mail: [email protected] or [email protected].

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1304680110/-/DCSupplemental.


τ is the time constant, and N is the number of neurons. The term F_ij denotes the neuronal interaction. Fig. 1A shows the evolution of u(t) and v(t) during an excitation process, mimicking the generation of an action potential.
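To make the single-neuron dynamics concrete, the following sketch integrates Eqs. 1 and 2 for one neuron with a simple Euler scheme. The piecewise f(u) and the parameter values (e = 0.04, τ = 1, a = 0.84, b = 0.07) are taken from the caption of Fig. 1; the input pulse, its amplitude, and the step size are our own illustrative choices.

```python
import numpy as np

# Parameters from the Fig. 1 caption: e = 0.04, tau = 1, a = 0.84, b = 0.07
E, TAU, A, B = 0.04, 1.0, 0.84, 0.07

def f(u):
    """Piecewise recovery function f(u) from the Fig. 1 caption."""
    if u < 1.0 / 3.0:
        return 0.0
    if u <= 1.0:
        return 1.0 - 6.75 * u * (u - 1.0) ** 2
    return 1.0

def step(u, v, F_in, dt=0.01):
    """One Euler step of Eqs. 1 and 2; F_in is the summed input sum_j F_ij."""
    du = -(1.0 / E) * u * (u - 1.0) * (u - (v + B) / A) + F_in
    dv = (f(u) - v) / TAU
    return u + dt * du, v + dt * dv

# Drive the neuron with a brief suprathreshold pulse (illustrative values)
u, v, trace = 0.0, 0.0, []
for t in np.arange(0.0, 10.0, 0.01):
    F_in = 0.3 if 1.0 <= t < 1.2 else 0.0   # transient input pulse
    u, v = step(u, v, F_in)
    trace.append(u)
# u rises toward 1 (a spike) and then decays through a refractory phase,
# qualitatively reproducing the excitation process of Fig. 1A.
```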

Apart from scale-free topology, another key characteristic of our model is that a hub neuron is harder to activate than a low-degree neuron. This property is crucial for avoiding epileptic firing and can be realized in two ways. One way is to consider electrical coupling between neurons (13), implementable by setting F_ij = C0 J_ij (u_j − u_i), which is also called diffusive coupling. The weights are set to J_ij = J_ji = 1 if neurons i and j are connected, and J_ij = J_ji = 0 otherwise. Diffusive coupling, which can be regarded as a constant electrical resistor linking two neurons, tends to equalize the potentials of connected neurons. Electrical synapses are known to enhance synchronous firing in a neural network (13). Here, they also play the important role of increasing the difficulty of activating a hub neuron: because a hub neuron has many connections, the excitation current it receives easily leaks to its connected neighbors. We choose the coupling strength C0 so that a single spike is adequate to activate a low-degree neuron (Fig. 1B), whereas two or more simultaneously arriving spikes are needed to excite a hub neuron (Fig. 1 C and D).

Alternatively, the difficulty of activating a hub neuron can be implemented by considering chemical synapses whose efficacy decreases with the connectivity of the postsynaptic neuron. This property is realized by setting F_ij = C_i J_ij H(u_j − θ), where H(u_j − θ) = u_j for u_j > θ and H(u_j − θ) = 0 otherwise, mimicking that neuron i receives input from neuron j only when j fires (this occurs when u_j > θ, the firing threshold). J_ij = 1 or 0 depending on whether there is a chemical synapse from neuron j to i. The parameter setting C_i = C0/√(k_i), with k_i the connectivity of neuron i, implies that the more connections a neuron has, the weaker its synaptic efficacy. We choose the value of C0 so that a single spike is adequate to activate a low-degree neuron (Fig. 1E), whereas two or more simultaneously arriving spikes are needed to excite a hub neuron (Fig. 1F).
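Both coupling schemes reduce to simple vectorized functions of the adjacency matrix and the state vector. A minimal sketch, following the definitions above (J is the 0/1 adjacency matrix, C0 the base strength, θ the firing threshold, and k_i the degree of neuron i):

```python
import numpy as np

def electrical_input(u, J, C0):
    """Diffusive (electrical) coupling: F_ij = C0 * J_ij * (u_j - u_i).
    Returns the summed input to every neuron as a vector."""
    k = J.sum(axis=1)               # degree k_i of each neuron
    return C0 * (J @ u - k * u)     # sum_j J_ij u_j  -  k_i u_i

def chemical_input(u, J, C0, theta=0.15):
    """Scaled chemical coupling: F_ij = C_i * J_ij * H(u_j - theta),
    where H(u_j - theta) = u_j if u_j > theta and 0 otherwise,
    and C_i = C0 / sqrt(k_i). J[i, j] = 1 marks a synapse from j to i."""
    k = J.sum(axis=1)
    Ci = C0 / np.sqrt(np.maximum(k, 1.0))   # guard against isolated nodes
    H = np.where(u > theta, u, 0.0)         # presynaptic firing term
    return Ci * (J @ H)
```

The 1/√k_i scaling is what makes a hub (large k_i) receive individually weaker synapses, so that a single incoming spike cannot excite it.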

Results

Rhythmic Synchronous Firing in a Scale-Free Network. We first demonstrate that our network model has the capacity of retaining rhythmic synchronous firing. Because the network behaviors for electrical and chemical synapses are qualitatively the same, we present the results for electrical synapses here; the results for chemical synapses are shown in SI Text, unless stated otherwise. For illustration, an arbitrary scale-free network with γ = 3, N = 210 neurons, and mean connectivity ⟨k⟩ = 4 is generated, as shown in Fig. 2A. The neurons are indexed according to the order of their generation by the preferential-attachment rule (11). Beginning with a random initial condition (setting u_i and v_i, for i = 1, ..., N, to be uniformly distributed random numbers between 0 and 1), we evolve the network state according to Eqs. 1 and 2 and find that, with a probability of around 10%, the network evolves into a self-sustained periodically oscillatory stationary state (i.e., an attractor), whereas in the other cases, the initial activity of the network fades away rapidly. Measured by the mean activity of all neurons, ⟨u(t)⟩ = Σ_{i=1}^{N} u_i(t)/N, the oscillatory attractor is characterized by bursts of synchronous firing across the network interrupted by rather long interbursting intervals during which only a small number of neurons fire (Fig. 2B and Movie S1). Thus, the network can maintain rhythmic synchronous firing.
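This numerical experiment can be reproduced, in outline, with a few lines built on the previous sketches (the constants, f, and electrical_input defined above). We use networkx's Barabási–Albert generator as a stand-in for the preferential-attachment construction; with m = 2 edges per new node it gives ⟨k⟩ ≈ 4 and an exponent close to γ = 3. The integration time and seed are illustrative.

```python
import numpy as np
import networkx as nx

N, C0, DT = 210, 0.174, 0.01
rng = np.random.default_rng(1)

# Preferential-attachment network: m = 2 edges per new node -> <k> ~ 4
G = nx.barabasi_albert_graph(N, 2, seed=1)
J = nx.to_numpy_array(G)

# Random initial condition: u_i, v_i uniform in [0, 1]
u = rng.uniform(0.0, 1.0, N)
v = rng.uniform(0.0, 1.0, N)

f_vec = np.vectorize(f)                    # f from the single-neuron sketch
mean_u = []
for _ in range(int(200.0 / DT)):
    F = electrical_input(u, J, C0)         # coupling sketch above
    du = -(1.0 / E) * u * (u - 1.0) * (u - (v + B) / A) + F
    dv = (f_vec(u) - v) / TAU
    u, v = u + DT * du, v + DT * dv
    mean_u.append(u.mean())
# In a minority of runs <u(t)> settles into periodic bursts separated by
# long quiet intervals; in the rest, the initial activity dies out.
```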

Mechanism for Long-Period Rhythmic Synchronous Firing. To unveil the mechanism underlying the network behavior, we investigate the activities of individual neurons during rhythmic firing and find that they can be roughly divided into two groups: those that fire only once in a single period, referred to as T1 neurons hereafter (Fig. 2C), and those that fire twice in a single period, referred to as T2 neurons hereafter (Fig. 2D) (the very few neurons firing more than twice in a single period are attributed to T2 neurons for simplicity). The different behaviors of T1 and T2 neurons disclose the secret of the network dynamics. We redraw the network diagram of Fig. 2A by moving all T2 neurons to the center of the figure, as shown in Fig. 2E. Interestingly, we find that all T2 neurons have low-degree connectivity and form a loop. We call such a loop formed by low-degree neurons a low-degree loop.
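Operationally, the T1/T2 split is just a spike count per period. A minimal sketch, assuming spike times have been extracted from the simulation (e.g., upward threshold crossings of u_i) and the number of observed periods has been read off from ⟨u(t)⟩:

```python
def classify_t1_t2(spike_times, n_periods):
    """Label each neuron T1 (one spike per period) or T2 (two or more).
    spike_times: list of per-neuron spike-time arrays over n_periods.
    Neurons firing more than twice per period are lumped into T2."""
    labels = []
    for times in spike_times:
        spikes_per_period = len(times) / n_periods
        labels.append("T2" if spikes_per_period >= 1.5 else "T1")
    return labels
```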

The picture of the network dynamics now becomes clear, as described in the following. For illustration, let us consider that neuron 91 on the loop (marked red in Fig. 2E) is excited first and that the activity propagates along the loop in an anticlockwise direction (Movie S2). Initially, the firing of low-degree neurons on the loop is not sufficient to generate synchronous firing of the network, and the network state remains silent.


Fig. 1. (A) Dynamics of the single-neuron model described in Eqs. 1 and 2. The function f(u_i) in Eq. 2 is chosen to be f(u_i) = 0, 1 − 6.75u_i(u_i − 1)², and 1 for u_i < 1/3, 1/3 ≤ u_i ≤ 1, and u_i > 1, respectively. We take e = 0.04, τ = 1, a = 0.84, and b = 0.07, so that a single neuron is excitable when the input is sufficiently large. After firing, the neuron undergoes a refractory period and then settles into the steady rest state. (B) An example of a low-degree neuron connected by electrical synapses to three neighbors. The coupling strength is C0 = 0.174. A single spike is sufficient to activate this neuron; in a network, a spike can typically activate a neuron with fewer than six connected neighbors. (C) An example of a hub neuron connected by electrical synapses to eight neighbors. A single spike is not adequate to activate this neuron. (D) The same hub neuron as in C can be successfully activated by two or more simultaneously arriving spikes. (E) An example of a low-degree neuron connected by scaled chemical synapses to three neighbors. The synapse strength, illustrated by the line width, is C0/√3, with C0 = 0.22; the firing threshold is θ = 0.15. A single spike is sufficient to activate this neuron. (F) Two or more simultaneously arriving spikes are required to activate a hub neuron with eight neighbors connected by scaled chemical synapses. The synapse strength is C0/√8.


The situation starts to change when the excitation propagates to neuron 36, from which hub neuron 4 becomes activated. This hub neuron fires because it receives two spikes at around the same time through two transmission routes (see red edges in Fig. 2E). Many other hubs are also connected to the loop; however, they do not fire because they do not receive two or more synchronized spikes from the loop. The firing of hub neuron 4 subsequently induces synchronous firing in the network owing to its massive connections to other neurons. After synchronous firing, the network returns to the interbursting period because the majority of the neurons are now in the refractory state.

Thus, in generating rhythmic synchronous firing, the role of a hub neuron is to rapidly spread excitation across the network, and the role of a low-degree loop is to retain the activity during the interbursting interval, with the length of the loop determining the rhythm. Neurons on the loop fire twice in a single period: once in the interbursting interval and once during the hub-induced synchronous burst. Other neurons fire only once, during the synchronous firing. Electrical synapses (or scaled chemical synapses) between neurons play a critical role, ensuring that only one or very few hub neurons are activated during the propagation of excitation along the low-degree loop, which protects against epileptic firing of the network.

After synchronous firing, the majority of neurons are in the refractory period and thus cannot be activated again immediately. However, there are exceptions. In the example network of Fig. 2E, the five neurons on the loop marked green (their indexes are 91, 153, 59, 202, and 201) are activated by the excitation propagating along the loop at around the same time that hub neuron 4 is excited. Therefore, because of their refractory period, they are not involved in the hub-induced synchronous burst. This means that they recover from the refractory period earlier than the others. As a result, neuron 91 can be activated by the activity of the network at the end of the synchronous phase, and the propagation of the excitation wave along the loop begins again. Thus, the low-degree loop plays another important role: retaining the excitation seed across a hub-induced synchronous firing, so that a rhythmic network activity can be maintained. This property is clearly displayed in Movie S2.

As a comparison, we also studied network models with non-scale-free topology, including regular, random, and small-world networks, and found that they all fail to retain long-period rhythmic synchronous firing, either because they lack hub neurons to trigger synchronous firing or because they lack sufficient low-degree neurons to retain long-period low-level activity (Figs. S1–S3).

Reservoir Nature of a Scale-Free Network. In the above, we demonstrated that a scale-free network has the capacity of retaining periodic synchronous firing and that the length of a low-degree loop determines the period of the rhythm. We further check whether a scale-free network has the resources to maintain a broad range of rhythmic activities. We measured the statistical distribution of the lengths of low-degree loops in a scale-free network with 300 neurons. As Fig. 3A shows, the length of low-degree loops has a broad distribution, indicating that the network is like a reservoir with the potential to maintain rhythms over a wide range of time periods. The longest period is determined by the length of the longest low-degree loop. We measure how the latter, denoted L_max, scales with the network size. Fig. 3B shows that it increases linearly with the number of neurons (see theoretical analysis for larger networks in Fig. S4).

Fig. 2. (A) A network with scale-free topology, γ = 3, mean connectivity ⟨k⟩ = 4, and N = 210. The diameter of a neuron is proportional to its connectivity. (B) The population activity of the network, which displays rhythmic synchronous firing. (C and D) Activities of individual neurons can be divided into two types: those that fire only once in a single period (T1 type; C) and those that fire twice or more (T2 type; D). (E) Same as A with the positions of neurons reorganized by placing all T2 neurons (green and orange) in the center and all T1 neurons (blue) outside. All T2 neurons have low-degree connectivity and form a loop. Two red edges show the pathways along which excitation of the loop propagates to a hub neuron.

Fig. 3. (A) Distribution of the lengths of low-degree loops in a scale-free network of size N = 300. The result is obtained by averaging over 100 randomly generated scale-free networks of the same size. (B) Length of the longest low-degree loop vs. the number of neurons in a network. Actual values are obtained by exhaustively searching all loops in each network, and the upper bounds are calculated theoretically (Fig. S4). Each data point is obtained by averaging over 100 randomly generated scale-free networks of the same size.


Fitted by a linear curve, this gives L_max ≈ 0.12N + 0.8. Thus, for a network size N in the range of 10^4–10^5 (a very rough estimate of the number of neurons in the zebrafish tectum), L_max is in the range of 1.2 × 10^3–1.2 × 10^4. Considering that the membrane time constant of individual neurons is on the order of 10 ms and the transmission delay between neurons is on the order of 1 ms, the time consumed for excitation to propagate along the longest low-degree loop is in the range of 13–130 s, which fully covers the period of 20 s observed in the experiment (9).
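The loop statistics behind Fig. 3 can be gathered by enumerating cycles in the subgraph of low-degree neurons (k ≤ k_th = 6). A brute-force depth-first sketch is given below; it is exponential in the worst case and therefore only a stand-in for the extensive search used in the paper, practical for networks of a few hundred nodes. Plugging the fit into the estimate above: N = 10^4 gives L_max ≈ 1.2 × 10^3 hops, and at roughly 11 ms per hop (10-ms membrane time constant plus 1-ms delay) this yields ≈ 13 s.

```python
import networkx as nx

def longest_low_degree_loop(G, k_th=6):
    """Length of the longest cycle whose nodes all have degree <= k_th."""
    low = [n for n in G if G.degree(n) <= k_th]
    H = G.subgraph(low)
    best = 0

    def dfs(start, node, visited):
        nonlocal best
        for nxt in H.neighbors(node):
            if nxt == start and len(visited) >= 3:
                best = max(best, len(visited))        # closed a cycle
            elif nxt not in visited and nxt > start:  # count each cycle once
                visited.add(nxt)
                dfs(start, nxt, visited)
                visited.remove(nxt)

    for s in H.nodes:   # each cycle is discovered from its smallest label
        dfs(s, s, {s})
    return best
```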

Matching the Rhythm of External Input. The above analysis suggests that, to retain long-period rhythmic synchronous firing in a network, the key is to have a low-degree loop of the proper size and a hub neuron "hooked" onto that loop that can be activated by the excitation of the loop. However, how does the neural system acquire the necessary structure from a given external rhythmic input? Here, we argue that this can be achieved through a learning process. To demonstrate this idea, we carry out the following simulation.

A conventional scale-free network has only topology, without geometrical structure. To incorporate the fact that a neural circuit is essentially embedded in a 2D cortical sheet with distance-dependent connection density, we build a scale-free network in 2D space that accommodates the biological feature that neurons tend to have more connections locally (Fig. 4 A and B; for details, see Materials and Methods).

To extract the rhythm of a periodic stimulation, the key is to establish an association between consecutive stimulations, which requires the network to hold the stimulation information (memory trace) for a duration longer than the interstimulation interval. Consider that at t = 0, a stimulation is first presented to a portion of neurons in the network (4% in our simulation, to ensure the initial synchronous firing of the network), referred to as neural cluster A, whose activation triggers a strong transient response in the network. Subsequently, the residue of the stimulus-induced activity propagates through the network along various paths, which effectively holds a memory trace of the preceding stimulation (Fig. 4C) and provides a cue for associative learning. Around time T, the period of the rhythmic input, the residual activity reaches a neural cluster B, and meanwhile, a new round of stimulation is applied to neural cluster A. Following the idea of Hebbian learning (14–16), the synapses from neural cluster B to A are potentiated. This learning process occurs repeatedly at each period of the stimulation presentation. Finally, the connections from neural cluster B to A are strengthened to the extent that the excitation of B can activate A, and thus a closed loop of size T is formed. Note that because the stimulation induces initial synchronous firing, neural cluster A contains hub neurons, and these hub neurons are hooked onto the loop after learning. The network can now hold synchronous firing with the same rhythm as the input. Fig. 4D displays the simulation result for the case of electrical synapses; the result for chemical synapses is shown in Fig. S5.

The range of rhythmic inputs a network can learn is determined by the lifetime of the memory trace elicited by the stimulation. We measured how the latter scales with the network size (Materials and Methods). Fig. 5 A and B shows that the mean lifetime of a memory trace, denoted ⟨t_trace⟩, increases linearly with the number of neurons and can be approximated as ⟨t_trace⟩ ≈ (0.036N + 29)τ for electrical synapses (Fig. 5A) and ⟨t_trace⟩ ≈ (0.030N + 29)τ for chemical synapses (Fig. 5B), where τ is the time constant of the single-neuron dynamics. Fig. 5 C and D demonstrates that a scale-free network with electrical synapses is able to learn a broad range of rhythmic inputs provided the interstimulation interval is smaller than the lifetime of the memory trace the network can hold. The result for chemical synapses is similar and is shown in Fig. S5.

Fig. 4. (A) A scale-free network of 1,000 neurons in 2D space, in which neurons tend to have higher connectivity locally. (B) The connectivity distribution of the network, which satisfies the power law with γ = 1.95. (C) The network can retain residual activity elicited by a transient strong stimulation for more than 90τ. (Inset) Network activity around 90τ. This residual activity serves as a memory trace of the stimulation. Electrical synapses are used; parameters are the same as in Fig. 1 A and B. (D) The network learns to generate rhythmic synchronous firing with a period of 80τ after an external stimulation is presented repeatedly 20 times.


Fig. 5. (A and B) The lifetime of a memory trace vs. the number of neurons in a scale-free network: (A) networks with electrical synapses; (B) networks with chemical synapses. The parameters for the single-neuron dynamics and synaptic strengths are the same as in Fig. 1. Each data point is obtained by averaging over 100 networks of the same size. (C and D) A given scale-free network (with electrical synapses) learns to generate a broad range of rhythmic synchronous firings from different external inputs. An external stimulation is applied periodically 20 times. The interstimulation intervals are (C) T = 40τ and (D) T = 60τ.


Discussion

In summary, we have proposed a very simple yet effective mechanism to generate and maintain long-period rhythmic synchronous firing in the neural system. The network has scale-free topology and contains low-degree loops and chains of various sizes, which endow the neural system with the capacity to process a broad range of rhythmic inputs. In the presence of a rhythmic external input, the neural system selects from its reservoir a low-degree loop whose size matches the input rhythm, and this matching operation can be achieved by a learning process.

Scale-free topology and the hardness of activating a hub neuron are two fundamental elements of our model. Strictly speaking, we do not need the network structure to be perfectly scale free, but rather to contain a few hubs and a large number of low-degree neurons. This type of connection pattern has been suggested to achieve a good balance between communication efficiency and wiring economy in neural networks (17). A number of experimental studies also support the existence of scale-free networks in the neural system. For instance, it was found that the topology of developing hippocampal networks in rats and mice (18) and the topology of functional networks in the human brain based on functional MRI data (19, 20) are scale free. Experimental data have also shown that stimulating a single or a few cortical neurons can significantly modulate perception and motor output (21, 22) and the global state of the brain (23, 24), indicating the potential presence of hub neurons.

To implement the hardness of activating a hub neuron in comparison with exciting a low-degree one, we proposed two potential mechanisms. One is that neurons are connected by electrical synapses. Electrical coupling has been found to exist abundantly between ganglion cells in the retina and between interneurons in the cortex. It remains unclear whether, in some cortical regions, electrical synapses also exist in abundance between excitatory neurons (25).

The other mechanism is that neurons are connected by chemical synapses whose strengths decrease with the degree of connectivity of the postsynaptic neurons. In contrast, if the strengths of chemical synapses are homogeneous (setting C_i = C0, independent of k_i), a scale-free network is unable to retain long-period rhythmic synchronous firing, because hub neurons induce high-frequency oscillating responses (Fig. S6). We found that if the synaptic strength scales inversely with the square root of neuronal connectivity, i.e., C_i = C0/√(k_i), long-period rhythms are well retained. Notably, this relationship is also the condition for a balanced network producing irregular neural responses (26). Experimental data have also shown that synaptic efficacy can decrease with the connectivity of a postsynaptic neuron (27).

We also tested the robustness of our model to noise. Two noise sources were explored: one is heterogeneity in neuronal connections, and the other is the injection of background noisy input. We measured how these two forms of noise affect the network responses to a strong transient external stimulation, including (i) the reliability of the network generating synchronous firing followed by long-lasting residual activity, and (ii) the lifetime of the memory trace elicited by the stimulation. We found that the network performance is robust over a wide range of noise amplitudes (Fig. S7).

In general, if a network system has a scale-free type of topology, i.e., it consists of a few hub nodes and a large number of low-degree nodes, and has the property that hub nodes are difficult to activate in comparison with low-degree ones (irrespective of the mechanism by which this is achieved), then we expect the main findings of this work to still apply. It will be interesting to explore whether other dynamic systems hold these properties.

Synchronous oscillation has been suggested to play important roles in brain functions. Different mechanisms have been proposed to generate rhythms in neural responses, ranging from single-cell properties (e.g., pacemaker or resonance) to neural population dynamics (e.g., a network of interneurons or a loop of reciprocally coupled excitatory and inhibitory neurons) (28). A challenge to these proposed mechanisms is to reconcile the notion of regular oscillation with the seemingly contradictory behavior of irregular responses of individual neurons. Here, the mechanism we propose for generating slow oscillation depends on the propagation of neural activity along a loop of low-degree neurons. Thus, it does not contradict the phenomenon that individual neurons can respond irregularly to a constant stimulus (29).

Finally, our study has some far-reaching implications for neural information processing. It suggests that a neural system can use loops and chains of connected neurons to hold the memory trace of input information and that the latter might serve as a substrate to process temporal events. The neural model we propose here agrees with the idea of a reservoir network (8, 30), which states that the neural system is like a large repertoire with complete or even redundant resources for implementing a brain function, in which simplicity, rather than the efficiency of using resources, matters. Our study also supports the notion that the neural system can process temporal information by using the dynamics of local circuits in a distributed way.

Materials and Methods

Generation of a Scale-Free Network in 2D Space. A neural circuit is essentially embedded in 2D space, and neurons tend to have more connections locally. To accommodate this feature, we generate a scale-free network in 2D space. First, we use the preferential-attachment rule (11) to generate a preliminary scale-free network and assign neurons' positions as follows: starting from the center toward the peripheral regions, neurons are added one by one clockwise in a 2D lattice (Fig. 4A). Denote n_i the neuron added at the ith step. A newly added neuron makes one connection with probability P = 0.4 and two connections with probability P = 0.6 to the existing neurons. The chance of an existing neuron n_i linking to the new neuron is given by k_i/Σ_j k_j, where k_i is the current connectivity of neuron n_i, and the summation runs over all existing neurons. This preferential-attachment rule implies that hub neurons tend to be located in the central area, because earlier-added neurons have more chances to be connected to others. This structure agrees with the property of the visual system (e.g., the retina) that neurons tend to have higher density, and hence higher connectivity, in the fovea than in the peripheral region. Second, for neurons n_i with large enough connectivity (we choose those with index i < 100 in the preliminary network), we remove the connection to neuron n_j with probability P = 1 − exp(−β d_ij), with β = 0.9 and d_ij the Euclidean distance between the two neurons in the 2D lattice. The connections of low-connectivity neurons are unchanged. A large d_ij implies that the two neurons are well separated in space and hence have a high probability of being disconnected. This pruning rule achieves the goal that neurons tend to have more connections locally. Using this method, we generate the network shown in Fig. 4A and confirm that it has scale-free topology, in that the distribution of neurons' connectivity satisfies the power law (Fig. 4B). A sketch of this construction is given below.
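The sketch follows the two-step construction above, assuming integer node labels and a clockwise square-spiral assignment of lattice positions (our reading of the layout; the attachment probabilities 0.4/0.6, the pruning rule P = 1 − exp(−β d_ij) with β = 0.9, and its restriction to the first 100 neurons follow the text).

```python
import numpy as np
import networkx as nx

def spiral_positions(n):
    """Clockwise square-spiral lattice positions, center outward."""
    pos, x, y = [], 0, 0
    dx, dy, steps, turns = 1, 0, 1, 0
    while True:
        for _ in range(steps):
            pos.append((x, y))
            if len(pos) == n:
                return np.array(pos, dtype=float)
            x, y = x + dx, y + dy
        dx, dy = dy, -dx              # rotate clockwise
        turns += 1
        if turns % 2 == 0:
            steps += 1                # arm lengthens every two turns

def scale_free_2d(n=1000, beta=0.9, n_early=100, seed=0):
    """Preferential attachment followed by distance-dependent pruning."""
    rng = np.random.default_rng(seed)
    pos = spiral_positions(n)
    G = nx.Graph()
    G.add_node(0)
    for i in range(1, n):
        m = 1 if rng.random() < 0.4 else 2     # 1 link w.p. 0.4, else 2
        deg = np.array([max(G.degree(j), 1) for j in range(i)], dtype=float)
        targets = rng.choice(i, size=min(m, i), replace=False,
                             p=deg / deg.sum())  # preferential attachment
        for j in targets:
            G.add_edge(i, int(j))
    for i in range(min(n_early, n)):           # prune early (hub) neurons
        for j in list(G.neighbors(i)):
            d = np.linalg.norm(pos[i] - pos[j])
            if rng.random() < 1.0 - np.exp(-beta * d):
                G.remove_edge(i, j)
    return G, pos
```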

Learning Algorithm. We consider a simple learning rule (14–16), which states that if a neuron i fires in advance of a neuron j within a short time window Δt (Δt = 2τ), then there is a chance (with probability P = 0.1) that the synapse from neuron i to j is established. Analogous to the experimental setting, we consider an external stimulation applied to the network repeatedly with an interval T. This stimulation instantly activates 4% of the neurons in the network, referred to as neural cluster A (these neurons are chosen randomly and uniformly from the network). Suppose that the first-round stimulation is applied at t = 0, which triggers a strong transient response of the network, implying that at least one hub neuron is activated. Subsequently, the residual activity of the transient response propagates through the network along various paths. Denote those neurons that happen to be active in the time window (T − Δt, T) after the first-round stimulation as neural cluster B; these neurons hold the memory trace of the stimulation. At time T, the second-round stimulation is applied, which activates neural cluster A again. According to the aforementioned learning rule, the couplings from cluster B to A are strengthened. This learning process occurs repeatedly at each round of presenting the same stimulation. Finally, the connections from cluster B to A become sufficiently strong that the excitation of


B can activate A. Thus, a closed loop of size T is formed, and the network is able to generate rhythmic activity with the same period as the input.
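The rule itself can be written as a single probabilistic test over recent spike times. A sketch, assuming a directed graph of chemical synapses (e.g., an nx.DiGraph) and a record of each neuron's most recent spike time (Δt = 2τ and P = 0.1 as above):

```python
import numpy as np

def hebbian_update(G, last_spike, t_now, dt_window, p_new=0.1, rng=None):
    """For every neuron j firing at t_now, create (or keep) a synapse
    i -> j with probability p_new for each neuron i that fired within
    the preceding dt_window. last_spike maps node -> last spike time."""
    rng = rng or np.random.default_rng()
    just_fired = [j for j, t in last_spike.items() if t == t_now]
    for j in just_fired:
        for i, t_i in last_spike.items():
            if i == j or t_i is None:
                continue
            if 0.0 < t_now - t_i <= dt_window and rng.random() < p_new:
                G.add_edge(i, j)   # potentiate the connection from i to j
    return G
```

Applied at every time step of the simulation with dt_window = 2τ, repeated presentations of the stimulation gradually close the loop from cluster B back to cluster A.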

Lifetime of Memory Trace. We calculate how the lifetime of the residual activity elicited by synchronous firing of a network scales with the network size. For a given size N, we randomly generate 100 networks with long low-degree chains and scale-free topology. For each network, we stimulate hub neurons, whose excitation triggers synchronous firing of the network followed by long-lasting residual activity. The lifetime of the memory trace is measured from the onset of synchronous firing to the moment when all neurons are inactive. The statistics of the memory-trace lifetime for a given network size are obtained by averaging over the 100 random networks. We carry out simulations for network sizes of 500, 1,000, and 1,500 and fit the relationship between the lifetime of the memory trace and the network size with a linear curve (Fig. 5 A and B).
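The lifetime measurement reduces to locating the first and last time steps at which any neuron is active. A sketch, assuming u_history holds the simulated membrane potentials and using the firing threshold θ = 0.15 from Fig. 1:

```python
import numpy as np

def memory_trace_lifetime(u_history, dt, theta=0.15):
    """Time from the onset of synchronous firing to the moment when all
    neurons are inactive. u_history has shape (timesteps, N)."""
    active = (u_history > theta).any(axis=1)    # any neuron above threshold?
    if not active.any():
        return 0.0
    onset = int(np.argmax(active))                         # first active step
    last = len(active) - 1 - int(np.argmax(active[::-1]))  # last active step
    return (last - onset) * dt
```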

ACKNOWLEDGMENTS. We acknowledge the very valuable comments of two anonymous reviewers. We thank M. M. Poo, T. Sejnowski, M. Tsodyks, M. Rasch, W. H. Zhang, X.-J. Wang, and K. Y. Michael Wong for valuable discussions. This work is supported by National Foundation of Natural Science of China Grants 11305112 (to Y.M.), 11135001 and 11174034 (to G.H.), 91132702 and 31261160495 (to S.W.), and 11205041 (to X.L.); the Open Research Fund of the State Key Laboratory of Cognitive Neuroscience and Learning (CNLYB1211); and Natural Science Foundation of Jiangsu Province Grant BK20130282.

1. Kandel E, Schwartz J, Jessell T, Siegelbaum S, Hudspeth A (2013) Principles of Neural Science (McGraw-Hill Ryerson, New York), 5th Ed.
2. Carr CE (1993) Processing of temporal information in the brain. Annu Rev Neurosci 16:223–243.
3. Mauk MD, Buonomano DV (2004) The neural basis of temporal processing. Annu Rev Neurosci 27:307–340.
4. Buhusi CV, Meck WH (2005) What makes us tick? Functional and neural mechanisms of interval timing. Nat Rev Neurosci 6(10):755–765.
5. Ivry RB, Schlerf JE (2008) Dedicated and intrinsic models of time perception. Trends Cogn Sci 12(7):273–280.
6. Gibbon J (1977) Scalar expectancy theory and Weber's law in animal timing. Psychol Rev 84(3):279–335.
7. Karmarkar UR, Buonomano DV (2007) Timing in the absence of clocks: Encoding time in neural network states. Neuron 53(3):427–438.
8. Buonomano DV, Maass W (2009) State-dependent computations: Spatiotemporal processing in cortical networks. Nat Rev Neurosci 10(2):113–125.
9. Sumbre G, Muto A, Baier H, Poo M-M (2008) Entrained rhythmic activities of neuronal ensembles as perceptual memory of time interval. Nature 456(7218):102–106.
10. Barabási AL, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512.
11. Albert R, Barabási AL (2002) Statistical mechanics of complex networks. Rev Mod Phys 74(1):47–97.
12. Bär M, Eiswirth M (1993) Turbulence due to spiral breakup in a continuous excitable medium. Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics 48(3):R1635–R1637.
13. Sherman A, Rinzel J (1992) Rhythmogenic effects of weak electrotonic coupling in neuronal models. Proc Natl Acad Sci USA 89(6):2471–2474.
14. Bi GQ, Poo M-M (1998) Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 18(24):10464–10472.
15. Bloomfield SA, Völgyi B (2009) The diverse functional roles and regulation of neuronal gap junctions in the retina. Nat Rev Neurosci 10(7):495–506.
16. Yang XD, Korn H, Faber DS (1990) Long-term potentiation of electrotonic coupling at mixed synapses. Nature 348(6301):542–545.
17. Buzsáki G, Geisler C, Henze DA, Wang X-J (2004) Interneuron Diversity series: Circuit complexity and axon wiring economy of cortical interneurons. Trends Neurosci 27(4):186–193.
18. Bonifazi P, et al. (2009) GABAergic hub neurons orchestrate synchrony in developing hippocampal networks. Science 326(5958):1419–1424.
19. Eguíluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV (2005) Scale-free brain functional networks. Phys Rev Lett 94(1):018102.
20. van den Heuvel MP, Stam CJ, Boersma M, Hulshoff Pol HE (2008) Small-world and scale-free organization of voxel-based resting-state functional connectivity in the human brain. Neuroimage 43(3):528–539.
21. Brecht M, Schneider M, Sakmann B, Margrie TW (2004) Whisker movements evoked by stimulation of single pyramidal cells in rat motor cortex. Nature 427(6976):704–710.
22. Houweling AR, Brecht M (2008) Behavioural report of single neuron stimulation in somatosensory cortex. Nature 451(7174):65–68.
23. Morgan RJ, Soltesz I (2008) Nonrandom connectivity of the epileptic dentate gyrus predicts a major role for neuronal hubs in seizures. Proc Natl Acad Sci USA 105(16):6179–6184.
24. Li CY, Poo M-M, Dan Y (2009) Burst spiking of a single cortical neuron modifies global brain state. Science 324(5927):643–646.
25. Connors BW, Long MA (2004) Electrical synapses in the mammalian brain. Annu Rev Neurosci 27:393–418.
26. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274(5293):1724–1726.
27. Li H, Li Y, Lei Z, Wang K, Guo A (2013) Transformation of odor selectivity from projection neurons to single mushroom body neurons mapped with dual-color calcium imaging. Proc Natl Acad Sci USA 110(29):12084–12089.
28. Wang X-J (2010) Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev 90(3):1195–1268.
29. Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J Neurosci 18(10):3870–3896.
30. Jaeger H, et al. (2007) Special issue on echo state networks and liquid state machines. Neural Networks 20(3):287–289.
