
A Spiking Neural Network Emulating the Structure of the Oculomotor System Requires No Learning to Control a Biomimetic Robotic Head

Praveenram Balachandar and Konstantinos P. Michmizos, Member, IEEE

Abstract— Robotic vision introduces requirements for real-time processing of fast-varying, noisy information in a continuously changing environment. In a real-world environment, convenient assumptions, such as static camera systems and deep learning algorithms devouring high volumes of ideally slightly-varying data, rarely hold. Leveraging recent studies on the neural connectome associated with eye movements, we designed a neuromorphic oculomotor controller and placed it at the heart of our in-house biomimetic robotic head prototype. The controller is unique in that (1) all data are encoded and processed by a spiking neural network (SNN), and (2) by mimicking the connectivity of the associated brain areas, the SNN required no training to operate. A biologically constrained Hebbian learning rule further improved the SNN's performance in tracking a moving target. Here, we report the tracking performance of the robotic head and show that the robotic eye kinematics are similar to those reported in human eye studies. This work contributes to our ongoing effort to develop energy-efficient neuromorphic SNNs and harness their emerging intelligence to control biomimetic robots with versatility and robustness.

I. INTRODUCTION

Covering all ranges of robotics, from structure [1], [2] and mechanics [3], [4] to perception [5], [6], actuation [7], [8] and autonomy [9], [10], biomimetic robots imitate design [2] and movement [8] principles found in nature to perform desired tasks in unstructured environments [6]. An orthogonal direction towards biomimesis is to imitate the most advanced biological controller: the brain. This particular type of brain-mimesis often entails the use of neural networks [11], of varying degrees of complexity [12], [13] and biological plausibility [14], [15], [16], that promise advances to Robotics [17] and insights to Brain Science [18], [19].

With robots being arguably the sweet spot for neuromorphic artificial intelligence, the emergence of neuromorphic chips [20], [21] has spurred interest in a bottom-up rethinking of biomimetic controllers in the form of spiking neural networks (SNN) that seamlessly integrate into non-Von Neumann architectures. We and others have recently proposed brain-inspired SNNs that embed into such chips and solve robotic problems with unparalleled energy-efficiency [22] while promising a robust yet versatile alternative to the brittle inference-based machine learning solutions [23]. The main criticism of neuromorphic computing is that, in the absence of a strong learning algorithm, the current state of

PB and KM are with the Computational Brain Lab, Department of Computer Science, Rutgers University, New Jersey, USA. [email protected]

the art does not share the same scaling abilities with the mainstream deep learning methods.

Alongside efforts to implement backpropagation algorithms in SNNs [24], [25], most neuromorphic algorithms are indeed simple enough to be trained via variations of STDP, a Hebbian-type local learning rule [26], [27]. An alternative direction, which we have started to explore, entails the SNNs being dictated by the underlying neural connectome associated with the targeted function [22], [28]. In this paper, we extend this direction by presenting a neuro-mimetic SNN that draws from the connectome of the human oculomotor system and enables our in-house robotic head to track a laser target. We demonstrate how the robotic prototype achieved real-time tracking of the visual target by coordinating saccadic and pursuit eye movements with neck movements. Given that the structure of any network, be it biological or artificial, serves its function, the SNN's behavior emerged out of this bottom-up neural-level representation, without any training.

II. METHODS

A. The Robotic Head Prototype

The biomimetic robotic head prototype comprised two cameras (eyes) and a neck (Fig. 1a). Each camera was mounted on a pan-tilt mechanism, which allowed horizontal and vertical movement of the eyes. The eyes were fixed to a base plate mounted on top of another pan-tilt system, to replicate neck movements. Overall, the robot had 6 degrees of freedom (DOF). The three pan-tilt systems were actuated by Dynamixel AX-12A digital servos driven by an Arduino-based Arbotix-M controller. The oculomotor controller relayed servo position deltas to the Arbotix-M robocontroller through a USB serial interface; the deltas were computed based on the location of the laser target in a foveated field of view. The range-of-motion (ROM) for the servos controlling the eyes was restricted to 100 (70) degrees horizontally (vertically) from the center, to keep the eye kinematics within the biologically plausible range. Similar restrictions for biologically plausible ROM were imposed on the neck movement.
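As an illustration of this control path, the sketch below relays a clamped position delta over a USB serial link. The actual implementation is the C++ code referenced in [40]; the port name, baud rate, and the simple "id:delta" ASCII wire format used here are hypothetical stand-ins, not the Arbotix-M protocol.

```python
import serial  # pyserial

# Hypothetical port and wire format; the real Arbotix-M sketch in [40]
# defines its own serial protocol.
PORT, BAUD = "/dev/ttyUSB0", 115200

def send_servo_delta(link, servo_id, delta):
    """Relay one servo position delta to the robocontroller over USB
    serial, clamped to the AX-12A's valid position span."""
    delta = max(-1023, min(1023, int(delta)))
    link.write(f"{servo_id}:{delta}\n".encode("ascii"))

with serial.Serial(PORT, BAUD, timeout=0.1) as link:
    send_servo_delta(link, servo_id=1, delta=+15)  # e.g., pan the left eye
```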

The target was projected on a wall 55 cm away from the eyes, using a laser diode mounted on a separate Arduino-controlled pan-tilt system. To estimate the tracking accuracy of the proposed controller, the movement profiles of the laser and the eyes were calibrated with respect to a fixed reference position, using linear regression analysis libraries to determine a higher-order polynomial relation between the target angle and the servo position. The laser and servo positions were recorded and mapped to the eye and neck kinematics, as well as to the target position.
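To make the calibration step concrete, here is a minimal sketch of the polynomial fit described above, written in Python for readability (the original implementation is in C++ [40]); the sample points and the polynomial order are illustrative assumptions, not the values used on the robot.

```python
import numpy as np

# Hypothetical calibration samples: commanded servo positions (ticks)
# and the measured target angles (degrees) they correspond to.
servo_ticks = np.array([312, 400, 512, 624, 712])
target_angle = np.array([-20.1, -11.3, 0.2, 11.0, 19.8])

# Fit a higher-order polynomial mapping servo position -> target angle,
# as described above (order 3 is an assumed choice).
coeffs = np.polyfit(servo_ticks, target_angle, deg=3)
servo_to_angle = np.poly1d(coeffs)

# Invert the mapping numerically for control: angle -> nearest servo tick.
ticks = np.arange(0, 1024)
def angle_to_servo(angle_deg):
    return int(ticks[np.argmin(np.abs(servo_to_angle(ticks) - angle_deg))])

print(servo_to_angle(512), angle_to_servo(0.0))
```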

arXiv:2002.07534v1 [cs.RO] 18 Feb 2020
Fig. 1: (a) The robotic head prototype fixating on a moving laser target projected on a wall. A pan-tilt system controlled the position of the laser (not shown). (b) The receptive field (RF) of the superior colliculus (SC) neurons (artificial retina), shown as squares in the input image. An SC neuron was activated when the input (laser dot) was within its receptive field. (c) The variation of the SC neurons' weights with respect to their retinotopic pixel position. The same weight distribution applied to both horizontal and vertical distances. (d) The SNN connectivity for moving the eyes and the neck in the horizontal direction. The shown connectivity represents innervation of the eyes' muscles for moving the eyes to the left and the neck to the right. The opposite movements were controlled by a sub-network of connectivity symmetric to the one shown here (omitted for illustrative purposes). (e) The SNN connectivity for moving the eyes and the neck in the vertical direction. The input SC neurons correspond to the neurons in the top half above the fovea and the bottom half below the fovea for both eyes. The inputs from the two eyes are used together to generate coordinated movement of both eyes upward or downward.

B. Spiking Neural Network: The Oculomotor Controller

Mimicking biology, we embedded a foveated structure into the controller's eyes (Fig. 1b). The goal of the SNN was to keep the moving laser target within the "fovea" of each eye through coordinated movements of both eyes and the neck. It did so by emulating the distinct neurons and neural areas involved in pursuit and saccade eye movements, as well as in neck movement, as described below. In that sense, the SNN's target tracking behavior emerged from the structure of the network and, thereby, required no learning to operate.

The design of the SNN structure was based mostly on the connectome of the saccade system in the brain. Saccade control structures are grouped into two regions for vertical and horizontal movement control, namely the horizontal gaze center (PPRF), in the pons, and the vertical gaze center (riMLF), in the midbrain [29], [30], [31]. Taking inspiration from these structures, we designed the oculomotor SNN with two separate sub-networks, one serving as the controller for horizontal eye and neck movements (PPRF) and another as the controller for the vertical movements (riMLF) [32] (Fig. 1d, e). The structure of the SNN for vertical control was simpler than the horizontal one, under the assumption that both eyes moved together and by the same amount.

Each camera frame was mapped to an input neuronal layer, so that a neuron fired when the laser activated any of the pixels within that neuron's receptive field. Such neurons have been found in the superior colliculus (SC) [33]. The SC innervated the gaze centers, which in turn innervated the extra-ocular muscles to shift the gaze towards the target. When the target was significantly away from the fovea, the controller also engaged the neck through the SC and the vestibular system, following experimental findings [34].

The SC neurons' firing rate encoded the amplitude of saccadic eye movements. Specifically, stimulating neurons at the periphery resulted in larger saccades compared to stimulating neurons with receptive fields close to the fovea. In accordance with experimental findings [35], the weights of the synaptic connections between the SC neurons and the brainstem regions (Fig. 1c) were modeled as an increasing function of the distance (number of pixels) from the fovea (center of the frame).
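The following sketch illustrates this retinotopic encoding under stated assumptions: a hypothetical grid of rectangular receptive fields tiling the frame, and a linear ramp for the eccentricity-dependent SC weights (the paper specifies only that the weight increases with pixel distance from the fovea, Fig. 1c).

```python
import numpy as np

FRAME_W, FRAME_H = 720, 480  # assumed frame size; fovea at the center
GRID = 16                    # assumed number of RF tiles per dimension

def sc_activation(laser_xy):
    """Return the index of the SC neuron whose receptive field
    contains the laser dot (one neuron per rectangular tile)."""
    x, y = laser_xy
    col = min(int(x / (FRAME_W / GRID)), GRID - 1)
    row = min(int(y / (FRAME_H / GRID)), GRID - 1)
    return row * GRID + col

def sc_weight(neuron_idx, w_min=0.2, w_max=1.0):
    """SC-to-brainstem synaptic weight as an increasing function of the
    RF center's pixel distance from the fovea (frame center)."""
    row, col = divmod(neuron_idx, GRID)
    cx = (col + 0.5) * FRAME_W / GRID - FRAME_W / 2
    cy = (row + 0.5) * FRAME_H / GRID - FRAME_H / 2
    d = np.hypot(cx, cy)
    d_max = np.hypot(FRAME_W / 2, FRAME_H / 2)
    return w_min + (w_max - w_min) * d / d_max  # linear ramp is an assumption

idx = sc_activation((650, 100))  # a peripheral target gets a larger weight
print(idx, sc_weight(idx))
```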

For the horizontal saccade control, the SNN included several bursting neurons, namely long-lead bursting neurons (LLBNs), excitatory bursting neurons (EBNs), and inhibitory bursting neurons (IBNs).


Upon activation of the SC neurons, LLBNs and EBNs were triggered, initiating the oculomotor response to the target. Another class of neurons, omnipause neurons (OPNs), controlled fixation, maintaining the gaze on a target by inhibiting the activity of EBNs and IBNs. The combined effect of excitation from the SC neurons and the selective inhibition from the OPNs and IBNs led to the movement of both eyes in the same direction towards the target. Conjugate eye movement is facilitated in the brain through the abducens nuclei and interneurons that trigger the response of the contralateral oculomotor nuclei [29], [30]. The latter neurons were modeled in the SNN as motor neurons (MNs).
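A minimal sketch of this burst/omnipause interplay with leaky integrate-and-fire (LIF) units is given below; the neuron model, time constants, and weights are illustrative assumptions (they are not specified in this section), and only a single SC-to-EBN pathway with OPN inhibition is simulated.

```python
DT, T_SIM = 0.001, 0.2        # 1 ms steps, 200 ms of simulated time
TAU, V_TH, V_RESET = 0.02, 1.0, 0.0

def lif_step(v, i_in):
    """One Euler step of a leaky integrate-and-fire neuron; returns the
    new membrane potential and whether the neuron spiked."""
    v = v + DT / TAU * (-v + i_in)
    spiked = v >= V_TH
    return (V_RESET if spiked else v), spiked

# Assumed weights: the SC excites the EBN; the OPN tonically inhibits it
# and is itself suppressed while the SC is active (fixation released).
W_SC_EBN, W_OPN_EBN, OPN_DRIVE = 2.5, -3.0, 1.5

v_ebn = v_opn = 0.0
ebn_spikes = 0
for step in range(int(T_SIM / DT)):
    sc_rate = 1.0 if step < 100 else 0.0      # SC active for the first 100 ms
    v_opn, opn_spk = lif_step(v_opn, OPN_DRIVE - 2.0 * sc_rate)
    i_ebn = W_SC_EBN * sc_rate + (W_OPN_EBN if opn_spk else 0.0)
    v_ebn, ebn_spk = lif_step(v_ebn, i_ebn)
    ebn_spikes += ebn_spk

# While the SC fires, the OPN is silenced and the EBN bursts; once the SC
# goes quiet, OPN spiking resumes and clamps the EBN (fixation restored).
print("EBN spikes during the burst window:", ebn_spikes)
```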

Independent movement of the eyes, or dis-conjugate (vergence) movement, was also possible through the same structure. Several bursting neurons have been found to be direction-selective [36], contributing to the oculomotor response only in a specific direction. We modeled these neurons, denoted by S in Fig. 1, which received inputs from both the ipsilateral and contralateral sides and inhibited the activity that promoted conjugate eye movements. This mechanism for the independent control of the robotic eyes is also in alignment with experimental findings [30].

The effective output of the sub-networks for horizontal and vertical movement was translated into equivalent servo position deltas using a firing-rate encoding scheme. The firing rate was computed over windows of 20 ms and then scaled to a servo position value between 0 and 1023, which was the range of valid positions for the AX-12A servos.
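A sketch of this decoding stage follows, assuming a hypothetical spike-time list for one motor neuron and assumed maximum-rate and delta scalings (the text specifies only the 20 ms window and the 0-1023 servo range); direction is resolved by which sub-network the decoded neuron belongs to.

```python
import numpy as np

WINDOW = 0.020                  # 20 ms decoding window (from the text)
SERVO_MIN, SERVO_MAX = 0, 1023  # valid AX-12A position range

def decode_delta(spike_times, t_now, rate_max=200.0, delta_max=40):
    """Map a motor neuron's firing rate over the last 20 ms window to a
    servo position delta; rate_max and delta_max are assumed scalings."""
    spikes = [t for t in spike_times if t_now - WINDOW <= t <= t_now]
    rate = len(spikes) / WINDOW                 # spikes/s over the window
    return int(min(rate / rate_max, 1.0) * delta_max)

def apply_delta(position, delta):
    """Clamp the commanded position to the servo's valid range."""
    return int(np.clip(position + delta, SERVO_MIN, SERVO_MAX))

pos = 512                                       # centered servo
pos = apply_delta(pos, decode_delta([0.101, 0.105, 0.112, 0.118], 0.120))
print(pos)
```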

C. Reward-based Hebbian Learning

Although the SNN can control the robotic head with no training, we also included a biologically plausible learning mechanism, to see whether this would further refine the robot's performance. Interestingly, saccadic amplitude in primates is known to adapt to both a bottom-up visual error signal [37] and a top-down behavioral (goal) signal [38]. Here, we introduced a bottom-up reward-based learning mechanism, based on Sejnowski's Hebbian learning rule [39].

Reward-based learning relies on maximizing the reward signal associated with the network performing as expected, here when the target was on the fovea. A global reward signal promoted the SNN behavior that brought the target onto the fovea. The synaptic adaptation was semi-local, i.e., the weight change depended on the global reward signal and on the pre-synaptic and post-synaptic neuron activities at the synapse. The reward value as a function of the position of the target on the frame is shown in Fig. 2. For every pair of pre-synaptic neuron j and post-synaptic neuron i, reward-based Hebbian learning was defined as:

$\tau_e \, \dfrac{de_{ij}}{dt} = -e_{ij} + H(\mathrm{pre}_j, \mathrm{post}_i)$   (1)

$\dfrac{dw_{ij}}{dt} = M \, H(\mathrm{pre}_j, \mathrm{post}_i) \, e_{ij}$   (2)

$M(t) = R(t) - \langle R \rangle$   (3)

Fig. 2: Reward values as a function of the position of the target on the frame. The fovea center is at position = 360 pixels.

where $e_{ij}$ is the synaptic eligibility trace for the pair of neurons, $w_{ij}$ is the weight of the synapse between those neurons, $H$ is the Hebbian learning term, and $M(t)$ is the neuromodulatory signal at time $t$, denoting the difference between the given and expected rewards. Here, we empirically estimated the expected reward $\langle R \rangle$ as a running average, and the time constant $\tau_e$ was chosen on the order of 1 s, to bridge the delay between the action choice and the final reward signal. The Hebbian term $H(\mathrm{pre}_j, \mathrm{post}_i)$ was modeled on the Sejnowski learning rule, which relies on the activities of the pre- and post-synaptic neurons, computed as spike rates over a window. It is based on the idea that the firing rates of the neurons vary around their mean values $\langle v_i \rangle$ and $\langle v_j \rangle$, and it is defined by:

$\dfrac{dw_{ij}}{dt} = \gamma \, (v_i - \langle v_i \rangle)(v_j - \langle v_j \rangle)$   (4)

where $\gamma$ is the learning rate and $v_j$ and $v_i$ are the firing rates of the pre- and post-synaptic neurons, respectively.
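Putting Eqs. (1)-(4) together, a minimal sketch of one synapse's update is shown below; the integration step, the decay factor for the running averages, and the learning rate are illustrative assumptions, not the paper's tuned values.

```python
class RewardHebbSynapse:
    """Minimal sketch of the reward-modulated Hebbian rule of Eqs. (1)-(4);
    time constants and the learning rate are illustrative choices."""

    def __init__(self, w=0.5, tau_e=1.0, gamma=1e-3, dt=0.02):
        self.w, self.tau_e, self.gamma, self.dt = w, tau_e, gamma, dt
        self.e = 0.0                   # eligibility trace e_ij
        self.r_bar = 0.0               # running average of the reward <R>
        self.v_pre_bar = self.v_post_bar = 0.0

    def update(self, v_pre, v_post, reward, avg_decay=0.99):
        # Sejnowski covariance term, Eq. (4): deviations from mean rates
        self.v_pre_bar = avg_decay * self.v_pre_bar + (1 - avg_decay) * v_pre
        self.v_post_bar = avg_decay * self.v_post_bar + (1 - avg_decay) * v_post
        H = self.gamma * (v_post - self.v_post_bar) * (v_pre - self.v_pre_bar)
        # Eligibility trace dynamics, Eq. (1): tau_e de/dt = -e + H
        self.e += self.dt / self.tau_e * (-self.e + H)
        # Neuromodulator, Eq. (3): M = R - <R>, with <R> a running average
        self.r_bar = avg_decay * self.r_bar + (1 - avg_decay) * reward
        M = reward - self.r_bar
        # Weight update, Eq. (2): dw/dt = M * H * e
        self.w += self.dt * M * H * self.e
        return self.w

syn = RewardHebbSynapse()
for _ in range(5):                     # a few rewarded coincident-rate steps
    w = syn.update(v_pre=40.0, v_post=25.0, reward=1.0)
print(w)
```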

The laser and the servo positions for each DOF of the robotic head were recorded at 45 Hz, and the kinematics of the eyes with respect to those of the laser were studied with and without learning. Both repetitive and random patterns of target positions were used to train the controller, and the mean accuracy was used as a measure of behavioral performance.

III. RESULTS

We validated the proposed SNN for its ability to accurately track a moving target, both without and with Hebbian learning. When we did not incorporate learning into the SNN, the vast majority of its weights were set according to the relative strengths of the connections between neural areas reported in experimental studies, and the remaining weights were found by trial and error. The C++ code is available in Appendix B of [40]. For the kinematics, the sign of the angles represented the position of the target with respect to the origin, defined as the center of the frame. For horizontal movement, negative (positive) angles represented positions on the left (right) side of the origin. Similarly, for vertical movement, negative (positive) values represented positions below (above) the origin.


Fig. 3: (a) Kinematics of the eye with respect to the kinematics of the target. The target, shown in blue, is moved through a sequence of positions on the wall, demonstrating sudden changes in both horizontal and vertical directions. The eyes, shown in red and black, follow the target by making multiple saccade-like movements. The top panel shows the horizontal kinematics of the eyes with respect to the target and the bottom panel shows the vertical component of the movement. (b) Kinematics of the eyes for a sequence of target positions without learning. (c) Kinematics of the eyes for a sequence of target positions with reward-based Hebbian learning.

For calculating the discrepancy between the fovea and target positions, the neck position was added to the horizontal and vertical components of each eye.

For the no-learning SNN, the mean relative error of the eye position with respect to the target, averaged over both the horizontal and vertical directions of both eyes and the neck, was −0.685°, averaged over ten 2-minute tracking experiments with randomly moving targets. An example of the kinematics of the prototype with respect to the kinematics of the laser target on the wall is shown in Fig. 3a.

TABLE I: Median relative error (RE) and root mean square error (RMSE) for eye position with respect to the target, without and with Hebbian learning (-HL), for ten 2-min experiments where the robot tracked randomly moving targets.

Eye Kinematics         | RE      | RE-HL   | RMSE  | RMSE-HL
Left Eye - Horizontal  | −1.87°  | −1.487° | 3.55° | 2.87°
Right Eye - Horizontal | 2.115°  | 2.049°  | 3.93° | 3.44°
Left Eye - Vertical    | −1.219° | −0.697° | 3.14° | 3.02°
Right Eye - Vertical   | −0.823° | −0.421° | 3.03° | 2.95°

We observed small, jerk-like eye movements, similar to the experimentally reported miniature versions of voluntary saccades, named microsaccades. We attribute this behavior to the property of the SNN controller of trying to fixate on the target within the bounds of the fovea: when the dimensions of the fovea were smaller than the target size, the controller constantly readjusted itself, attempting to fit the target within the fovea. This oscillatory activity was particularly evident in the horizontal position, partly because of the larger complexity of the underlying network, compared to the one controlling the vertical movements. Interestingly, the vertical movement exhibited a delayed response (lower bandwidth), leading to lower accuracy for vertical tracking compared to horizontal tracking. This was further corrected by allowing reinforcement learning.

The introduction of reward-based learning into the SNN resulted in a better tracking ability (Fig. 3c). Learning allowed for a noticeable improvement in tracking targets with fast vertical components. To quantify the tracking abilities in both cases, we presented a random sequence of target positions to both controllers (Table I).

IV. DISCUSSION

Here, we introduced an SNN oculomotor controller and its integration into our in-house robotic head prototype. The kinematics of the robotic eyes were remarkably similar to those of the human gaze in tracking a moving target [38]. In that sense, the goal of this work was to demonstrate that "machine behavior" in general, and robotic function in particular, can emerge naturally from a priori knowledge that dictates the controller's structure. By drawing from the connectome of the brain areas associated with the targeted behavior, this and other efforts to develop biologically realistic SNNs can help advance Brain Science and Robotics, two fast-growing fields that are also converging via biomimicry and neuromorphic computing [41], [42], [22], [28].

In the technical domain, adding biological constraints to an SNN structure removes the need for assuming all-to-all initial connectivity for the trainable network. This may translate to further improvements in training efficiency, as it limits learning to a small number of synaptic connections. In addition, contrary to the typical neuron models in deep networks, which can be optimized to perform complex computational tasks [43], spiking neuron models have a non-differentiable output (their all-or-none firing) and are therefore incompatible with standard gradient-descent supervised learning methods [44].


In the absence of a strong learning algorithm, the main criticism of neuromorphic solutions is that promising preliminary results [22] cannot share the same scaling abilities with the mainstream deep learning approaches. Here we show the first fruits of our efforts towards scalable SNNs that, by being able to host biological principles of computation known to be critical for intelligence, can give end-to-end neuromorphic solutions towards fully autonomous systems.

Effective as they may have become, robots still cannot duplicate a range of human behaviors, such as dynamically responding to changing environments using error-prone sensors. To operate in a real-world environment, an autonomous robot should 1) be robust to a noisy neural representation, 2) adapt to a fast-changing environment, and 3) learn with no or limited supervision or reinforcement. The embodiment of SNNs into robots has been rather sparse, and the current approaches aim to give a proof of concept [12], [45], [46], [16], rather than a whole-behaving robot. While there is definitely value in studying simplified tasks and basic sensory representations [47], there is an ongoing need for new controllers capable of naturally handling richer, noisier and more complex scenarios [48]. This work suggests that a promising path towards duplicating a human-like behavior is to draw knowledge from how synergy is achieved in neurons across the implicated brain areas. In addition, contrary to the constantly online processing taking place in the oculomotor system, traditional learning algorithms rely on separate training and inference phases. That is why we employed unsupervised learning, which is a better fit for the lifelong learning of a continuously evolving network that can adapt to new targets and movement patterns.

The re-emergence of neuromorphic computing calls for a bottom-up rethinking of computational algorithms that can seamlessly integrate into non-Von Neumann hardware [22], promising unparalleled energy-efficiency and a robust yet versatile alternative to the brittle inference-based AI solutions [49]. The proposed work brings us closer to realizing this promise by tackling a robotic task where energy-efficiency may become crucial and controllers can scale, in order to exploit spatiotemporal context and commonsense understanding.

In the scientific domain, this work paves the way for a new direction, that of drawing biological knowledge from neuroscience and translating it into computational primitives applicable to SNNs. Despite being equipped with biologically realistic models of neurons, current SNNs can only offer weak suggestions on the underlying neural mechanisms that give rise to the targeted behavior. Recent applications of gradient-descent alternatives to SNNs [24], [25] promise to introduce SNNs to scalable problems, but they inherit the main limitations that deep networks have. For example, learning through backpropagation will basically match the network's input to its output, largely disregarding its internal structure and, thereby, treating the network as a black box. Our bottom-up approach can spur the development of neural-controlled robots as test-beds that may be used to inform brain scientists on how the neural system could be structured to function properly. This paper

opens up a fascinating possibility for artificial networks to be constrained by the structure of their biological counterparts. Alongside research on the computational mechanisms of biological learning [50], such efforts can introduce new push-pull dynamics between robotics and neuroscience by exploring and exploiting interpretable connections between neurophysiology and behavior.

Finally, this work not only aims to catalyze efforts towards replicating, if not augmenting, human cognitive abilities and engrafting them into intelligent robotic assistants. It also points to an educational direction, as enabling students of all ages to interact physically with a robot ignites their interest in, and understanding of, its underlying working mechanisms.

V. CONCLUSION

Overall, the paper introduces an alternative approach towards designing robot controllers, that of developing SNNs inspired by the structure of the brain areas associated with the targeted behavior. Here, target tracking was achieved by emulating, at a reasonable scale, the connectome and the underlying types of the associated neurons, not through learning. This result suggests that building neuro-inspired controllers for this and other types of autonomous robotic behavior is a direction worth pursuing.

REFERENCES

[1] R. S. Fearing, K. H. Chiang, M. H. Dickinson, D. Pick, M. Sitti, and J. Yan, "Wing transmission for a micromechanical flying insect," in IEEE International Conference on Robotics and Automation (ICRA), vol. 2. IEEE, 2000, pp. 1509–1516.

[2] Y. Sato, Y. Hiratsuka, I. Kawamata, S. Murata, and S.-i. M. Nomura, "Micrometer-sized molecular robot changes its shape in response to signal molecules," Science Robotics, vol. 2, no. 4, 2017.

[3] P. Birkmeyer, K. Peterson, and R. S. Fearing, "DASH: A dynamic 16g hexapedal robot," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2009, pp. 2683–2689.

[4] J. Zhao, J. Xu, B. Gao, N. Xi, F. J. Cintrón, M. W. Mutka, and L. Xiao, "MSU Jumper: A single-motor-actuated miniature steerable jumping robot," IEEE Transactions on Robotics, vol. 29, no. 3, pp. 602–614, 2013.

[5] X. Deng, L. Schenato, W. C. Wu, and S. S. Sastry, "Flapping flight for biomimetic robotic insects: Part I-system modeling," IEEE Transactions on Robotics, vol. 22, no. 4, pp. 776–788, 2006.

[6] A. Ramezani, S.-J. Chung, and S. Hutchinson, "A biomimetic robotic platform to study flight specializations of bats," Science Robotics, vol. 2, no. 3, 2017.

[7] D. W. Haldane, M. M. Plecnik, J. K. Yim, and R. S. Fearing, "Robotic vertical jumping agility via series-elastic power modulation," Science Robotics, vol. 1, no. 1, 2016.

[8] A. J. Ijspeert, "Biorobotics: Using robots to emulate and investigate agile locomotion," Science, vol. 346, no. 6206, pp. 196–203, 2014.

[9] W.-S. Chu, K.-T. Lee, S.-H. Song, M.-W. Han, J.-Y. Lee, H.-S. Kim, M.-S. Kim, Y.-J. Park, K.-J. Cho, and S.-H. Ahn, "Review of biomimetic underwater robots using smart actuators," International Journal of Precision Engineering and Manufacturing, vol. 13, no. 7, pp. 1281–1292, 2012.

[10] V. Kopman and M. Porfiri, "Design, modeling, and characterization of a miniature robotic fish for research and education in biomimetics and bioinspiration," IEEE/ASME Transactions on Mechatronics, vol. 18, no. 2, pp. 471–483, 2013.

[11] G. Bekey and K. Y. Goldberg, Neural Networks in Robotics. Springer Science and Business Media, 2012, vol. 202.

[12] E. A. Antonelo, B. Schrauwen, X. Dutoit, D. Stroobandt, and M. Nuttin, "Event detection and localization in mobile robot navigation using reservoir computing," in International Conference on Artificial Neural Networks. Springer, pp. 660–669.


[13] X. Wang, Z.-G. Hou, F. Lv, M. Tan, and Y. Wang, "Mobile robots' modular navigation controller using spiking neural networks," Neurocomputing, vol. 134, pp. 230–238, 2014. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0925231214000976

[14] R. Batllori, C. B. Laramee, W. Land, and J. D. Schaffer, "Evolving spiking neural networks for robot control," Procedia Computer Science, vol. 6, pp. 329–334, 2011.

[15] C. Casellato, A. Antonietti, J. A. Garrido, R. R. Carrillo, N. R. Luque, E. Ros, A. Pedrocchi, and E. D'Angelo, "Adaptive robotic control driven by a versatile spiking cerebellar network," PLoS One, vol. 9, no. 11, p. e112265, 2014.

[16] S. Soltic and N. Kasabov, A Biologically Inspired Evolving Spiking Neural Model with Rank-Order Population Coding and a Taste Recognition System Case Study. IGI Global, 2011, pp. 136–155.

[17] J. L. Krichmar, D. A. Nitz, J. A. Gally, and G. M. Edelman, "Characterizing functional hippocampal pathways in a brain-based device as it solves a spatial memory task," Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 6, pp. 2111–2116, 2005.

[18] E. M. Izhikevich and G. M. Edelman, "Large-scale model of mammalian thalamocortical systems," Proceedings of the National Academy of Sciences, vol. 105, no. 9, pp. 3593–3598, 2008.

[19] S. Song, K. D. Miller, and L. F. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.

[20] M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.

[21] S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana, "The SpiNNaker project," Proceedings of the IEEE, vol. 102, no. 5, pp. 652–665, 2014.

[22] G. Tang, A. Shah, and K. P. Michmizos, "Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.

[23] G. Tang, I. E. Polykretis, V. A. Ivanov, A. Shah, and K. P. Michmizos, "Introducing astrocytes on a neuromorphic processor: Synchronization, local plasticity and edge of chaos," in Proceedings of the 7th Annual Neuro-Inspired Computational Elements Workshop, ser. NICE '19. New York, NY, USA: Association for Computing Machinery, 2019. [Online]. Available: https://doi.org/10.1145/3320288.3320302

[24] G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, "Long short-term memory and learning-to-learn in networks of spiking neurons," in Advances in Neural Information Processing Systems, pp. 787–797.

[25] S. B. Shrestha and G. Orchard, "SLAYER: Spike layer error reassignment in time," in Advances in Neural Information Processing Systems, pp. 1412–1421.

[26] Z. Bing, C. Meschede, K. Huang, G. Chen, F. Rohrbein, M. Akl, and A. Knoll, "End to end learning of spiking neural network based on R-STDP for a lane keeping vehicle," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1–8.

[27] P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer, "Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing," in 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015, pp. 1–8.

[28] G. Tang and K. P. Michmizos, "Gridbot: An autonomous robot controlled by a spiking neural network mimicking the brain's navigational system," in Proceedings of the International Conference on Neuromorphic Systems, 2018, pp. 1–8.

[29] D. L. Sparks, "The brainstem control of saccadic eye movements," Nature Reviews Neuroscience, vol. 3, no. 12, pp. 952–964, 2002.

[30] C. A. Scudder, C. R. Kaneko, and A. F. Fuchs, "The brainstem burst generator for saccadic eye movements," Experimental Brain Research, vol. 142, no. 4, pp. 439–462, 2002.

[31] C. A. Scudder, "A new local feedback model of the saccadic burst generator," Journal of Neurophysiology, vol. 59, no. 5, pp. 1455–1475, 1988.

[32] W. King, A. Fuchs, and M. Magnin, "Vertical eye movement-related responses of neurons in midbrain near interstitial nucleus of Cajal," Journal of Neurophysiology, vol. 46, no. 3, pp. 549–562, 1981.

[33] C. Prablanc, D. Pelisson, and Y. Rossetti, "Single cell signals: an oculomotor perspective," Neural Control of Space Coding and Action Production, vol. 142, p. 35, 2003.

[34] B. D. Corneil, E. Olivier, and D. P. Munoz, "Neck muscle responses to stimulation of monkey superior colliculus. I. Topography and manipulation of stimulation parameters," Journal of Neurophysiology, vol. 88, no. 4, pp. 1980–1999, 2002.

[35] N. J. Gandhi and H. A. Katnani, "Motor functions of the superior colliculus," Annual Review of Neuroscience, vol. 34, pp. 205–231, 2011.

[36] K. Ohtsuka and H. Noda, "Direction-selective saccadic-burst neurons in the fastigial oculomotor region of the macaque," Experimental Brain Research, vol. 81, no. 3, pp. 659–662, 1990.

[37] J. Wallman and A. F. Fuchs, "Saccadic gain modification: visual error drives motor adaptation," Journal of Neurophysiology, vol. 80, no. 5, pp. 2405–2416, 1998.

[38] A. C. Schütz, D. Kerzel, and D. Souto, "Saccadic adaptation induced by a perceptual task," Journal of Vision, vol. 14, no. 5, pp. 4–4, 2014.

[39] T. J. Sejnowski, "Storing covariance with nonlinearly interacting neurons," Journal of Mathematical Biology, vol. 4, no. 4, pp. 303–321, 1977.

[40] P. Balachandar, "A neuro-inspired oculomotor controller for a robotic head prototype," MSc Thesis, Department of Computer Science, Rutgers University, 2017.

[41] E. Snell-Rood, "Interdisciplinarity: Bring biologists into biomimetics," Nature, vol. 529, no. 7586, p. 277, 2016.

[42] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, "Robots that can adapt like animals," Nature, vol. 521, no. 7553, pp. 503–507, 2015.

[43] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, p. 436, 2015.

[44] L. Abbott, B. DePasquale, and R.-M. Memmesheimer, "Building functional networks of spiking model neurons," Nature Neuroscience, vol. 19, no. 3, pp. 350–355, 2016.

[45] B. Mitchinson, M. Pearson, A. G. Pipe, and T. J. Prescott, "Biomimetic robots as scientific models: a view from the whisker tip," Neuromorphic and Brain-Based Robots, pp. 23–57, 2011.

[46] Y. Quiñonez, M. Ramirez, C. Lizarraga, I. Tostado, and J. Bekios, "Autonomous robot navigation based on pattern recognition techniques and artificial neural networks," in International Work-Conference on the Interplay Between Natural and Artificial Computation. Springer, pp. 320–329.

[47] D. Gamez, A. K. Fidjeland, and E. Lazdins, "iSpike: a spiking neural interface for the iCub robot," Bioinspiration and Biomimetics, vol. 7, no. 2, p. 025008, 2012.

[48] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, "Robots that can adapt like animals," Nature, vol. 521, no. 7553, pp. 503–507, 2015.

[49] M. Davies, "Benchmarks for progress in neuromorphic computing," Nature Machine Intelligence, vol. 1, no. 9, pp. 386–388, 2019. [Online]. Available: https://doi.org/10.1038/s42256-019-0097-1

[50] B. J. Lansdell and K. P. Kording, "Spiking allows neurons to estimate their causal effect," bioRxiv, 2019. [Online]. Available: https://www.biorxiv.org/content/early/2019/02/06/253351