
Robotics and Autonomous Systems 51 (2005) 41–54

New technologies for testing a model of cricket phonotaxis on an outdoor robot

Richard Reeve a,∗, Barbara Webb a, Andrew Horchler b, Giacomo Indiveri c, Roger Quinn b

a School of Informatics, University of Edinburgh, JCMB, King's Buildings, Mayfield Road, Edinburgh, EH9 3JZ, UK
b Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Room 418, Cleveland, OH 44106-7222, USA
c Institute of Neuroinformatics, University/ETH Zurich, Winterthurerstr. 190, CH-8057 Zürich, Switzerland

∗ Corresponding author. Tel.: +44 131 651 3446; fax: +44 131 651 3435. E-mail address: [email protected] (R. Reeve).

Received 27 July 2004; accepted 12 August 2004
Available online 6 January 2005
doi:10.1016/j.robot.2004.08.010

Abstract

If biological inspiration can be used to build robots that deal robustly with complex environments, it should be possible to demonstrate that 'biorobots' can function in natural environments. We report on initial outdoor experiments with a robot designed to emulate cricket behaviour. The work integrates a detailed neural model of auditory localisation in the cricket with a robot morphology that incorporates principles of six-legged locomotion. We demonstrate that it can successfully track a cricket calling song over natural terrain. Limitations in its capability are evaluated, and a number of biologically based improvements are suggested for future work.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Outdoor robotics; Cricket phonotaxis; Neural modelling; Neuromorphic systems

1. Introduction

The ability of animals to deal flexibly with complex environments is often advanced as a reason to adopt a biology-based approach to robotics. This suggests that robots designed to emulate biological systems should be tested in natural conditions. But to date there are only a few examples of such evaluations, e.g. the Sahabot using polarised light navigation in the Tunisian desert [1]; or recent testing of Robolobster in the Red Sea (Frank Grasso, personal communication).

We have built a series of robots based on cricket phonotaxis, that is, the ability of female crickets to locate a mate by moving towards male calling songs (both simulated and real). These robots have been shown to reproduce many aspects of the insect's behaviour, including sound localisation in noisy conditions, preference for conspecific pattern in the calling song, distinguishing between competing sound sources, and using optomotor correction to do phonotaxis with motor output biased or randomly disturbed [2–5]. The most recent robot uses an auditory processing circuit closely based on cricket ears and is controlled by a realistic neural network that replicates known neural connectivity in the cricket [6]. In this paper we aim to evaluate the performance of this system when implemented and tested on an outdoor robot, to explore the issues raised by the natural habitat for this behaviour.

These issues fall into three broad areas. The first concerns the nature of the stimulus: how the sound is propagated and what kinds of interference and distortion occur. To what extent does the auditory localisation system we have implemented on the robot to date need to be altered to deal with this, since we know that sound propagation on grass outdoors is very different from that in a lab environment? The second area is motor capability: the cricket can traverse rough terrain, while our algorithms have been developed for a wheeled robot on a flat floor. Can we control a robot with a cricket-like morphology using such algorithms? The third area (in which only preliminary results are detailed in this paper, but which is the subject of ongoing research) is physical plausibility. The robot used here is too big to interact with the outdoor environment in a manner directly comparable to that of crickets, such as how it detects and deals with obstacles that might block its path towards the sound. In order to approach this problem we must look at methods which will allow us to reduce the size and power demands of the robot controller so that it will fit onto a smaller robot. A promising technology is Very Large Scale Integration (VLSI) implementations of neuromorphic circuits [7], of the type described in Section 2.1.4.

The robot platform we use here is inspired by insect walking. Insects such as cockroaches and crickets typically use a tripod gait, in which the front and rear legs on one side of the body move in phase with the middle leg on the other side. Close studies of cockroach locomotion [8] also reveal that the front legs normally swing head-high to surmount many obstacles without changing gait, but when larger barriers are encountered the gait changes, and contralateral legs move in phase. These strategies have been incorporated into a robot morphology called 'Whegs'™, that unlike RHex [9], uses only a single drive motor and embedded passive compliance [10]. It uses six hubs which each have three protruding legs that rotate as the robot moves (see Section 2.1.1). The three-spoke design and torsional compliance in the drive train allow it to climb up and down shallow stairs and inclines and easily traverse most terrains, such as asphalt, grass, mud, gravel, and light brush. The platform used in this study could move at a speed of up to 4 body-lengths per second and has a turning radius of 1.5 body-lengths.

Full details of the methods used to integrate the previous robot controller onto this platform have been described in Horchler et al. [11]. Here we analyse the performance of this implementation, to address some of the issues of hearing and responding to sound outdoors.

2. Methods

2.1. Hardware

2.1.1. Robot base

The robot base was the Whegs™ Autonomous Sensor Platform (Whegs™ ASP), shown in Fig. 1a. It is based upon a lightweight 60 cm long × 15 cm wide aluminium chassis. It has six 15 cm-radius three-spoke wheel-legs, each of which is arranged 60° out of phase from adjacent wheel-legs. This allows the robot to move with a nominal tripod gait with all six wheel-legs powered by a single 90 W Maxon motor and transmission. The torque delivered to each wheel-leg passes through a torsionally compliant mechanism that permits a wheel-leg to comply if an obstacle is encountered, thus moving into phase with the contralateral wheel-leg. Additionally, large compliant feet at the tip of each spoke cushion and smooth the robot's vertical motion without seriously compromising its climbing ability. To turn, front and rear rack-and-pinion steering is activated with two electrically coupled Futaba servomotors. Two 3000 mAh battery packs connected in parallel provide 5 V to the servos and 8.4 V to the drive motor via an Astro-Flight electronic speed controller (ESC). A third electrically isolated battery pack was used to power the control system (see below).

2.1.2. Sensors

A pair of miniature microphones were mounted to a four-bar mechanism attached to the front steering, allowing them to pivot with the front wheel-legs.


Fig. 1. The Whegs™ Autonomous Sensor Platform: (a) robot and (b) hardware used in outdoor phonotaxis experiments.

They were positioned about 10 cm above the ground surface, facing forward, separated by 1.8 cm (one quarter the wavelength of the carrier frequency of cricket song, which is 4.8 kHz). The output from these microphones was processed with a customised electronic circuit [12] based on the ear morphology of the cricket. The input from each microphone is delayed by 52 µs (a quarter cycle of 4.8 kHz sound) by a bucket brigade delay chip and then subtracted from the live signal from the other, effectively performing a phase comparison and thus providing directional information. The microphone separation and delay times make the directional output accurate for the typical cricket song signal.
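For illustration, the delay-and-subtract operation can be written out numerically. The sketch below is not the robot's analogue circuit: it simply generates a 4.8 kHz tone arriving from a chosen azimuth, delays each channel by a quarter cycle and subtracts it from the opposite live signal, showing that the output on the side nearer the sound is larger. The sampling rate, speed of sound and test azimuths are our own assumptions.

# Minimal numerical sketch (not the robot's implementation) of the
# delay-and-subtract phase comparison performed by the ear circuit.
# Values follow the text: 4.8 kHz carrier, 1.8 cm separation, 52 us delay.
import math

F_CARRIER = 4800.0          # Hz
SEPARATION = 0.018          # m between the microphones
DELAY = 52e-6               # s, quarter cycle of 4.8 kHz
SPEED_OF_SOUND = 343.0      # m/s (assumed, room-temperature air)
FS = 192000                 # Hz, simulation sampling rate (assumed)

def ear_outputs(azimuth_deg, n_samples=2000):
    """Peak 'left' and 'right' ear-circuit outputs for a pure tone arriving
    from the given azimuth (0 = straight ahead, positive = to the right)."""
    # Sound from the right (positive azimuth) reaches the right microphone first.
    lag = SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
    t = [i / FS for i in range(n_samples)]
    left = [math.sin(2 * math.pi * F_CARRIER * (ti - lag / 2)) for ti in t]
    right = [math.sin(2 * math.pi * F_CARRIER * (ti + lag / 2)) for ti in t]
    shift = int(round(DELAY * FS))          # quarter-cycle delay in samples
    # Each live signal has the delayed contralateral signal subtracted from it.
    out_left = [abs(left[i] - right[i - shift]) for i in range(shift, n_samples)]
    out_right = [abs(right[i] - left[i - shift]) for i in range(shift, n_samples)]
    return max(out_left), max(out_right)

if __name__ == "__main__":
    for az in (-60, -20, 0, 20, 60):
        l, r = ear_outputs(az)
        print(f"azimuth {az:+4d} deg   left {l:.2f}   right {r:.2f}")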

2.1.3. Control system

Fig. 1b shows the main hardware elements of the control system for this robot. The auditory circuit had been designed to interface to a Khepera robot, and it proved simplest to mount this small robot directly on the Whegs™ ASP base, and use it to do the sensory pre-processing. This consisted of converting the auditory signal amplitudes to Poisson spike trains, with programmable threshold and saturation levels, and transferring these via a serial line to a PC/104 processor running a neural simulator under Linux. The motor output was also encoded as a spike train and transferred back to the Khepera, where it was interfaced to the Whegs™ steering servos and electronic speed controller via a PIC microcontroller. The robot carries the entire processing system, including power supply, and operates autonomously except for start and stop signals and configuration commands between experiments. This is essential for outdoor experiments over uneven ground and long distances where a permanent tether would be unworkable, and radio communication would be too slow for reactive behaviour. To communicate with the user interface (on a laptop), a PCMCIA 802.11b wireless ethernet card is installed in the PC/104 on board the robot. The laptop also runs a tracking system based on triangulation using retractable tethers (see below).

In a second set of experiments investigating how to reduce the size and power requirements of the control system, we replaced a section of the controller with a single ultra low power analogue VLSI (aVLSI) chip. This modelled the early auditory processing in the cricket, taking input from the cricket ear model and feeding spiking neural output into a higher level part of the neural simulation. This was done with a view to eventually replacing the whole neural simulation with a custom VLSI chip, thus obviating the need for the bulky and power hungry PC/104 system.

2.1.4. aVLSI hardware

The prototype chip was designed to replace only a small fraction of the full control system in the first instance, with the aim of implementing the more complete version once the details of the neural network used in the controller are worked out. The chip was implemented using a standard 1.5 µm Complementary Metal Oxide Semiconductor (CMOS) technology and occupies an area of 2 × 2 mm². It comprises a symmetric network of low-power leaky integrate and fire (I&F) neurons [13] interconnected by inhibitory and excitatory adaptive synapses [14], that model the early stages of auditory processing of crickets (the Omega Neurons (ON1) and Auditory Neurons (AN1)) in Fig. 2a.

Each I&F neuron circuit exhibits realistic spike response properties with adjustable absolute refractory period, spiking threshold voltage modulation, and spike-frequency adaptation. A detailed characterization of this circuit has been presented in [13,15].

The synaptic circuits also exhibit realistic dynamics. They have short-term depression properties and can update their efficacies following a Spike-Timing-Dependent Plasticity (STDP) rule. In the experiments carried out in this work the STDP part of the circuits was switched off and we used fixed sets of weights. The synaptic circuits and their STDP response properties are described in [14].

To interface the chip to the auditory circuit, we constructed a printed circuit board (PCB) containing a log-amplifier able to rescale and shape the ear circuit's output voltage signals. The reshaped signals were then connected to two on-chip p-type MOSFETs which, operating in the weak-inversion regime, inject currents into the two input silicon neurons exponentially related to their voltage inputs. In our system the firing rate of the input neurons is therefore linearly proportional to the ear's outputs.

2.2. Software

2.2.1. Neural simulator

The PC/104 is running a spiking neural simulator that we have developed to test hypotheses of auditory processing in the cricket [6]. The behaviour of the basic neural model is related to single compartment 'leaky integrate and fire' models, but based more closely on the models described by Koch [16]. The behaviour is modelled at a similar level of complexity to the hardware neurons described above. The neuron is considered to be an RC circuit with a fixed membrane capacitance and membrane conductance and a base potential across the membrane to which it will decay exponentially in the absence of external input. If the membrane potential rises above a threshold, the neuron will 'fire', sending a spike to any output synapses. The synapses are modelled as a variable conductance with a battery potential which corresponds to the reversal potential of the ion channel opened by the synaptic neurotransmitter. Their properties include a delay (corresponding to the sum of possible axonal, neurotransmitter, and dendritic delays), a variable time course for the exponential decay of conductance, as well as a more standard weight determining the standard conductance change in the affected ion channel. They also allow short-term adaptation of the weight through mechanisms of facilitation and depression. Despite this complexity, the model can run in real time for robot control.
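A minimal sketch of this kind of model is given below, assuming a forward-Euler update and invented parameter values; it is not the simulator's code, and it omits the facilitation and depression mechanisms, but it shows the conductance-based membrane and synapse dynamics described above.

# Simplified sketch of the neuron/synapse model described above (illustrative
# parameter values only; the real simulator uses its own units and settings).

DT = 0.0001                               # s, integration time step

class Synapse:
    def __init__(self, weight, e_rev, tau, delay):
        self.weight = weight              # conductance added per incoming spike
        self.e_rev = e_rev                # reversal potential of the channel (V)
        self.tau = tau                    # s, time course of conductance decay
        self.delay = delay                # s, axonal + synaptic delay
        self.g = 0.0                      # current conductance (S)
        self.queue = []                   # spike arrival times still in transit

    def deliver(self, t):
        self.queue.append(t + self.delay)

    def current(self, t, v):
        arrived = [s for s in self.queue if s <= t]
        self.queue = [s for s in self.queue if s > t]
        self.g += self.weight * len(arrived)
        self.g -= DT * self.g / self.tau  # exponential decay of conductance
        return self.g * (self.e_rev - v)

class Neuron:
    def __init__(self, c_m=1e-9, g_leak=5e-8, v_rest=-0.065, v_thresh=-0.050):
        self.c_m, self.g_leak = c_m, g_leak
        self.v_rest, self.v_thresh = v_rest, v_thresh
        self.v = v_rest
        self.synapses = []                # incoming synapses

    def step(self, t):
        i_syn = sum(s.current(t, self.v) for s in self.synapses)
        # RC membrane: decay towards the base potential plus synaptic current.
        self.v += (self.g_leak * (self.v_rest - self.v) + i_syn) * DT / self.c_m
        if self.v >= self.v_thresh:       # threshold crossing -> spike, reset
            self.v = self.v_rest
            return True
        return False

if __name__ == "__main__":
    n = Neuron()
    syn = Synapse(weight=4e-8, e_rev=0.0, tau=0.005, delay=0.001)
    n.synapses.append(syn)
    spikes = 0
    for step in range(5000):              # 0.5 s of simulated time
        t = step * DT
        if step % 100 == 0:               # drive with a regular input spike train
            syn.deliver(t)
        spikes += n.step(t)
    print("output spikes:", spikes)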

Fig. 2. Neural circuits based on cricket neurophysiology for (a) auditory processing and (b) motor control.

2.2.2. Auditory circuit

We have designed a neural circuit based on the physiological mechanisms known to underlie phonotaxis behaviour in the cricket (reviewed in [17]). It is shown in Fig. 2a.


For each of the two 'auditory nerve' inputs (left and right), there were four pairs of Poisson spiking neurons with differing threshold and saturation levels. This meant the subsequent processing could deal with a larger range of input amplitudes, which was necessary given the substantial attenuation of the sound signal over the distances we wanted to run the robot. The spiking input from the auditory pre-processing excites one pair of auditory interneurons (ON) and one pair of ascending neurons (AN). The first (mutually inhibitory) pair performs cross-inhibition of the other to sharpen the directionality of the signal. An important effect of this was to produce a form of gain control. Louder sounds were more likely to activate auditory neurons on both sides so mutual inhibition would reduce the overall response as well as increase the relative difference.
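The idea of range fractionation can be sketched as follows. The thresholds, saturation levels and rates here are invented for the example rather than taken from the robot; the point is that fibres with staggered operating ranges together encode amplitudes that would saturate, or fail to reach, any single fibre.

# Illustrative sketch of range-fractionated Poisson encoding: four input
# 'fibres' per side share one amplitude signal but have staggered threshold
# and saturation levels.  All numbers are made up for the example.
import random

random.seed(1)
MAX_RATE = 300.0                    # Hz, firing rate at saturation
DT = 0.001                          # s, one simulation tick

# (threshold, saturation) pairs in arbitrary amplitude units, N1 most sensitive.
FIBRES = [(0.05, 0.2), (0.15, 0.45), (0.3, 0.7), (0.5, 1.0)]

def rates(amplitude):
    """Map an input amplitude to one Poisson rate per fibre."""
    out = []
    for thresh, sat in FIBRES:
        x = (amplitude - thresh) / (sat - thresh)
        out.append(MAX_RATE * min(max(x, 0.0), 1.0))
    return out

def spikes_in_window(amplitude, duration=0.25):
    """Count Poisson spikes per fibre over a short window."""
    counts = [0] * len(FIBRES)
    for _ in range(int(duration / DT)):
        for i, r in enumerate(rates(amplitude)):
            if random.random() < r * DT:
                counts[i] += 1
    return counts

if __name__ == "__main__":
    for amp in (0.08, 0.25, 0.6, 0.95):   # quiet (distant) to loud (close)
        print(f"amplitude {amp:4.2f} -> spikes per fibre {spikes_in_window(amp)}")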

In the second set of experiments this circuitry was implemented in silicon on the aVLSI neural chip detailed above, which then fed into the brain circuitry below.

The AN pair then conveys this signal to 'Brain Neurons' (BN1 and BN2) that use dynamic synapse properties to filter the song for the appropriate temporal pattern. BN1 requires the correct gap to occur between sound bursts for it to recover from synaptic depression and thus fire efficiently at the onset of each burst. BN2 requires the onsets signalled by BN1 to occur sufficiently close together before it will respond. The filtered output indicates, through the activity of the left and right brain neurons, if a male of the correct species is calling from the left or the right. Therefore, the BN2 output needs to connect to the motor control neurons in order to cause a turn in the direction from which the louder and/or clearer song can be heard.
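A toy model of this depression-based filtering is sketched below, with parameters invented to suit the 25 Hz syllable and 2 Hz chirp structure of the song rather than copied from the robot's synapses. A unit driven through a depressing synapse responds strongly only at burst onsets that follow a sufficient silent gap, and falls silent for an unbroken input train.

# Toy sketch of depression-based onset filtering, as in the BN1 description
# above.  Parameters are invented for illustration, not taken from the model.
import math

TAU_REC = 0.15     # s, recovery time constant of the depressing synapse
U = 0.6            # fraction of available efficacy used per presynaptic spike
THRESHOLD = 0.4    # drive needed for the postsynaptic (BN1-like) unit to fire

def onset_responses(spike_times):
    """Times at which the postsynaptic unit fires, given presynaptic spike
    times (in seconds) arriving through a depressing synapse."""
    resources, last_t, fired = 1.0, spike_times[0], []
    for t in spike_times:
        # Exponential recovery of synaptic resources since the previous spike.
        resources = 1.0 - (1.0 - resources) * math.exp(-(t - last_t) / TAU_REC)
        drive = U * resources
        resources -= drive
        last_t = t
        if drive >= THRESHOLD:
            fired.append(round(t, 2))
    return fired

def spike_train(n_bursts, per_burst, intra=0.04, inter=0.5):
    """One presynaptic spike per syllable: 'per_burst' syllables 40 ms apart,
    bursts repeating every 'inter' seconds (2 bursts per second by default)."""
    return [b * inter + s * intra for b in range(n_bursts) for s in range(per_burst)]

if __name__ == "__main__":
    song = spike_train(n_bursts=4, per_burst=4)        # chirped song structure
    steady = [i * 0.04 for i in range(50)]             # unbroken 25 Hz input
    print("chirped song -> onsets at", onset_responses(song))
    print("unbroken input -> onsets at", onset_responses(steady))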

2.2.3. Motor control circuit

The motor control circuit (Fig. 2b) is based upon a 'burst-generator' (BG) consisting of a pair of neurons coupled by mutual excitation, so that sufficient input to either produces continuous spiking of both. This is eventually terminated because the pair also excites a STOP neuron that eventually becomes active and inhibits the bursting pair. In theory, a variety of sensory stimuli or internal factors can activate a motor response, either by direct excitation of the burst generator or via the GO neuron, which modulates the sensitivity of the burst pair by low frequency tonic excitation. In the experiments below, the GO neuron receives a constant input, so that the robot's default behaviour will be to move forward.

The two burst-generator neurons (BG) excite, respectively, a left forward (LF) and a right forward (RF) motor neuron. These normally connect directly to the speed controllers for the independently driven wheels of a Koala or Khepera robot. For Whegs™ ASP, however, both neurons provide excitation for a forward signal that controls the drive motor, while LF excites and RF inhibits a signal that controls the position of the servomotors that steer the robot. If LF and RF are balanced then the robot steers forward, otherwise it turns according to the difference in activity (LF–RF). The activity of the LF and RF neurons is modulated by the output of the auditory processing described above, via left turn (LT) and right turn (RT) neurons that produce appropriate excitation and inhibition to affect the robot's direction.
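As a sketch of this mapping (with placeholder scaling constants, not the values actually used through the Khepera and PIC interface), the drive and steering commands can be derived from the two firing rates as follows.

# Minimal sketch of turning LF and RF firing rates into a drive command and a
# steering command, as described above.  The scaling constants, command ranges
# and sign convention are placeholders, not the robot's actual values.

MAX_DRIVE = 1.0        # forward command at maximal combined LF+RF activity
MAX_STEER = 1.0        # full-lock steering command
RATE_SCALE = 100.0     # Hz, firing rate treated as 'maximal' for normalisation

def motor_commands(lf_rate, rf_rate):
    """Map left-forward and right-forward firing rates (Hz) to (drive, steer).

    Both neurons excite the forward drive; steering follows the signed
    difference LF - RF, so balanced activity steers straight."""
    drive = MAX_DRIVE * min((lf_rate + rf_rate) / (2 * RATE_SCALE), 1.0)
    steer = MAX_STEER * max(min((lf_rate - rf_rate) / RATE_SCALE, 1.0), -1.0)
    return drive, steer

if __name__ == "__main__":
    for lf, rf in [(80, 80), (90, 40), (40, 90), (10, 10)]:
        d, s = motor_commands(lf, rf)
        print(f"LF={lf:3d} Hz RF={rf:3d} Hz -> drive {d:.2f}, steer {s:+.2f}")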

2.3. Experimental methods

Our aim was to demonstrate that the robot could perform the basic task of the female cricket: to recognise and track towards a male cricket calling song over a reasonable distance in a natural outdoor environment. The robot was tested on a grass-covered (later frost-covered) area between one of the University of Stirling's buildings and a small lake bordered by trees. For the main results given below, the area used was approximately 10 m × 7 m. This area was fairly level but not a smooth lawn. There was little wind. The air temperature was below 0 °C.


The target was a speaker placed on the ground and connected to a laptop computer, through which we played a simulated male cricket song. This consisted of two 'chirps' per second, where each chirp is four cycles ('syllables') of 25 Hz square wave amplitude modulation of a 4.8 kHz tone. The sound amplitude was approximately 85 dB at the speaker and 65 dB at the robot's starting points.
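For reference, a stimulus with this structure can be synthesised as below. This is a sketch rather than the script used in the experiments, and the sample rate, duration and output scaling are arbitrary choices.

# Sketch of the simulated calling song described above: a 4.8 kHz tone,
# amplitude-modulated by a 25 Hz square wave into 4 syllables per chirp,
# with 2 chirps per second.  Sample rate and output scaling are arbitrary.
import math
import struct
import wave

FS = 48000            # Hz, audio sample rate
CARRIER = 4800.0      # Hz
SYLLABLE_RATE = 25.0  # Hz square-wave amplitude modulation
SYLLABLES = 4         # syllables (modulation cycles) per chirp
CHIRPS_PER_S = 2.0

def song(duration=2.0):
    samples = []
    chirp_period = 1.0 / CHIRPS_PER_S                  # 0.5 s
    chirp_len = SYLLABLES / SYLLABLE_RATE              # 0.16 s of syllables
    for i in range(int(duration * FS)):
        t = i / FS
        t_in_chirp = t % chirp_period
        # Sound is on during the first half of each 25 Hz modulation cycle,
        # and only within the chirp; silent in the gap between chirps.
        in_chirp = t_in_chirp < chirp_len
        syl_on = (t_in_chirp * SYLLABLE_RATE) % 1.0 < 0.5
        amp = 1.0 if (in_chirp and syl_on) else 0.0
        samples.append(amp * math.sin(2 * math.pi * CARRIER * t))
    return samples

if __name__ == "__main__":
    data = song()
    with wave.open("calling_song.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(FS)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in data))
    print(f"wrote {len(data)} samples of simulated calling song")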

he robot’s starting points.The neural simulation program automatica

ecorded the inputs and activity of all the neuralments during trials. In addition, the position ofobot was tracked using retractable lines that tetheobot to fixed points. The robot’s position is then culated by triangulation. This position informationutomatically synchronised with the internal data.


Details of the implementation of this tracking system are given in Horchler et al. [11].
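The geometry of the tether-based tracking is the standard two-circle intersection; the sketch below (with a made-up anchor separation and example position, not the code from [11]) recovers a position from two measured line lengths.

# Sketch of position-by-triangulation from two retractable tether lengths.
# The anchor separation and example lengths are made up; this is the generic
# two-circle intersection, not the tracking code described in [11].
import math

ANCHOR_SEPARATION = 12.0   # m between the two fixed tether points (assumed)

def robot_position(r1, r2, d=ANCHOR_SEPARATION):
    """(x, y) of the robot given tether lengths r1 (from the anchor at the
    origin) and r2 (from the anchor at (d, 0)), assuming the robot stays on
    the positive-y side of the line joining the anchors."""
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y_sq = r1 ** 2 - x ** 2
    if y_sq < 0:
        raise ValueError("tether lengths inconsistent with anchor separation")
    return x, math.sqrt(y_sq)

if __name__ == "__main__":
    # A robot actually at (4.0, 7.0) would pay out these tether lengths:
    true_x, true_y = 4.0, 7.0
    r1 = math.hypot(true_x, true_y)
    r2 = math.hypot(true_x - ANCHOR_SEPARATION, true_y)
    print("recovered position:", robot_position(r1, r2))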

In a second set of experiments the aVLSI neural chip replaced the simulated auditory nerve and ON and AN neurons, and recordings of the AN output from the chip and the resulting BN and motor responses were made at a range of distances from the speaker to investigate the similarity of behaviour using the two different technologies. The intention was to carry out a matching set of experiments with the new setup, but this proved impossible due to the temperature sensitivity of the aVLSI chip. As a result, these experiments were carried out indoors with the robot raised on a block to stop it moving whilst allowing the motors to run and generate similar background noise to that experienced in the outdoor experiment. The sound amplitude was considerably lower to minimise echoes in the indoor environment.

3. Results

Fig. 3 shows the tracks produced by the robot in thirty trials, from three different starting positions as shown: 7 m straight ahead (facing the speaker), 3.5 m to one side with the speaker on the left, and 4.5 m to the other side, with the speaker on the right. It is evident that the robot can track towards the sound source. Defining success as getting within one body length of the speaker, there was only one trial in which the robot did not find the sound source, and only one other trial with significant indirectness in the path taken. Paths took an average time of 43.8 ± 11.8 s from the centre, 28.6 ± 8.6 s from the right, and 24.0 ± 12.4 s from the left. The average forward speed over the 30 trials was 0.2 m/s. There was an average of 12 direction corrections made per trial, although as discussed below, not all of these are clearly reflected in the tracks.

Fig. 3. Tracks of the robot towards the sound source in 30 outdoor trials, from three starting positions.

The spiking activation of the neural circuits during approach to the sound is illustrated in Fig. 4 for two successful tracks, one taking 50 s from the central starting position (left plots), the other a fast 5 s track from the right side (right plots). The upper plots show the different response thresholds of the different fibres in the auditory nerve, with more fibres becoming active as the robot approaches the sound. On the left side the time scale only shows the chirp pattern in the song (repeated firing bursts at 2 Hz); on the right side the four syllables that make up each chirp can be seen in the firing patterns. It is evident that those fibres with a low enough threshold to respond to sound from a distance (N1 and N2) also encode more noise, and consequently it is harder to see the sound pattern in their response than in N3 and N4. As a consequence the activation of the AN and ON interneurons is also more noisy at the start of the track.

Fig. 4. Activation patterns of the neurons during sound tracking: left, a track from the centre; right, a track from the side (note different time scales). For neural signals, vertical bars represent spikes throughout. Upper plots show the auditory nerve, middle plots the auditory processing circuit in Fig. 2a, lower plots show critical elements from the motor circuit in Fig. 2b, the control signals sent to the robot, and the X and Y coordinates of the robot's actual track as recorded by the tether system.

A clear encoding of several consecutive syllables by AN1 is needed before BN1 and BN2 start to fire. It can also be seen that BN2 firing shows strong directionality, with almost no overlap in activity between the left and right neurons.


For each chirp indicated in the firing of BN2 there is at most one spike in the appropriate RT or LT neuron to signal a turn. In the centre track it can be seen that these spikes alternate from one side to the other as the robot turns back and forth; whereas in the track from the right, consecutive turn signals in one direction are followed by similar signals in the other direction, resulting in only two changes in steering direction during the track.

To illustrate in close-up how the turning is implemented, Fig. 5 shows the membrane potentials of a subset of the neurons shown in Fig. 4. Right forward (RF) and left forward (LF) neurons normally produce synchronised bursts of spikes to drive the robot forward. In this plot, the sound is on the left, so the left auditory neuron (ANL) encodes the song pattern (with some noise) while the right AN is less activated (due both to quieter input and to the cross-inhibition from ON). Two of the chirps in ANL are clear enough to produce (via the BN neurons, not shown here) a spike in the left turn (LT) neuron. Each spike in LT inhibits the firing of LF, and increases the firing rate of RF, for about 0.5 s. The difference in firing rate between LF and RF determines the steering signal.

The distance over which the robot could track was more limited than we had originally hoped. Although the straight-ahead trials started the robot at a distance of 8 m, the robot did not actually start responding to the sound till somewhat closer.

Fig. 5. Membrane potentials of three neurons on each side during a turn. Neurons produce a spike when their membrane potential rises above the mark shown on the y-axis, at which time the membrane potential surges briefly to its maximum value, and the spike is transmitted to any connected neurons. The graph shows how a response to a chirp in ANL causes a spike in LT, which inhibits LF and excites RF, sending a steering signal to the robot.

If it was started from a position and direction that did not bring it within a few metres of the sound, it was unlikely to track it successfully. We could quantify this more precisely by using the internal neural data to determine where in each track the robot made the first turning response to the sound signal. These positions are shown in Fig. 6. For the central tracks the mean distance was 3.19 m (minimum 2.23, maximum 5.31); from facing to the right, 3.34 m (minimum 3.12, maximum 3.68); and facing left, 3.58 m (minimum 2.92, maximum 4.37).

The main cause of this limitation was that the amplitude of the sound signal beyond this distance was not sufficient for the robot to detect the song pattern above background noise. This can be seen in Fig. 7, which shows the activation of the neurons on one side of the auditory circuit at different distances from the sound source. As the sound pattern in AN becomes increasingly clear, BN1 and BN2, which filter for this pattern as described in Section 2, start to respond. The main cause of background noise was the motors of the robot, which caused both electromagnetic distortion and auditory interference. The former was partially reduced by shielding the microphone leads but could perhaps be further reduced. Several possibilities for dealing with the latter problem are discussed below.

Fig. 6. Location of the first attempted turn in response to sound for each of the thirty tracks. Active tracking of the sound only occurs within a radius of about 3 m from the speaker.


Fig. 7. Response of the auditory neurons on the left side of the circuit to sound at different distances. A clear pattern in AN1 and a corresponding 'recognition' response in BN1 and BN2 only occur when the robot is relatively close to the sound source.

An additional problem with the system is also evident from close inspection of the lower left traces in Fig. 4. The spiking behaviour in RT and LT is producing appropriate alterations in the steering signal. The robot is effectively travelling along the X-axis of its track. At the beginning it moves fairly rapidly towards the origin (the X-value decreases), with little deviation (the Y-value stays constant). Once it starts responding to the sound by steering, it slows down its approach (the X-value decreases more slowly). However, there still appears to be little deviation. If it was actually zigzagging to the sound, as the steer signals seem to indicate, each change in direction should be visible as a change in the Y-value of the track, but it remains almost constant.

What is occurring? From observing the robot it became evident that when it started reacting to sound, it would often come to a stop and take several seconds to move again, and make further stops while approaching the sound. During the stops, the front axle and the ears would be turned towards the sound source, or might oscillate back and forth several times around the sound source direction. It appeared that when the Whegs™ base was turning, the level of forward motor torque being supplied was not always sufficient to overcome the extra friction of the pivoted wheel-legs.


This could also be a problem in starting from a stop, where overcoming the initial inertia could take a few seconds. On the other hand, when already moving straight, the same level of torque would make the robot move so fast that it risked exiting the critical area before noticing the sound. What was lacking was any form of motor feedback to allow the system to regulate its torque to deal appropriately with the different situations. As this robot base had previously only been used under remote control, it had not been appreciated how much trimming of the speed signal was needed to allow it to cope smoothly with starting, turning and other changes such as slopes or differences in ground friction. It is evident from the fact that the robot did, largely, succeed in tracking the sound that it could manage to turn sufficiently often. Nevertheless the tracking could have been faster and more efficient if the actuators had actually performed correctly according to the motor signals that were sent.

Replacing the early auditory network (the Poisson neurons representing the auditory nerve and the ON and AN neurons) with the low power aVLSI chip and associated circuitry has very little effect on the ability of the circuit to determine the direction of the sound source. Unfortunately, the chip, being the first iteration of this design, turned out to be very temperature sensitive, which made it impossible to tune it sufficiently well to run outdoors. In a stable temperature environment it was possible to tune the silicon neurons to replace the simulated neurons very satisfactorily, and Fig. 8 shows typical responses of the AN and BN circuits to sound sources placed perpendicular and in line with the direction of travel of the robot. At 1 m with the motors running, the BN2 neurons detected 100% of chirps from a sound source on the left of the robot, and 73% from one on the right. This discrepancy seems to be due to global parameters being unable to compensate for hardware mismatch between nominally identical on-chip circuits. This was not possible to completely overcome.

4. Discussion

Using a robot base inspired by insect locomotion, we were able to demonstrate that our model of cricket phonotaxis could be used successfully to locate sound in an outdoor environment. The model is closely based on known neurophysiology of the animal, so this work allows us to evaluate the functional role of the various components of the network. It also demonstrates that we can capture some of the capabilities of animals and use them to achieve real-world robot tasks. The environment was non-uniform, and the grass produced a much quieter and more diffuse sound field than a laboratory; that this had little effect on the tracking ability of the robot gives us confidence that the auditory circuit is not dependent on the surroundings, but rather will work in a variety of environments. The main limitations were that this only worked over a limited distance range, of around 3–3.5 m, and that the robot did not always properly execute the motor commands generated by the neural circuit.

This second problem has several obvious solutions. It might be possible to use feed-forward control to regulate the motor output, e.g. to ensure that torque is increased proportionally to the degree of steering, given that we can to some extent predict that this is needed. However, it is likely to be more effective to use some form of feedback. For example, an encoder or tachometer speed measurement could be used with standard PID control to regulate the motor output (as is already built into the low-level control of the wheeled robots we have used previously). More interestingly, we could examine some of the ways in which insects appear to use proprioceptive feedback in controlling their actions, and see how these might best be adapted to robot control.
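As an illustration of the feedback option, a conventional PID speed regulator of the kind mentioned above is sketched below against a crude plant model in which friction rises while the robot is steering. The gains, units and plant dynamics are placeholders, not measurements of the Whegs™ ASP.

# Illustrative PID speed regulator of the kind suggested above: an encoder or
# tachometer estimate of forward speed is compared with the commanded speed
# and the drive torque is trimmed accordingly.  Gains and plant are made up.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    # Crude plant: torque accelerates the robot, friction (higher while the
    # wheel-legs are pivoted for steering) slows it down.
    dt, speed = 0.05, 0.0
    controller = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
    for step in range(100):
        steering = step > 50                   # start turning halfway through
        friction = 0.8 if steering else 0.3
        torque = controller.update(target=0.2, measured=speed)
        speed += (torque - friction * speed) * dt
        if step % 20 == 0:
            print(f"t={step * dt:4.1f}s  speed={speed:5.3f} m/s  torque={torque:5.2f}")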

In a similar way, it is interesting to consider what solutions the insect itself suggests to the first problem, that of dealing with the substantial decrease in sound amplitude over longer distances, and the consequent decrease in the signal to noise ratio. We have already incorporated two neural mechanisms that should contribute to solving this problem. The first is range fractionation of the input across the auditory nerve. The second is the use of cross-inhibition (the ON connections) which also has the effect of gain control. However, there are several additional mechanisms that might be used. For example Romer and Krusch [18] have demonstrated that there is a relatively slow inhibitory current that develops over several seconds allowing the ON/AN response to adapt to the prevailing sound level. Although we have included depression effects in the synaptic connections in our current model, these effects are relatively short-term and recover between chirps.



Fig. 8. Response of the aVLSI AN1 auditory neurons (spikes shown) and the connected simulated BN1 and BN2 neurons (membrane potentials shown) to a quiet sound source at 1 m shows clear directionality.

We have not as yet included further mechanisms that might allow the animal to better filter the signal from the noise. The ear circuit does not do any filtering for the carrier frequency of the sound. We have demonstrated that such filtering is not necessary to explain the carrier frequency selectivity of the cricket [12]. This is because the phase comparison mechanism for localisation is itself frequency specific, thus the wrong frequency of sound simply cannot be localised. However, crickets do show frequency tuning of auditory fibres, and the input to the AN and ON neurons comes specifically from receptors sensitive to the typical calling song frequency range. This may well be necessary for discriminating the sound, at low amplitudes, from background noise or noise created by the animal's own movements.

On this latter point, it should be recalled that the cricket's ears are in fact located on its forelegs. As each leg is placed on the ground, the vibrations cause substantial interference to the song signal [19]. One strategy that may be used by the cricket, and adopted for the robot, would be to stop frequently to collect sound information without the motor noise. However, it has been shown that the cricket can make accurate course corrections without stopping [20]. As the interference is synchronised with the stepping cycle the cricket could theoretically use corollary discharge to predict when the sound signal should be processed and when it should be ignored.


As the motor noise on the robot is more continuous, either pauses in movement or specific filtering to separate the signal and the noise are more plausible options.

The robot could detect sound from further away; the problem was the difficulty in detecting the sound pattern with sufficient clarity for recognition. It is therefore of interest to note some evidence that the cricket is less discriminating for the sound pattern when the sound amplitude is low, and only as the amplitude increases does it require the species specific pattern to maintain tracking [21], though more recent experiments appear not to support this (Hedwig and Poulet, personal communication). They note that this could well be an advantageous adaptation to the problem of sound distortion at a distance. It would be interesting to consider how such a mechanism could be implemented in the neural circuit we are using.

It should also be noted that the distance range we achieved in the current work is not all that limited when compared to the cricket. Estimates for the cricket suggest it may be capable of tracking two to four times as far, but not a substantial distance further. A difficulty for us here is that the rate with which amplitude decreases with distance is fixed by the physics of the situation, so cannot be scaled to match the larger body size, and bigger turning circle, of the current Whegs™ implementation. The Whegs™ design has been used to build much smaller robot bases (e.g. 'Mini-Whegs™' [22] has a body length of 8 cm) but to utilise this we would also need to shrink the corresponding control hardware, or operate it off-board (which is far from simple on a legged robot in a rugged outdoor environment).
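To make the scaling point concrete, the idealised spherical-spreading loss of about 6 dB per doubling of distance can be tabulated as below; real propagation over grass attenuates faster, and the reference level and reference distance here are assumptions rather than measured values, so the figures are only an optimistic bound.

# Idealised spherical-spreading estimate (6 dB per doubling of distance).
# Reference level and distance are assumptions; real outdoor propagation over
# grass attenuates faster, so these figures are optimistic.
import math

def level_at(distance, ref_level=85.0, ref_distance=0.5):
    """Sound level (dB) at 'distance' metres, given an assumed reference level
    measured at 'ref_distance' metres from the speaker."""
    return ref_level - 20.0 * math.log10(distance / ref_distance)

if __name__ == "__main__":
    for d in (0.5, 1.0, 2.0, 3.5, 7.0, 14.0):
        print(f"{d:5.1f} m : {level_at(d):5.1f} dB")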

This leads on to the final experiment, which shows that it is practical to replace parts of the neural simulation with a much lower power hardware implementation of the same circuit. Two issues arose in these final experiments which need to be specifically addressed. Firstly, the chip proved to be very temperature sensitive, which made it impossible to do outdoor experiments; this was unfortunate, but this first aVLSI chip had not been designed with temperature invariance in mind. The second revision of the chip will contain an on-chip voltage reference and on-chip bias generators that compensate for temperature variations. This will both reduce the number of external potentiometers and components (such as voltage regulators) and minimize the temperature-related variations in the chip's circuits. The PCB hosting this next generation chip will hence be much more compact and the chip will be able to operate from a non-stabilized power supply, such as the one already used by the robot's motor circuits. The second problem was mismatch on the chip, making parameter tuning very difficult; this too is a well understood problem and we will minimize device mismatch effects in the design process of the new circuits and include appropriate interfacing modules able to receive the voltage signals directly from the ear's circuits, since this was identified as a significant source of the current problems. This will also reduce the amount of circuitry required on the PCB.

Although these issues did disrupt the experiments, both are now well understood, and should be dealt with in future revisions of the chip and board. The solutions also help with space issues: bulky interface boards were required to connect this particular aVLSI chip to the system for the experiment, but significant space savings on the PCB are identified above which should allow us to fit it onto smaller platforms such as Mini-Whegs™ and Khepera.

Our plans for future work in the short-term, therefore, will focus on implementing some of the above strategies, i.e. to continue to work on aVLSI hardware solutions which will allow us to significantly reduce the size and power requirements of the robots we use outdoors, and to improve the motor control of the robot and enable the auditory network to deal with the amplitude/noise problem more effectively. This will then allow us to test the robot on more varied terrain, e.g. with slopes or uneven footing. We also plan to implement additional sensory mechanisms on the same robot base, including the optomotor sensor used previously on the wheeled robots [5] and active antennae that can be used for obstacle detection and avoidance.

Acknowledgement

This research was supported by the EPSRC.

References

[1] D. Lambrinos, R. Moller, T. Labhart, R. Pfeifer, R. Wehner, A mobile robot employing insect strategies for navigation, Robot. Auto. Syst. 30 (1/2) (2000) 39–64.

[2] B. Webb, Using robots to model animals: a cricket test, Robot. Auto. Syst. 16 (2–4) (1995) 117–134.


[3] H.H. Lund, B. Webb, H.J. Hallam, Physical and temporal scaling considerations in a robot model of cricket calling song preference, Artif. Life 4 (1) (1998) 95–107.

[4] B. Webb, T. Scutt, A simple latency dependent spiking neuron model of cricket phonotaxis, Biol. Cybernet. 82 (3) (2000) 247–269.

[5] B. Webb, R.R. Harrison, Integrating sensorimotor systems in a robot model of cricket behavior, in: Sensor Fusion and Decentralised Control in Robotic Systems III, SPIE, Boston, 2000.

[6] R. Reeve, B. Webb, New neural circuits for robot phonotaxis, Philos. Trans. R. Soc. Lond. Ser. A 361 (2003) 2245–2266.

[7] G. Indiveri, R. Douglas, Robotic vision: neuromorphic vision sensor, Science 288 (2000) 1189–1190.

[8] J.T. Watson, R.E. Ritzmann, S.N. Zill, A.J. Pollack, Control of obstacle climbing in the cockroach, Blaberus discoidalis. I. Kinematics, J. Comp. Physiol. A 188 (2002) 39–53.

[9] U. Saranli, M. Buehler, D. Koditschek, Design, modeling and preliminary control of a compliant hexapod robot, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2000.

[10] R.D. Quinn, J.T. Offi, D.A. Kingsley, R.E. Ritzmann, Improved mobility through abstracted biological principles, in: Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS'02), 2002, pp. 2652–2657.

[11] A.D. Horchler, R.E. Reeve, B.H. Webb, R.D. Quinn, Robot phonotaxis in the wild: a biologically-inspired approach to outdoor sound localisation, in: Proceedings of the IEEE International Conference on Advanced Robotics, vol. 3, 2003, pp. 1749–1756.

[12] H.H. Lund, B. Webb, H.J. Hallam, A robot attracted to the cricket species Gryllus bimaculatus, in: P. Husbands, I. Harvey (Eds.), Proceedings of the Fourth European Conference on Artificial Life, MIT Press, Cambridge, MA, 1997, pp. 246–255.

[13] G. Indiveri, A low-power adaptive integrate-and-fire neuron circuit, in: Proceedings of the IEEE International Symposium on Circuits and Systems, IEEE, 2003, pp. 820–823.

[14] G. Indiveri, Neuromorphic bistable VLSI synapses with spike-timing-dependent plasticity, in: Advances in Neural Information Processing Systems, vol. 15, MIT Press, Cambridge, MA, 2002.

[15] D.B.-D. Rubin, E. Chicca, G. Indiveri, Firing properties of an adaptive analog VLSI neuron, in: Proceedings of the Bio-ADIT 2004, LNCS, Springer, 2003.

[16] C. Koch, Biophysics of Computation, Oxford University Press, Oxford, 1999.

[17] G.S. Pollack, Neural processing of acoustic signals, in: R.R. Hoy, A.N. Popper, R.R. Fay (Eds.), Comparative Hearing: Insects, Springer, New York, 1998, pp. 139–196.

[18] H. Romer, M. Krusch, A gain control mechanism for processing of chorus sounds in the afferent auditory pathway of the bushcricket Tettigonia viridissima (Orthoptera: Tettigoniidae), J. Comp. Physiol. A 186 (2000) 181–191.

[19] K. Schildberger, J.J. Milde, M. Horner, The function of auditory neurons in cricket phonotaxis. II. Modulation of auditory responses during locomotion, J. Comp. Physiol. A 163 (1988) 633–640.

[20] B. Schmitz, H. Scharstein, G. Wendler, Phonotaxis in Gryllus campestris L. I. Mechanism of acoustic orientation in intact female cricket, J. Comp. Physiol. A 148 (1982) 431–444.

[21] J.M. Doolan, G.S. Pollack, Phonotactic specificity of the cricket Teleogryllus oceanicus: intensity dependent selectivity for temporal parameters of the stimulus, J. Comp. Physiol. A 157 (1985) 223–233.

[22] J.M. Morrey, B. Lambrecht, A.D. Horchler, R.E. Ritzmann, R.D. Quinn, Highly mobile and robust small quadruped robots, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, 2003, pp. 82–87.

Richard Reeve received his B.A. degree in Mathematics from the University of Cambridge in 1992 and his M.Sc. and Ph.D. degrees in Artificial Intelligence from the University of Edinburgh in 1994 and 2000 respectively. In 2000 he joined the Psychology department at the University of Stirling as a post-doctoral researcher. He then visited the Institute for Neuroinformatics in the University of Zurich/ETH Zurich in 2003, and is now a post-doctoral researcher in the Institute for Perception, Action and Behaviour at the University of Edinburgh. His research focus has been the neural control of low-level animal behaviors such as legged locomotion and cricket phonotaxis; in particular modeling simple sensorimotor behaviors of insects on robots, and studying the neural models necessary for these tasks in order to better understand the underlying biology.


Barbara Webb received her B.Sc. degree in Psychology from the University of Sydney in 1988 and her Ph.D. degree in Artificial Intelligence from the University of Edinburgh in 1993. She held lectureships in Psychology at the universities of Nottingham and Stirling, and since 2003 has a readership in the School of Informatics at Edinburgh, in the Institute for Perception, Action and Behaviour. Her research focus has been the use of robots as models for exploring issues in neuroethology, in particular the problem of sound localisation in the cricket. The work encompasses issues of hardware implementation of sensors, sensory integration, sensorimotor control, neural representation, and locomotion. She has been invited to write review articles about the biorobotic approach by several journals including Nature, and has edited a recent book "Biorobotics: methods and applications." She is a member of the International Society for Neuroethology.


Andrew Horchler received his B.S.E. degree in Mechanical Engineering from Princeton University in 2000. He is currently an M.S. and Ph.D. candidate and NSF IGERT Fellow in Mechanical Engineering and Aerospace Engineering at Case Western Reserve University, in Cleveland, Ohio. As a member of the Biologically Inspired Robotics Laboratory at Case he has designed and built five "Whegs™" vehicles and collaborated with researchers at the University of Edinburgh to study issues of sensorimotor integration in insect behavior. He is currently developing an insect-inspired active antennae system to further investigate multi-sensory behavior in realistic environments. His research interests include nonlinear systems analysis, design and control of legged robots, and sensorimotor integration for mobile robots. He is a student member of the Society for Neuroscience and IEEE.

Giacomo Indiveri is a Research Assistant at the Institute of Neuroinformatics (INI) of the Swiss Federal Institute and the University of Zurich. He graduated from the Italian "National Research Program on Bioelectronic technologies" (cum laude) and obtained his PhD degree in electrical engineering from the University of Genoa, Italy. He was a Postdoctoral fellow at the California Institute of Technology, under the guidance of Prof. Christof Koch, where he worked on the design of analog VLSI neuromorphic devices for low-level visual tasks and motion detection. His current research activities are on neuromorphic VLSI systems for implementing cortical models of attention, and for exploring the computational properties of VLSI networks of integrate and fire neurons. Dr. Indiveri is co-teacher of two classes on the analysis and design of analog VLSI Neuromorphic Systems at UNI-ETH Zurich, co-organizer of the Workshop on Neuromorphic Engineering, held annually in Telluride, and co-author of the book "Analog VLSI, Circuits and Principles", by Liu et al., published by MIT Press in 2002.

Roger Quinn received his B.S. and M.S. degrees in Mechanical Engineering from the University of Akron in 1980 and 1982 and his Ph.D. degree in Engineering Science and Mechanics from Virginia Polytechnic Institute in 1985. In 1986 he joined the Mechanical and Aerospace Engineering Department at Case Western Reserve University in Cleveland, Ohio as the General Motors Assistant Professor, where he is currently an Arthur P. Armington Professor of Engineering. He has directed the Biologically Inspired Robotics Laboratory at Case since its inception in 1990. Six insect inspired legged robots and two worm inspired robots have been developed. The second legged robot, RII, received the Best Video Award at the 1995 IEEE International Conference on Robotics and Automation. The third legged robot, RIII, is based upon the cockroach and was a finalist for the 1998 Discover Magazine Technology Awards. He is continuing the development of cockroach robots as well as directing the development of a revolutionary group of vehicles that use appendages called "whegs". His research is devoted to biologically inspired robotics. He is a member of the American Society of Mechanical Engineers (ASME), the Society for Neuroscience, and IEEE.