
Cyclops: In Situ Image Sensing and Interpretation in Wireless Sensor Networks

Mohammad Rahimi†, Rick Baer‡, Obimdinachi I. Iroezi†, Juan C. Garcia†, Jay Warrior‡, Deborah Estrin†, Mani Srivastava†

†Center for Embedded Networked Sensing, UCLA, Los Angeles, CA 90095, USA
‡Agilent Laboratories, Agilent Technologies, 3500 Deer Creek Road, Palo Alto, CA 94304, USA

[email protected], rick [email protected], jay [email protected], [email protected], [email protected]

ABSTRACT
Despite their increasing sophistication, wireless sensor networks still do not exploit the most powerful of the human senses: vision. Indeed, vision provides humans with unmatched capabilities to distinguish objects and identify their importance. Our work seeks to provide sensor networks with similar capabilities by exploiting emerging, cheap, low-power and small form factor CMOS imaging technology. In fact, we can go beyond the stereo capabilities of human vision, and exploit the large scale of sensor networks to provide multiple, widely different perspectives of the physical phenomena.

To this end, we have developed a small camera device called Cyclops that bridges the gap between computationally constrained wireless sensor nodes, such as Motes, and CMOS imagers which, while low power and inexpensive, are nevertheless designed to mate with resource-rich hosts. Cyclops enables the development of a new class of vision applications that span wireless sensor networks. We describe our hardware and software architecture, its temporal and power characteristics, and present some representative applications.

Categories and Subject Descriptors: B.0 [Hardware]: General; D.2.11 [Software]: Software Engineering: Software Architectures

General Terms: Performance, Design, Measurement, Experimentation

Keywords: Sensor Network, Vision, Imaging, CMOS Imaging, Power Efficiency

1. INTRODUCTION
Today, as we make progress toward sophisticated, reliable and low power sensor networks [1] [2], we still face difficulties in collecting useful data from our environment.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SenSys'05, November 2–4, 2005, San Diego, California, USA. Copyright 2005 ACM 1-59593-054-X/05/0011 ...$5.00.

Figure 1: Cyclops is a smart vision sensor controlled by a low power micro-controller host. This picture shows Cyclops with an attached MICA2 Mote [4].

Most low power sensors are constrained by limited coupling to their environment. While vision can reveal an immense amount of information about the surrounding environment, there are serious obstacles to using vision in wireless sensor networks. Vision sensors are typically power hungry and vision algorithms are computationally expensive, with high power consumption requirements [3]. In wireless sensor networks, vision can only play a meaningful role if the power consumption of the network node is not significantly increased. Networks with a large number of vision-sensing nodes should utilize algorithms that consume less power. Scalability in number would 1) overcome occlusion effects, 2) provide multiple perspectives and 3) provide closer views of the phenomena (or objects). This may enable the application of lightweight vision algorithms, with lossy inferences that can be reinforced through multiple observations and interpretation in the network.

On the sensor side, the emergence of integrated CMOS camera modules with low power consumption and moderate imaging quality [6] [7] brings such dreams closer to reality. These modules are widely used in electronic gadgets


[Figure 2: Hardware architecture of the first generation of Cyclops. The camera module, CPLD, SRAM bank and Flash bank share a common address and data bus with the MCU; the CPLD provides synchronization, imager clock control and frame memory access; MOSFET switches control power to the imager and to the memory subsystem (memory and CPLD); and a serial interface (I2C/USART) connects to the host Mote.]

such as cell phones and personal digital assistants (PDAs). They typically include a lens, an image sensor, and an image processing circuit in a small package. In spite of this high level of integration, these devices are still difficult to apply in lightweight sensor network nodes. Image data must be captured synchronously at high rates and programming is complex.

Cyclops (Fig-1) attempts to overcome these difficulties. It is an electronic interface between a camera module and a lightweight wireless host. Cyclops contains programmable logic and memory circuits for high-speed data transfer. It also contains a micro-controller (MCU) that presents a simple interface to the outside world. It isolates the camera module's requirement for high-speed data transfer from the low-speed capability of the embedded controller, providing still image frames at low rates. Since capture and interpretation of an image is potentially time consuming, separate computational resources are used for imaging and communication. This is particularly important for network-enabled nodes, which experience asynchronous events (e.g. MAC layer events) with stringent delay constraints.

Cyclops' power consumption is kept minimal to enable large scale deployment and extended lifetime. This results in extreme constraints on its computational power and image size, which makes Cyclops appealing for particular classes of applications. Any application that requires high speed processing or high resolution images should utilize a different imaging platform.

The rest of this paper describes the architecture of the Cyclops sensor and analyzes its characteristics. It further discusses the Cyclops software framework, provides timing analysis of different operations, and details Cyclops' energy requirements in its different states. Finally, it presents two example applications that further illustrate Cyclops' capabilities. Our future work will apply this architecture to distributed imaging applications.

2. HARDWARE
The main principles in the Cyclops hardware design are:

• low power consumption, on the order of a sensor network node (e.g. Mote) [4] [5].

• simple interfacing that separates the complexity of the vision algorithm from the network node.

• adaptivity, making it a flexible sensor applicable to a variety of sensor network problems.

To achieve this, Cyclops exploits 1) computational parallelism to isolate prolonged sensing computations, 2) on-demand control of clocking resources to decrease power consumption, 3) on-demand access to memory by using an external SRAM for storing image frames, and 4) the capability for automatic relaxation of each subsystem to its lowest power state. Next, we describe the Cyclops hardware organization and its individual components.

2.1 Hardware Organization
Cyclops consists of an imager, a micro-controller (MCU), a complex programmable logic device (CPLD), an external SRAM and an external Flash (Fig-2). The MCU controls the Cyclops sensor. It can set the parameters of the imager, instruct the imager to capture a frame, and run local computation on the image to produce an inference. The CPLD provides the high speed clock, synchronization and memory control that are required for image capture. The combination of the MCU and the CPLD provides the low power benefits of a typical MCU with on-demand access to high speed clocking through the CPLD. Furthermore, the CPLD can perform a limited amount of image processing, such as background subtraction or frame differencing, at capture time. This results in extremely economical use of resources, since the CPLD is already clocking at capture time. When the MCU does not need the CPLD's services, it halts the CPLD clock to minimize power consumption.

Cyclops uses external SRAM to increase the limited amount of internal MCU memory and to provide the necessary memory for image storage and manipulation. In essence, the external memory provides on-demand access to memory resources at capture and computation time. The SRAM is kept in a sleep state when its memory resources are not needed. In addition, Cyclops has an external Flash memory, which provides permanent data storage for functions such as template matching or local file storage.

The MCU, the CPLD and both memories share a common address and data bus. This facilitates easy data transfer between the imager, SRAM and Flash memory, but it also requires an appropriate mechanism that guarantees synchronized access to these shared resources. This is further described in the Cyclops firmware discussion.

Each module in Cyclops has several power states. Typically, the sleep states with the lowest power consumption have the highest wake-up cost. Consequently, the application should choose the sleep level based on its amortized cost. In addition, Cyclops has an Asynchronous Trigger input. This can be used as a paging channel in applications characterized by prolonged sleep times and quick responses to asynchronous ambient variations. The paging channel can be connected to sensors of other modalities for event triggering. For example, it could be connected to a passive IR detector or a microphone to trigger image capture on motion, or connected to a magnetic sensor to monitor a transition (e.g. opening and closing gates). Next, we describe each component more elaborately.
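The amortized-cost tradeoff can be made concrete with a small calculation: a deeper sleep state only pays off if the energy saved while asleep exceeds its wake-up cost. The sketch below is illustrative only; the state definition and every number are assumptions, not measured Cyclops values.

    #include <assert.h>

    /* Hypothetical sketch: pick the sleep state with the lowest total
     * energy over an expected idle interval, where each state pays a
     * one-time wake-up energy cost plus a continuous sleep power draw.
     * All numbers used with this are illustrative, not measured. */
    struct sleep_state {
        double sleep_mw;    /* power drawn while asleep (mW) */
        double wakeup_mj;   /* one-time energy to wake up (mJ) */
    };

    static int best_state(const struct sleep_state *s, int n, double idle_ms)
    {
        int best = 0;
        double best_mj = s[0].sleep_mw * idle_ms / 1000.0 + s[0].wakeup_mj;
        for (int i = 1; i < n; i++) {
            double mj = s[i].sleep_mw * idle_ms / 1000.0 + s[i].wakeup_mj;
            if (mj < best_mj) { best_mj = mj; best = i; }
        }
        return best;
    }

For a short idle interval the shallow state wins despite its higher sleep power; for a long interval the deep state's wake-up cost is amortized away.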


Figure 3: ADCM-1700 camera module block diagram. It consists of an F2.8 lens, image sensor and digitizer, image processing units and data communication units.

2.2 Imager
The main requirement for the camera module is low power consumption. We chose a medium quality imager with moderate clock speed requirements and extremely low power consumption in its sleep state: the ADCM-1700, an ultra compact CIF resolution CMOS camera module from Agilent Technology. It combines an Agilent CMOS image sensor and processor with a high quality lens. The lens has a focal length of 2.10mm, a practical focus range of roughly 100mm to infinity, and a 52 degree field of view. Its clock frequency can be set anywhere from 4 to 32MHz. We set the imager clock to 4MHz to give the CPLD (which operates at a 16MHz clock) enough time to grab an image pixel and copy it into memory. The image sensor (Fig-3) has CIF resolution (352×288). The output of the image array is digitized by a set of ADCs to generate the raw image data. The camera module contains a complete image processing pipeline. The pipeline performs demosaicing (reconstruction), image size scaling, color correction, tone correction and color space conversion. It also implements automatic exposure control and automatic white balancing. Cyclops generally operates on images of reduced resolution because of its constrained memory and power.

The camera module has two parameters for adjusting the size of the image. The input window size determines the array of sensor pixels selected from the total number of available pixels (352 × 288), and the output window size determines the number of pixels in the output image. When the input window size is greater than the output window size, the camera module averages the input pixels to generate the result. The image output is normally taken from the center of the field of view of the sensing array. An offset can be adjusted to pan the region of interest across the field of view.
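The input-to-output averaging can be illustrated with a short sketch. This is a hypothetical host-side routine, not ADCM-1700 firmware; it assumes a monochrome buffer and an input window that is an integer multiple of the output window.

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical sketch: reduce an inW x inH input window to an
     * outW x outH output by averaging each (inW/outW) x (inH/outH)
     * block of pixels, mimicking the module's downscaling behavior. */
    static void block_average(const uint8_t *in, int inW, int inH,
                              uint8_t *out, int outW, int outH)
    {
        int fx = inW / outW;          /* horizontal averaging factor */
        int fy = inH / outH;          /* vertical averaging factor   */
        for (int oy = 0; oy < outH; oy++) {
            for (int ox = 0; ox < outW; ox++) {
                uint32_t sum = 0;
                for (int y = 0; y < fy; y++)
                    for (int x = 0; x < fx; x++)
                        sum += in[(oy * fy + y) * inW + (ox * fx + x)];
                out[oy * outW + ox] = (uint8_t)(sum / (fx * fy));
            }
        }
    }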

The image processing unit is capable of generating a variety of image formats. Currently, Cyclops supports three: 8-bit monochrome, 24-bit RGB color, and 16-bit YCbCr color. The camera module is programmable through a synchronous serial I2C port. Many low level parameters, such as exposure time and color balance, may be accessed. Data is output over a parallel bus consisting of eight data lines and three synchronization lines.

2.3 Micro-controller
The MCU is responsible for controlling Cyclops, communicating with the host and performing image inference. Most notably, by providing separate processing power, it isolates sensing and networking computations from each other. This is very important, since Cyclops computations are potentially prolonged and might block highly asynchronous network operations. The Cyclops micro-controller (MCU) is an ATMEL ATmega128L [8]. It has an 8-bit RISC core with a 16-bit address Harvard architecture, and an arithmetic logic unit with a multiplier supporting both signed/unsigned multiplication and fractional format. It provides general purpose registers, internal oscillators, timers, UARTs and other peripherals. In Cyclops, the MCU operates at 7.3728MHz and 3.3V.

The system is very memory constrained: it has 128KB of Flash program memory and 4KB of SRAM data memory. The MCU is designed such that it can map 60KB of external memory into its memory space. The combination of the internal and external memory presents a continuous and cohesive memory space. External memory accesses are slower, but this is invisible from a programming perspective. The external SRAM is referred to as extended memory, since it extends the Cyclops memory to the maximum addressable 64KB.

The processor integrates a set of timers which can be configured to generate interrupts at regular intervals. This can be used to make Cyclops a synchronous sensor, or an asynchronous sensor exploiting an underlying synchronous sampling mechanism. The MCU has two clock domains: the main clock for the processor and I/O, and an external timer clock for independent timer operation. It has six sleep modes: idle, ADC noise reduction, power-down, power-save, standby and extended standby. Typically, Cyclops is either active, in power-save (PS) or in power-down (PD) mode. In power-down mode, the MCU shuts down everything but the I2C and the asynchronous interrupt logic necessary for wake up. Power-save mode is similar to power-down mode, but leaves an asynchronous timer running. This enables Cyclops to go into a deep sleep mode while leaving an underlying synchronous timer operational for periodic sensing services.

2.4 CPLD
Image capture requires faster data transfer and address generation than a lightweight processor can provide. To satisfy this requirement we use a complex programmable logic device (CPLD) as a lightweight frame grabber. The CPLD provides 1) on-demand access to high speed clocking at capture time and 2) (potentially) computation as image capture is underway (e.g. calculating statistics such as a histogram at capture time). The use of an FPGA was rejected because the high power required during initial configuration would have increased the amortized cost of the power-down state. Provisions have been made to further save power by disabling the CPLD clock and by removing power from the entire memory subsystem (CPLD and memories).

A Xilinx XC2C256 CoolRunner CPLD was chosen for its low power consumption. This device consists of sixteen Function Blocks interconnected by a low power Advanced Interconnect Matrix (AIM). The AIM feeds 40 true and complement inputs to each Function Block. The Function Blocks consist of a 40 by 56 Programmable Logic Array (PLA) and 16 macrocells which contain numerous configuration bits that allow for combinational or registered modes of operation. It is a 256-macrocell device with 80 user I/Os. The CPLD runs at 1.8V, obtained from a dedicated voltage regulator, and has a quiescent current as low as 13µA. It runs from a 16MHz input clock and feeds a 4MHz clock to the imager at capture time. We picked the CPLD clock to be higher than the imager clock to allow a certain amount of image processing on the CPLD at image capture time.

2.5 SRAM and FLASH
Cyclops needs memory for image buffering and local inference processing. It uses an external CMOS static random access memory (SRAM) which provides 64KB¹ in 8-bit format (TC55VCM208A from TOSHIBA). The device operates from a single 2.3V to 3.6V power supply to support extended battery operation. It provides both high speed and low power, at an operating current of 3mA/MHz and a minimum cycle time of 40ns. The memory is automatically placed in a low-power mode, with a 0.7µA standby current, when it is not in use.

In addition, Cyclops has 512KB of CMOS Flash programmable and erasable read only memory (AT29BV040A from ATMEL), organized in 8-bit format. This provides Cyclops with permanent storage to save information (e.g. templates). It operates from 2.7V to 3.6V for extended battery operation and has auto-sleep functionality, drawing 40µA when it is not in use.

2.6 Other Components
Cyclops has three MOSFET switches for power control. One switch can completely turn off the device, the second switch activates the memory subsystem (SRAM, CPLD and Flash) and the third switch activates the imager. These switches can override the normal auto-shutdown procedure of the devices to achieve further reductions in power consumption.

3. SOFTWARE
The motivations for our Cyclops firmware design are:

• transparency in using resources (such as the imager or SRAM) while supporting their automatic relaxation to the lowest possible power state.

• supporting the long computations that are needed for image operations.

• supporting synchronized access by both the MCU and the CPLD to shared resources such as the SRAM and the imager.

This implies that, unlike a network-centric approach that calls for handling a high degree of asynchronous network events, the Cyclops architecture should target sequential image capture and processing. Thus, following a frame capture, Cyclops calls a series of potentially long synchronous operations with little concurrency.

¹Since it is difficult to obtain memory chips with less than 512KB of capacity, we used 512KB SRAM and FLASH memories and organized each of them as eight 64KB banks, with the bank address selected by the CPLD. We currently use only the lowest bank in each device. This carries an additional power cost due to the lack of off-the-shelf 64KB memories.
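The banking described in the footnote amounts to simple address arithmetic: within a 512KB part, the upper three bits of a 19-bit byte address select one of eight 64KB banks (driven by the CPLD), and the lower sixteen bits address within the bank. A sketch (the helper names are ours, not from the paper):

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical sketch of the bank-select arithmetic for a 512KB
     * device split into eight 64KB banks. */
    static uint8_t bank_of(uint32_t byte_addr)
    {
        return (uint8_t)((byte_addr >> 16) & 0x7);   /* bits 18..16 */
    }

    static uint16_t offset_in_bank(uint32_t byte_addr)
    {
        return (uint16_t)(byte_addr & 0xFFFF);       /* bits 15..0 */
    }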

[Figure 4: Component layout in Cyclops. The components consist of the main sensing application (inference), libraries that encapsulate image manipulation logic (Matrix, Statistics, Motion, Color), and drivers (the Active Eye layer, Memory Manager, Flash Manager and I2C packet libraries) that enable communication with the hardware components (Imager, CPLD, SRAM, Flash) and with the host Mote (network).]

To efficiently utilize Cyclops' constrained resources, the Cyclops firmware is written in the nesC [9] language and runs in the TinyOS [10] operating system environment. This allows the Cyclops software architecture to abstract functionality in the form of components with clear interfaces, and to exploit the standard TinyOS scheduler and services. Next, we describe the Cyclops software organization, its components and its image capturing and control stack (i.e. Active Eye).

3.1 Software Organization
There are three sets of components in Cyclops (Fig-4): drivers, libraries and the sensor application. Drivers are the set of components that enable the MCU to communicate with peripherals. Examples of drivers are the imager setting and capture drivers, the CPLD drivers, the Flash memory drivers and the drivers that enable communication with the host Mote. There are two different classes of libraries: primitive structural analysis libraries and high level algorithmic libraries. Among the most important structural libraries are the various matrix, statistics and histogram operations. These libraries are key to processing raw images. Other libraries are closer to the application (algorithm), such as background subtraction, object detection or motion vector recognition libraries that are designed to do specific high level tasks. Finally, the sensor application is the main sensing component on Cyclops. It receives commands from the host and is responsible for performing the sensing operations requested by the host. It is the sensing application that makes Cyclops a smart sensor.

In Cyclops, the external address and data bus are shared among many entities, including the MCU, SRAM, FLASH and CPLD. Consequently, devices may contend for bus access. The SRAM and Flash are slave devices, hence a mechanism is only necessary for synchronizing MCU and CPLD bus access. We modified the TinyOS scheduler to support Direct Memory Access for such synchronized bus usage. When the CPLD needs access to the bus (e.g. when capturing an image), the CPLD driver requests a permit from the scheduler to perform a Direct Memory Access (DMA) transaction. On completion of the current active task, the permit is granted by the scheduler and the CPU goes to power-save mode (power-down if there is no active timer). At the end of the transaction, the CPLD notifies the scheduler that it may resume normal operation. In essence, there are two mechanisms that guarantee safe access of each component to memory: 1) the default mode, in which the MCU controls the SRAM memory, and 2) the DMA mode, in which the scheduler sleeps. This combination means that any external memory access in TinyOS task routines happens while the MCU has access to the SRAM. Libraries should be implemented in task context, and this permits access to the external memory without any additional synchronization requirement.

[Figure 5: The Active Eye layer contains the image capture and control stack. It consists of components that control the imager and the CPLD. The CPLD modules consist of the CPLD driver inside the MCU and the CPLD data driver in the CPLD.]
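The permit handshake can be sketched as a small state machine. This is a hypothetical C rendering of the scheduler behavior described above, not the actual modified TinyOS code; the names and fields are ours.

    #include <assert.h>

    /* Hypothetical sketch: the CPLD driver requests a DMA permit;
     * the scheduler grants it only at a task boundary (no task
     * active), sleeps the MCU for the transaction, and resumes
     * normal operation when the CPLD signals completion. */
    enum bus_owner { BUS_MCU, BUS_CPLD };

    struct bus_state {
        enum bus_owner owner;
        int dma_requested;
        int task_active;
    };

    /* Called by the CPLD driver before a capture. */
    static void request_dma(struct bus_state *b) { b->dma_requested = 1; }

    /* Called by the scheduler at each task boundary; returns 1 if the
     * MCU should now enter power-save for a DMA cycle. */
    static int maybe_grant_dma(struct bus_state *b)
    {
        if (b->dma_requested && !b->task_active) {
            b->owner = BUS_CPLD;
            b->dma_requested = 0;
            return 1;
        }
        return 0;
    }

    /* Called from the CPLD-completion interrupt. */
    static void dma_done(struct bus_state *b) { b->owner = BUS_MCU; }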

3.2 Active Eye: Image Capturing and Control Layer
The Active Eye Layer (AEL) enables Cyclops to see the external world. It is the Cyclops image capture and control stack, and is responsible for communicating with the imager, setting its parameters, keeping it in the proper state, and finally capturing the image. It consists of different components in the MCU and CPLD (Fig-5). The rest of this section describes the Active Eye components and their interactions.

The main component in the AEL is the image capture and control module. It uses subordinate control modules for controlling the imager. These modules are responsible for changing different aspects of the image. Examples are a window size module, responsible for setting the input and output window sizes; a format module, responsible for setting the image format (i.e. monochrome, RGB color, etc.); and an exposure module, which sets capture exposure parameters. All these modules use the imager communication component to access the camera over the imager I2C bus.

The AEL uses the CPLD for image data capture and manipulation. The CPLD modules consist of the CPLD driver inside the MCU and the data driver inside the CPLD. The CPLD data driver module is written in the Verilog Hardware Description Language. It is essentially a state machine whose state trajectories are selected by micro-controller commands. The micro-controller places the CPLD in the proper state by programming the CPLD's internal registers and by asserting control over the CPLD handshake lines. To capture an image, the AEL first enables the imager module clock and updates the capture parameters (if they have been modified). It then places the camera module in video output mode and sets the CPLD state to capture mode. Upon completion, the CPLD notifies the MCU that it may resume operation and relaxes to standby mode. Fig-6 illustrates this process.

[Figure 6: The flow of data at capture time. Active Eye enables the camera module clock, sets capture parameters and snaps the image. It then requests a DMA transaction from the scheduler, and the MCU goes to sleep when the transaction is granted. The camera module provides data via a parallel bus that includes vertical and horizontal synchronization signals. The CPLD receives the image data stream and saves it into memory. Upon capture of a complete frame, the CPLD notifies the MCU and normal MCU operation resumes.]

There are three interfaces that facilitate access to the AEL: "StdControl", "snap" and "captureParameters". "StdControl" enables initialization, starting and stopping of the AEL stack. In its start routine, the AEL turns on the imager power supply, waits a predetermined time for the imager to become accessible (i.e. 200ms), and sets the initialization parameters in the imager via its I2C bus. Starting the AEL should happen only once; the AEL then always relaxes to its lowest possible state by disabling the imager clock when it is not needed. Still, an application can stop the Active Eye Layer in cases of extremely low duty cycling to further reduce power consumption. In such cases it should start the AEL before using it again.

The second interface is "captureParameters", which enables the user to provide the AEL with new capture conditions. Capture parameters need only be set once and are preserved afterward, so under constant capture conditions this interface is called only once. The Cyclops capture parameters structure is as follows:


    typedef struct CYCLOPS_Capture_Parameters {
        wpos_t    offset;
        wsize_t   inputSize;
        uint8_t   testMode;
        uint16_t  exposurePeriod;
        color8_t  analogGain;
        color16_t digitalGain;
        uint16_t  runTime;
    } CYCLOPS_Capture_Parameters;

The input size (a two element structure) determines the size of the input region of interest (ROI). In practice these values can be between 24 and 288 for the image height and between 24 and 352 for its width. The offset (another two element structure) determines the position of the ROI within the field of view (FOV), relative to the center of the array. Modifying this parameter enables the user to pan across the image sensor window. The test mode parameter can be used to obtain a synthetic test image. The exposure period controls the duration of the exposure. The analog and digital gains control the amount of gain applied to each color channel. The camera module can also determine the exposure and gain values automatically. The run time parameter causes the camera to run for an additional period before capture begins, in order to allow the auto functions to converge.
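As an illustration, an application might fill the structure as below. The typedefs for wpos_t, wsize_t, color8_t and color16_t are assumed stand-ins (the paper does not show their definitions), and all field values are hypothetical.

    #include <assert.h>
    #include <stdint.h>

    /* Assumed stand-ins for the opaque Cyclops types. */
    typedef struct { int16_t x, y; }     wpos_t;     /* assumed layout */
    typedef struct { uint16_t w, h; }    wsize_t;    /* assumed layout */
    typedef struct { uint8_t r, g, b; }  color8_t;   /* assumed layout */
    typedef struct { uint16_t r, g, b; } color16_t;  /* assumed layout */

    typedef struct CYCLOPS_Capture_Parameters {
        wpos_t    offset;
        wsize_t   inputSize;
        uint8_t   testMode;
        uint16_t  exposurePeriod;
        color8_t  analogGain;
        color16_t digitalGain;
        uint16_t  runTime;
    } CYCLOPS_Capture_Parameters;

    /* Centered 64x64 ROI; exposure and gain left to the auto functions. */
    static CYCLOPS_Capture_Parameters make_default_params(void)
    {
        CYCLOPS_Capture_Parameters p = {0};
        p.offset.x = 0;          /* centered in the field of view   */
        p.offset.y = 0;
        p.inputSize.w = 64;      /* within the 24-352 width range   */
        p.inputSize.h = 64;      /* within the 24-288 height range  */
        p.testMode = 0;          /* real image, not a test pattern  */
        p.runTime = 100;         /* hypothetical settling period    */
        return p;
    }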

The third interface is "snap", which should be called each time an image capture is required. To snap an image, the desired capture conditions such as the image type, its height and width, and its memory location should be set in the image data structure. The snap interface then returns the image, filling the desired memory location with the image content. The Cyclops image data structure is as follows:

typedef struct CYCLOPS_Image {
    uint8_t   type;
    uint16_t  size;
    uint8_t   nFrames;
    uint8_t  *imageData;
    bufferPtr imageHandle;
} CYCLOPS_Image;

The type specifies the image format (i.e., monochrome, RGB, etc.). The size is the output image size. "nFrames" specifies the number of sequential frames to capture in memory. "imageData" is a pointer to the image location in memory. The imageHandle is a memory handle (a pointer to a memory pointer) for use when dynamic memory is available; currently we allocate memory statically.
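As a concrete illustration of how these fields relate, a small helper might compute the buffer size that imageData must point to. The type codes and helper name below are hypothetical; the paper does not list the actual constants:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pixel-format codes; the actual Cyclops constants are
 * not listed in the paper. */
enum { IMG_MONOCHROME = 0, IMG_RGB = 1 };

/* Bytes the imageData buffer must hold for a capture: one byte per
 * pixel for monochrome, three for RGB, times nFrames sequential
 * frames. */
static uint32_t image_bytes(uint8_t type, uint16_t width,
                            uint16_t height, uint8_t nFrames)
{
    uint32_t bytesPerPixel = (type == IMG_RGB) ? 3u : 1u;
    return (uint32_t)width * height * bytesPerPixel * nFrames;
}
```

With static allocation, such a computation would be done at compile time when sizing the capture buffer.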

3.3 Libraries

TinyOS does not provide a preemptive mechanism in its synchronous execution model (i.e., tasks cannot preempt tasks). This means that a synchronous task will run to completion when not preempted by an interrupt handler. Consequently, long-running tasks can block access to computational resources, thereby starving other tasks. A basic design principle of TinyOS is that long-running tasks should be broken down into smaller tasks that are posted sequentially. In this manner, a large block of processing can explicitly yield the processor at predetermined safe points during its execution, to promote fairness and prevent starvation. In network-enabled sensor nodes, task yielding is particularly important to cope with the asynchronous requirements of the network.
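The decomposition principle above can be illustrated with a plain-C stand-in (names are ours, not the TinyOS API): a long operation processes a bounded chunk per invocation and reports whether it should be re-posted:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Plain-C stand-in for TinyOS-style task decomposition: each call
 * processes at most CHUNK pixels (here, inverting them) and returns
 * nonzero while work remains -- the caller "re-posts" by calling
 * again, so other tasks can run in between. */
#define CHUNK 64

typedef struct {
    uint8_t *pixels;
    size_t   len;
    size_t   next;   /* resume point kept between invocations */
} invert_job_t;

static int invert_step(invert_job_t *job)
{
    size_t end = job->next + CHUNK;
    if (end > job->len)
        end = job->len;
    for (size_t i = job->next; i < end; i++)
        job->pixels[i] = (uint8_t)(255 - job->pixels[i]);
    job->next = end;
    return job->next < job->len;   /* nonzero: more chunks pending */
}
```

Each `invert_step` call corresponds to one posted task; the scheduler can interleave other tasks between calls.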

Image processing operations are typically long-running and not suitable for sequential decomposition. The consequence of loading a sensor network node with image processing is a significant increase in implementation complexity, since long operations must be broken into pieces, and a corresponding reduction in system performance. In our case, the dedicated processor lets us build an execution model that supports convenient serialization of image processing operations. When an image is captured, one expects a series of potentially long computational blocks with limited concurrency requirements. This gives Cyclops the liberty of using prolonged computations in the TinyOS synchronous model: Cyclops libraries are implemented as long tasks with potential delays of hundreds of milliseconds. In addition, since the libraries do not pend on any event, they do not use the TinyOS split-phase call-back model. This means that the result of their computation is readily available at the completion of their execution, which greatly simplifies the implementation of any algorithm using Cyclops libraries.

Library      Operation
Matrix       Image to matrix conversion
Matrix       Matrix to image conversion
Matrix       Extract a particular plane of an image (e.g., the red plane in RGB or Cb in YCbCr)
Matrix       Extract row
Matrix       Extract column
Matrix       Threshold
Matrix       Clone matrix
Logic        Matrix AND operation
Logic        Matrix OR operation
Logic        Matrix XOR operation
Logic        Matrix-wide shift right operation
Logic        Matrix-wide shift left operation
Arithmetic   Matrix addition
Arithmetic   Matrix subtraction
Arithmetic   Matrix scaling
Statistics   Minimum, maximum, summation
Histogram    Histogram generation
Sobel        Matrix derivative vs. height and width
Template     Provides hollow or solid rectangles of different sizes

Table 1: This table shows the different primitive libraries and the operations that they currently support. These operations are supported on matrices of different depths, such as one, eight or sixteen bits, where applicable.

Libraries are divided into two categories: primitive structural libraries, such as the matrix-operation and histogram libraries, and advanced libraries, such as coordinate conversion and background subtraction. We next briefly describe some of the Cyclops libraries.

3.3.1 Primitive Structural Libraries

An important data structure in any image analysis is the matrix. Cyclops supports matrices of different depths of one, eight or sixteen bits. A matrix is defined as:

typedef struct {
    uint8_t depth;
    union {
        uint8_t*  ptr8;
        uint16_t* ptr16;
    } data;
    uint16_t rows;
    uint16_t cols;
} CYCLOPS_Matrix;

where depth determines the size of the elements of the matrix (i.e., 1, 8 or 16 bits), data is the location in memory where the matrix resides, and rows and cols are the number of rows and columns in the matrix. Any library that implements a matrix operation typically supports matrices of different depths. For example, MatrixArithmetic supports addition of matrices of depth 8 and 16 (it is not defined for depth 1), while MatrixLogic supports matrices of depth 1, 8 and 16. In addition, the default behavior is that values are clipped when they overflow the depth of the output matrix.
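As an illustration of the clipping rule, a depth-8 addition might look like the following sketch (a flat-array stand-in, not the actual MatrixArithmetic code):

```c
#include <assert.h>
#include <stdint.h>

/* Element-wise addition of two depth-8 matrices (passed as flat
 * rows*cols arrays); sums that overflow 8 bits saturate at 255
 * instead of wrapping, matching the clipping behavior described in
 * the text. */
static void add8_clipped(const uint8_t *a, const uint8_t *b,
                         uint8_t *out, uint16_t rows, uint16_t cols)
{
    uint32_t n = (uint32_t)rows * cols;
    for (uint32_t i = 0; i < n; i++) {
        uint16_t sum = (uint16_t)(a[i] + b[i]);
        out[i] = (sum > 255) ? 255 : (uint8_t)sum;
    }
}
```

Saturating rather than wrapping keeps bright regions bright after accumulation, which matters for the threshold filters built on these primitives.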

Table-1 shows the current set of supported libraries and some of their associated operations. The Matrix library enables primitive matrix operations such as conversion of an image to a matrix (and the reverse). In addition, it can extract particular planes of an image, such as the individual RGB or YCbCr planes, in the form of a matrix, and it has primitives for extracting particular rows or columns of a matrix. The Histogram library generates a histogram of the input matrix; to adapt the histogram operation to the constrained Cyclops environment, it generates histograms with only 4, 8, 16 or 32 bins, which allows an efficient implementation. The Sobel library provides the derivative of a matrix vs. its rows or columns, implemented with the Sobel operator; it is useful in primitive edge detection applications. The Template library provides matrices with predetermined patterns; currently, only rectangular templates are implemented.

3.3.2 Advanced Libraries

• Background Subtraction: In many applications, objects appear on a largely stable background. Background subtraction is an important step in many image analysis and manipulation tasks [3]. The main issue is obtaining a good estimate of the background in the face of changes in illumination, moving objects and gradual environmental variation. One method that usually works fairly well is to estimate the value of the background pixels using a moving average filter. In this approach we estimate the background image as a weighted average of the previous background and the instantaneous image. More precisely:

Bn = λ × Bn−1 + (1 − λ) × Img

where Bn is the background image at the nth iteration of the algorithm, Bn−1 is the previous background image and Img is the instantaneous image. In our case, to further enhance the performance of the library, we pick λ = 0.25 so that the update can be implemented efficiently with shift and addition operations.
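One plausible shift-and-add realization of the λ = 0.25 update (our own decomposition; the paper does not give the exact code):

```c
#include <assert.h>
#include <stdint.h>

/* Shift-only realization of B_n = 0.25*B_{n-1} + 0.75*Img on an
 * 8-bit MCU: 0.25*bg is bg>>2 and 0.75*img is img - (img>>2), so no
 * multiply or divide is needed.  Integer truncation is the only
 * deviation from the exact formula. */
static uint8_t bg_update(uint8_t bg, uint8_t img)
{
    return (uint8_t)((bg >> 2) + (img - (img >> 2)));
}
```

Applied per pixel over the background matrix, this costs two shifts, one subtraction and one addition per pixel.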

• Co-ordinate Conversion: In any network-based application that uses multiple Cyclops devices, the results of individual Cyclops inferences must be interchanged across the network, and should therefore be interpreted in a unified world coordinate system. This library facilitates changing coordinates from the Cyclops coordinate system to a world coordinate system (and the reverse). It currently supports 2-dimensional coordinate conversion for translation and rotational adjustment.

[Figure: an object in the camera coordinate system is related to the world coordinate system by a translation and a rotation.]

4. PERFORMANCE ANALYSIS

Our analysis of the Cyclops platform focuses on the platform's power consumption and its temporal characteristics. Table-2 shows the different Cyclops power states, their associated active clock domains, and the sources that can trigger a transition in each state; it also shows the measured power consumption of Cyclops in each state. The normal Cyclops states are 1) Image capture, 2) Extended memory access, 3) Permanent memory access, 4) Microcontroller internal and 5) Sleep. During normal operation, based on I/O requests (i.e., transition sources), Cyclops transparently makes the resources available, and upon release of those resources it automatically relaxes to the lowest possible state. This process keeps Cyclops in the sleep state whenever no resources are in use.

In addition, Cyclops has two other sleep states, OFF and Shutdown. In the OFF state Cyclops' main MOSFET switch is off and the device is completely turned off; in this state Cyclops cannot be programmed until it is turned on by the host. The Shutdown state is the same as the sleep state except that Cyclops shuts down its external memory subsystem; since the external peripherals are off, the contents of the SRAM are not preserved. Further reduction in power (i.e., the Shutdown and OFF states) requires an explicit instruction from the application, since the contents of memory are lost in those states.

In practice, Cyclops energy consumption depends on the power consumption of the different states and their durations. In the case of the imager, the image capture time depends mainly on two parameters: 1) the input image size and 2) the ambient light intensity. The input image size determines the size of the sensor array window and the subsequent number of operations needed to convert each pixel to a digital value. The ambient light intensity determines the necessary exposure time of the sensor array. Fig-7(a) shows the effect of input image size on the capture operation at room light intensity; in all these cases we used the imager's automatic exposure feature. In our implementation the CPLD needs two video frames to synchronize with the imager video stream, which increases the cost of the capture operation significantly.

Fig-7(b) shows Cyclops timing for some structural libraries and Fig-7(c) shows Cyclops timing for some of our advanced libraries for different input matrix sizes. In our case, the timing of these libraries is determined almost linearly by the input matrix size. Most of these operations take place in the Cyclops extended memory access mode.

Mode                       Measured Power
Image Capture              42 mW
Extended Memory Access     W = 53 mW, R = 50 mW
Permanent Memory Access    W = 64.8 mW, R = 28 mW
MCU Internal               23 mW
Sleep                      0.7 mW
Shutdown                   60 µW
OFF                        ≤ 1 µW

Table 2: This table shows Cyclops' different states and the measured power consumption of the device in each. In the Image Capture, Sleep and Shutdown states the MCU is in power-down (PD) or power-save (PS) mode; the measurements reported are for PD mode. Each state is further characterized by its active clock domains (MCU, asynchronous CPLD, imager) and by the sources that can cause a transition out of it (MCU, I2C, timer, CPLD trigger); the timer interrupt is in effect only when Cyclops is in PS mode.

Figure 7: This figure shows the timing cost of different operations in Cyclops for input sizes of 1024, 4096 and 16384 pixels: a) frame time and capture time of images of different sizes, where frame time is the imager frame time and capture time is the Cyclops capture time (Active Eye delay); in all three cases the input and output window sizes are the same; b) structural library operations (and/or/xor, min/max, avg, add, sub, scale); c) more advanced operations (Sobel derivative, background subtraction, histogram with 8 and 16 bins).

Operation                   Telos       Mica2      MicaZ
Mote Standby (RTC on)       15.3 µW     57.0 µW    81.0 µW
MCU Idle                    163.5 µW    9.6 mW     9.6 mW
MCU Active                  5.4 mW      24.0 mW    24.0 mW
MCU + Radio RX              65.4 mW     45.3 mW    69.9 mW
MCU + Radio TX (0 dBm)      58.3 mW     76.2 mW    63.0 mW

Table 3: Power consumption of some well-known sensor network host nodes. These numbers are cited from [11] for a 3V battery.

Table-3 shows the power consumption of some well-known host nodes [11]. It shows that the power consumption of Cyclops in its different states is comparable to that of these host nodes. In particular, Cyclops' sleep power is comparable to a Mote's standby power, and its image capture power is comparable to the host's power during radio operation.

There are various improvements that we are considering for the Cyclops imaging stack. Improving the CPLD's synchronization with the imager would improve the timing (and energy cost) of capture significantly. In addition, exploiting more parallelism in the CPLD logic would reduce the number of CPLD clock cycles needed to transfer a pixel to the SRAM; this would let us increase the imager clock speed and enable a faster image capture operation.

5. PUTTING IT ALL TOGETHER

Now that we have shown a few sample components, we will examine their composition and interaction within an application.

5.1 Design Principles

In designing an algorithm for Cyclops, we try to achieve a low data profile in order to use its limited computational and memory resources efficiently. In our experience, the key issues in the development of a Cyclops application are:

• Data reduction: we keep the image size at the minimum the application warrants. This can be achieved by selecting the proper image aspect ratio and output resolution.

• Efficiency in computational arrangement: applications may arrange the computations that perform a particular inference on an image in different ways. We design our algorithms such that the arrangement of their computations results in rapid reduction of the data size along the flow of execution.

• Static computation: when possible, we perform a computation once and reuse its result(s) multiple times. For example, the results of the trigonometric functions used in coordinate conversion are saved for later reuse. In addition, many libraries embed computation in the form of static lookup tables to trade program memory space for computational speed.


We now discuss our test environment and two specific applications. While these applications do not constitute a comprehensive algorithmic study, they illustrate the capability and applicability of Cyclops.

5.2 Test Environment

Cyclops has a serial port that connects it directly to a PC. We have developed a TinyOS component (called serialDump) that enables an application to dump particular locations of memory over the serial port. For example, an application that tests a particular image processing library may dump the input image as well as the output results to a PC; this can be used to verify the performance of the algorithms on Cyclops. In cases where the algorithm involves more than one Cyclops, a serial expansion module can connect multiple Cyclops devices directly to a PC. In addition, there is a wireless version of the serial debugging component (called radioDump) that enables Cyclops to dump its memory contents over an RF radio link. In this case one Mote is attached to Cyclops and another Mote is attached to a MIB510 programming and serial communication board (from Crossbow Technology [4]); several wireless Mote pairs can be operated simultaneously on different frequencies.

[Figure: the test environment. (a) Multiple Cyclops devices (P) connected through a serial multiplexer and LAN to a PC with a programming board. (b) Cyclops devices paired with Motes (M) that relay data wirelessly to a Mote attached to the PC.]

In our experience, it is often more convenient to develop image processing routines in a high-level signal processing language. In such cases we save a series of image frames from Cyclops on a host PC; once the algorithms are developed, they can be ported to Cyclops. In practice, we found that these debugging capabilities provide 1) a rich set of data for designing an algorithm in a high-level language (e.g., Matlab) and 2) a high degree of visibility into the different stages of an algorithm once it is ported to Cyclops. We now describe the two applications that we have explored.

5.3 Object Detection

Identifying moving objects in a sequence of images is a fundamental and critical task in many sensing applications. A common approach is background subtraction, which identifies moving objects as the portion of an image frame that differs significantly from a background model. There are many challenges in developing a good background subtraction algorithm. First, it must be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects such as moving leaves, rain, snow, and shadows cast by moving objects. Finally, its internal background model should react quickly to changes in the background, such as the starting and stopping of vehicles.

There are many ways to approach this problem; we chose a basic method [12] to minimize the computation requirements. In our case (Fig-8), Cyclops periodically captures images of the outside world and constructs the stationary background as the moving average of the series of images. It further constructs a model of the foreground from the absolute difference of the instantaneous image and the constructed background; we take the absolute value to accommodate the fact that objects may be darker or brighter than the background. When there is no object in the scene such differences are minimal, but the presence of an object leads to substantial variation in the foreground. Using a threshold filter we then construct the object map. In practice, the absolute values of these variations depend on the illumination of the field, so we constantly estimate the intensity of the illumination and scale the threshold value with the background illumination. Finally, we apply a blob filter by counting the number of pixels that exceed a threshold count. In our scheme, we first find the location of the maximum value of the object map and extract a sub-matrix of its neighborhood. This reduces the data size significantly, at the expense of losing other objects in the foreground: we only detect the object with the biggest impact on the foreground image.

Figure 8: Diagram of the Cyclops object detection algorithm: periodic wake-up → GetImage → UpdateBackground → CalculateForeground → CalculateIlluminationThreshold → FindMaximum → extract the maximum-neighborhood matrix → count the number of pixels above the illumination threshold → if the count exceeds N, notify the host → sleep.
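The thresholding-and-counting core of this detection step might be sketched as follows; the illumination scale factor k/256 and the function name are assumptions, since the paper does not give the constant:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the detection core: foreground is |img - bg| per pixel;
 * a pixel counts as "object" if it exceeds a threshold scaled by
 * the estimated illumination (scale factor k/256 is an assumption).
 * An object is reported when more than minCount pixels qualify. */
static int detect_object(const uint8_t *img, const uint8_t *bg,
                         uint32_t n, uint8_t illumination,
                         uint8_t k, uint32_t minCount)
{
    uint8_t thresh = (uint8_t)(((uint16_t)illumination * k) >> 8);
    uint32_t count = 0;
    for (uint32_t i = 0; i < n; i++) {
        uint8_t diff = (uint8_t)abs((int)img[i] - (int)bg[i]);
        if (diff > thresh)
            count++;
    }
    return count > minCount;
}
```

In the actual pipeline the count is taken only over the sub-matrix around the foreground maximum, which bounds both memory and time.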

To test the performance of our algorithm we deployed Cyclops outdoors on the UCLA campus. Cyclops periodically sleeps for 15-second intervals and executes the algorithm after each wake-up. It constantly updates an attached computer with its instantaneous image as well as its instantaneous background and foreground images, and it notifies the attached computer whenever it detects an object in the scene. Fig-9 shows our results for six consecutive observations: the top row contains the instantaneous Cyclops images, followed by the updated background and foreground images, and the bottom row shows the Cyclops foreground map in a perspective view. In our experiments Cyclops was correct 78.4% of the time; in the remaining 21.6% of cases it reported a false positive or a false negative. We also measured the execution time of our object detection algorithm: it runs in 240 ms, 60.8 ms and 16.8 ms for image sizes of 128 × 128, 64 × 64 and 32 × 32, respectively.

While these experiments are far from comprehensive, they showcase the potential capabilities of Cyclops. In our experience, the background model was in many cases polluted by the foreground objects in the scene. To remedy this we can drop consecutive frames with the same object locations, and we can add feedback from the object detection algorithm to the background update model to further reduce this pollution effect. In these experiments the image size was 128 × 128 pixels. In practice the aspect ratio of the scene can be matched to the expected object trajectory; for example, if the goal is detecting humans, we do not expect a human to appear in the upper part of the image, and we can use images with a modified aspect ratio to further reduce the data size and the execution time of the algorithm. Finally, our blob filter(s) can be proportioned to the aspect ratio of the expected objects (e.g., a human body).


Figure 9: In this figure the rows from the top are: I) Cyclops instantaneous images, II) its constructedbackground, III) its constructed foreground and IV) a perspective plot of the foreground. This data has beengenerated by constantly dumping the three images from Cyclops to an attached logging computer.

5.4 Hand Posture Recognition

Visual interpretation of hand gestures is a convenient method for human-computer interaction (HCI). It can be exploited for contact-less command and control of devices and systems. There are two categories of such interactions: static and dynamic. A static gesture is a particular hand configuration and pose represented by a single image. A dynamic gesture is a moving gesture, represented by a sequence of images. Here, we focus on the applicability of Cyclops to the recognition of static gestures.

Gesture recognition is a very hard problem, particularly when the hand occupies only a small portion of the image; in most cases, the hand must dominate the image for simple statistical techniques to work. In our case, if Cyclops is used in a network, we can assume that thanks to the network link it can be kept in close vicinity of the human hand. Even at such close distances, designing a vision algorithm that performs reliably for different people, under different illumination conditions, and across translational changes is challenging. In addition, the algorithm should run fast enough that the user senses no appreciable delay between making a gesture and the response of the system. While there are several variants of such algorithms in the literature, we chose a technique based on the orientation histogram of the image [13] [14]. The main incentive for choosing this technique is achieving high-speed performance with reasonable quality (i.e., for a limited variation of poses). In our case, Cyclops performs pattern recognition using a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. We use the Euclidean distance between the features of the input gesture and the set of trained features and find the closest candidate. Our transform is the orientation histogram:

hist( arctan( (dImg/dx) / (dImg/dy) ) )

where the orientation term provides robustness to illumination variation by calculating the dominant orientation texture of the image, and the histogram term provides translational invariance. Fig-10 shows a block diagram of our system. We implemented the derivative components using the Sobel derivative mask, and to further speed up the calculation we used the absolute value of the Sobel transform. We also used a lookup table for the "arctan" operation for a faster response time, and the Cyclops histogram library with 32 bins for calculating the orientation histogram. To avoid any floating point operations we kept all intermediate matrices in 8-bit integer format. In addition, to increase the performance of our computation we kept the result of each computation within the maximum dynamic range of the following block of computation. For example, since the value of the "arctan" operation ranges from 0 to 1.570796, in the implementation of its lookup table we scale its output to the range 0-255. This improves the performance of the histogram operation, which has an input dynamic range of 0-255.

Figure 10: Diagram of the Cyclops gesture recognition algorithm: GetImage → calculate the Sobel derivatives vs. X and Y → compute the ratio of the absolute derivatives → look up its arctan → calculate the histogram → find the closest match in the album → notify the host → sleep.

To test the performance of our algorithm, we applied Cyclops to an album of 10 static hand gestures from a subset of American Sign Language (ASL). We divided the album into training and test data sets, calculated the orientation histograms of the training images, and used their average as the representative of each family. We then compared the recognition performance on the test samples and picked a subset of the best performing gestures as the Cyclops gesture vocabulary. Finally, we implemented the algorithm on Cyclops and tested the recognition performance of the gestures in real time. Fig-11 shows the gestures observed by Cyclops and their associated orientation histograms; this data was generated by constantly dumping the image Cyclops observes as well as its orientation histogram and the detected gesture. In practice, for image sizes of 64 × 64 pixels, a vocabulary of 5 gestures and constant illumination, we achieved 92% successful recognition and 8% failure; the execution time of the computation was 448 ms. We leave further improvement and analysis of the performance of our algorithm to our continuing research.
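The closest-candidate search (Euclidean distance against the trained feature vectors) might be sketched as follows; the function name and integer representation are our assumptions:

```c
#include <assert.h>
#include <stdint.h>

#define FEAT 32   /* 32-bin orientation histogram as feature vector */

/* Nearest-candidate search: index of the album entry with the
 * smallest squared Euclidean distance (squaring preserves the
 * argmin, so no square root is needed) from the observed vector. */
static int closest_gesture(const uint16_t obs[FEAT],
                           const uint16_t album[][FEAT], int nGestures)
{
    int best = 0;
    uint32_t bestDist = 0xFFFFFFFFu;
    for (int g = 0; g < nGestures; g++) {
        uint32_t d = 0;
        for (int i = 0; i < FEAT; i++) {
            int32_t diff = (int32_t)obs[i] - (int32_t)album[g][i];
            d += (uint32_t)(diff * diff);
        }
        if (d < bestDist) {
            bestDist = d;
            best = g;
        }
    }
    return best;
}
```

Skipping the square root keeps the comparison in integer arithmetic, in keeping with the no-floating-point design of the rest of the pipeline.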

6. RELATED WORK

Recently, low power CMOS imagers have been the subject of tremendous attention [6] [7] [15] [16] [17]. CMOS imagers provide a low power alternative to the charge-coupled device (CCD) imagers that have dominated high quality imaging technology for the past decades. A large amount of research into improving the quality of CMOS imagers while reducing their size and power consumption has led to significant growth of CMOS technology in commercial devices and in miniature imaging platforms. The commercialization of these products, however, has been dominated by the requirements of the market. This has led to a class of devices that are designed to provide human-pleasing images [18] [19] [20] [21] and to integrate with the powerful processors in cell phones and personal digital assistants (PDAs), which have sufficient memory and clock speed. This is a serious obstacle to using them with less capable devices; in addition, these devices require a complex programming interface. While Cyclops uses a CMOS imager, it hides the complexity of the device from the host processor by providing a simple interface. In addition, it performs internal computation to present a form of inference from the image to the host.

In terms of system integration, low power CMOS video chips for portable multimedia have been investigated in the literature. In this research the goal is the provision of video stream data at the lowest possible power, using a high degree of compression and maximum integration of the imager, signal processing and radio transmission components. This often leads to extremely low power transmission of streaming video in portable devices [22] [23]. The result of this high level of integration, however, is minimal access to the lower layers and a lack of flexibility. This is in contrast with the goal of Cyclops, which is to give the user access to design a sensor for the required imaging operation. In addition, these highly integrated devices are link-layer components that cannot be used in a network with on-demand image capture and interpretation requirements.

Finally, distributed image processing systems have recently gained a new level of attention due to the ubiquitous availability of imagers such as webcams. Due to the high power consumption of these imaging devices and their complex programming requirements, many of these systems exploit computationally capable hosts. These hosts are often lightweight embedded computers with permanent power requirements and a high cost per node, which fundamentally limits the scale of the experiments. Examples are distributed attention, surveillance and monitoring systems, often combined with some form of actuation of the imagers [25] [26] [24]; in these efforts, the goal of the system is detecting interesting objects or events with maximum efficiency (i.e., coverage and minimum false positives). Other examples are animal surveillance systems, where the goal is monitoring the health and condition of animals, particularly in the context of biological experiments [27]. In all these cases, the dimensions of the deployment, the complexity of the distributed algorithm and the power availability assumptions were based on the available classes of imagers and the accompanying computers. Cyclops, on the other hand, enables much larger scale experiments, but with a much more limited amount of available computation. This is an entirely new regime of operation that has not been explored before.

7. CONCLUSION AND FUTURE WORK

In this paper we presented a vision sensor for wireless sensor networks that performs local image capture and analysis. We introduced Cyclops, explained its hardware and software architecture, and provided its power characteristics and temporal profile. We further demonstrated its applicability to object detection and gesture recognition. Throughout this paper we used the dedicated logic (CPLD) only for frame capture; such dedicated logic resources can be used for further reductions in processing time and power, particularly for operations that can be pipelined with the image capture operation.

Cyclops makes it possible to create new vision applica-tions. It can be used in conjunction with other sensors todisambiguate ambient conditions or detect motion in secu-rity or surveillance applications. It can detect changes insize or shape of the objects for applications such as systemmonitoring and diagnostics or for phenology and habitatmonitoring.

While our initial experiments in the application domain are encouraging, they are far from comprehensive and will be the subject of our ongoing research. In particular, future work will focus on collaborative in-network applications such as multi-Cyclops object detection in different coverage scenarios, hierarchical vision applications and more complex multi-modal sensing scenarios.


Figure 11: In this figure the rows from the top are: I) Cyclops instantaneous images, II) its constructed Sobel derivatives in the vertical and III) horizontal directions and IV) the polar view of its orientation histogram. This data has been generated by constantly dumping the images, their derivatives and their orientation histograms from Cyclops to an attached logging computer.

8. ACKNOWLEDGEMENT

This work was made possible with support from the Center for Embedded Networked Sensing (CENS) under National Science Foundation (NSF) Cooperative Agreement CCR-0120778 and support from Agilent Technology.

9. REFERENCES

[1] Estrin D., Govindan R., Heidemann J. and Kumar S., "Next century challenges: Scalable coordination in sensor networks", Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), 1999.

[2] Cerpa A., Elson J., Estrin D., Girod L., Hamilton M. and Zhao J., "Habitat monitoring: Application driver for wireless communications technology", ACM SIGCOMM Workshop on Data Communications in Latin America and the Caribbean, Costa Rica, 2001.

[3] Forsyth D. and Ponce J., "Computer Vision: A Modern Approach", Prentice-Hall, 2001.

[4] Crossbow Technology, “http://www.xbow.com”

[5] Ember Corporation,“http://www.ember.com/index.html”

[6] El Gamal A., “Trends in CMOS Image SensorTechnology and Design”, International ElectronDevices Meeting Digest of Technical Papers, 2002.

[7] Hurwitz J., Smith S.G., Murray A.A., Denyer P.B.,Thomson J., Anderson S., “miniature imaging modulefor mobile applications”, ISSCC Digest of TechnicalPapers, 2001.

[8] Atmel Corporation,“http://www.atmel.com/products/avr”

[9] Gay D., Levis P., Behren R.V., Welsh M., Brewer E.and Culler D., “The nesC Language: A HolisticApproach to Networked Embedded Systems”,Programming Language Design and Implementation(PLDI), 2003.

[10] Tinyos: An operating system for networked sensors,“http://tinyos.millennium.berkeley.edu”.

[11] Polastre J., Szewczyk R., Culler D., “Telos: EnablingUltra-Low Power Wireless Research”, The FourthInternational Conference on Information Processing inSensor Networks: Special track on Platform Tools andDesign Methods for Network Embedded Sensors, 2005.

[12] Cheung S.C. and Kamath C., “Robust techniques forbackground subtraction in urban traffic video”, Video

Page 13: Cyclops: In Situ Image Sensing and Interpretation in ... · Cyclops sensor and analyzes its characteristics. It further discuss Cyclops software framework, provides timing analy-sis

Communications and Image Processing, SPIEElectronic Imaging, San Jose, 2004.

[13] William T. Freeman, Michal Roth “OrientationHistograms for Hand Gesture Recognition” IEEE Intl.Workshop on Automatic Face and Gesture Recognition,Zurich, 1995.

[14] Freeman W.T., Anderson D.B., Beardsley P.A., DodgeC.N., Roth M., Weissman C.D., Yerazunis W.S., KageH., Kyuma K.,Miyake Y. and Tanaka K., “ComputerVision for Interactive Computer Graphics”, IEEEComputer Graphics and Applications, Vol. 18, 1998.

[15] Rhodes H., Agranov G., Hong C., Boettiger U.,Mauritzson R., Ladd J., Karasev I., McKee J., JenkinsE., Quinlin W., Patrick I., Li J., Fan X., Panicacci R.,Smith S., Mouli C., and Bruce J., “CMOS ImagerTechnology Shrinks and Image Performance”, MicronTechnology, Inc., 8000 S. Federal Way, Boise, Idaho.

[16] Fossum E.R., “Digital Camera System on a Chip”,IEEE Computer Society Press ,Los Alamitos, CA,USA, V. 18 ,I. 3, 1998.

[17] Findlater K.M., “A CMOS camera employing a doublejunction active pixel”, Thesis submitted for the degreeof Doctor of Philosophy, University of Edinburgh, 2001.

[18] STMicroelectronics, “http://www.st.com/”

[19] Micron, “http://www.micron.com/”

[20] Agilent Technologies, “http://www.agilent.com/”

[21] OmniVision Technologies Inc.,“http://www.ovt.com/index.html”

[22] Chandrakasan A., Burstein A. and Robert BrodersenW., “A low power chipset for portable multimediaapplications”, IEEE Int. Solid-State CircuitsConference, 1994.

[23] Simon T. , Chandrakasan A.P., “An Ultra Low PowerAdaptive Wavelet Video Encoder with IntegratedMemory”, IEEE Journal of Solid-State Circuits, 2000.

[24] Collins, Lipton and Kanade, “A System for VideoSurveillance and Monitoring”, American NuclearSociety (ANS) Eighth International Topical Meeting onRobotics and Remote Systems, Pittsburgh, April 1999.

[25] Chu M., Reich J.E., Zhao F., “Distributed Attentionfor Large Video Sensor Networks”, IntelligentDistributed Surveillance Systems seminar, London, UK.2004.

[26] Kansal A., Yuen E., Kaiser W., Pottie G. andSrivastava M., “Sensing Uncertainty Reduction UsingLow Complexity Actuation”, ACM Third InternationalSymposium on Information Processing in SensorNetworks, 2004.

[27] Belongie S., Branson K., Dollr P. and Rabaud V.“Monitoring Animal Behavior in the Smart Vivarium”,Measuring Behavior, Wageningen, The Netherlands,2005.