Lecture 2: Software Platforms
Anish Arora
CIS788.11J
Introduction to Wireless Sensor Networks
Lecture uses slides from tutorials prepared by authors of these platforms
2
Outline
• Discussion includes not only operating systems but also programming methodology; some environments focus more on one than the other
• Focus here is on node centric platforms (versus distributed system centric platforms)
• Platforms
  TinyOS (applies to XSMs): slides from Culler et al.
  EmStar (applies to XSSs): slides from UCLA
  SOS
  Contiki
  Virtual machines (Maté)
  TinyCLR
3
References
• NesC
• The Emergence of Networking Abstractions and Techniques in TinyOS
• EmStar: An Environment for Developing Wireless Embedded Systems Software
• TinyOS webpage
• EmStar webpage
4
Traditional Systems
• Well established layers of abstractions
• Strict boundaries
• Ample resources
• Independent applications at endpoints communicate pt-pt through routers
• Well attended
[Figure: traditional layered system. User applications sit above a system layer of threads, address spaces, files, drivers, and a network stack (transport, network, data link, physical); applications at the endpoints communicate point-to-point through routers]
5
Sensor Network Systems
• Highly constrained resources: processing, storage, bandwidth, power, limited hardware parallelism, relatively simple interconnect
• Applications spread over many small nodes: self-organizing collectives, highly integrated with a changing environment and network, diversity in design and usage
• Concurrency intensive in bursts: streams of sensor data & network traffic
• Robust: inaccessible, critical operation
• Unclear where the boundaries belong
Need a framework for:
• Resource-constrained concurrency
• Defining boundaries
• Appl'n-specific processing
allowing abstractions to emerge
6
Choice of Programming Primitives
• Traditional approaches:
  command processing loop (wait for request, act, respond)
  monolithic event processing
  full thread/socket POSIX regime
• Alternative: provide a framework for concurrency and modularity
  never poll, never block
  interleave flows and events
(the traditional command-processing loop is sketched below for contrast)
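As an illustration (not from the slides), a minimal C sketch of the traditional command-processing loop the first bullet describes: block waiting for a request, act, respond. The request source is a stub invented for the example; a real server would read from a socket, queue, or device. Contrast this with the never-block, interleaved style sketched on the Task Scheduling slide.

/* Illustrative sketch only: the traditional blocking command-processing loop.
 * wait_for_request() is a stand-in for a blocking receive. */
#include <stdio.h>

struct request { int cmd; int arg; };

/* Stub: pretend three requests arrive, then stop. */
static int wait_for_request(struct request *r) {
  static int n = 0;
  if (n >= 3) return 0;            /* no more requests */
  r->cmd = 1; r->arg = n++;        /* canned request */
  return 1;                        /* a real call would block here */
}

static int perform(const struct request *r) { return r->cmd + r->arg; }

int main(void) {
  struct request r;
  while (wait_for_request(&r)) {   /* node is idle (blocked) between requests */
    int status = perform(&r);
    printf("respond: %d\n", status);
  }
  return 0;
}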
7
TinyOS
• Microthreaded OS (lightweight thread support) and efficient network interfaces
• Two-level scheduling structure: long-running tasks that can be interrupted by hardware events
• Small, tightly integrated design that allows crossover of software components into hardware
8
Tiny OS Concepts
• Scheduler + graph of components
  constrained two-level scheduling model: threads + events
• Component:
  commands
  event handlers
  frame (storage)
  tasks (concurrency)
• Constrained storage model: frame per component, shared stack, no heap
• Very lean multithreading
• Efficient Layering
Messaging Component (example)
[Figure: the messaging component provides commands init, power(mode), send_msg(addr, type, data) and signals events msg_rec(type, data), msg_send_done(); it uses lower-level commands init, power(mode), TX_packet(buf) and handles lower-level events TX_packet_done(success), RX_packet_done(buffer); it holds internal state and an internal thread]
9
Application = Graph of Components
[Figure: component graph for the example application. A sensor application with route map and router sits over Active Messages, Radio Packet and Serial Packet, Radio byte and UART, RFM, and ADC (temp, photo), with a clock, spanning bit, byte, and packet levels across the HW/SW boundary]
Example: ad hoc, multi-hop routing of photo sensor readings
3450 B code, 226 B data
Graph of cooperating state machines on a shared stack
10
TOS Execution Model
• commands: request action; ack/nack at every boundary; may call commands or post tasks
• events: notify occurrence; HW interrupt at the lowest level; may signal events, call commands, post tasks
• tasks: provide logical concurrency; preempted by events
[Figure: execution stack. RFM (event-driven bit-pump), Radio byte (event-driven byte-pump, encode/decode), Radio Packet (event-driven packet-pump, CRC), message-event-driven Active Messages, and the application component (data processing), spanning bit, byte, and packet levels]
11
Event-Driven Sensor Access Pattern
• clock event handler initiates data collection
• sensor signals data-ready event
• data event handler calls output command
• device sleeps or handles other activity while waiting
• conservative send/ack at component boundary
command result_t StdControl.start() {
  return call Timer.start(TIMER_REPEAT, 200);
}
event result_t Timer.fired() {
  return call sensor.getData();
}
event result_t sensor.dataReady(uint16_t data) {
  display(data);
  return SUCCESS;
}
[Figure: SENSE application wiring the Timer, Photo, and LED components]
12
TinyOS Commands and Events
Calling a command:        {... status = call CmdName(args) ...}
Implementing a command:   command CmdName(args) {... return status; }
Signaling an event:       {... status = signal EvtName(args) ...}
Implementing its handler: event EvtName(args) {... return status; }
13
TinyOS Execution Contexts
• Events generated by interrupts preempt tasks
• Tasks do not preempt tasks
• Both essentially process state transitions
[Figure: execution contexts. Hardware interrupts generate events; events flow upward and commands flow downward; tasks run below the event level]
14
Tasks
• provide concurrency internal to a component: longer-running operations
• are preempted by events
• are able to perform operations beyond event context
• may call commands
• may signal events
• are not preempted by tasks
{...post TskName();...}
task void TskName() {...}
15
Typical Application Use of Tasks
• event-driven data acquisition
• schedule a task to do the computational portion
event result_t sensor.dataReady(uint16_t data) {
putdata(data);
post processData();
return SUCCESS;
}
task void processData() {
int16_t i, sum=0;
  for (i = 0; i < maxdata; i++)
    sum += (rdata[i] >> 7);
  display(sum >> shiftdata);
}
• 128 Hz sampling rate
• simple FIR filter
• dynamic software tuning for centering the magnetometer signal (1208 bytes)
• digital control of analog, not DSP
• ADC (196 bytes)
16
Task Scheduling
• Currently a simple FIFO scheduler
• Bounded number of pending tasks
• When idle, shuts down the node except for the clock
• Uses a non-blocking task queue data structure (see the sketch below)
• Simple event-driven structure + control over the complete application/system graph, instead of complex task priorities and IPC
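A minimal C sketch (not TinyOS source) of the bounded, non-blocking FIFO task queue described above. MAX_TASKS, post_task, and run_next_task are names invented for the example, and interrupt masking is stubbed; a real port would use the MCU's interrupt-disable instructions.

/* Illustrative sketch: bounded FIFO task queue; posting never blocks. */
#include <stdint.h>
#include <stdio.h>

typedef void (*task_t)(void);

#define MAX_TASKS 8                     /* bounded number of pending tasks */
static task_t queue[MAX_TASKS];
static uint8_t head, tail, count;

static void interrupts_off(void) { /* platform-specific, stubbed */ }
static void interrupts_on(void)  { /* platform-specific, stubbed */ }

/* post a task: fails (returns 0) instead of blocking when the queue is full */
int post_task(task_t t) {
  int ok = 0;
  interrupts_off();
  if (count < MAX_TASKS) {
    queue[tail] = t;
    tail = (tail + 1) % MAX_TASKS;
    count++;
    ok = 1;
  }
  interrupts_on();
  return ok;
}

/* run the next pending task to completion; returns 0 when idle
 * (a real scheduler would then sleep the node, leaving the clock on) */
int run_next_task(void) {
  task_t t = 0;
  interrupts_off();
  if (count > 0) {
    t = queue[head];
    head = (head + 1) % MAX_TASKS;
    count--;
  }
  interrupts_on();
  if (!t) return 0;
  t();                                  /* tasks run to completion; not preempted by other tasks */
  return 1;
}

static void hello_task(void) { puts("task ran"); }

int main(void) {
  post_task(hello_task);
  while (run_next_task()) { }           /* drain the queue, then "sleep" */
  return 0;
}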
17
Maintaining Scheduling Agility
• Need logical concurrency at many levels of the graph
• While meeting hard timing constraints: sample the radio in every bit window
Retain the event-driven structure throughout the application
Tasks extend processing outside the event window
All operations are non-blocking
18
The Complete Application
[Figure: full component graph. SenseToRfm and IntToRfm over AMStandard (generic comm), Timer, photo, ADC, ClockC, MicaHighSpeedRadioM (ChannelMon, RadioTiming, SecDedEncode, RandomLFSR, SPIByteFIFO, SlavePin), CRCfilter, RadioCRCPacket, noCRCPacket, UART, and UARTnoCRCPacket, spanning bit, byte, and packet levels across the HW/SW boundary]
19
Programming Syntax
• TinyOS 2.0 is written in an extension of C, called nesC
• Applications are, too: just additional components composed with OS components
• Provides syntax for the TinyOS concurrency and storage model: commands, events, tasks; local frame variables
• Compositional support: separation of definition and linkage; robustness through narrow interfaces and reuse; interpositioning
• Whole system analysis and optimization
20
Components
• A component specifies a set of interfaces by which it is connected to other components: it provides a set of interfaces to others and uses a set of interfaces provided by others
• Interfaces are bidirectional: they include commands and events
• Interface methods are the external namespace of the component
Timer Component (example)
  provides interface StdControl;
  provides interface Timer;
  uses interface Clock;
21
Component Interface
• logically related set of commands and events
StdControl.nc
interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}
Clock.nc
interface Clock {
command result_t setRate(char interval, char scale);
event result_t fire();
}
22
Component Types
• Configurations: link together components to compose a new component; configurations can be nested; the complete "main" application is always a configuration
• Modules: provide code that implements one or more interfaces and internal behavior
23
Example of Top Level Configuration
configuration SenseToRfm {
  // this module does not provide any interface
}
implementation {
  components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToRfm;
  SenseToInt.Clock -> ClockC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToRfm;
}
[Figure: wiring diagram. Main.StdControl is wired to SenseToInt and IntToRfm; SenseToInt's Clock is wired to ClockC, its ADC and ADCControl to Photo, and its IntOutput to IntToRfm]
24
Nested Configuration
includes IntMsg;
configuration IntToRfm {
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation {
  components IntToRfmM, GenericComm as Comm;

  IntOutput = IntToRfmM;
  StdControl = IntToRfmM;
  IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG];
  IntToRfmM.SubControl -> Comm;
}
[Figure: IntToRfmM provides StdControl and IntOutput; its SubControl and SendMsg[AM_INTMSG] interfaces are wired to GenericComm]
25
IntToRfm Module
includes IntMsg;
module IntToRfmM {
  uses {
    interface StdControl as SubControl;
    interface SendMsg as Send;
  }
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation {
  bool pending;
  struct TOS_Msg data;
command result_t StdControl.init() { pending = FALSE; return call SubControl.init(); }
command result_t StdControl.start()
{ return call SubControl.start(); }
command result_t StdControl.stop()
{ return call SubControl.stop(); }
command result_t IntOutput.output(uint16_t value)
{
...
if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
return SUCCESS;
...
}
event result_t Send.sendDone(TOS_MsgPtr msg, result_t success)
{
...
}
}
26
Atomicity Support in nesC
• Split phase operations require care to deal with pending operations
• Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (preemptible by an event that then uses that state)
• nesC supports atomic blocks: implemented by turning off interrupts; for efficiency, no calls are allowed inside the block; access to a shared variable outside an atomic block is not allowed (see the sketch below)
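A hedged C sketch of what an atomic block boils down to: interrupts are masked around a short, call-free access to shared state and then restored. The irq_disable_save/irq_restore helpers are placeholders, not nesC or TinyOS APIs.

/* Illustrative sketch of atomicity by interrupt masking. */
#include <stdint.h>

static volatile uint8_t shared_flag;    /* state shared with an interrupt handler */

static uint8_t irq_disable_save(void) { /* placeholder: save state, mask interrupts */ return 0; }
static void    irq_restore(uint8_t s)  { (void)s; /* placeholder: restore saved state */ }

/* task-context code: test-and-set the flag without being preempted */
int try_claim(void) {
  int won = 0;
  uint8_t saved = irq_disable_save();   /* begin "atomic": no event can preempt */
  if (!shared_flag) {                   /* short, call-free critical section */
    shared_flag = 1;
    won = 1;
  }
  irq_restore(saved);                   /* end "atomic" */
  return won;
}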
27
Supporting HW Evolution
• Distribution broken into:
  apps: top-level applications
  tos:
    lib: shared application components
    system: hardware-independent system components
    platform: hardware-dependent system components (includes HPLs and hardware.h)
    interfaces
  tools: development support tools
  contrib
  beta
• Component design so HW and SW look the same; example: temp component
  may abstract a particular channel of the ADC on the microcontroller
  may be a SW I2C protocol to a sensor board with a digital sensor or ADC
• HW/SW boundary can move up and down with minimal changes
28
Example: Radio Byte Operation
• Pipelines transmission: transmits a byte while encoding the next byte (see the sketch after the figure)
• Trades 1 byte of buffering for an easy deadline
• Encoding task must complete before byte transmission completes
• Separates high-level latencies from low-level real-time requirements
• Decode must complete before the next byte arrives
[Figure: timing diagram. While the RFM shifts out the bits of byte N, the encode task for byte N+1 runs; each encode must finish before the current byte's transmission completes]
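An illustrative C sketch (not the actual radio stack) of the one-byte pipeline described above: while the current byte is shifted out, the next byte is encoded into a spare buffer, and the byte-done event swaps the buffers. encode() and tx_start() are placeholders for the SECDED encoder and the bit-level transmitter.

/* Illustrative sketch: encode-while-transmit with one byte of buffering. */
#include <stdint.h>

static uint8_t tx_buf;        /* byte currently being transmitted */
static uint8_t next_buf;      /* encoded byte waiting its turn */
static int     next_ready;    /* deadline: must be set before tx completes */

static uint8_t encode(uint8_t raw) { return raw; /* placeholder encoder */ }
static void    tx_start(uint8_t b) { (void)b;    /* placeholder: start shifting bits */ }

/* "command": accept the next raw byte; in TinyOS this work runs in a posted task */
void byte_send(uint8_t raw) {
  next_buf = encode(raw);
  next_ready = 1;
}

/* "event": the bit-level pump finished shifting out tx_buf */
void tx_byte_done(void) {
  if (next_ready) {           /* encode met its deadline */
    tx_buf = next_buf;
    next_ready = 0;
    tx_start(tx_buf);         /* keep the pipeline full */
    /* then signal "byte done" upward so the packet layer hands over another byte */
  }
}

int main(void) {
  byte_send(0x41);            /* encode the first byte */
  tx_byte_done();             /* pretend the previous byte finished: pipeline advances */
  return 0;
}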
29
Dynamics of Events and Threads
[Figure: a bit event is filtered at the byte layer; a bit event leads to end of byte, then end of packet, then end of message send; a task is posted to start sending the next message; the radio takes clock events to detect receive]
30
Sending a Message
bool pending;
struct TOS_Msg data;

command result_t IntOutput.output(uint16_t value) {
  IntMsg *message = (IntMsg *)data.data;
  if (!pending) {
    pending = TRUE;
    message->val = value;
    message->src = TOS_LOCAL_ADDRESS;
    if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))  // destination, length
      return SUCCESS;
    pending = FALSE;
  }
  return FAIL;
}
• Refuses to accept the command if the buffer is still full or the network refuses to accept the send command
• The user component provides structured message storage
31
Send done Event
• Send-done event fans out to all potential senders
• Originator determined by match: free the buffer on success, retry or fail on failure
• Others use the event to schedule pending communication
event result_t IntOutput.sendDone(TOS_MsgPtr msg, result_t success) {
  if (pending && msg == &data) {
    pending = FALSE;
    signal IntOutput.outputComplete(success);
  }
  return SUCCESS;
}
32
Receive Event
• Active message is automatically dispatched to its associated handler: the handler knows the format (no run-time parsing) and performs an action on the message event
• Must return a free buffer to the system: typically the incoming buffer, if processing is complete
event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) {
  IntMsg *message = (IntMsg *)m->data;
  call IntOutput.output(message->val);
  return m;
}
33
Tiny Active Messages
• Sending: declare buffer storage in a frame; request transmission; name a handler; handle the completion signal
• Receiving: declare a handler; firing a handler is automatic
• Buffer management: strict ownership exchange; tx: send-done event returns the buffer for reuse; rx: must return a buffer
34
Tasks in Low-level Operation
• transmit packet: the send command schedules a task to calculate the CRC; the task initiates the byte-level data pump; events keep the pump flowing (see the sketch below)
• receive packet: the receive event schedules a task to check the CRC; the task signals packet-ready if OK
• byte-level tx/rx: a task is scheduled to encode/decode each complete byte; it must take less time than the byte data transfer
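A sketch, in C, of the transmit-side step above: a task computes the packet CRC and then starts the byte-level pump. The CRC routine is a generic 16-bit CCITT-style loop, not the TinyOS implementation, and the packet layout is an assumption for the example; in TinyOS the send command would post this as a task (cf. the task-queue sketch), while here main simply calls it.

/* Illustrative sketch: CRC computed in a task before the byte pump starts. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define PKT_LEN 36
static uint8_t pkt[PKT_LEN];            /* assumed layout: payload then 2 CRC bytes */

static uint16_t crc16(const uint8_t *p, size_t n) {
  uint16_t crc = 0;
  while (n--) {
    crc ^= (uint16_t)(*p++) << 8;
    for (int i = 0; i < 8; i++)
      crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021) : (uint16_t)(crc << 1);
  }
  return crc;
}

/* task body: runs outside interrupt context, preemptible by events */
void crc_task(void) {
  uint16_t crc = crc16(pkt, PKT_LEN - 2);
  pkt[PKT_LEN - 2] = (uint8_t)(crc >> 8);
  pkt[PKT_LEN - 1] = (uint8_t)(crc & 0xFF);
  /* then hand the first byte to the byte-level data pump;
   * subsequent bytes flow from the byte-done event (see earlier sketch) */
}

int main(void) {
  crc_task();
  printf("crc = %02x%02x\n", pkt[PKT_LEN - 2], pkt[PKT_LEN - 1]);
  return 0;
}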
35
TinyOS tools
• TOSSIM: a simulator for TinyOS programs
• ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from the base node
• Oscilloscope: Java tool to visualize (sensor) data in real time
• Memory usage: breaks down memory usage per component (in contrib)
• Peacekeeper: detects RAM corruption due to stack overflows (in lib)
• Stopwatch: tool to measure the execution time of a code block by timestamping at entry and exit (in the OSU CVS server)
• Makedoc and graphviz: generate and visualize component hierarchy
• Surge, Deluge, SNMS, TinyDB
36
Scalable Simulation Environment
• target platform: TOSSIM
  whole application compiled for the host's native instruction set
  event-driven execution mapped into the event-driven simulator machinery
  storage model mapped to thousands of virtual nodes
• radio model and environmental model plugged in: bit-level fidelity
• Sockets = basestation
• Complete application including GUI
37
Simulation Scaling
38
TinyOS Limitations
• Static allocation allows for compile-time analysis, but can make programming harder
• No support for heterogeneity:
  support for other platforms (e.g. Stargate)
  support for high-data-rate apps (e.g. acoustic beamforming)
  interoperability with other software frameworks and languages
• Limited visibility:
  debugging
  intra-node fault tolerance
• Robustness is solved in the details of the implementation; nesC offers only some types of checking
39
Em*
• Software environment for sensor networks built from Linux-class devices
• Claimed features:
  simulation and emulation tools
  modular, but not strictly layered, architecture
  robust, autonomous, remote operation
  fault tolerance within a node and between nodes
  reactivity to dynamics in environment and task
  high visibility into the system: interactive access to all services
40
Contrasting EmStar and TinyOS
• Similar design choices:
  programming framework: component-based design, "wiring together" modules into an application
  event-driven: reactive to "sudden" sensor events or triggers
  robustness: nodes/system components can fail
• Differences:
  hardware platform-dependent constraints: EmStar develops without optimization; TinyOS develops under severe resource constraints
  operating system and language choices: EmStar uses the easy-to-use C language, tightly coupled to Linux (devfs); TinyOS uses an extended C compiler (nesC) and is not wedded to any OS
41
Em* Transparently Trades-off Scale vs. Reality
Em* code runs transparently at many degrees of “reality”: high visibility debugging before low-visibility deployment
[Figure: scale vs. reality continuum, from pure simulation through data replay, portable array, and ceiling array to deployment]
42
Em* Modularity
• Dependency DAG
• Each module (service)
  manages a resource & resolves contention
  has a well-defined interface
  has a well-scoped task
  encapsulates mechanism
  exposes control of policy
  minimizes work done by the client library
• Application has same structure as “services”
[Figure: dependency DAG. A collaborative sensor processing application built over services such as 3D multi-lateration, acoustic ranging, leader election, state sync, time sync, neighbor discovery, topology discovery, and reliable unicast, on top of hardware radio, sensors, and audio]
43
Em* Robustness
• Fault isolation via multiple processes
• Active process management (EmRun)
• Auto-reconnect built into libraries: "crashproofing" prevents cascading failure
• Soft-state design style: services periodically refresh clients; avoid "diff protocols"
[Figure: EmRun supervising a process tree: camera, depth map, path_plan, scheduling, motor_x, motor_y]
44
Em* Reactivity
• Event-driven software structure: react to asynchronous notification, e.g. reaction to a change in the neighbor list
• Notification through the layers: events percolate up, with domain-specific filtering at every level, e.g.
  neighbor list: membership hysteresis
  time synchronization: linear fit and outlier rejection
[Figure: notifications filtered as they percolate up from scheduling through path_plan to motor_y]
45
EmStar Components
• Tools: EmRun, EmProxy/EmView
• Standard IPC: FUSD, device patterns
• Common services: NeighborDiscovery, TimeSync, Routing
46
EmView/EmProxy: Visualization
[Figure: the emview visualizer connects to emproxy, which collects state (motenic, linkstat, neighbor) from emulated nodes in the emulator and from motes]
47
EmSim/EmCee
• Em* supports a variety of types of simulation and emulation, from simulated radio channel and sensors to emulated radio and sensor channels (ceiling array)
• In all cases, the code is identical
• Multiple emulated nodes run in their own spaces, on the same physical machine
48
EmRun: Manages Services
• Designed to start, stop, and monitor services
• EmRun config file specifies service dependencies
• Starting and stopping the system:
  starts up services in the correct order
  can detect and restart unresponsive services
  respawns services that die
  notifies services before shutdown, enabling graceful shutdown and persistent state
• Error/debug logging: per-process logging to in-memory ring buffers; configurable log levels at run time
49
IPC : FUSD
• Inter-module IPC: FUSD creates device-file interfaces
  text/binary on the same file
  standard interface: language independent, no client library required (see the client sketch after the figure)
[Figure: a user-space client makes system calls on /dev/servicename; the kfusd.o kernel module forwards them over /dev/fusd to the user-space server]
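An illustrative client-side sketch of why no client library is required: a client uses only ordinary system calls on the service's device file. The path /dev/servicename is a placeholder; each EmStar service exposes its own device name.

/* Illustrative client sketch: plain open/read on a FUSD-backed device file. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
  char buf[256];
  int fd = open("/dev/servicename", O_RDONLY);   /* placeholder path */
  if (fd < 0) { perror("open"); return 1; }

  /* each read() is handled by the user-space server behind the device
   * (via kfusd.o), not by polling a regular file */
  ssize_t n = read(fd, buf, sizeof(buf) - 1);
  if (n > 0) {
    buf[n] = '\0';
    printf("service reported: %s\n", buf);       /* text-mode output */
  }
  close(fd);
  return 0;
}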
50
Device Patterns
• FUSD can support virtually any semantics: what happens when a client calls read()?
• But many interfaces fall into certain patterns
• Device patterns encapsulate specific semantics; they take the form of a library: objects, with method calls and callback functions; priority: ease of use (hypothetical sketch below)
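A hypothetical sketch (not the actual EmStar API) of what "objects with method calls and callback functions" might look like on the server side: the service fills in a pattern object with callbacks and lets the library handle the device-file plumbing. All names here are invented for illustration.

/* Hypothetical device-pattern object; NOT the real EmStar/FUSD interface. */
#include <stddef.h>
#include <string.h>

typedef struct status_dev {
  const char *path;                               /* e.g. "/dev/myservice/status" (placeholder) */
  int  (*on_client_config)(const char *cmd);      /* client wrote a command string */
  int  (*render_state)(char *buf, size_t len);    /* produce current state for readers */
} status_dev_t;

static int my_config(const char *cmd) { (void)cmd; return 0; }

static int my_render(char *buf, size_t len) {
  const char *s = "ok\n";
  size_t n = strlen(s);
  if (n > len) n = len;
  memcpy(buf, s, n);
  return (int)n;
}

int main(void) {
  status_dev_t dev = {
    .path = "/dev/myservice/status",              /* placeholder device name */
    .on_client_config = my_config,
    .render_state = my_render,
  };
  (void)dev;  /* a real library would now create the device and run an event loop */
  return 0;
}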
51
Status Device
• Designed to report current state; no queuing: clients are not guaranteed to see every intermediate state
• Supports multiple clients
• Interactive and programmatic interface: ASCII output via "cat"; binary output to programs
• Supports client notification via select() (see the client sketch after the figure)
• Client configurable: the client can write a command string; the server parses it to enable per-client behavior
[Figure: status device. A server with state, a config handler, and a request handler serves multiple clients through one device]
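An illustrative client sketch of the notification behavior above: the client select()s on the status device and reads the current state whenever the service publishes an update. The device path is a placeholder.

/* Illustrative client sketch: select()-based notification on a status device. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
  char buf[512];
  int fd = open("/dev/myservice/status", O_RDONLY);  /* placeholder path */
  if (fd < 0) { perror("open"); return 1; }

  for (;;) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    /* block until the service publishes new state */
    if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) break;

    ssize_t n = read(fd, buf, sizeof(buf) - 1);       /* current state, not a queue */
    if (n <= 0) break;
    buf[n] = '\0';
    printf("state now: %s", buf);
  }
  close(fd);
  return 0;
}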
52
Packet Device
• Designed for message streams
• Supports multiple clients
• Supports queuing: round-robin service of output queues; delivery of messages to all or specific clients
• Client-configurable: input and output queue lengths; input filters; optional loopback of outputs to other clients (for snooping)
[Figure: packet device. A server with per-client input (I) and output (O) queues and filters (F) serves multiple clients]
53
Device Files vs Regular Files
• Regular files:
  require locking semantics to prevent race conditions between readers and writers
  support "status" semantics but not queuing
  no support for notification, polling only
• Device files:
  leverage the kernel for serialization: no locking needed
  arbitrary control of semantics: queuing, text/binary, per-client configuration
  immediate action, like a function call: a system call on the device triggers an immediate response from the service, rather than setting a request and waiting for the service to poll
54
Interacting With em*
• Text/binary on the same device file:
  text mode enables interaction from the shell and scripts
  binary mode enables easy programmatic access to data as C structures, etc.
• EmStar device patterns support multiple concurrent clients:
  IPC channels used internally can be viewed concurrently for debugging
  "live" state can be viewed in the shell ("echocat -w") or using emview