Victoria Manfredi, Thesis Defense, August 13, 2009
Advisor: Jim Kurose
Committee: Andrew Barto, Deepak Ganesan, Weibo Gong, Don Towsley
Sensor Control and Scheduling Strategies for Sensor Networks
2
Motivation
Data
Sensor Controls
Bursty, high-bandwidth data, many-to-one data routing to sink: congestion
Wireless, Closed-Loop Sensor Network
How to make sensing robust to delayed and dropped packets?
How to accommodate multiple users?
Multiple users making conflicting sensor requests; changing network topology
How to make routing robust to network changes?
Where to focus sensing?
Cameras, radars: cannot collect data simultaneously
from all environment locations
3
Contributions
Adaptive sensors– where to focus sensing in adaptive meteorological radar network?
• show lookahead strategies useful when multiple small phenomena, trade-off between scan quality and re-scan interval
– accommodating multiple users?
• identify call admission control problem, give complexity results
How to make sensing robust to delayed, dropped packets?– show good application-level performance possible in closed-loop sensor network during congestion if sensor control prioritized
How to make routing robust to network changes?– propose routing algorithm, show it can significantly reduce control overhead while minimally degrading % of packets delivered
4
Adaptive sensors
– where to focus sensing?
– multiple users
Prioritizing sensor control traffic
Robust routing in dynamic networks
Conclusions
Outline
5
small scan sectors: high quality, but may miss storms
large scan sectors: low quality, but miss fewer storms
Where to focus sensing?
CASA: Adaptive Meteorological Radar Network
6
What are “good” sensing strategies?
sit-and-spin– all radars always scan 360°
myopic– consider only current environmental state
limited lookahead– Kalman filters to predict storm cell attributes k time-steps ahead
full lookahead– formulate as Markov decision process– reinforcement learning to obtain policy: Sarsa(λ)
Sensing Strategies
7
Radar network
Storm model– storms arrive according to spatio-temporal Poisson process – storm dynamics from Kalman filters
Radar sensing model – observed attribute value = true attribute value + Gaussian noise
Storm Tracking Application
Radar radius: 30 km; max storm radius: 4 km
(observation noise depends on scan quality)
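The storm model above can be sketched as a homogeneous spatio-temporal Poisson process; the sampler below is a hypothetical illustration (the rate, area, and attribute distributions are placeholders, not the thesis's fitted parameters):

```python
import random

def sample_storm_arrivals(rate_per_km2_s, area_km, horizon_s,
                          max_radius_km=4.0, seed=0):
    """Sample storms from a homogeneous spatio-temporal Poisson process.

    Arrival times follow a Poisson process with intensity
    rate_per_km2_s * area_km**2; each storm gets an independent uniform
    location in the square area and a radius in (0, max_radius_km].
    """
    rng = random.Random(seed)
    total_rate = rate_per_km2_s * area_km * area_km  # storms per second
    storms, t = [], 0.0
    while True:
        t += rng.expovariate(total_rate)  # exponential inter-arrival times
        if t > horizon_s:
            break
        storms.append({
            "t": t,
            "x": rng.uniform(0.0, area_km),
            "y": rng.uniform(0.0, area_km),
            "radius": rng.uniform(0.1, max_radius_km),
        })
    return storms
```

The radius distribution here is uniform purely for illustration; the thesis draws storm attributes from distributions derived from real data.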
8
30 km
Performance Metrics
Re-scan interval– how long before storm first observed or rescanned
Scan quality– how well storm observed– function of scan sector size, distance from radar, % of storm scanned– value between 0 and 1
Cost– function of re-scan interval, quality, penalty for missing storms– 2-step and full lookahead have similar cost for 2 radars
9
Optimize over all radars?
Figure: average quality of sit-and-spin, myopic, and 1-step lookahead as the number of jointly optimized radars grows, for max 1 storm and max 8 storms.
Max 1 storm: no gains in quality as optimize over more radars
Max 8 storms: decreasing gains as optimize over more radars
10
Where to focus sensing?
Showed lookahead strategies useful when
– multiple storms, storm radius (much) smaller than radar radius
– trade-off between scan quality and frequency storm scanned
May not need to optimize over all radars in network
Related work– track ground targets from airplanes [Kreucher, Hero, 2005] – our focus: track meteorological phenomena using ground radars
Summary
11
Adaptive sensors
– where to focus sensing?
– multiple users
Prioritizing sensor control traffic
Robust routing in dynamic networks
Conclusions
Outline
12
Mt. Toby
MA1 Tower
CS Building
Call admission control problem
How to accommodate multiple users?
Virtualize sensing resources– virtualized private sensor network
To each user– looks like own private network– but user only has virtual slice
Users request resources– possibly conflicting requests– which requests to satisfy?
13
Select set of non-interfering requests that maximizes utility
Call Admission Control Problem
Utility of request j
– to requesting user i: u_ij^i
– to each other user i′: u_ij^{i′}
– combine into single utility: u_ij = u_ij^i + Σ_{i′≠i} u_ij^{i′}
Sensor request– use sensor in particular way, possibly during particular time
Sensing strategy– sequence of requests over time
Strategy for User 1
Strategy for User 2
Scan 360° every 2 min
Scan 360°
14
Space of Problems
Divisible requests? Utility received if only part of request satisfied?
Yes– scan x of y elevations
No– obtain full scan of storm
Shifting permitted? Utility received if request executed at different time?
Yes– perform surveillance scan
No– sense storm expected at location (x,y) at time t
Time
User 1 Request
User 2 Request
Interleaved Requests
User 1 Request
Shifted Request
Time
15
Complexity
Divisible requests? / Shifting permitted? → complexity:
– Yes / Yes → Polynomial (same as fractional knapsack problem)
– Yes / No → Polynomial (interleave sensor requests)
– No / Yes → NP-Complete
– No / No → Polynomial (interval scheduling [Arkin, Silverberg, 1987])
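The indivisible, no-shifting case maps to weighted interval scheduling, which the classic O(n log n) dynamic program solves; a minimal sketch, assuming each request is a hypothetical (start, end, utility) tuple:

```python
import bisect

def max_utility_schedule(requests):
    """Weighted interval scheduling: requests are (start, end, utility)
    tuples with fixed times (indivisible, no shifting). Returns the max
    total utility of a set of non-overlapping requests."""
    reqs = sorted(requests, key=lambda r: r[1])  # sort by end time
    ends = [r[1] for r in reqs]
    best = [0.0] * (len(reqs) + 1)  # best[i] = optimum over first i requests
    for i, (start, end, utility) in enumerate(reqs, start=1):
        # number of earlier requests ending no later than this one's start
        j = bisect.bisect_right(ends, start, 0, i - 1)
        # either skip this request, or take it plus the best compatible prefix
        best[i] = max(best[i - 1], best[j] + utility)
    return best[-1]
```

For example, with requests (0,3,5), (2,5,6), (4,7,5), the two non-overlapping requests of utility 5 each beat the single request of utility 6.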
16
Indivisible, Shifting
NP-complete– assume utility independent of when request executed– in NP: can check whether solution correct in polynomial time
Reduction from knapsack problem (capacity W, items with weights w_1…w_N and values v_1…v_N):
– item i ↔ sensing strategy for user i of duration w_i; utility to user i: v_i, to each other user: 0
– time horizon T = W
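The reduction shows hardness; knapsack itself still admits the standard pseudo-polynomial dynamic program (consistent with NP-completeness in the weak sense). A sketch, assuming integer weights:

```python
def knapsack(capacity, items):
    """0/1 knapsack by dynamic programming: items are (weight, value)
    pairs with integer weights. Runs in O(N * capacity) time, i.e.
    pseudo-polynomial in the capacity W."""
    best = [0.0] * (capacity + 1)  # best[w] = max value within weight budget w
    for weight, value in items:
        # iterate budgets downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]
```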
17
How to accommodate multiple users?
Requests divisible or fixed in time ⇒ polynomial-time algorithms
Requests indivisible but may be shifted ⇒ NP-complete
Summary
Related work– adaptively select set of sensors for task [Jayasumana et al, 2007]
– our focus: virtualizing sensing resources within a sensor network
Future work– online, decentralized algorithms– trade-off between maximizing utility and user fairness – implement proposed algorithms in deployed network
18
Adaptive sensors
– where to focus sensing?
– multiple users
Prioritizing sensor control traffic
Robust routing in dynamic networks
Conclusions
Outline
19
Many-to-one routing to sink
Congestion
Bursty, high-bandwidth data
Wireless links
How does prioritizing sensor control traffic over data traffic impact application-level performance?
Data
Sensor Controls
Data spatially, temporally redundant
Prefer to delay, drop data rather than control
Why prioritize sensor control traffic?
Sensor network
Closed-loop sensor network
Data >> control
20
Figure: timeline of sensor controls k−1, k, k+1 and the data returned by each control; data delay and control delay shown under FIFO vs. priority scheduling of control.
Smaller data and control delays ⇒ more data collected in time to compute next sensor control
Update interval
Closed-loop Sensor Networks
Prioritizing sensor control – impact on packet delays?– impact on data collected?
Control loop delay
Control, data share queues
e.g., wireless links
21
More data samples
Cramér–Rao bound: SD(Ŵ) ≥ 1 / √(n·I)
– lower bound on std dev of unbiased estimator Ŵ (sample mean) of parameter (population mean), where I is the Fisher information and n the # of iid samples
– accuracy improves sub-linearly with n
Effect of data packet drops?
– accuracy degrades sub-linearly with n
Sensing accuracy changes slowly with # of samples
Radars, Sonars, Cameras, …
Better Quality Data
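The √n scaling can be checked numerically for the Gaussian case, where the bound on the sample mean is σ/√n; a small Monte Carlo sketch (function name and trial counts are illustrative):

```python
import random
import statistics

def sample_mean_std(n, trials=2000, mu=0.0, sigma=1.0, seed=0):
    """Empirical std dev of the sample mean of n iid N(mu, sigma^2)
    samples; the Cramér–Rao bound for this estimator is sigma / sqrt(n)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)
```

Quadrupling n should roughly halve the std dev: the accuracy gain per extra sample shrinks, which is why losing some data packets degrades sensing only gracefully.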
22
Network model– obtain sensor control and data packet delays– CASA network is closed-loop sensor network
Sensing model– convert packet delays into sensing error
Tracking model– convert sensing error into storm location error– tracking: compute next scan for radar from 99% confidence ellipse
Figure: control (deterministic arrivals) plus data and other traffic (bursty arrivals) at the bottleneck link.
Storm Tracking Application
Delays at bottleneck link dominate; assume wireless links
23
Tracking Error
Figure: per-interval tracking error for traffic burstiness idx = 1, 25, 55.
RMSE = √( Σ_{t=1}^{#intervals} (true_t − obs_t)² / #intervals )
Per-interval performance gains/losses may accumulate across multiple update intervals
24
Summary
When network congestion occurs, prioritizing sensor control in a closed-loop sensor network improves the quantity and quality of data, and gives better application-level performance
How to make sensing robust to delayed and dropped packets?
Related work– SS7, ATM, [Fredj et al, 2001] [Kyasanur et al, 2005]– our focus: prioritizing sensor control (not network control)
25
Adaptive sensors
– where to focus sensing?
– multiple users
Prioritizing sensor control traffic
Robust routing in dynamic networks
Conclusions
Outline
26
But if frequent changes, adapting is costly: e.g., in MANET may have as much routing control traffic as data
Adapt to every change?
– yes: potentially perform optimally, but more overhead
– no: likely perform sub-optimally, but less overhead
What do we mean by robust?
network structure changing over time
protocols must adapt
Robust: solution performs well over many scenarios, solution is not fragile
27
Identify structural properties that make graph reliable, efficiently find subgraph with such properties
But reliability is #P-complete to compute; can't search over all sub-graphs
Robust routing: routing subgraph has path from src to dest, as links go up/down
Most robust routing subgraph should contain shortest path and have large min cut
Robust Routing
src-dest reliability– probability an instantaneous path exists in stochastic graph– want max-reliability sub-graph for given overhead
Effect of graph structure on src-dest reliability?– show reliability (in limits) dominated by shortest paths, smallest cuts
28
k-hop braid: most reliable path + all nodes within k-hops
Braid
Figure: most reliable (shortest) path from s to d, 1-hop braid, and 2-hop braid.
Given fixed amount of overhead, is braid most reliable sub-graph?
– reliability simulations + theoretical analysis
29
Theoretical Analysis
Note: lemma does not hold when adding links
How Reliable is Braid?
Figure: sub-graph from s to d with N 1-hop nodes.
Add black node rather than blue nodes?
Lemma: suppose sub-graph contains shortest path and 0 < n < N 1-hop nodes; given 1 or 2 extra nodes, to maximize reliability, use all 1-hop nodes before any 2-hop nodes
Partial braid less reliable than 2-disjoint paths for 1 ≥ p ≥ √(2/3)
30
If no path from src to dest:
Step 1: Identify shortest path in network
Step 2: Build braid around shortest path
Step 3: Perform local forwarding within braid (e.g., flooding, opportunistic routing, backpressure)
When link breaks, use braid path back to DSR path
Overheard RREQ and RREP contain 1-hop braid info
Use dynamic source routing (DSR)
Braid Routing
DSR vs Braid
– path breaks: use new DSR path or existing 1-hop braid path
– primary difference: control overhead incurred to find this new path
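Steps 1 and 2 can be sketched on a plain adjacency-list graph; this is an illustration of the braid idea, not the thesis's DSR-integrated implementation (`adj`, `one_hop_braid` are hypothetical names):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path; adj maps node -> iterable of neighbors."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None  # no route: would fall back to route discovery

def one_hop_braid(adj, src, dst):
    """Step 1: find shortest path; Step 2: braid = path nodes plus every
    node within one hop of the path. Local forwarding (Step 3) is then
    restricted to this node set."""
    path = shortest_path(adj, src, dst)
    if path is None:
        return None
    braid = set(path)
    for u in path:
        braid.update(adj[u])
    return braid
```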
31
Simulation Set-up (QualNet)
Gauss-Markov mobility – BonnMotion to generate traces– min 0.5 m/s, max 2 m/s– speed, angle updates every 100s
20-80 nodes– 400m transmission radius– 2km x 2km area
1 constant bit-rate flow– 4 pkts/s, 1 million seconds
10 runs, each lasting life of flow
32
Control Overhead
Figure: # of control packets vs. # of nodes, for Braid and DSR.
As node density increases, braid incurs fewer control packets than DSR
33
Control Overhead
Figure: # of control packets vs. # of nodes, broken down into route requests, route replies, and route errors, for Braid and DSR.
Braid incurs fewer route requests, replies, errors than DSR
Up to ~30% fewer requests; up to ~40% fewer replies; up to ~25% fewer errors
34
Packets Delivered and Delay
Figure: % of packets delivered and delay (seconds) vs. # of nodes, for Braid and DSR (4 million packets).
Braid delivers slightly fewer packets, incurs higher delay than DSR
35
Summary
How to make routing robust to network changes?
Proposed routing algorithm that reduces control overhead by (1) updating routes less frequently, (2) performing local forwarding within routing sub-graph
– gains depend on network characteristics
Related work– [Shacham et al 1983] [Lee, Gerla, 2000] [Ganesan et al, 2001] [Ghosh et al, 2007]
– our work: differs in structure and/or usage of routing subgraph
Future work– which network characteristics most impact performance? – joint rate control and routing– what should be braid width (trade-off with interference)?
36
Adaptive sensors
– where to focus sensing?
– multiple users
Prioritizing sensor control traffic
Robust routing in dynamic networks
Conclusions
Outline
37
Conclusions
Adaptive sensors– where to focus sensing in adaptive meteorological radar network?
• show lookahead strategies useful when multiple small phenomena, trade-off between scan quality and re-scan interval
– accommodating multiple users?
• identify call admission control problem, give complexity results
How to make sensing robust to delayed, dropped packets?– show good application-level performance possible in closed-loop sensor network during congestion if sensor control prioritized
How to make routing robust to network changes?– propose routing algorithm, show it can significantly reduce control overhead while minimally degrading % of packets delivered
38
Thanks!
Jim; Don, Deepak, Andy, Weibo
Networks Lab
– Bruno, Mike, Yung-Chih, Daniel2, Majid, Yu, Bo, Patrick, Junning, Giovanni, Guto, Elisha, Suddu, Bing, Sookhyun, Chun …
ALL Lab– Sridhar, Mohammad, George, Sarah, Khash, Ash, Ozgur, Pippin …
Laurie, Tyler, ….
Questions?
39
Adaptive Sensing
40
Performance Metrics
Re-scan interval– # of decision epochs before storm cell first observed or rescanned
Quality– how well storm cell p observed– how well sector r_i scanned
s_r: radar configuration (start, end angles of scan sector); S_r: set of radar configurations
U_p(p, S_r) = max_{s_r ∈ S_r} [ F_c(c(p, s_r)) [ β F_d(d(r, p)) + (1−β) F_w(w(s_r)/360) ] ]
– c: % of storm covered; d: distance to storm cell; w: radar rotation speed; β: weight
U_s(r_i, s_r) = F_w(w(s_r)/360)
41
Goal: maximize quality, minimize re-scan time
Performance Metrics
Cost– re-scan time and quality + penalty for never scanning storm cell
P_m := penalty for never scanning storm; P_r := penalty for not rescanning storm
C = Σ_{i=1}^{N_p} Σ_{j=1}^{N_d^o} |d_ij − d_ij^o| + (N_p − N_p^o) P_m + Σ_{k=1}^{N_p} I(t_k) P_r
– |d_ij − d_ij^o|: difference between observed and true storm attribute
– (N_p − N_p^o): difference between true and observed # of storm cells
– I(t_k): storm scanned within T_r decision epochs?
42
State
Actions– select scan action that minimizes cost– additionally scan any sector not scanned in last T=4 decision epochs
True state: true = [x, y, ẋ, ẏ]ᵀ; observed state: obs = [x, y]ᵀ
Use Kalman filters to predict storm cell attributes 1 and 2 decision epochs ahead
Limited Look-ahead Strategy
Assume: true_t = A·true_{t−1} + N[0, Q]; obs_t = B·true_t + N[0, R]
A, B, Q, R initialized using prior knowledge
43
State
Actions
Transition function – encodes observed environment dynamics, obtained from simulator
Cost function– obtained from performance metrics
Sarsa(λ)– linear combination of basis functions to approximate value function– tile coding to obtain basis functions, one tiling for each state variable
Markov Decision Process Formulation
State: storm cell position (x, y), storm radius, # of storm cells, U_p quality of storm cells, U_s quality of sectors
Full Look-ahead Strategy
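A minimal sketch of episodic Sarsa(λ) with a linear value function, using one-hot state features as a stand-in for tile coding and a toy cost-minimizing environment (the `step`/`reset` interface and all parameters are illustrative, not the radar simulator):

```python
import random

def sarsa_lambda(step, reset, n_states, n_actions, episodes=200,
                 alpha=0.1, gamma=0.95, lam=0.8, eps=0.1, seed=0):
    """Episodic Sarsa(lambda) with accumulating eligibility traces over
    one-hot state-action features. reset() returns a start state;
    step(s, a) returns (next_state, cost, done); the greedy action
    minimizes estimated cost, matching the scan-cost formulation."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]

    def choose(s):
        if rng.random() < eps:
            return rng.randrange(n_actions)
        return min(range(n_actions), key=lambda a: q[s][a])

    for _ in range(episodes):
        z = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s = reset()
        a = choose(s)
        done = False
        while not done:
            s2, cost, done = step(s, a)
            a2 = choose(s2) if not done else None
            target = cost + (0.0 if done else gamma * q[s2][a2])
            delta = target - q[s][a]
            z[s][a] += 1.0  # accumulating trace
            for i in range(n_states):
                for j in range(n_actions):
                    q[i][j] += alpha * delta * z[i][j]
                    z[i][j] *= gamma * lam
            s, a = s2, a2
    return q
```

On a toy 3-state chain where action 0 advances at cost 1 and action 1 stays at cost 5, the learned values prefer advancing in every state.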
44
Simulation Set-up
True state– storm arrivals: spatio-temporal Poisson process– storm attributes from distributions derived from real data– max storm radius: 4 km– max number of storms
Observed state– observed attribute value = true attribute value plus noise ~ N[0, σ²]
σ = (1 − u)·V_max / η, where u is the quality U_s(r_i, s_r), V_max the largest positive value of the attribute, and η a scaling term
Radar radius: 10 km or 30 km
45
Scan Quality
Figure: average difference in quality (250,000 steps) vs. 1/η, for 2Step − Sarsa(λ) and SitandSpin − Sarsa(λ), with max 1 storm and max 4 storms.
2-Step scans have higher quality than Sarsa(λ), especially when little noise in environment (when 1/η is small)
46
Cost
Figure: average difference in cost vs. 1/η (2 radars), for SitandSpin − FullLookahead and 2StepLookahead − FullLookahead, where average difference in cost = Σ_{t=1}^{#timesteps} (C_t^A − C_t^B) / #timesteps
Full lookahead and 2-step look-ahead have similar costs
47
Re-scan Interval
Figure: P[X ≤ x] vs. x = # of decision epochs between storm scans, for Sit-and-Spin, 1-Step, 2-Step, and Sarsa(λ).
Sarsa(λ) more likely than 2-step look-ahead to scan storm within T_r = 4 decision epochs
48
Related Work
2005: Mainland, Parkes, Welsh– game theory + reinforcement learning to allocate resources– learn profit associated with different actions, rather than profit associated with different state-action pairs
2005: Stone, Sutton, Kuhlmann– robot soccer
2004: Ng, Coates, Diel, Ganapathi, Schulte, Tse, Berger, Liang– helicopter control
2002: Zilberstein, Washington, Bernstein, Mouaddib– planetary rovers
Large State-space Reinforcement Learning
Sensor Networks
2005: Kreucher, Hero– look-ahead scheduling of radars on airplanes for detecting and tracking ground targets– information-theoretic reward, Q-learning
2005: Suvorova, Musicki, Moran, Howard, Scala– target radar beams, select waveform for electronically steered phased array radars– show 2-step lookahead outperforms one-step look-ahead for tracking multiple targets
Radar Control
We consider tracking meteorological phenomena using ground radars
Do not consider infinite-horizon case
49
Call Admission Control Problem
50
Divisible, No Shifting
Polynomial-time– assume utility depends on how much of request executed– select max utility sensor request during each conflicting interval
Sensing Strategy for User 1
Sensing Strategy for User 2
Interleaved Sensor Requests
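The per-interval selection can be sketched as a sweep over request breakpoints, assuming utility accrues proportionally with execution time (the (start, end, utility_rate) representation is hypothetical):

```python
def divisible_schedule(requests):
    """Divisible, no shifting: requests are (start, end, utility_rate)
    tuples with fixed times, and partial execution yields proportional
    utility. Sweep over breakpoints; in each elementary interval run the
    active request with the highest utility rate. Returns total utility."""
    points = sorted({t for s, e, _ in requests for t in (s, e)})
    total = 0.0
    for left, right in zip(points, points[1:]):
        # rates of requests covering the whole elementary interval
        active = [u for s, e, u in requests if s <= left and e >= right]
        if active:
            total += max(active) * (right - left)
    return total
```

With requests (0,2) at rate 1 and (1,3) at rate 2, the sensor runs the first request on [0,1) and the second on [1,3), for total utility 5.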
51
Separation of Control/Data
52
(xk-1,yk-1)
(xk,yk)
Network model: control, data delays, depend on scheduling (FIFO, priority)
Sensing model: given scan, quantity and quality of data, estimated storm location
Tracking model: predict storm location based on current, past estimates and observations using Kalman filters
Quality of estimated storm location affects tracking
Quality of tracking affects scan angle, quality of estimates
Timeliness of control, data affects amount of sensed data gathered
Storm Tracking Application: 3 Coupled Models
53
Wireless network– radar data sent to control center, sensor control back to radars– much more data traffic than sensor control traffic
Delays at bottleneck link dominate control-loop delay
Network Model: obtain sensor control and data packet delays
Figure: bottleneck link carrying control (deterministic arrivals) plus data and other traffic (bursty arrivals).
Obtain delays for FIFO, priority queuing using simulation
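A toy non-preemptive single-server simulation illustrates why priority helps: a control packet stuck behind a data burst waits the whole burst under FIFO but only one service time under priority (packet counts and arrival pattern are illustrative, not the NS-2 setup):

```python
import heapq

def simulate(arrivals, service_time=1.0, use_priority=False):
    """Non-preemptive single-server queue. arrivals: list of
    (arrival_time, klass), klass in {'control', 'data'}. Under
    use_priority, queued control packets are served before queued data.
    Returns mean delay (departure - arrival) per class."""
    pending = sorted(arrivals)           # by arrival time
    queue, delays = [], {"control": [], "data": []}
    now, i, seq = 0.0, 0, 0
    while i < len(pending) or queue:
        if not queue:                    # server idle: jump to next arrival
            now = max(now, pending[i][0])
        # enqueue every packet that has arrived by `now`
        while i < len(pending) and pending[i][0] <= now:
            t, klass = pending[i]
            rank = 0 if (use_priority and klass == "control") else 1
            key = (rank, t) if use_priority else (t,)
            heapq.heappush(queue, (key, seq, t, klass))
            seq += 1
            i += 1
        _, _, t, klass = heapq.heappop(queue)
        now += service_time              # serve one packet
        delays[klass].append(now - t)
    return {k: sum(v) / len(v) for k, v in delays.items() if v}
```

With a 10-packet data burst at t=0 and a control packet at t=0.5, FIFO delays the control by 10.5 s while priority delays it by only 1.5 s.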
54
Sensing Model: convert packet delays into sensing error
Radar– transmits pulses to estimate reflectivity at point in space
Reflectivity– # of particles in volume of atmosphere– standard deviation σ_z decreases with the number of independent pulses N_c, which depends on sensing time, scan angle width, and radar SNR
Smaller angle, longer time sensing ⇒ lower sensing error
55
Tracking Model: convert sensing error into location error, perform tracking
Location of storm centroid– equals location of peak reflectivity– standard deviation σ_r depends on σ_z and distance d from radar
Kalman filters– generate trajectory of storm centroid– track storm centroid
z = 30 dBz (mid-range reflectivity value)
σ_z used in measurement covariance matrix
Goal: track storm centroid with highest possible accuracy
56
Kalman filter
Measure: radar data received; measured position y_k, noisy with std dev σ_r
Filter: estimate x_k based on y_k and predicted x⁻_k
Predict: next x⁻_{k+1} and its 99% confidence region, which gives scan angle θ_{k+1} for next time step
x_k := estimated (location, velocity); y_k := measured (location, velocity), noisy with std dev σ_r
Estimated state error covariance matrix depends on velocity noise model and σ_r
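The measure/filter/predict loop can be sketched with a scalar constant-velocity Kalman filter (1-D position, simplified diagonal process noise; not the thesis's 2-D storm tracker):

```python
def kalman_1d(zs, dt=1.0, q=0.01, r=1.0):
    """Scalar constant-velocity Kalman filter: state (position, velocity),
    measurement z = noisy position with variance r. The 2x2 covariance is
    unrolled into p00..p11; process noise q is added diagonally as a
    simplification. Returns filtered position estimates."""
    pos, vel = zs[0], 0.0
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0
    out = [pos]
    for z in zs[1:]:
        # predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F' + Q
        pos += dt * vel
        p00 += dt * (p01 + p10) + dt * dt * p11 + q
        p01 += dt * p11
        p10 += dt * p11
        p11 += q
        # update with H = [1, 0]: gain K = P H' / (H P H' + r)
        s = p00 + r                  # innovation variance
        k0, k1 = p00 / s, p10 / s    # Kalman gain
        innov = z - pos
        pos += k0 * innov
        vel += k1 * innov
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
        out.append(pos)
    return out
```

With a small measurement variance, the filtered track follows a noiseless linear trajectory closely.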
57
Simulation Set-up
Network parameters
– λ_data = 2000/30 pkts/s; λ_other = 2000/30 pkts/s, on/off bursty (r_1 = 1 s, r_2 = 1 s; λ_1 = p·λ_o, λ_2 = (1−p)·λ_o)
– λ_control = 1/Δ pkts/s
– λ_control + λ_data + λ_other = 133.37 pkts/s; service rate μ = 148.5 pkts/s; avg load 0.90
– vary burstiness of “other” traffic: index of dispersion, idx
Kalman filter parameters– initialize based on storm data
10 NS-2 simulation runs, 100,000 sec each
58
Data Quantity
Figure: number of times more voxels scanned under priority than under FIFO, vs. Δ (seconds), for idx = 1, 25, 55.
As Δ decreases and burstiness increases, gains from prioritizing increase
59
Data Quality
Figure: CDFs F(x) of x = N_FIFO / N_Priority (number of pulses) and x = σ_r,Priority / σ_r,FIFO (reflectivity standard deviation), for idx = 1 and idx = 55, Δ = 5 sec and Δ = 30 sec, assuming θ = 360°.
Small decision epoch, bursty traffic: FIFO achieves ~80% as many pulses as priority ~80% of time
Small decision epoch, bursty traffic: priority has at least 90% as much uncertainty as FIFO ~90% of the time
60
Number of Pulses
FIFO and Priority each achieve about 6x as many pulses per voxel for Δ = 30 sec vs Δ = 5 sec, and the total # of pulses is independent of Δ
61
Data Quantity vs Quality
Figure: CDFs of N_FIFO / N_Priority and σ_r,Priority / σ_r,FIFO for 360° scans, Δ = 5 sec, very bursty traffic.
FIFO achieves at least 80% as many samples as priority ~80% of time
Priority has at least 90% as much uncertainty as FIFO ~90% of the time
**During times of congestion, prioritizing sensor control improves the quantity and quality of data
62
Effect of Packet Loss
Figure: σ_r (with loss) / σ_r (no loss) vs. offered load λ (pkts/second).
Capacity: when λ > 1000 pkts/s, data dropped
Priority: no sensor control packets dropped; FIFO: sensor control packets dropped
As system goes into overload, sensing accuracy degrades (more) gracefully when sensor control is prioritized
63
Related Work
Networked Control Systems
Prioritize Network Control
Do not consider effects of prioritizing only sensor control in a sensor network
Our focus: prioritize sensor control
Service Differentiation for Different Classes of Traffic
2001: Bhatnager, Deb, Nath– assign priorities to packets, forwarding higher-priority packets more frequently over more paths to achieve higher delivery prob
2005: Karenos, Kalogeraki, Krishnamurthy– allocate rates to flows based on class of traffic and estimated network load
2006: Tan, Yue, Lau– bandwidth reservation for high-priority flows in wireless sensor networks
2008: Kumar, Crepadir, Rowaihy, Cao, Harris, Zorzi, La Porta– differential service for high-priority data traffic versus low-priority data traffic in congested areas of sensor network
SS7 telephone signaling system; ATM networks; IP networks
1998: Kalampoukas, Varma, Ramakrishan; 2002: Balakrishnan et al– priority handling of TCP acks
2005: Kyasanur, Padhye, Bahl– separate control channel for controlling access to shared medium in wireless
– data, sensor control sent over network, constrained to be feedback and measurements of classical control system
– ratio of data to control much smaller than that of closed-loop sensor network
2001: Walsh, Ye– put error from network delays in control eqns
2003: Lemmon, Ling, Sun– drop selected data during overload by analyzing effect on control equations
We assume amount of data >> sensor control
64
Robust Routing
65
What do we mean by robust?
Figure: routing subgraph at time steps T = 1…4 as links go up/down; fraction of time steps with a src-dest path: 2/4, 3/4, 4/4.
Robust routing: routing subgraph has path from src to dest, as links go up/down
66
Given graph G, src, dest; assume links iid and up with prob p
Paths, small p limit: reliability dominated by shortest paths
Cuts, small q = 1−p limit: un-reliability dominated by smallest cuts
Most robust routing subgraph should contain shortest/most reliable path and have large min cut
What is effect of graph structure on src-dest reliability? Some intuition
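For graphs small enough to enumerate, src-dest reliability can be computed exactly, making the small-p intuition concrete: a 2-hop path has reliability p² while a direct link has p (the exponential-time enumeration is only for illustration, given the #P-completeness above):

```python
from itertools import product

def reliability(edges, src, dst, p):
    """Exact src-dst reliability of a graph whose links fail
    independently (each up with probability p), by enumerating all
    2^|edges| link states. Exponential time; fine only for tiny graphs."""
    nodes = {u for e in edges for u in e}
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        for up in states:
            prob *= p if up else (1.0 - p)
        # adjacency over the links that are up in this state
        adj = {n: [] for n in nodes}
        for (u, v), up in zip(edges, states):
            if up:
                adj[u].append(v)
                adj[v].append(u)
        # depth-first search from src
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if dst in seen:
            total += prob
    return total
```

A series 2-hop path gives p², and two parallel direct links give 1 − (1−p)², matching the path and cut limits.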
67
Theoretical Analysis
Proof sketch: condition on the up/down states of nodes s_0, s_1 (bars denote down):
P(d|s) = P(d | s_0 s_1) P(s_0 s_1 | s) + P(d | s_0 s̄_1) P(s_0 s̄_1 | s) + P(d | s̄_0 s_1) P(s̄_0 s_1 | s)
Recursively iterating through {q_0, q_1} and {d_0, d_1} gives an equation with 27 terms; the resulting product P({q_0,q_1} | {s_0,s_1}) · P(d | {d_0,d_1}) is always larger when adding the black node
68
Conjectures
Conjecture 1: N extra nodes ⇒ 1-hop braid most reliable; from lemma: true for N ≤ 5
Conjecture 2: 2N extra nodes ⇒ 2-hop braid most reliable; experimentally: for N = 6, 2-hop braid more reliable than pyramid
Generally: conjecture no “holes” in most reliable graph
69
Conjectures
Figure: reliability vs. link up-probability p, for N = 6.
Conjecture 2: 2N extra nodes ⇒ 2-hop braid most reliable; experimentally: for N = 6, 2-hop braid more reliable than pyramid
70
Adding edges rather than nodes
Conjecture 3: N+1 extra edges ⇒ partial 1-hop braid most reliable; not true, see counterexamples
N = 3: partial braid less reliable than 2-disjoint paths for 1 ≥ p ≥ 0
N = 4: partial braid less reliable than 2-disjoint paths for 1 ≥ p ≥ √(2/3)
71
Adding edges rather than nodes
Conjecture 3: N+1 extra edges ⇒ partial 1-hop braid most reliable; not true, see counterexamples
Figure: link up-probability above which 2-disjoint paths are more reliable than partial braid, vs. N.
Scaling behavior: as N increases, partial braid more reliable for more values of p
72
Have intuition that braids have good reliability properties
Reliability Experiments
But,– how does reliability of braid compare with other routing subgraphs?– what is impact of time between braid re-computations T on reliability?
Experiment set-up– model
• 100 nodes, random graph• links iid, 2-state link model• src, dest randomly chosen
– Monte Carlo simulation• 500 runs, each lasting 100 time-steps
Link model: 2-state Markov chain; an up link stays up with prob p and goes down with prob 1−p, a down link comes back up with prob q and stays down with prob 1−q
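The link model can be simulated directly; its stationary up-probability is q/(q + 1 − p), e.g. ≈0.77 for the p = 0.85, q = 0.5 used in the experiments (function name is illustrative):

```python
import random

def simulate_link(p, q, steps, seed=0):
    """2-state Markov link: an up link stays up with prob p; a down link
    comes back up with prob q. Returns the fraction of steps the link is
    up, which converges to the stationary probability q / (q + 1 - p)."""
    rng = random.Random(seed)
    up, count = True, 0
    for _ in range(steps):
        up = (rng.random() < p) if up else (rng.random() < q)
        count += up  # True counts as 1
    return count / steps
```

A high q means broken links reappear quickly within a routing update interval T, which is exactly the regime where braids pay off.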
73
Link Failures
Figure (left): reliability vs. T = length of routing update interval (p = 0.85, q = 0.5), for full graph, 1-hop braid, 2-shortest disjoint paths, shortest path. Figure (right): gain in reliability over shortest path vs. # of nodes used in addition to shortest path (p = 0.85, q = 0.5, T = 5).
Braid reliability close to full graph; braid overhead significantly less than full graph
74
Node vs Link Failures
Node failures imply correlated link failures, as in mobility
Figure (left): reliability vs. T = length of routing update interval (p = 0.85, q = 0.5), for full graph, 1-hop braid, 2-shortest disjoint paths, shortest path. Figure (right): gain in reliability over shortest path vs. # of nodes used in addition to shortest path (p = 0.85, q = 0.5, T = 5).
All algorithms have lower reliability; braid overhead still less than full graph
75
Routing Experiments
GloMoSim– 60 nodes, 250m transmission radius– 1km x 1km area– 1 cbr flow: 5 million pkts (~29 days)– random waypoint, Gauss Markov mobility
Compare throughput, overhead– AODV– 1-hop braid built around AODV path; choose next hop based on last successful use
10 runs, each lasting life of flow
76
Random Waypoint Mobility
Figure: packets delivered and control overhead vs. T = routing update interval (seconds).
Packets delivered: braid delivers up to 5% more packets than AODV
Braid overhead: ~25% more control overhead than AODV
77
Gauss-Markov Mobility
Figure: packets delivered and control overhead vs. T = routing update interval (seconds).
Packets delivered: braid delivers up to 5% more packets than AODV
Braid overhead: ~40% more control overhead than AODV
Insights: braids work well when links can reappear within T and link failures are independent
78
Reliability vs Routing
Reliability gains → throughput gains: braid construction is independent of the “best” path algorithm, so don't use AODV's path; instead estimate link reliability
Reliability experiments: iid links ⇒ shortest path = most reliable path
Routing experiments: non-iid links ⇒ shortest path ≠ most reliable path
79
Reliability vs Routing
Reliability gains → throughput gains: consider link correlations, mobility characteristics
Reliability experiments: iid links; rate at which down links re-appear is “high” (prob down link reappears = 0.5) ⇒ broken link likely re-appears during T
Routing experiments: non-iid links; rate at which down links re-appear is “low” (2 nodes meet on avg once every 22.7 min) ⇒ broken link likely does not re-appear during T
80
Link Failures and Braid Attempts