Introduction
Jiří Navrátil
SLAC
INCITE: Edge-based Traffic Processing and Service Inference for High-Performance Networks
Richard Baraniuk, Rice University; Les Cottrell, SLAC; Wu-chun Feng, LANL

Project Partners and Researchers
• Rice University – Richard Baraniuk, Edward Knightly, Robert Nowak, Rudolf Riedi, Xin Wang, Yolanda Tsang, Shriram Sarvotham, Vinay Ribeiro
• Los Alamos National Lab (LANL) – Wu-chun Feng, Mark Gardner, Eric Weigle
• Stanford Linear Accelerator Center (SLAC) – Les Cottrell, Warren Matthews, Jiri Navratil
Project Goals
• Objectives – scalable, edge-based tools for on-line network analysis, modeling, and measurement
• Based on – advanced mathematical theory and methods
• Designed for – supporting high-performance computing infrastructures, such as computational grids, ESnet, Internet2, and other high-performance networking projects
Project Elements
• Advanced techniques
  – from networking, supercomputing, statistical signal processing, and applied mathematics
• Multiscale analysis and modeling
  – understand causes of burstiness in network traffic
  – realistic, yet analytically tractable, statistically robust, and computationally efficient modeling
• On-line inference algorithms
  – characterize and map network performance as a function of space, time, application, and protocol
• Data collection tools and validation experiments
Scheduled Accomplishments
• Multiscale traffic models and analysis techniques
  – based on multifractals, cascades, wavelets
  – study how large flows interact and cause bursts
  – study adverse modulation of application-level traffic by TCP/IP
• Inference algorithms for paths, links, and routers
  – multiscale end-to-end path modeling and probing
  – network tomography (active and passive)
• Data collection tools
  – add multiscale path and link inference to the PingER suite
  – integrate into the ESnet NIMI infrastructure
  – MAGNeT – Monitor for Application-Generated Network Traffic
  – TICKET – Traffic Information-Collecting Kernel with Exact Timing
Future Research Plans
• New, high-performance traffic models
  – guide R&D of next-generation protocols
• Application-generated network traffic repository
  – enable grid and network researchers to test and evaluate new protocols with the actual traffic demands of applications rather than modulated demands
• Multiclass service inference
  – enable network clients to assess a system's multi-class mechanisms and parameters using only passive, external observations
• Predictable QoS via end-point control
  – ensure minimum QoS levels for traffic flows
  – exploit path and link inferences in real-time end-point admission control
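The end-point admission-control idea above can be sketched as a single decision rule: admit a new flow only when the path's inferred available bandwidth covers its rate. Everything here (function names, the 10% headroom) is an illustrative assumption, not the project's actual mechanism.

```python
# Sketch: end-point admission control driven by a path inference.
# The 10% headroom and all names are illustrative assumptions.
def admit(flow_rate_mbps, abw_mbps, headroom=0.1):
    """Return True if the inferred available bandwidth can carry the flow."""
    return flow_rate_mbps * (1 + headroom) <= abw_mbps

print(admit(50, 70))   # True: 55 Mbps needed, 70 Mbps available
print(admit(80, 70))   # False: 88 Mbps needed, only 70 available
```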
From Papers to Practice
[Diagram: MWFS, TOMO, TOPO pipeline; annotated timings: 20 ms, ~300 ms, 40 T for a new set of values (12 sec)]
First results
What has been done
• Phase 1 – Remodeling
  – code separation (BW and CT)
  – find out how to call MATLAB from another program
  – analyze results and data
  – find optimal parameters for the model
• Phase 2
  – Web-enabling of the BW estimate
Data Dispersions from sunstats.cern.ch
[Plots: packet dispersion from sunstats.cern.ch to pcgiga.cern.ch, ccnsn07.in2p3.fr, and plato.cacr.caltech.edu]
[pcgiga.cern.ch, default WS: BW ~ 70 Mbps; pcgiga.cern.ch, WS 512K: BW ~ 100 Mbps]
[Plots: reaction to the network problems; after tuning]
MF-CT Features and benefits
• No need for access to routers!
  – current monitoring systems for traffic load are based on SNMP or flows (both need access to routers)
• Low cost
  – allows permanent monitoring (20 pkts/sec ~ 10 Kbytes/sec overhead)
  – can be used as a data provider for ABW prediction (ABW = BW - CT)
• Weak point for common use: MATLAB code
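The ABW relation quoted above (ABW = BW - CT) is simple enough to sketch directly; the function and parameter names below are illustrative, not the MF-CT code's API.

```python
# Sketch of the slide's relation ABW = BW - CT: subtract the cross-traffic
# (CT) estimate from the bottleneck bandwidth (BW). Names are assumptions.
def available_bw(bw_mbps, ct_mbps):
    """Available bandwidth, floored at zero for noisy CT estimates."""
    return max(bw_mbps - ct_mbps, 0.0)

print(available_bw(100.0, 30.0))   # 70.0
print(available_bw(100.0, 120.0))  # 0.0 (CT estimate exceeds capacity)
```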
Future work on CT
• Verification model
  – define and set up a verification model (S+R)
  – measurements (S)
  – analyze results (S+R)
• On-line running on selected sites
  – prepare code for automation and Web deployment (S)
  – CT-code modification? (R)
[Diagram: MF-CT Simulator setup between SLAC, IN2P3, and CERN, with SNMP counters and UDP echo measurement points]
CT RE-ENGINEERING
For practical monitoring, it would be necessary to modify the code to support different modes:
– continuous mode for monitoring one site on a large time scale (hours)
– accumulation mode (1 min, 5 min, ?) for running on more sites in parallel
– a solution without MATLAB?
Rob Nowak (and CAIDA people) say:
www.caida.org
Network Topology Identification
Pairwise delay measurements reveal topology
Ratnasamy & McCanne (99); Duffield et al. (00, 01, 02); Bestavros et al. (01); Coates et al. (01)
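The idea that pairwise delay measurements reveal topology can be illustrated with a toy example: receivers whose end-to-end delays covary most strongly share the longest common path. The tree, rates, and sample counts below are made-up assumptions, not data from the cited work.

```python
# Sketch: topology from pairwise delay covariance (toy data).
# A and B sit behind a shared congested link; C does not.
import random
random.seed(2)

N = 5000
shared = [random.expovariate(1.0) for _ in range(N)]  # delay on the shared link
delays = {
    "A": [s + random.expovariate(2.0) for s in shared],
    "B": [s + random.expovariate(2.0) for s in shared],
    "C": [random.expovariate(1.0) + random.expovariate(2.0) for _ in range(N)],
}

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

pairs = [("A", "B"), ("A", "C"), ("B", "C")]
siblings = max(pairs, key=lambda p: cov(delays[p[0]], delays[p[1]]))
print(siblings)  # ('A', 'B'): their shared-link delays covary most
```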
Network Tomography
Measure end-to-end (from source to receiver) losses/delays
Infer link-level (at internal routers) loss rates and delay distributions
[Diagram: a tree of links with a source at the top, internal routers/nodes, and receivers at the leaves]
Unicast Network Tomography
Measure end-to-end losses of packets ('0' = loss, '1' = success)
Cannot isolate where losses occur!
Packet Pair Measurements
[Diagram: a measurement packet pair, packet (1) and packet (2), traversing links carrying cross-traffic]
Packet (1) and packet (2) experience nearly identical delays and/or losses
Delay Estimation
Measure end-to-end delays of packet-pair measurements, packet (1)(n) and packet (2)(n)
Packets experience the same delay on link 1
Extra delay on link 3
[Diagram: tree with minimum link delays and an extra queueing delay on link 3]
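The localization step can be sketched numerically: under the assumption above that both packets see the same delay on shared link 1, subtracting each receiver's minimum observed delay exposes branch-local queueing. The delay samples are made-up illustrative values (in ms).

```python
# Sketch: localizing extra delay with packet pairs (illustrative values, ms).
# Each tuple is (y1, y2): end-to-end delays of one pair to two receivers.
pairs = [(5.0, 5.0), (5.0, 8.0), (5.0, 9.0), (6.0, 6.0)]

min1 = min(y1 for y1, _ in pairs)
min2 = min(y2 for _, y2 in pairs)
extra = [(y1 - min1, y2 - min2) for y1, y2 in pairs]
print(extra)  # entries like (0.0, 3.0) point at extra delay on link 3 only
```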
Key Assumptions:
• fixed routes
• iid pair-measurements
• losses & delays on each link are mutually independent
• packet-pair losses & delays on shared links are nearly identical
Record occurrences of losses and delays:
y = {(y(1)(n), y(2)(n))}, n = 1, …, N
y(p)(n) = 0 ("loss") or 1 ("success")
y(p)(n) ∈ {0, 1, …, K} "delay units", p ∈ {1, 2}
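Under the assumptions above (independent links, shared fate on the common link), link-level loss rates can be recovered from end-to-end pair outcomes. A textbook two-leaf illustration, not the project's code: with path success rates p1 = a0·a1, p2 = a0·a2, and joint rate p12 = a0·a1·a2, the shared link's rate is a0 = p1·p2/p12.

```python
# Sketch: shared-link success-rate inference on a two-leaf tree from
# simulated packet-pair outcomes. True rates are hidden assumptions.
import random
random.seed(0)

a0, a1, a2 = 0.95, 0.90, 0.80   # hidden per-link success probabilities
N = 200_000
s1 = s2 = s12 = 0
for _ in range(N):
    shared_ok = random.random() < a0          # both packets share this fate
    r1 = shared_ok and random.random() < a1   # packet (1) to receiver 1
    r2 = shared_ok and random.random() < a2   # packet (2) to receiver 2
    s1 += r1
    s2 += r2
    s12 += r1 and r2

p1, p2, p12 = s1 / N, s2 / N, s12 / N
a0_hat = p1 * p2 / p12          # shared-link estimate, close to the true 0.95
print(round(a0_hat, 3))
```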
[Figure: test network showing link bandwidths (Mb/s): 1, 0.5, 10, 2, 2, 2, 1, 0.5, 10, 10, 5; cross-traffic on link 9]
ns Simulation
• 40-byte packet-pair probes every 50 ms
• competing traffic comprised of:
  – on-off exponential sources (500-byte packets)
  – TCP connections (1000-byte packets)
[Plot: cross-traffic rate (Kbytes/s) vs. time (s)]
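The on-off exponential cross-traffic in the setup above can be sketched as a simple generator. The sending rate and mean on/off durations are assumed parameters; only the 500-byte packet size comes from the slides.

```python
# Sketch: on-off exponential cross-traffic source (500-byte packets, as in
# the ns setup; rate and on/off means are assumptions).
import random
random.seed(1)

def on_off_bytes(duration_s, rate_Bps=50_000, mean_on=0.5, mean_off=0.5, pkt=500):
    """Total bytes emitted over `duration_s` by an on-off exponential source."""
    t, sent, on = 0.0, 0, True
    while t < duration_s:
        period = random.expovariate(1.0 / (mean_on if on else mean_off))
        if on:
            sent += int(rate_Bps * period / pkt) * pkt  # whole packets only
        t += period
        on = not on
    return sent

total = on_off_bytes(10.0)
print(total, "bytes in 10 s")
```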
Future work on TM and TP
• Model in the frame of the Internet (~100 sites)
  – define verification model (S+R)
  – deploy and install code on sites (S)
  – first measurements (S+R)
  – analyze results (form, speed, quantity) (S+R)
  – code modification? (R)
• Production model?
  – compete with PingER, RIPE, Surveyor, NIMI?
  – how to unify the VIRTUAL structure with the real one