FAST TCP
Cheng Jin, David Wei, Steven Low
netlab.CALTECH.edu
GNEW, CERN, March 2004
Acknowledgments
- Caltech: Bunn, Choe, Doyle, Jin, Newman, Ravot, Singh, J. Wang, Wei
- UCLA: Paganini, Z. Wang
- CERN/DataTAG: Martin, Martin-Flatin
- Internet2: Almes, Shalunov
- SLAC: Cottrell, Mount
- Cisco: Aiken, Doraiswami, Yip
- Level(3): Fernes
- LANL: Wu
FAST project
[Project overview diagram]
- Theory: performance, stability, fairness, TCP/IP, noise, randomness
- Implementation: Linux TCP kernel, other platforms, monitoring, debugging
- Experiment: Abilene, PlanetLab, DummyNet, HEP networks, WAN in Lab, UltraLight testbed
- Deployment: TeraGrid, HEP networks, Abilene, IETF, GGF
- Funding: NSF ITR (2001), NSF STI (2002), NSF RI (2003)
Outline
- Experiments: results, future plans
- Status: open issues, code release mid '04
- Unified framework: Reno, FAST, HSTCP, STCP, XCP, ...; implementation issues
Aggregate throughput
FAST, standard MTU; utilization averaged over > 1 hr:
- 1 flow:   95% (1 hr)
- 2 flows:  92% (1 hr)
- 7 flows:  90% (6 hr)
- 9 flows:  90% (1.1 hr)
- 10 flows: 88% (6 hr)
Path: DataTAG, CERN - StarLight - Level3/SLAC (Jin, Wei, Ravot, et al., SC2002)
Dynamic sharing: 3 flows
[Figure: iperf throughput, loss, and queue traces comparing FAST, Linux 2.4.x (Reno), HSTCP (UCL), and STCP on Dummynet; capacity = 800 Mbps, delay = 120 ms, 3 flows. FAST shows steady throughput.]
Dynamic sharing: 14 flows
[Figure: 30-min iperf throughput, loss, and queue traces comparing FAST, Linux 2.4.x (Reno), HSTCP (UCL), and STCP on Dummynet; capacity = 800 Mbps, delay = 120 ms, 14 flows. Room for mice!]
Aggregate throughput
[Figure: aggregate throughput vs. ideal performance, from small windows (~800 pkts) to large windows (~8000 pkts). Dummynet: capacity = 800 Mbps, delay = 50-200 ms, #flows = 1-14, 29 experiments.]
Fairness
[Figure: Jain's fairness index; HSTCP ~ Reno. Dummynet: capacity = 800 Mbps, delay = 50-200 ms, #flows = 1-14, 29 experiments.]
Stability
[Figure: stability index; stable in diverse scenarios. Dummynet: capacity = 800 Mbps, delay = 50-200 ms, #flows = 1-14, 29 experiments.]
Outline
- Experiments: results, future plans
- Status: open issues, code release
- Unified framework: Reno, FAST, HSTCP, STCP, XCP, ...; implementation issues
Benchmarking TCP
- Not just static throughput: dynamic sharing, what the protocol does to the network, ...
- Tests to zoom in on specific properties (throughput, delay, loss, fairness, stability, ...): critical for basic design, though test scenarios may not be realistic
- Tests with realistic scenarios: same performance metrics; critical for refinement toward deployment; just started
- Input solicited: what's realistic for your applications?
Open issues (well understood)
- baseRTT estimation: route changes, dynamic sharing; does not upset stability
- Small network buffer: behave at least like TCP; adapt on a slow timescale, but how?
- TCP-friendliness: friendly at least at small window; tunable, but how to tune?
- Reverse path congestion: should FAST react? rare for large transfers?
Status: code release
- Source release mid 2004, for any non-profit purpose
- Re-implementation of FAST TCP completed; extensive testing to complete by April '04
- Pre-release trials: CFP for high-performance sites!
- Incorporation into Web100, with Matt Mathis
Status: IPR
- Caltech will license royalty-free if FAST TCP becomes an IETF standard
- IPR covers more broadly than TCP; leave all options open
Outline
- Experiments: results, future plans
- Status: open issues, code release mid '04
- Unified framework: Reno, FAST, HSTCP, STCP, XCP, ...; implementation issues
Packet & flow level
Packet level (Reno TCP):
  ACK:  W → W + 1/W
  Loss: W → W − 0.5 W
Flow level: equilibrium and dynamics
  Equilibrium window ≈ 1.225/√p pkts (Mathis formula)
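The Reno rules above can be sketched in a few lines and checked against the Mathis equilibrium (a minimal simulation, assuming a deterministic loss model with exactly one drop every 1/p packets; all names are illustrative):

```python
def reno_on_ack(w):
    # AIMD increase: +1/W per ACK, i.e. about +1 MSS per RTT
    return w + 1.0 / w

def reno_on_loss(w):
    # AIMD decrease: halve the window
    return w - 0.5 * w

def mean_window(p, rtts=50_000):
    """Time-averaged window with one loss every 1/p packets."""
    w, sent, total = 100.0, 0.0, 0.0
    for _ in range(rtts):
        total += w
        sent += w                # w packets sent this RTT
        if sent >= 1.0 / p:      # deterministic loss event
            sent = 0.0
            w = reno_on_loss(w)
        else:
            w += 1.0             # net per-RTT effect of reno_on_ack
    return total / rtts

# Mathis formula predicts roughly 1.225/sqrt(p) = 122.5 pkts for p = 1e-4
print(round(mean_window(1e-4), 1))
```

The simulated time-average lands close to the 1.225/√p prediction, which is how the packet-level rule is tied to its flow-level equilibrium.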
Reno TCP
- Packet level: designed and implemented first
- Flow level: understood afterwards
- Flow-level dynamics determine equilibrium (performance, fairness) and stability
- Approach: design the flow level (equilibrium & stability), then implement the flow-level goals at the packet level
The packet-level designs of FAST, HSTCP, STCP, H-TCP, ... are guided by these flow-level properties.
Packet level
Reno, AIMD(1, 0.5):
  ACK:  W → W + 1/W
  Loss: W → W − 0.5 W
HSTCP, AIMD(a(w), b(w)):
  ACK:  W → W + a(w)/W
  Loss: W → W − b(w) W
STCP, MIMD(a, b):
  ACK:  W → W + 0.01
  Loss: W → W − 0.125 W
FAST:
  per RTT: W → (baseRTT/RTT) W + α
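The STCP and FAST rules translate directly into code; the sketch below is a toy illustration (HSTCP's a(w), b(w) lookup tables are omitted, the RTT is held fixed although in reality it depends on the queue, and FAST's smoothing and doubling safeguards are left out):

```python
def stcp_on_ack(w):
    # MIMD increase: a = 0.01 per ACK
    return w + 0.01

def stcp_on_loss(w):
    # MIMD decrease: b = 0.125
    return w - 0.125 * w

def fast_per_rtt(w, base_rtt, rtt, alpha):
    # FAST updates once per RTT from queueing delay, not from loss
    return (base_rtt / rtt) * w + alpha

# Fixed point of the FAST update: w* = alpha * rtt / (rtt - base_rtt),
# i.e. the flow keeps alpha packets queued in the bottleneck buffer.
w = 1.0
for _ in range(200):
    w = fast_per_rtt(w, base_rtt=0.10, rtt=0.12, alpha=20)
print(round(w))  # converges to 120
```

Iterating with a fixed RTT shows the contraction to the fixed point; the real protocol's RTT rises with the window until exactly α packets sit in the queue.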
Flow level: Reno, HSTCP, STCP, FAST
Similar flow-level equilibrium: throughput x = α/(T · p^k) MSS/sec, where T is the RTT and p the congestion measure;
α = 1.225 (Reno), 0.120 (HSTCP), 0.075 (STCP)
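To see how differently these equilibria scale, one can evaluate x = α/(T · p^k) numerically (a hypothetical comparison: the exponents k used below are the commonly cited response-function values, assumed here rather than taken from the slide):

```python
def equilibrium_throughput(alpha, k, T, p):
    """Equilibrium throughput x = alpha / (T * p**k), in MSS/sec."""
    return alpha / (T * p ** k)

T, p = 0.1, 1e-6  # 100 ms RTT, loss probability 1e-6
for name, alpha, k in [("Reno", 1.225, 0.5),
                       ("HSTCP", 0.120, 0.835),
                       ("STCP", 0.075, 1.0)]:
    print(name, round(equilibrium_throughput(alpha, k, T, p)), "MSS/sec")
```

At small loss probabilities the loss-based protocols differ by orders of magnitude, which is exactly why the window rules were redesigned for large bandwidth-delay products.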
Flow level: Reno, HSTCP, STCP, FAST
- Different gain κ and utility function U_i; these determine equilibrium and stability
- Different congestion measure p_i: loss probability (Reno, HSTCP, STCP) vs. queueing delay (Vegas, FAST)
- Common flow-level dynamics: window adjustment = control gain × flow-level goal
FAST TCP
- Reno, HSTCP, and FAST share the common flow-level dynamics: window adjustment = control gain × flow-level goal
- Equation-based: need to estimate the "price" p_i(t)
- p_i(t) = queueing delay: easier to estimate at large window
- Gain κ_i(t) and utility U_i'(t) explicitly designed for performance, fairness, and stability
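A minimal sketch of the price-estimation step, assuming baseRTT is taken as the minimum observed RTT and the current RTT is smoothed with an EWMA (the class name and the gain γ = 0.125 are illustrative, not from the slides):

```python
class DelayPrice:
    """Estimate the queueing-delay 'price' p(t) = avgRTT - baseRTT."""

    def __init__(self, gamma=0.125):
        self.gamma = gamma
        self.base_rtt = float("inf")
        self.avg_rtt = None

    def update(self, rtt_sample):
        # baseRTT: minimum RTT seen so far (propagation-delay estimate)
        self.base_rtt = min(self.base_rtt, rtt_sample)
        # EWMA of the current RTT
        if self.avg_rtt is None:
            self.avg_rtt = rtt_sample
        else:
            self.avg_rtt += self.gamma * (rtt_sample - self.avg_rtt)
        return self.avg_rtt - self.base_rtt

est = DelayPrice()
for sample in [0.102, 0.100, 0.110, 0.118, 0.120, 0.121]:
    price = est.update(sample)
print(round(price, 4))  # smoothed queueing delay above baseRTT, in seconds
```

This also hints at why the slide's open issue matters: if a route change raises the true propagation delay, the stale minimum overstates the price.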
Window control algorithm
- Full utilization regardless of bandwidth-delay product
- Globally stable: exponential convergence
- Intra-protocol fairness: weighted proportional fairness (tunable parameter)
FAST tunes to the knee
[Figure: TCP oscillates on the throughput-load curve; FAST is stabilized at the knee. Goal: less delay, less jitter.]
References
- FAST TCP: motivation, architecture, algorithms, performance. IEEE Infocom, March 2004
- FAST TCP: from theory to experiments. Submitted for publication, April 2003
- netlab.caltech.edu/FAST
Panel 1: Lessons in Grid Networking
Metrics
- Performance: throughput, loss, delay, jitter, stability, responsiveness
- Availability, reliability
- Simplicity: application, management
- Evolvability, robustness
Constraints
- Scientific community: small & fixed set of major sites; few & large transfers; relatively simple traffic characteristics and quality requirements
- General public: large, dynamic sets of users; diverse traffic characteristics & quality requirements; evolving/unpredictable applications
Mechanisms and timescales
- Fiber infrastructure: months - years
- Lightpath configuration: minutes - days
- Resource provisioning: service, sec - hrs
- Traffic engineering, admission control: flow, sec - mins
- Congestion/flow control: RTT, ms - sec
Timescale: desired, instead of feasible. Balance: cost/benefit, simplicity, evolvability.