Maximizing End-to-End Network Performance
Thomas Hacker, University of Michigan
October 5, 2001
Introduction
• Applications experience network performance from an end-user perspective
• Providing end-to-end performance has two aspects:
– Bandwidth Reservation
– Performance Tuning
• We have been working to improve actual end-to-end throughput using Performance Tuning
• This work allows applications to fully exploit reserved bandwidth
Improve Network Performance
• Poor network performance arises from a subtle interaction between many different components at each layer of the OSI network stack:
– Physical
– Data Link
– Network
– Transport
– Application
TCP Bandwidth Limits – Mathis Equation
• Based on characteristics from the physical layer up to the transport layer
• Hard limits on TCP bandwidth and on the maximum tolerable packet loss:

BW ≤ (MSS / RTT) · (C / √p)

p ≤ (C · MSS / (BW · RTT))²
Packet Loss and MSS
• If the minimum link bandwidth between two hosts is OC-12 (622 Mbps), and the average round trip time is 20 msec, the maximum packet loss rate that still allows 66% of the link speed (411 Mbps) is approximately 0.00018%, i.e. roughly 2 packets lost out of every 1,000,000 packets.
• If MSS is increased from 1500 bytes to 9000 bytes (Jumbo frames), the limit on TCP bandwidth rises by a factor of 6.
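The two limits above can be checked numerically. The sketch below is not from the slides; it assumes the common value C ≈ 0.93 for the Mathis constant, and plugs in the slide's OC-12 example.

```python
from math import sqrt

# Mathis et al. steady-state model: BW <= (MSS / RTT) * (C / sqrt(p)).
# C is a constant of order 1; 0.93 is a commonly quoted value (an assumption here).
C = 0.93

def mathis_bw(mss_bytes, rtt_s, p):
    """Upper bound on TCP throughput (bits/sec) for loss rate p."""
    return (mss_bytes * 8 / rtt_s) * (C / sqrt(p))

def max_loss_rate(mss_bytes, rtt_s, bw_bps):
    """Largest loss rate p that still permits throughput bw_bps."""
    return (C * mss_bytes * 8 / (bw_bps * rtt_s)) ** 2

# Slide's example: 66% of OC-12 (411 Mb/s), RTT = 20 ms, MSS = 1500 bytes.
p = max_loss_rate(1500, 0.020, 411e6)
print(f"max loss rate: {p:.2e}")  # ~1.8e-06, i.e. ~0.00018%

# Jumbo frames: same RTT and p, MSS 9000 vs 1500 -> 6x the bandwidth limit.
ratio = mathis_bw(9000, 0.020, p) / mathis_bw(1500, 0.020, p)
print(f"jumbo-frame gain: {ratio:.1f}x")
```

The factor-of-6 gain follows directly because MSS enters the bound linearly while RTT and p are unchanged.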
The Results
Web100 Collaboration
[Chart: Effects of Host Tuning on EdgeWarp Data Transmission Performance — Data Transmission Rate (Mb/sec), Untuned Bandwidth vs. Tuned Bandwidth]
[Chart: Performance of PSockets on Abilene Network — Data Rate in Mb/s vs. Number of PSockets, TCP Window Size = 64K]
SOURCE: Harimath Sivakumar, Stuart Bailey, Robert L. Grossman. “PSockets: The Case for Application-level Network Striping for Data Intensive Applications using High Speed Wide Area Networks,” SC2000: High-Performance Networking and Computing Conference, Dallas, TX, November 2000.
Parallel TCP Connections…a clue
Why Does This Work?
• Assumption is that the network gives best-effort throughput for each connection
• But end-to-end performance is still poor, even after tuning the host, network, and application
• Parallel sockets are being used in GridFTP, Netscape, Gnutella, Atlas, Storage Resource Broker, etc.
Packet Loss
• Bolot* found that random losses are not always due to congestion:
– local system configuration (txqueuelen in Linux)
– bad (noisy) cables
• Packet losses occur in bursts
• TCP throttles its transmission rate on ALL packet losses, regardless of the root cause
• Selective Acknowledgement (SACK) helps, but only so much

* Jean-Chrysostome Bolot. “Characterizing End-to-End Packet Delay and Loss in the Internet,” Journal of High Speed Networks, 2(3):305–323, 1993.
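Why throttling on every loss hurts less with parallel streams can be sketched with a toy AIMD model (an illustration I am adding, not the slides' analysis): a single random loss halves the congestion window of one stream only, so an aggregate of n equal streams loses just 1/(2n) of its total rate instead of half.

```python
def aggregate_after_one_loss(n_streams, total_rate=100.0):
    """Aggregate rate after one random loss, toy AIMD model.

    Assumption: n equal streams each carry total_rate / n, and a single
    packet loss halves only the affected stream's share.
    """
    per_stream = total_rate / n_streams
    return total_rate - per_stream / 2

for n in (1, 2, 4, 8):
    print(n, aggregate_after_one_loss(n))  # 50.0, 75.0, 87.5, 93.75
```

With one stream a loss costs 50% of the rate; with eight streams the same loss costs only 6.25%.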
Expression for Parallel Socket Bandwidth
BW_agg ≤ MSS₁/(RTT₁·√p₁) + MSS₂/(RTT₂·√p₂) + … + MSSₙ/(RTTₙ·√pₙ)

With a common MSS and RTT across all connections:

BW_agg ≤ (MSS/RTT) · (1/√p₁ + 1/√p₂ + … + 1/√pₙ)
Example
MSS = 4418 bytes, RTT = 70 msec, p = 1/10000 for all connections (so each 1/√pᵢ = 100)

Number of Connections | Σ 1/√pᵢ | Aggregate Bandwidth
1 | 100 | 50 Mb/sec
2 | 100+100 | 100 Mb/sec
3 | 100+100+100 | 150 Mb/sec
4 | 4 (100) | 200 Mb/sec
5 | 5 (100) | 250 Mb/sec
n | Σᵢ₌₁ⁿ 1/√pᵢ | n × 50 Mb/sec
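The table's values can be reproduced from the aggregate expression. A minimal sketch (the Mathis constant C ≈ 1 is omitted, matching the slides' round numbers):

```python
from math import sqrt

def parallel_bw(mss_bytes, rtt_s, loss_rates):
    """Aggregate bound BW_agg <= (MSS/RTT) * sum(1/sqrt(p_i)), C ~ 1 omitted."""
    return (mss_bytes * 8 / rtt_s) * sum(1 / sqrt(p) for p in loss_rates)

# Slide example: MSS = 4418 bytes, RTT = 70 ms, p = 1/10000 per connection.
for n in range(1, 6):
    bw = parallel_bw(4418, 0.070, [1e-4] * n)
    print(n, round(bw / 1e6, 1), "Mb/s")  # ~50.5, 101.0, 151.5, 202.0, 252.5
```

Each connection contributes the same ~50 Mb/s term, so the bound grows linearly in the number of sockets.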
Measurements
• To validate the theoretical model, 220 four-minute transmissions were performed from U-M to NASA Ames in San Jose, CA
• Bottleneck was OC-12, MTU = 4418
• 7 runs with MSS = 4366, 1 to 20 sockets
• 2 runs with MSS = 2948, 1 to 20 sockets
• 2 runs with MSS = 1448, 1 to 20 sockets
• Iperf used for transfers; Web100 used to collect TCP observations on the sender side
Actual: MSS 1448 Bytes
[Chart: Measured TCP Bandwidth vs. Number of Sockets (MSS 1448) — Measured Aggregate TCP Bandwidth (Mb/sec) vs. Number of Parallel TCP Connections, 1–20]
Actual: MSS 2948 Bytes
[Chart: Measured TCP Bandwidth vs. Number of Sockets (MSS 2948) — Measured Aggregate TCP Bandwidth (Mb/sec) vs. Number of Parallel TCP Connections, 1–20]
Actual: MSS 4366 Bytes
[Chart: Measured TCP Bandwidth vs. Number of Sockets (MSS 4366) — Measured Aggregate TCP Bandwidth (Mb/sec) vs. Number of Parallel TCP Connections, 1–20]
Sunnyvale – Denver Abilene Link
Initial Tests
Yearly Statistics
Abilene Weather Map
Conclusion
• High-performance network throughput is possible with a combination of host, network, and application tuning, along with parallel TCP connections
• Parallel TCP sockets mitigate the negative effects of packet loss in the random (non-congestion) loss regime
• The effect of parallel TCP sockets is similar to using a larger MSS
• Using parallel sockets is aggressive, but as fair as using a large MSS