1
Impact of Background Traffic on Performance of High-speed TCPs
Injong Rhee
http://www.csc.ncsu.edu/faculty/rhee/
North Carolina State University
Collaborators: Sangtae Ha, Lisong Xu, Long Le
Microsoft Workshop
2
Background
Experiment with Linux 2.6.19, Iperf (1 TCP-SACK flow)
1 Gbit backbone link: NC (USA) – Korea – Japan (special thanks to the research team in Japan)
(Topology diagram: NC – Korea – Japan path, with link delays of 48 ms and 202 ms)
Slow window growth of Reno-style TCP results in under-utilization.
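To see why, a back-of-the-envelope calculation helps. This is only a sketch: the 1500-byte MSS and the 250 ms total RTT are assumptions for illustration, not numbers from the slides.

```python
# Rough estimate: time for Reno-style congestion avoidance to refill
# a 1 Gbit/s transcontinental pipe after one loss halves the window.
# The MSS and total RTT below are assumptions for illustration.

link_bps = 1e9      # 1 Gbit/s backbone link (from the slide)
rtt_s = 0.25        # ~250 ms total RTT (48 ms + 202 ms hops, assumed)
mss = 1500          # bytes per packet, assumed

# Packets in flight needed to keep the link full (bandwidth-delay product).
bdp_pkts = link_bps * rtt_s / (8 * mss)

# After a loss the window halves; it then grows by one packet per RTT,
# so recovery takes about BDP/2 round trips.
recovery_min = (bdp_pkts / 2) * rtt_s / 60

print(f"BDP: {bdp_pkts:.0f} packets, recovery: {recovery_min:.0f} minutes")
```

With these numbers the pipe holds roughly 20,000 packets, and a single loss costs on the order of 40 minutes of ramp-up, which is why Reno-style growth badly under-utilizes such links.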
3
High-Speed TCP Variants
Many high-speed TCP variants have been proposed. How can we evaluate these protocols, and by which criteria?
BIC-TCP
CUBIC
HSTCP
H-TCP
Scalable
FAST
TCP-Westwood
TCP-Africa
CompoundTCP
NewProtocol
TCP-AReno
4
Window growth patterns
(Figure: congestion window size vs. time for BIC-TCP, Scalable TCP, CUBIC, H-TCP, and HSTCP)
NS2-Linux [?], 400 Mbps, 160 ms one-way delay, 100% BDP buffer, no background traffic
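As an illustration of one of these growth patterns: CUBIC's window follows a cubic function of the time since the last loss, W(t) = C(t − K)³ + Wmax. A minimal sketch using the constants from the original CUBIC proposal (C = 0.4, β = 0.2); the Wmax value is a made-up example.

```python
# Illustrative sketch of the CUBIC window-growth curve: concave while
# approaching the window at the last loss (w_max), a plateau near it,
# then convex probing beyond it. w_max here is an invented example.

C, beta = 0.4, 0.2                  # constants from the CUBIC proposal
w_max = 1000.0                      # window (pkts) at the last loss, assumed
K = (w_max * beta / C) ** (1 / 3)   # time (s) to climb back to w_max

def cubic_window(t):
    """Congestion window t seconds after the last loss event."""
    return C * (t - K) ** 3 + w_max

for t in (0, K / 2, K, 1.5 * K):
    print(f"t={t:5.1f}s  W={cubic_window(t):7.1f}")
```

At t = 0 the window restarts at (1 − β)·Wmax = 800, reaches Wmax again at t = K, and grows increasingly fast afterwards, matching the plotted CUBIC curve's shape.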
5
Performance Criteria and Design Tradeoffs
There are many performance criteria:
- Fairness: intra-protocol fairness, RTT-fairness, TCP-friendliness
- Scalability (high link utilization)
- Stability
Not all protocols satisfy all the goals; instead, they make different design tradeoffs. For example, a protocol may give up convergence time to gain more stability, or vice versa.
6
Performance Evaluation Methodology
- Internet experiment: the most realistic tests, but hard to reproduce, and there is no visibility into what happened in the network.
- Simulation or dummynet emulation: easily reproducible and verifiable. Main issue: are they realistic? How do we recreate Internet environments?
- Theoretical analysis: provides important insights into the behavior of protocols, but relies on convenient assumptions and is less useful for comparison (e.g., captures only first-order behaviors).
7
Testbed emulation - recreating the Internet environment.
- Topology: we can't model the complexity of the entire network, so most evaluations focus on one- or few-hop environments (or a dumbbell).
- Workload: to compensate, focus on injecting realistic background traffic into the bottleneck link. Since arriving flows must have traversed many hops, mimicking the traffic pattern seen at one core router has some effect of emulating the topology.
- Not perfect, as it does not let us see the behavior of protocols under multiple bottlenecks; but this can be overcome with a "parking lot" topology, assuming there are only a few bottleneck links.
8
Realistic background traffic
Hard to prove its realism, but we can at least make the statistics similar: measure the traffic on one Internet link and extract its statistical patterns, such as flow sizes, arrival rates, and transmission rates.
Highly detailed recreation of Internet traffic (based on these statistical patterns) is possible. Tools: HARPOON, Tmix, etc.
A quick-and-dirty alternative: just emulate the patterns generally observed in the Internet.
- Arrivals: exponential, heavy-tail
- Flow sizes: a varied form of heavy-tail (different body and tail)
- RTT variations: log-normal
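The quick-and-dirty recipe above can be sketched as a small generator. Every numeric parameter here (arrival rate, log-normal and Pareto parameters) is an invented placeholder; real tools such as HARPOON or Tmix fit them to measured traces.

```python
# Toy background-traffic generator in the spirit of the slide:
# exponential inter-arrivals, log-normal body with a Pareto tail for
# flow sizes, and log-normal RTTs. All parameters are placeholders.

import random

random.seed(1)

def flow_size(body_frac=0.93, ln_mu=8.0, ln_sigma=1.5,
              pareto_alpha=1.2, pareto_min=1_000_000):
    """Flow size in bytes: log-normal body, heavy Pareto tail."""
    if random.random() < body_frac:
        return int(random.lognormvariate(ln_mu, ln_sigma))
    return int(random.paretovariate(pareto_alpha) * pareto_min)

def next_flow(arrival_rate=0.2):
    """(inter-arrival gap s, size bytes, RTT s) for one background flow."""
    gap = random.expovariate(arrival_rate)
    size = flow_size()
    rtt = random.lognormvariate(-2.5, 0.5)   # ~80 ms median RTT, assumed
    return gap, size, rtt
```

Each call to `next_flow()` yields one background flow's inter-arrival gap, size, and RTT; the body/tail split mirrors the Type I mix described on the next slides.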
9
Our work
We study the impact of background traffic patterns on the performance of protocols. This is important for understanding their behavior in Internet-like environments, and it sheds light on the different tradeoffs that different protocols make.
10
Testbed (Dummynet) Setup
18 servers in total for generating background traffic and for sending and receiving protocol flows.
Background traffic is pushed in both forward and backward directions. Long-lived flows: Iperf; short-lived flows: Surge (a web traffic generator). The RTT of each flow is randomly chosen from an input distribution.
Experimental parameters: RTT (40 ms to 320 ms), buffer sizes (1 MB to 8 MB).
11
Five different types of background traffic
- Type I: Surge (log-normal body 93%, Pareto tail 7%), exponential arrivals (rate 0.2)
- Type II: Surge (log-normal body 70%, Pareto tail 30%), minimum file size for the tail 1 MB, exponential arrivals (rate 0.6)
- Type III: Type I (90%) plus P2P traffic (10%); P2P traffic is Pareto with a 3 MB minimum
- Type IV: 100% log-normal body
- Type V: Type II plus 12 long-lived Iperf flows
12
Link utilization and stability
(Plots: no background traffic vs. Type II, 1 MB buffer)
Some protocols reduce utilization when the rate variance of background traffic increases.
13
Link utilization, stability and loss synchronization
(Plots: utilization of high-speed TCP flows and background traffic, with no background vs. Type II)
High rate variations of protocol flows may cause loss synchronization and low utilization.
14
Stability vs. Link utilization
(Plot: protocol stability, measured in CoV (standard deviation divided by mean), vs. link utilization)
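The stability metric used throughout these slides, CoV, is simply the standard deviation of a flow's throughput samples divided by their mean. A minimal sketch (the sample values are hypothetical):

```python
# Coefficient of variation (CoV): standard deviation / mean of a flow's
# throughput samples. Low CoV = stable rate; high CoV = oscillating rate.
from statistics import mean, pstdev

def cov(samples):
    """CoV of a sequence of throughput samples."""
    return pstdev(samples) / mean(samples)

steady = [100, 101, 99, 100]    # Mbps samples, hypothetical stable flow
bursty = [160, 40, 150, 50]     # hypothetical oscillating flow

print(cov(steady))   # near zero -> stable
print(cov(bursty))   # large -> unstable
```

A steady flow has CoV near zero; a flow whose rate swings widely between losses has a large CoV, which is the behavior the scatter plot captures.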
15
Link utilization and stability under various traffic types (HTCP)
(Plots: link utilization and CoV under each traffic type)
16
Fairness (measured in throughput ratio)
(Panels: TCP friendliness (RTT 42 ms, 2 MB buffer); intra-protocol fairness (RTT 82 ms); RTT-fairness (flow 1: 42 ms, flow 2: 162 ms))
Generally, H-TCP shows excellent fairness regardless of traffic type. All protocols improve fairness with more variance in background traffic, but the size of the traffic makes the biggest difference (Type V).
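One plausible way to compute the throughput-ratio metric named in this slide's title; the sample throughputs below are hypothetical, not measured values from the talk.

```python
# Fairness as a throughput ratio: smaller throughput over larger, so
# 1.0 means the two competing flows share the bottleneck perfectly.

def throughput_ratio(a_mbps: float, b_mbps: float) -> float:
    """Ratio of two competing flows' throughputs (1.0 = perfectly fair)."""
    return min(a_mbps, b_mbps) / max(a_mbps, b_mbps)

print(throughput_ratio(300, 100))  # one flow gets 3x the other
print(throughput_ratio(210, 190))  # close to fair
```

The same ratio serves for all three fairness panels; only what the two flows are changes (high-speed vs. regular TCP, two flows of the same protocol, or two flows with different RTTs).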
17
TCP friendliness
(Panels: no background vs. Type V)
Generally, all protocols improve fairness with type V background traffic.
18
TCP-friendliness: another look
Type II traffic with varying numbers of high-speed flows (320 ms RTT). We measured the throughput of the Type II traffic and do not find much difference in throughput.
19
Convergence speed
(Panels: CUBIC and H-TCP, with no background traffic vs. Type II)
20
Conclusion
Types of background traffic reveal "the beast" in disguise. E.g., some protocols trade convergence speed for higher stability; others trade stability for faster convergence and fairness.
Rate variance of background traffic affects both stability and link utilization.
All protocols improve fairness and convergence speed with more background traffic (size matters more than variance).
21
Intra-protocol fairness
(Panels: no background vs. Type V, 2 MB buffer)
22
Intra-protocol fairness (FAST)
Wrong estimation of the minimum RTT causes different flows to run at different rates.
(Type I traffic; 1 MB buffer)
23
Link utilization vs. buffer size
As the buffer space increases, stability improves.
(320 ms RTT)
24
Impact of buffer sizes
Buffer size (1–8 MB), four high-speed flows with the same RTT (320 ms). As the buffer size increases, the CoV of all protocols decreases.
25
Impact of congestion
Buffer size (2 MB), two high-speed flows with the same RTT (40 ms to 320 ms), and a dozen long-lived TCP flows added.
Convex protocols show large variations (the convex ordering still exists).
26
NS2 Simulation Results (Loss Model)