Adaptive Delay-based Congestion Control for High
Bandwidth-Delay Product Networks
Hyungsoo Jung∗, Shin-gyu Kim†, Heon Y. Yeom†, Sooyong Kang‡, Lavy Libman∗
∗School of Information Technologies, University of Sydney, NSW 2006, Australia
†School of Computer Science & Engineering, Seoul National University, Seoul, Korea
‡Division of Computer Science & Engineering, Hanyang University, Seoul, Korea
Email: {hyungsoo.jung, lavy.libman}@sydney.edu.au, {sgkim, yeom}@dcslab.snu.ac.kr, sykang@hanyang.ac.kr
Abstract—The design of an end-to-end Internet congestion control protocol that could achieve high utilization, fair sharing of bottleneck bandwidth, and fast convergence while remaining TCP-friendly is an ongoing challenge that continues to attract considerable research attention. This paper presents ACP, an Adaptive end-to-end Congestion control Protocol that achieves the above goals in high bandwidth-delay product networks where TCP becomes inefficient. The main contribution of ACP is a new form of congestion window control, combining the estimation of the bottleneck queue size and a measure of fair sharing. Specifically, upon detecting congestion, ACP decreases the congestion window size by the exact amount required to empty the bottleneck queue while maintaining high utilization, while the increases of the congestion window are based on a “fairness ratio” metric of each flow, which ensures fast convergence to a fair equilibrium. We demonstrate the benefits of ACP using both ns-2 simulation and experimental measurements of a Linux prototype implementation. In particular, we show that the new protocol is TCP-friendly and allows TCP and ACP flows to coexist in various circumstances, and that ACP indeed behaves more fairly than other TCP variants under heterogeneous round-trip times (RTT).
I. INTRODUCTION
It is well recognized that the Additive Increase Multi-
plicative Decrease (AIMD) [1] congestion control algorithm
employed by TCP [2]–[4] is ill-suited to High Bandwidth-
Delay Product (HBDP) networks. As advances in network
technology increase the prevalence of HBDP networks in
the Internet, the design of an efficient alternative congestion
control mechanism gains in importance. A good congestion
control protocol aims to achieve both high utilization and
fairness while maintaining low bottleneck queue length and
minimizing congestion-induced packet drop rate.
There have been many research efforts that have pro-
posed a variety of protocols and algorithmic techniques to
approach this goal, each with its own merits and shortcomings.
These protocols can be classified into two categories: router-
supported and end-to-end approaches. Router-supported con-
gestion control schemes, like XCP [5], VCP [6], MLCP [7],
and CLTCP [8], generally show excellent performance and
efficient convergence in HBDP networks. However, the in-
cremental rollout of such router-supported protocols remains
a significant challenge, as it requires backward compatibility
with legacy TCP flows. Accordingly, end-to-end congestion
control algorithms are more attractive since they do not require
any special support from routers. However, they still have
an important requirement of TCP-friendliness; since a large
portion of Internet traffic is generated by TCP flows, any new
protocol should not gain an unfair advantage by leaving less
bandwidth to other TCP flows than TCP itself would.
A. Related Work
Previous research efforts in end-to-end congestion control
can be divided into two categories: Delay-based Congestion
Control (DCC) and Packet loss-based Congestion Control
(PCC). PCC performs congestion control reactively by consid-
ering extreme events (packet drops) only, while DCC attempts
to make proactive decisions based on variations in RTT. We
list the most prominent proposals from both categories below.
Jain first proposed DCC in [9]. In 1994, TCP-Vegas was
proposed with the claim of achieving throughput improvement
ranging from 37% to 71% compared with TCP-Reno [10],
[11]. An innovative idea in Vegas is that it detects congestion
by observing the change of a throughput rate and prevents
packet losses proactively. Vegas increases or decreases the
congestion window by a fixed amount in every control interval,
regardless of the degree of congestion. FAST [12] adopts
the minimum RTT to detect the network condition. Because
RTT reflects the bottleneck queuing delay, this mechanism
is effective in determining the network congestion status.
However, use of the minimum of all measured RTT creates
a fairness problem [13], [14]. Moreover, as shown in [15],
[16], the correlation between RTT increase and congestion that
later leads to packet loss events can be weak; in particular, the
RTT probe performed by TCP is too coarse to correctly predict
congestion events [17].
PCC uses the loss of a packet as a clear indication that
the network is highly congested and the bottleneck queue is
full. Most TCP variants belong to this class, and control the
congestion window based on the AIMD policy. Retaining the
AIMD policy guarantees TCP-friendliness. However, the pure
additive increase policy significantly degrades utilization in
HBDP networks. To improve performance in this environment,
many solutions have been proposed. HSTCP [18] extends
standard TCP by adaptively setting the increase/decrease pa-
rameters according to the congestion window size. HTCP [19]
employs a similar control policy, but modifies the increase
parameter based on the elapsed time since the last congestion
event. STCP [20] has a Multiplicative Increase Multiplicative
Decrease (MIMD) control policy to ensure that the congestion
window can be doubled in a fixed number of RTTs. BIC [21]
and CUBIC [22] focus on RTT fairness properties by adding
a binary search and a curve-fitting algorithm to the additive
increase and multiplicative decrease phases. LTCP [23] modifies
the TCP flow to behave as a collection of virtual flows
for efficient bandwidth probing, while retaining the AIMD
features.

(This paper was presented as part of the main technical program at IEEE INFOCOM 2011. 978-1-4244-9920-5/11/$26.00 ©2011 IEEE)
We point out that there exist alternative approaches and
standards for Internet congestion control, such as DCCP [24]
and TFRC [25], using equation-based methods that break
away from TCP’s concept of a congestion window altogether.
However, our work focuses on proposing an improved yet
backward-compatible congestion control protocol, and a de-
tailed discussion of the pros and cons of equation- vs window-
based congestion control is beyond the scope of this paper.
B. Motivation
Unlike router-supported approaches, both delay- and packet
loss-based end-to-end approaches have a fundamental limita-
tion in quantitatively recognizing the load status of a bottle-
neck link, which makes it difficult for them to achieve the
goal of high utilization with fairness and fast convergence.
The lack of detailed link information forces existing end-to-
end protocols to take the following philosophy on congestion
control: (1) probe spare bandwidth by increasing congestion
window until congestion occurs; and (2) decrease window
in response to an indication of congestion. We proceed to
describe our motivation for possible improvements in each of
these phases.
1) Acquiring available bandwidth: When there is no con-
gestion event, end-to-end protocols actively probe and acquire
available bandwidth. Ideally, we would like bandwidth to be
probed as quickly as possible; this is especially important to
a new flow entering the network and starting to compete for
its share of the bottleneck capacity. On the other hand, a flow
must not be too greedy in utilizing the spare bandwidth of
competing flows, which means that flows already consuming
their fair share of bandwidth should increase their congestion
window slowly even when there are no packet drops.
From this perspective, TCP with its Additive Increase (AI)
probing is very slow compared with other non-AI based
protocols such as HSTCP, STCP, and CUBIC. Multiplicative
Increase (MI) seems to be a more attractive method, because
of its fast increase rate; however, the MI policy in many
cases carries the hazard of throughput instability (i.e., large
drops near the saturation point). The fundamental reason that
probing speed and stability are hard to achieve simultaneously
is the difficulty of measuring the instantaneous fairness among
flows whose data rates change dynamically. In router-assisted
approaches, such as XCP and VCP, routers continuously
measure the degree of fairness for each flow and reflect it in
the amount of positive feedback conveyed to each flow. This
observation motivates us to apply the same approach in an end-
to-end protocol. Specifically, we propose to use the Fairness
Ratio (FR) metric (the ratio between the current bandwidth
share of a flow and its fair share in equilibrium) to adjust
the congestion window management; if a flow with a small
FR value increases its congestion window more aggressively
than one with a large FR, then the protocol will achieve fast
convergence to a fair equilibrium.
2) Releasing bandwidth when congestion occurs: When
informed of congestion, a flow should release its bandwidth
by decreasing the congestion window. PCC protocols generally
adopt the Multiplicative Decrease (MD) policy, reducing the
window to a fraction (e.g. half) of its size upon a congestion
event. This leads to under-utilization of the bottleneck link
when the capacity of its buffer is not large enough (e.g. less
than 100% of the BDP). DCC protocols fare little better, due
to the slow responsiveness of the RTT as an indicator of
the bottleneck buffer length. Clearly, if one can accurately
estimate the number of packets in the bottleneck queue, then
the congestion window size can be decreased by the exact
amount necessary when the queue starts to grow (even before
packets are lost) and lead to a perfect utilization. Motivated by
this observation, we propose to base the congestion window
management during the decreasing phase on the gap between
the sending rate (throughput) and receiving rate (goodput) of
the flow, or, in other words, the difference between the number
of packets sent and received in the duration of an RTT. This
control variable provides much more timely and fine-grained
information about the buffer status than merely variations of
the RTT itself, and can be implemented with little cost by
leveraging unused fields in the TCP header.
C. Contributions of This Work
In this paper, we describe a complete congestion control
algorithm based on the idea of using the gap between receiving
and sending rates, or the goodput-throughput (G-T) differ-
ence, as a control variable for management of the congestion
window. The G-T difference has been introduced in our
earlier work [26], which proposed the TCP-GT protocol and
showed its advantages in terms of high utilization and fast
convergence. However, TCP-GT did not concern itself with
the issues of TCP-friendliness and fairness, especially among
flows with heterogeneous RTT values. In this paper, we build
upon and significantly extend our previous work to show
how the goodput and throughput information can be used
to estimate the Fairness Ratio (FR) of the flow, leading to
a fast and precise estimation of its fair equilibrium bandwidth
share. Consequently, we design a novel end-to-end adaptive
congestion control protocol, ACP, which achieves the goals of
high utilization and fast convergence to a fair equilibrium,
and can be readily implemented in practice by leveraging
the existing TCP’s optional header fields. We demonstrate the
superior performance of ACP under a wide range of scenarios,
using both simulation and experimental measurements from
its implementation in Linux.¹ In particular, we show that
ACP flows and TCP flows share the bandwidth fairly if their
RTTs are comparable, and even with different RTTs, ACP
flows exhibit a fairer behavior towards TCP flows than other
protocols in the literature.
¹ We emphasize that the source code of the ACP simulation and Linux implementation is openly available from http://dcslab.snu.ac.kr/acp/.
II. PROTOCOL DESCRIPTION
In this section, we provide a detailed description of ACP. We
begin by briefly revisiting the measurement process defined
in [26], and then describe the estimation of two values at the
core of the ACP design: the queue growth and the fairness
ratio. Importantly, these are designed to be independent of RTT
heterogeneity, which is the root cause of the RTT unfairness
problem. We then describe how these values are used in the
various states/phases of the protocol.
A. Goodput, throughput and delay variation measurement
ACP measures the sending and receiving rate (throughput
and goodput, respectively) by counting the number of packets
within a predetermined time duration at the sender, which
we call an epoch. In addition, an ACP sender keeps track
of variations in the forward packet delay, td, during the
epoch. All the above information is obtained with the help
of the receiver that timestamps each received packet and
conveys the timestamps back to the sender, piggybacked in
the acknowledgments.
As we shall explain below, the forward delay variations
are instrumental in the estimation of the bottleneck queue
length, while the difference between goodput and throughput
in an epoch (denoted by Φ, Φ = goodput − throughput)
indicates the level of congestion in the network. Indeed, in
a congested network, the bottleneck link receives packets at
a rate higher than its capacity, which is the limit on the
maximum goodput at the receiving side. Therefore, when
a network gets congested, throughput becomes larger than
goodput (Φ < 0), and the forward delay td increases. On
the other hand, when congestion is being relieved and packets
are emptied from the bottleneck buffer at a higher rate than
new packets arrive, goodput becomes larger than throughput
(Φ > 0). The absolute value of Φ corresponds to the severity
of congestion. For further details on the goodput, throughput
and delay variation measurement process and its properties
and benefits, see [26].
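The congestion signal described above amounts to comparing per-epoch sending and receiving rates. A minimal sketch (function and variable names are ours, not from the paper):

```python
def congestion_signal(sent_pkts, recv_pkts, epoch_len):
    """Compute the goodput-throughput difference Φ for one epoch.

    sent_pkts -- packets the sender transmitted during the epoch
    recv_pkts -- packets the receiver reported back (via echoed timestamps)
    epoch_len -- epoch duration in seconds
    """
    throughput = sent_pkts / epoch_len   # sending rate
    goodput = recv_pkts / epoch_len      # receiving rate
    return goodput - throughput          # Φ < 0: queue building; Φ > 0: draining
```

A negative Φ indicates the bottleneck is enqueuing packets faster than it drains them; a positive Φ indicates the queue is emptying, matching the sign convention above.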
B. Estimation of Link Information
We use the delay variation measurement to estimate two
types of link information: the queue growth at the bottleneck
and the flow’s degree of fairness in sharing the bandwidth. All
the estimations are conducted using a common control period
independent of the flow’s RTT. We denote this common control
period as tc.
1) Queue Growth Estimation: The estimation of the num-
ber of buffered packets in a bottleneck queue is essential for
achieving the goal of high link utilization during periods of
congestion, since it enables the sender to reduce the congestion window
as much as possible without allowing the queue to underflow.
This is similar to the efficiency control in XCP, which is
achieved with the help of explicit router feedback [5].
Consider, initially, a network with a single bottleneck link
and a single flow with a round trip delay of RTT . Suppose
that, during a period of congestion (i.e. the bottleneck queue
is non-empty and the link is transmitting at its maximum
capacity), the sender increases the congestion window cwnd to
cwnd + ∆cwnd over a period of one RTT, corresponding to
a throughput of (cwnd + ∆cwnd)/RTT. Since the goodput is
limited by the bottleneck capacity, the queueing delay must
grow by some amount, ∆RTT, so as to limit the goodput to
(cwnd + ∆cwnd)/(RTT + ∆RTT).²
Thus, from ∆cwnd (which is known) and ∆RTT (which is
measured by the sender), one can obtain Q_RTT, the queue
growth (i.e. number of additional packets enqueued) during
the RTT period, as follows:

    Q_RTT = [(cwnd + ∆cwnd)/(RTT + ∆RTT)] · ∆RTT    (1)
We further generalize equation (1) to allow the estimation
of queue growth during an arbitrary period T, denoted by
Q_T. This involves using the goodput and delay variation (on
the right-hand side of (1)) over a period of T, instead of over
a single RTT. Denote the goodput measured over a period T
by G_T, the total number of packets sent during T by P_T, and
the corresponding increase in forward queuing delay at the
bottleneck by ∆t_d. The quantities P_T and ∆t_d are obtained
by the ACP measurement process, and from that, we can find
G_T and the value of Q_T as follows:

    G_T = P_T/(T + ∆t_d),    Q_T = G_T · ∆t_d    (2)
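Equation (2) translates directly into code; a minimal sketch under the paper's definitions (names are ours):

```python
def queue_growth(packets_sent, period, fwd_delay_increase):
    """Estimate bottleneck queue growth Q_T over a period T (equation (2)).

    packets_sent       -- P_T, packets transmitted during the period
    period             -- T, length of the period in seconds
    fwd_delay_increase -- ∆t_d, growth in the forward one-way delay (seconds)
    """
    goodput = packets_sent / (period + fwd_delay_increase)  # G_T
    return goodput * fwd_delay_increase                     # Q_T
```

For example, sending 1000 packets over 200 ms while the forward delay grows by 50 ms yields a goodput of 4000 pkt/s and an estimated 200 packets added to the queue.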
2) Fairness Ratio Estimation: The purpose of the fairness
ratio estimation is to speed up the convergence to a fair band-
width allocation, by having flows increase their congestion
window more aggressively when their current bandwidth share
is below its fair level. We first define the real fairness ratio as
follows.
Definition 1: Suppose there are n flows in a bottleneck link
whose capacity is C. Then the real fairness ratio of flow i,
denoted by F_i, is the ratio of the throughput of flow i at time
t, w_i(t), to its fair share of link capacity, C/n:

    F_i = n · w_i(t) / C
We now illustrate the intuition behind the fairness ratio
estimation. Consider a single link bottleneck traversed by n
flows with the same RTT that is equal to t_c. At the onset of
congestion, each flow i has a throughput of w_i(t) = cwnd_i(t)/t_c.
If any flow increases its congestion window, the excess packets
will join the queue, causing an increase in the queuing delay.
Suppose that all flows increase the congestion window by the
same amount ∆cwnd at the same time; then, at the end of
one RTT, the bottleneck queue will have n · ∆cwnd packets
queued up. However, the actual number of queued packets
belonging to each flow will not be the same unless all flows
had the same congestion window size originally. Specifically,
the number of new packets each flow contributes to the queue
is proportional to w_i(t) / Σ_{j=1}^{n} w_j(t). This observation can be used to
estimate the fairness ratio. If a flow i increases the congestion
window when the link is fully utilized, it expects to see an
increase in the queuing delay; if the link is shared perfectly
fairly, this increase should be the same as if the flow had been
alone in a bottleneck link of capacity equal to wi(t). Therefore,
² For simplicity, we assume that the entire increase of ∆RTT corresponds to the forward (data) queuing delay only, while the reverse (ACK) queuing delay, if any, is stable and does not change during congestion.
TABLE I
FLOW STATES & CONTROL POLICY

Policy   Congestion State     Fairness State   Loss Event
AI       Φ ≥ 0                N/A              No
AI       Φ < 0                F̂ ≥ 1            No
AI       Φ < 0                F̂ < 1            No
AD       Φ < 0, Q ≥ γ·cwnd    N/A              No
AD       Φ < 0, φ > 0         N/A              No
AD       N/A                  N/A              Yes
by comparing the actual delay increase with the expected one,
the flow can deduce the status of the link. If the observed
delay increase is greater than expected, the flow is currently
using more than its fair share, and conversely a smaller than
expected delay increase indicates a throughput below the fair
share of the flow.
Based on the above intuition, we now define the estimated
fairness ratio for flow i as the ratio of the queue growth to the
window increase:

    F̂_i = Q_i^{t_c} / ∆cwnd    (3)

where Q_i^{t_c} is the queue growth during a single t_c for flow
i, estimated as described by equation (2). The validity of the
estimation F̂_i is established by the following theorem.³
Theorem 1: Assume that a bottleneck link is fully saturated
with traffic from ACP flows (and no other traffic), and that all
ACP flows increase their window by the same amount in a
control period tc. Then, for every ACP flow i, the estimated
fairness ratio is equal to the real fairness ratio during the
control period: F̂i = Fi.
Thus, a sender whose current sending rate is relatively
slow will find, upon increasing its congestion window, that the
corresponding Q^{t_c} is smaller than ∆cwnd, so the ratio
Q^{t_c}/∆cwnd is less than 1. We discuss how the increase of the
congestion window is based on the fairness ratio in the next section.
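Equation (3) itself is a single division once the per-period queue growth from equation (2) is available; a minimal sketch (names are ours):

```python
def estimated_fairness_ratio(queue_growth_tc, cwnd_increase):
    """F̂ = Q^{tc} / ∆cwnd (equation (3)).

    queue_growth_tc -- Q^{tc}: this flow's queue growth over one control
                       period t_c, estimated via equation (2)
    cwnd_increase   -- ∆cwnd: the window increase applied in that period
    """
    return queue_growth_tc / cwnd_increase

# A slow flow queues fewer packets than its window increase, so F̂ < 1
# and it will later increase its window more aggressively.
```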
C. Flow States and the Congestion Window Control Policy
As explained above, the congestion window control policy
of ACP is based on two parameters. The first is the goodput-
throughput difference Φ, the sign of which indicates whether
or not the network is congested; this is similar to the load
factor congestion signal proposed in [28] and computed by
the routers in XCP/VCP, except that here we estimate it using
end-to-end measurements only. The second is the fairness
ratio estimate of the flow, F̂ . We now proceed to describe
in detail the actions of ACP under different combinations of
these parameters. Overall, we define six possible states the
flow can be in; these are summarized in Table I. The first
three states are for the increase phase and the remaining three
states are for the decrease phase.
Because ACP is adaptively adjusting the congestion window
based on these state combinations, we refer to the congestion
control policy of ACP as an Adaptive Increase Adaptive De-
crease (AIAD) algorithm. We express the congestion window
and the estimated number of queued packets as a function of
³ Due to space constraints, the proofs of all theorems are omitted from this paper and can be found in [27].
time t, denoted respectively by cwnd(t) and Q(t). An ACP
sender applies one of the following steps based on the flow
state combination, as shown in Table I:
    AI: cwnd(t + t_c) = cwnd(t) + f_AI(t)
    AD: cwnd ← cwnd(t) − Q(t)
where fAI(t) > 0 is defined further below. Thus, while
window increases happen with every control period tc, window
decreases are made as soon as the flow is detected to enter
an AD state. The window decrease amount is always the
estimated amount of the flow’s queued packets from the start
of the epoch, leading to a fast drop of the excess congestion
while maintaining a high utilization of the bottleneck link.
1) Setting the AI parameters: Fast convergence to the fair
equilibrium is one of the unique characteristics of ACP. This
is achieved by the fAI(t) function that governs the congestion
window increases in the different states. The key requirements
for the choice of this function are as follows:
• If Φ ≥ 0, increase the congestion window to acquire
spare bandwidth quickly (fast probing).
• If Φ < 0 and F̂ ≥ 1, increase the congestion window by
a constant amount (humble increase).
• If Φ < 0 and 0 ≤ F̂ < 1, increase the congestion
window according to F̂ so that the window approaches
the fairness point quickly (fairness claiming).
Accordingly, we choose the AI function as follows:
    f_AI(t) =
        α · ⌊(t − t_0)/t_c⌋     if Φ ≥ 0
        α                       if Φ < 0, F̂ ≥ 1
        α + κ · (1 − F̂)²        if Φ < 0, 0 ≤ F̂ < 1
where α > 0, κ > 0, and t0 denotes the time that Φ ≥ 0 is
first detected.
The goal of the first requirement is to allow a flow to acquire
spare bandwidth in a non-congested network quickly. If Φ ≥ 0 (i.e., the network is not congested), we increase ∆cwnd by α
segments per control period tc until the congestion signal is
detected. Hence this is called the fast probing phase.
When Φ < 0 (i.e., network congestion is imminent), the
function fAI(t) depends on the fairness ratio. The humble
increase phase is entered when F̂ ≥ 1. In this case, we
increase cwnd by α segments; throughout this paper we set
α = 1 so that ACP behaves similarly to TCP-NewReno during
this phase. Otherwise, when 0 ≤ F̂ < 1, the flow moves to
a fairness claiming phase, in which it increases its congestion
window with greater steps that depend on F̂ (becoming larger
as F̂ approaches 0). The fairness convergence time is primarily
determined by the fairness claiming parameter κ. If κ is too
small, convergence will take a long time, while setting κ too
large may cause unstable fluctuations. For a moderate and
stable convergence, we choose to set κ so that κ · (1 − F̂)² = 1 when F̂ = 0.8, which leads to κ = 25. Thus, when a flow
reaches 80% of its fair share, its fairness claiming window
increase rate will be double that of TCP-NewReno. Since
an ACP sender keeps measuring F̂ in every control period
tc, the difference between the flows’ window increase steps
is reduced as F̂ approaches 1, ensuring a fast and smooth
convergence.
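The three cases of f_AI(t), with the parameter values used throughout the paper (α = 1, κ = 25, t_c = 200 ms), can be sketched as follows (an illustrative transcription, not the authors' implementation):

```python
import math

ALPHA = 1    # AI parameter (Table II)
KAPPA = 25   # fairness-claiming parameter (Table II)
T_C = 0.2    # control period t_c in seconds (Table II)

def f_ai(phi, fairness, t, t0):
    """Window increase per control period, following the three cases of f_AI(t).

    phi      -- goodput-throughput difference Φ over the epoch
    fairness -- estimated fairness ratio F̂
    t, t0    -- current time and the time Φ ≥ 0 was first detected
    """
    if phi >= 0:                                  # fast probing
        return ALPHA * math.floor((t - t0) / T_C)
    if fairness >= 1:                             # humble increase
        return ALPHA
    return ALPHA + KAPPA * (1 - fairness) ** 2    # fairness claiming
```

Note that during fast probing the increment itself grows by α every control period, which is what the floor term implements; at F̂ = 0.8 the fairness-claiming case returns 2, i.e. double the humble-increase step.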
TABLE II
ACP PARAMETER SETTING

Parameter   Value    Meaning
t_c         200 ms   the control interval
α           1        the AI parameter when F̂ ≥ 1
κ           25       the parameter for f_AI(t) when F̂ < 1
γ           0.2      the threshold parameter for the early control
2) Early control state for TCP-friendliness: Generally, it
is very difficult to achieve a high link utilization while being
friendly to TCP, a protocol that responds to packet loss events
by giving up a significant amount of link capacity. In order
to maintain TCP-friendliness, we introduce a state with the
aim of reducing the congestion window and yielding a certain
portion of occupied bandwidth to TCP before a packet loss
occurs. We call this state an early control state. The early
control is invoked in two cases. The first case is when a
flow detects that Q reaches a predefined ratio of the current
congestion window, γ · cwnd, where γ > 0. The second case,
designed to synchronize the bandwidth yielding across all
ACP flows, occurs when a flow detects that other flows have
commenced a window reduction. When another flow reduces
its congestion window, the total bottleneck queue level drops
suddenly. This is reflected by a sudden increase in goodput,
leading to a positive goodput-throughput difference over a
control period tc (which we denote by φ, as opposed to Φ, which is the difference over a longer epoch); therefore, a
combination of positive φ while Φ < 0 is an indication that
another flow must have reduced its window.
The choice of the parameter γ is influenced by two factors:
the size of the congestion window and the size of router
buffers. If an ACP flow has a long RTT, then it has a large
window, raising the threshold of γ · cwnd(t). This implies
that a flow with a short RTT usually has more chance to
invoke the early control. When a bottleneck buffer capacity
is smaller than γ · cwnd(t), the window will be decreased not
by the early control but by a packet loss event. To make an
ACP flow reduce the congestion window by the early control,
γ ·cwnd(t) should be less than the queue capacity. However, if
γ is too small, the early control becomes sensitive to the delay
variation. Because TCP adopts a window backoff factor of 1/2,
this should be an upper bound of γ as well. In our simulations
and experiments, our choice of γ is 0.2, which operates well
in a variety of network environments.
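The two early-control triggers described above amount to a simple predicate; a sketch with our own names, where Q(t) is the flow's estimated queue contribution:

```python
GAMMA = 0.2  # early-control threshold γ (Table II)

def should_early_control(phi_epoch, phi_period, queue_est, cwnd):
    """Early-control trigger, per the two cases described in the text.

    phi_epoch  -- Φ, goodput-throughput difference over the epoch
    phi_period -- φ, the same difference over one control period t_c
    queue_est  -- Q(t), estimated packets this flow holds in the queue
    cwnd       -- current congestion window (packets)
    """
    if phi_epoch >= 0:
        return False                 # network not congested
    # Case 1: this flow's queue share reached γ·cwnd.
    # Case 2: another flow reduced its window (φ > 0 while Φ < 0).
    return queue_est > GAMMA * cwnd or phi_period > 0
```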
Finally, we set the common control period to be t_c = 200 ms, which is a conservatively long interval. Similarly to XCP,
choosing a conservative control period involves a tradeoff of
a somewhat slower response for greater stability of estimates;
we are able to afford this choice considering that the ACP
congestion control algorithm employs many other techniques
to bring about a very fast convergence anyway.
Table II summarizes the set of ACP parameters and their
typical values, and the overall congestion control algorithm of
ACP is summarized in Algorithm 1. We emphasize that the
same parameter values are used in all the simulations reported
below, which highlights the robustness of the congestion
control algorithm in a wide variety of scenarios.
To conclude this section, we state the following TCP-
Algorithm 1 The Congestion Control Algorithm of ACP
 1: Input: Φ, φ, F̂, Q(t), and cwnd(t)
 2:
 3: if DUPACKs then
 4:     cwnd(t) = cwnd(t) − Q(t);
 5: else
 6:     if Φ ≥ 0 then
 7:         /* Fast probing */
 8:         cwnd(t) = cwnd(t) + α · ⌊(t − t_0)/t_c⌋;
 9:     else if Φ < 0 then
10:         if (φ > 0) or (Q(t) > γ · cwnd(t)) then
11:             /* Early control */
12:             cwnd(t) = cwnd(t) − Q(t);
13:         else if φ < 0 then
14:             if F̂ ≥ 1 then
15:                 /* Humble increase */
16:                 cwnd(t) = cwnd(t) + α;
17:             else if 0 ≤ F̂ < 1 then
18:                 /* Fairness claiming */
19:                 cwnd(t) = cwnd(t) + α + κ · (1 − F̂)²;
20:             end if
21:         end if
22:     end if
23: end if
friendliness property of the ACP algorithm, which is proved
analytically in the extended version of the paper [27].
Theorem 2: Assume that one TCP flow and one ACP
flow share a network with a bottleneck capacity of C, and
their round trip delays are RTT_tcp and RTT_acp, where
RTT_tcp, RTT_acp ≠ t_c. Let w_tcp(t) and w_acp(t) denote the
throughput of TCP and ACP at time t. Then the (real) fairness
ratio of the ACP flow, F(t) = 2·w_acp(t)/C, converges to one of
the following values:
Case 1: If the buffer is sufficiently large for the window to
be reduced by early control (avoiding packet losses):

    F(t) → min{ 2 · [α·RTT_tcp/t_c + 1 − √(α·RTT_tcp/t_c + 1)] / [α·RTT_tcp/t_c + 1], 1 };

Case 2: If the buffer is too small for early control and the
window is reduced by a packet loss indication:

    F(t) → 2 · [α·RTT_tcp/t_c] / [α·RTT_tcp/t_c + 1].
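As a numeric sanity check on Theorem 2 (a small script of our own, using the Table II values α = 1 and t_c = 200 ms), both limits can be evaluated directly:

```python
import math

ALPHA, T_C = 1, 0.2  # α and t_c from Table II

def fairness_limit_early_control(rtt_tcp):
    """Case 1 of Theorem 2: window reduced by early control."""
    x = ALPHA * rtt_tcp / T_C
    return min(2 * (x + 1 - math.sqrt(x + 1)) / (x + 1), 1.0)

def fairness_limit_loss(rtt_tcp):
    """Case 2 of Theorem 2: window reduced by a packet loss."""
    x = ALPHA * rtt_tcp / T_C
    return 2 * x / (x + 1)

# With RTT_tcp = 200 ms (so x = 1): Case 1 gives 2 - sqrt(2) ≈ 0.586,
# while Case 2 gives 2·1/(1+1) = 1, i.e. the ACP flow converges to its
# fair share of C/2.
```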
III. PERFORMANCE EVALUATION
We now conduct an extensive simulation study to compare
the performance of ACP with that of different TCP flavours
both in conventional and high bandwidth-delay environments.
In particular, we pay special attention to the fairness ACP
achieves in scenarios with different RTTs. Experimental mea-
surement results with our Linux implementation of ACP will
be presented in the last subsection.
Our simulations use the popular packet-level simulator
ns-2 [29], which we have extended with an ACP module.
We compare ACP with TCP-NewReno, FAST, CUBIC, and
HSTCP over the drop tail queuing discipline. Unless stated
otherwise, we use a dumbbell topology with the bottleneck
queue size set equal to 100% of the BDP, computed from the shortest RTT and the link capacity.
A. The Dynamics of ACP
This section presents the short term dynamics of ACP. In
particular, we show that ACP’s throughput, congestion win-
dow, utilization, and queue size show better performance than
Fig. 1. ACP convergence, utilization and fairness with delayed start of flows with wide range of RTT (20-200 ms). (Figure omitted: per-flow throughput (Mbps), congestion window (pkts), bottleneck queue (pkts; queue size 750) and bottleneck utilization (%) over 1800 s, for five flows with RTTs of 20, 65, 110, 155 and 200 ms.)
Fig. 2. ACP is robust against sudden changes in traffic demands. We started 50 FTP flows sharing a bottleneck. At t = 200 s, we started 150 additional flows. At t = 400 s, these 150 flows were suddenly stopped and the original 50 flows were left to stabilize again. (Figure omitted: congestion window (packets), bottleneck utilization (%) and bottleneck queue (packets; queue size 6250) over 600 s, for flows with RTTs of 62-122 ms.)
any of the existing end-to-end protocols has ever achieved.
Therefore, the average behavior presented in the section above
is highly representative of the general behavior of ACP.
Convergence Behavior: To study the convergence of ACP,
we conducted a simulation with a single bottleneck link with
a bandwidth of 300Mbps in which we introduced 5 flows
into the system, joining the network 300s apart from each
other. The RTT values of the five flows are spread evenly
between 20ms and 200ms. Figure 1 clearly shows that each
ACP flow reacts accurately to the changing circumstances,
with only small and temporary disturbance to fairness and
high link utilization, despite the high bandwidth and very large
differences of RTT.
Sudden Demand Change: In this simulation, we examine
performance as traffic demands and dynamics vary consider-
ably. We start the simulation with 50 long-lived FTP flows
sharing a 1Gbps bottleneck with RTTs evenly spread between
50ms and 200ms. Then, we add WWW background traffic
consisting of 10 users, 200 sessions, page inter-arrival time of
4s, and 10 objects per page with inter-arrival time of 0.01s. The
transfer size of each object is drawn from a Pareto distribution
with an average of 3000 bytes. At t=200s, we start 150
new FTP flows, again with RTTs spread evenly in the range
[50ms,200ms], and let them stabilize. Finally, at t=400s, we
terminate these 150 flows and leave only the original 50 flows
in the system. Figure 2 shows that ACP adapts well to these
sudden traffic changes, and quickly reaches high utilization
and fair allocation with different RTTs.
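The WWW background traffic above draws object sizes from a Pareto distribution with a mean of 3000 bytes. The following minimal sketch shows how such sizes can be sampled; the shape parameter is our assumption, since the paper specifies only the mean.

```python
# WWW object sizes drawn from a Pareto distribution with mean 3000 bytes,
# as in the simulation above. The shape parameter is an assumption; the
# paper only specifies the mean.
import random

shape = 1.2                         # assumed heavy-tailed shape (> 1 so the mean exists)
scale = 3000 * (shape - 1) / shape  # scale chosen so the mean is 3000 bytes
sizes = [random.paretovariate(shape) * scale for _ in range(100_000)]
print(min(sizes) >= scale)          # True: Pareto samples never fall below the scale
```

With a shape this close to 1 the distribution is very heavy-tailed, so the sample mean converges to 3000 bytes only slowly.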
B. Efficiency
To evaluate the efficiency of the various protocols, we con-
sider two flows with the same propagation delay (RTT=50ms)
and measure average throughput over a simulation time of
500s, for various buffer capacities ranging from 10% to 100%
of the bandwidth-delay product. Figure 3 shows the results for
a 100Mbps and a 1Gbps link. For both link capacities, at lower
buffer sizes ACP and CUBIC achieved greater link utilization
than TCP-NewReno, FAST, and HSTCP. This is a result of
the accurate window downsizing of ACP, as opposed to the
backoff factor of 0.5 used by TCP-NewReno and HSTCP. In
[Figure 3 plot: bottleneck utilization (%) vs. bottleneck queue size (fraction of BDP, 0.1-1.0) for ACP, FAST, NewReno, HSTCP, and CUBIC at both 100Mbps and 1Gbps.]
Fig. 3. Aggregate throughput of two competing flows with 100Mbps and 1Gbps bottleneck bandwidths.
[Figure 4 plot: normalized throughput (log scale, 0.01-1) vs. RTT (20-180 ms) for ACP, FAST, NewReno, HSTCP, and CUBIC.]
Fig. 4. Ratio of throughputs of two competing flows as the propagation delay of the second flow is varied.
addition, we note the inability of FAST to achieve a reasonable
utilization with small buffers, i.e. around 20% of BDP or less,
although the cause of that effect requires further investigation.
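The difference between ACP's accurate window downsizing and the 0.5 backoff factor can be illustrated schematically (numbers below are hypothetical): halving a window that exceeds the bandwidth-delay product by only a small backlog leaves the link idle, whereas removing exactly the estimated backlog keeps the pipe full.

```python
# Schematic comparison of decrease rules at a congestion event (numbers
# are illustrative, not taken from the experiments).

def backoff_half(cwnd):
    """TCP-NewReno/HSTCP-style multiplicative decrease with factor 0.5."""
    return cwnd // 2

def drain_queue(cwnd, queued_pkts):
    """ACP-style decrease: subtract exactly the estimated queued packets."""
    return cwnd - queued_pkts

bdp = 1000                            # packets in flight needed to keep the link full
cwnd = 1100                           # window at congestion: 100 packets sit in the queue
print(backoff_half(cwnd))             # 550  -> well below the BDP, link goes idle
print(drain_queue(cwnd, cwnd - bdp))  # 1000 -> queue empties, pipe stays full
```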
C. RTT Fairness
1) Dumbbell topology: Two Flows: In this experiment, we
measure the RTT fairness of two competing flows that use the
same protocol with a 200Mbps bottleneck. We fix the RTT
of one flow to 200ms and vary the RTT of the other flow in
the range [20ms,180ms] with a 10ms step. Figure 4 displays
the throughput ratio between the flows, showing that ACP and
FAST are the most fair; among the other protocols, CUBIC is
better than TCP-NewReno and HSTCP because of its linear
RTT fairness feature.
Multiple Flows: This experiment tests the fairness of
ACP and other protocols with multiple competing flows with
different RTTs. We have 20 long-lived FTP flows sharing a
single 1Gbps bottleneck. We conduct three sets of simulations.
In the first set, all flows have a common round trip propagation
delay of 50 ms. In the second set of simulations, the flows
have different RTTs in the range [20ms,115ms] (evenly spaced
at increments of 5ms). In the third set, the flows again have
different RTTs in a wider range of [20ms,210ms] (increments
[Figure 5 plots: per-flow throughput (Mbps) vs. flow ID (1-20) for ACP, FAST, NewReno, HSTCP, and CUBIC; panels: (a) Equal RTT, (b) Different RTT (20-115ms), with HSTCP outliers at (flow 1, 164 Mbps) and (flow 18, 240 Mbps), and (c) Very Different RTT (20-210ms), with outliers HSTCP (flow 1, 465 Mbps) and NewReno (flow 1, 215 Mbps).]
Fig. 5. Bandwidth share among multiple competing flows with either equal or different RTTs.
[Figure 6 plots: bottleneck utilization (%) per link ID (1-9), and throughput (Mbps) of the all-hop flow over time, for ACP, FAST, NewReno, HSTCP, and CUBIC.]
Fig. 6. A string of multiple congested queues, shared by an all-hop flow and separate flows over individual links. Link 5 has a lower capacity than the rest.
of 10ms). In all the scenarios, the buffer capacity is set to
100% of the BDP of the flow with the lowest RTT.
As presented in [30], multiple TCP flows with heterogeneous RTTs generally achieve bandwidth allocations that are proportional to 1/RTT^z, where 1 ≤ z ≤ 2. CUBIC [22] improves
this ratio by featuring a linear RTT fairness mechanism. We
indeed observe the fairer bandwidth distribution of CUBIC
in Figure 5. However, ACP avoids the RTT fairness problem
altogether and consistently achieves an even distribution of
capacity among the competing flows, even when those flows
have significantly different RTTs.
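The practical impact of the 1/RTT^z allocation can be worked through with a small example, using flow RTTs taken from the scenario above:

```python
# Share ratio between a short-RTT and a long-RTT flow when bandwidth
# goes as 1/RTT^z (cf. [30]); RTTs in milliseconds.

def throughput_ratio(rtt_short_ms, rtt_long_ms, z):
    return (rtt_long_ms / rtt_short_ms) ** z

print(throughput_ratio(20, 200, 1))  # 10.0  (linear RTT unfairness, the CUBIC-like case)
print(throughput_ratio(20, 200, 2))  # 100.0 (the worst case, z = 2)
```

ACP's fixed control period makes this ratio 1 by design, independent of z.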
2) A more complex topology: We now test the various
protocols using the 9-link topology shown in Figure 6. Here,
link 5 has the lowest capacity of 100Mbps, whereas all others
are 200Mbps links. Every link has the same round trip delay of
20ms. There is one flow traversing all 9 hops with an RTT of
180ms, and nine additional cross-flows (denoted by the small
dashed arrows) that only traverse one individual link each.
Figure 6 shows the average utilization and throughput of the
all-hop flow in this scenario. Here, ACP and FAST are the
only protocols that guarantee a fair throughput to the all-hop
flow; with all other protocols, the all-hop flow suffers from a
low bandwidth share due to packet losses in multiple links.
D. TCP-Friendliness of ACP
Figure 7 shows the throughput obtained by competing TCP
and ACP flows, with various combinations of number of flows
of each type. Here, the bottleneck capacity was set at 500Mbps
and the round trip propagation delay was 200ms for all flows.
For convenience, the throughput of each flow in all cases is shown normalized to the fair-share value, 500Mbps / (# of flows).
This simulation demonstrates that ACP is as TCP-friendly as
other Internet congestion control protocols under considera-
tion. Moreover, Figure 7(b) verifies an additional desirable
effect when the bottleneck buffer is lower than the bandwidth-
delay product. In that case, TCP flows cannot utilize the link
in full (especially when the number of TCP flows grows), due
to the overly aggressive decrease of congestion window in
[Figure 7 plots: normalized throughput of ACP and TCP flows for mixes of 3, 5, 15, or 30 TCP flows vs. 30 ACP flows, and 30 TCP flows vs. 15, 5, or 3 ACP flows; panels: (a) queue size 100% of BDP, (b) queue size 50% of BDP.]
Fig. 7. TCP-friendliness of ACP.
response to packet losses; consequently, ACP flows consume
the additional bandwidth and thereby retain a high utilization
of the link without adversely affecting the amount that TCP
flows would have achieved competing on their own.
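The normalization used in Figure 7 is straightforward to reproduce; the throughput value below is a hypothetical example, not a measurement from the paper:

```python
# Normalized throughput: per-flow throughput divided by the fair share
# capacity / n_flows. A value of 1.0 means exactly a fair share.

def normalized_throughput(mbps, capacity_mbps=500, n_flows=33):
    return mbps / (capacity_mbps / n_flows)

# e.g., the "3 TCP + 30 ACP" case: fair share is 500/33, about 15.15 Mbps
print(round(normalized_throughput(15.15), 2))  # 1.0
```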
Finally, we extend the TCP-friendliness evaluation by vary-
ing the RTT of a TCP flow in the range [20ms,200ms] and
the capacity of the bottleneck buffer in the range [20%,100%]
of the respective BDP. The TCP flow competes with one ACP
flow with RTT equal to 200ms. Figure 8 shows the ratio of
the TCP flow throughput to the fair share (i.e. one half of the
bandwidth). This demonstrates the TCP-friendliness resulting
from the early control mechanism of ACP when competing
with a TCP flow, as indicated by Theorem 2.
E. Linux Implementation
We have implemented a prototype of ACP by modifying
the congestion window management functionality in the Linux
kernel and conducted several experiments on a dummynet
testbed [31], as shown in Figure 9, to compare the performance
of ACP and the Linux TCP implementation, focusing mainly
on ACP’s TCP-friendliness and RTT fairness. The results of
[Figure 8 surface plot: normalized TCP throughput (0-2) vs. RTT of TCP (20-200 ms) and queue capacity (0.2-1.0 of BDP); contour levels at 0.66, 0.91, and 1.42.]
Fig. 8. Normalized throughput of TCP.
Fig. 9. Our dummynet testbed.
these experiments are presented below.
We have configured the following options in the ACP Linux
implementation:
• TCP Segmentation Offload (TSO): We disabled the TSO option because a TSO-enabled sender often generates packets whose size is smaller than the Maximum Transfer Unit (MTU). This causes inaccuracy in estimating the fairness ratio.
• TCP-SACK: We implemented ACP with the TCP-SACK
option to recover efficiently from multiple packet losses.
• RTTM: We implemented the Round-Trip Time Measurement (RTTM) option, to allow delay measurements to be taken with high precision (microsecond resolution).
1) Testbed setup: Our testbed consists of two senders and
two receivers running Linux kernel 2.6.24, connected via an
emulated router running dummynet under FreeBSD-7.0. Each
testbed machine has a single Intel Core2 Quad 2.83GHz
CPU, 8 GB of main memory, and an Intel PRO/1000 Gigabit
Ethernet interface. We configured dummynet to have a fixed
buffer size of 1MB and a 200Mbps bottleneck link in all
experiments. We used iperf for bandwidth measurement; the
bandwidth in all plots is the running average over a time
window of 4 seconds. We also used the TcpProbe kernel
module to trace the congestion window of a TCP connection.
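A dummynet bottleneck like the one described above can be configured along the following lines; this is an illustrative sketch of the FreeBSD ipfw/dummynet syntax, and the rule number, queue size in KBytes, and one-way delay are our assumptions rather than the testbed's exact settings.

```shell
# Illustrative dummynet configuration for a 200Mbps bottleneck with a
# ~1MB buffer (rule number and 10ms one-way delay are hypothetical).
ipfw pipe 1 config bw 200Mbit/s queue 1000KBytes delay 10ms
ipfw add 100 pipe 1 ip from any to any
```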
2) TCP-friendliness: To assess ACP’s TCP-friendliness, we
conduct three sets of experiments, all of which have one ACP
flow joined by one TCP flow after 30 seconds. The ACP flow
has a 40ms RTT in all experiments, while TCP is set to have
RTT of 20ms, 40ms, and 160ms in the three experiments,
respectively. Figure 10 shows three pairs of graphs, namely,
the congestion windows and throughputs of the TCP and ACP
flows in each of the three experiments.
We observe that in the first experiment (Figure 10(a)), rather
than being throttled by the TCP flow, the ACP flow tries
to converge to the fair share through the aggressive adaptive
increase (fairness claiming) of ACP. Figure 10(b) shows that
when both flows had the same RTT in the second experiment,
they coexisted well as expected. Finally, Figure 10(c) shows
the most interesting result. Even though TCP has a much
longer RTT in this experiment, ACP does not take advantage
of that fact. Instead, ACP continuously yields its bandwidth
portion to TCP in order to redistribute bandwidth for fair
sharing. When TCP eventually reduces its congestion window,
making the network underutilized, then ACP acquires the spare
bandwidth quickly by entering its fast probing phase. Note that
the overall convergence time of the system is rather slow in
this case, which is entirely due to the slow window increase
of the TCP flow.
3) RTT fairness: This experiment measures the RTT fair-
ness between two competing ACP flows. We fix the RTT of
one flow to 150ms, and vary the RTT of the other in the range
[40ms,120ms] in 10ms increments. Figure 11(a) shows that the
normalized throughput, i.e. the ratio between the throughputs
achieved by both flows, is close to 1 in all cases considered.
Figure 11(b) shows a sample of the throughput evolution over
time for a case with a large RTT difference, namely 50ms and
150ms. We observe that the flow with the shorter RTT keeps releasing its bandwidth portion whenever it exceeds its fair share (i.e., during the early control phase), while the flow with the longer RTT claims its fair share based on the fairness ratio.
These figures support the simulation outcomes and illustrate
the fairness properties of ACP in a real implementation.
IV. CONCLUSION AND FUTURE WORK
End-to-end congestion control in the Internet has many chal-
lenges that include high utilization, fair sharing, RTT fairness,
and TCP-friendliness. We have described an adaptive end-to-
end congestion control protocol, ACP, that deals successfully
with these challenges without requiring any router support.
ACP uses two measurements that estimate important link
information; queue growth estimation is used to downsize the
congestion window so as to empty the bottleneck queue and
retain high link utilization, and fairness ratio estimation ac-
complishes fast convergence to a fair equilibrium by allowing
flows to increase their window more aggressively when their
share is below the fair level. To resolve the RTT unfairness
problem, all estimations and window increases are performed
with a fixed control period independent of the RTT, while the
early control phase of ACP releases a portion of bandwidth
before packet losses to maintain fair sharing with TCP flows.
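The two controls summarized above can be condensed into a schematic update rule; the variable names and the fairness-claiming step below are illustrative sketches under our own assumptions, not the paper's exact algorithm.

```python
# Schematic sketch of ACP's two controls: queue-draining decrease on
# congestion, and fairness-ratio-driven increase otherwise (illustrative).

def acp_update(cwnd, queue_est, fairness_ratio, congested, base_step=1.0):
    if congested:
        # decrease by exactly the estimated backlog so the bottleneck drains
        return max(cwnd - queue_est, 1.0)
    # below-fair flows (ratio < 1) increase faster, speeding convergence
    return cwnd + base_step / max(fairness_ratio, 0.1)

print(acp_update(1100, 100, 1.0, congested=True))   # 1000
print(acp_update(500, 0, 0.5, congested=False))     # 502.0
```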
Our extensive simulations and experiments demonstrate the
superior characteristics of ACP in comparison with TCP-
NewReno, FAST, CUBIC, and HSTCP under a drop tail
queuing discipline. In most scenarios, ACP was able to retain
its desirable characteristics as the per-flow delay-bandwidth
product became large, whereas TCP variants suffered severely.
We therefore believe that ACP is a very attractive end-to-
end congestion control mechanism for HBDP flows that are
becoming increasingly prevalent in the Internet. Further theo-
retical modeling efforts, as well as more extensive evaluation
in real networks, are the subject of ongoing work.
REFERENCES
[1] D.-M. Chiu and R. Jain, “Analysis of the increase and decrease algorithms for congestion avoidance in computer networks,” Computer Networks and ISDN Systems, vol. 17, no. 1, pp. 1–14, 1989.
[Figure 10 plots: congestion windows (packets) and throughputs (Mbps) of the ACP flow (RTT=40ms) and the competing TCP flow over time; panels: (a) TCP has 20ms RTT, (b) TCP has 40ms RTT, (c) TCP has 160ms RTT.]
Fig. 10. Dynamics of competing TCP and ACP flows.
[Figure 11 plots: (a) normalized throughput of the two ACP flows vs. RTT of the second flow (40-120 ms); (b) throughput (Mbps) over time for ACP flow 1 (RTT=50ms) and ACP flow 2 (RTT=150ms).]
Fig. 11. Throughput performance and fairness between two ACP flows.
[2] V. Jacobson, “Congestion avoidance and control,” SIGCOMM Computer Communication Review, vol. 18, no. 4, pp. 314–329, 1988.
[3] M. Allman, V. Paxson, and W. Stevens, “RFC 2581: TCP congestion control,” Apr. 1999.
[4] S. Floyd, T. Henderson, and A. Gurtov, “RFC 3782: The NewReno modification to TCP’s fast recovery algorithm,” Apr. 2004.
[5] D. Katabi, M. Handley, and C. Rohrs, “Congestion control for high bandwidth-delay product networks,” in Proc. of ACM SIGCOMM, Pittsburgh, PA, USA, Aug. 2002.
[6] Y. Xia, L. Subramanian, I. Stoica, and S. Kalyanaraman, “One more bit is enough,” in Proc. of ACM SIGCOMM, Philadelphia, PA, USA, Aug. 2005.
[7] I. A. Qazi and T. Znati, “On the design of load factor based congestion control protocols for next-generation networks,” in Proc. of IEEE INFOCOM, Phoenix, AZ, USA, Apr. 2008.
[8] X. Huang, C. Lin, F. Ren, G. Yang, P. Ungsunan, and Y. Wang, “Improving the convergence and stability of congestion control algorithm,” in Proc. of IEEE ICNP, Beijing, China, Oct. 2007.
[9] R. Jain, “A delay-based approach for congestion avoidance in interconnected heterogeneous computer networks,” SIGCOMM Computer Communication Review, vol. 19, no. 5, pp. 56–71, 1989.
[10] L. S. Brakmo, S. W. O’Malley, and L. L. Peterson, “TCP Vegas: New techniques for congestion detection and avoidance,” in Proc. of ACM SIGCOMM, London, UK, Aug. 1994.
[11] J. S. Ahn, P. B. Danzig, Z. Liu, and L. Yan, “Evaluation of TCP Vegas: Emulation and experiment,” in Proc. of ACM SIGCOMM, Cambridge, MA, USA, Aug. 1995.
[12] C. Jin, D. X. Wei, and S. H. Low, “FAST TCP: Motivation, architecture, algorithms, performance,” in Proc. of IEEE INFOCOM, Hong Kong, China, Mar. 2004.
[13] J.-S. Li and C.-W. Ma, “Improving fairness of TCP Vegas,” International Journal of Network Management, vol. 15, no. 1, pp. 3–10, 2005.
[14] G. Hasegawa, M. Murata, and H. Miyahara, “Fairness and stability of congestion control mechanisms of TCP,” in Proc. of IEEE INFOCOM, New York, NY, USA, Mar. 1999.
[15] J. Martin, A. Nilsson, and I. Rhee, “Delay-based congestion avoidance for TCP,” IEEE/ACM Transactions on Networking, vol. 11, no. 3, pp. 356–369, 2003.
[16] R. S. Prasad, M. Jain, and C. Dovrolis, “On the effectiveness of delay-based congestion avoidance,” in Proc. of the PFLDNet Workshop, Argonne, IL, USA, Feb. 2004.
[17] H. Jiang and C. Dovrolis, “Passive estimation of TCP round-trip times,” SIGCOMM Computer Communication Review, vol. 32, no. 3, pp. 75–88, 2002.
[18] S. Floyd, “RFC 3649: HighSpeed TCP for large congestion windows,” Dec. 2003.
[19] D. Leith and R. Shorten, “H-TCP: TCP for high-speed and long-distance networks,” in Proc. of the PFLDNet Workshop, Argonne, IL, USA, Feb. 2004.
[20] T. Kelly, “Scalable TCP: Improving performance in highspeed wide area networks,” SIGCOMM Computer Communication Review, vol. 33, no. 2, pp. 83–91, 2003.
[21] L. Xu, K. Harfoush, and I. Rhee, “Binary increase congestion control for fast long distance networks,” in Proc. of IEEE INFOCOM, Hong Kong, China, Mar. 2004.
[22] I. Rhee and L. Xu, “CUBIC: A new TCP-friendly high-speed TCP variant,” in Proc. of the PFLDNet Workshop, Lyon, France, Feb. 2005.
[23] S. Bhandarkar, S. Jain, and A. Reddy, “Improving TCP performance in high bandwidth high RTT links using layered congestion control,” in Proc. of the PFLDNet Workshop, Lyon, France, Feb. 2005.
[24] E. Kohler, M. Handley, and S. Floyd, “Designing DCCP: Congestion control without reliability,” in Proc. of ACM SIGCOMM, Pisa, Italy, Sep. 2006.
[25] S. Floyd, M. Handley, J. Padhye, and J. Widmer, “RFC 5348: TCP Friendly Rate Control (TFRC): Protocol specification,” Sep. 2008.
[26] H. Jung, S. Kim, H. Yeom, and S. Kang, “TCP-GT: A new approach to congestion control based on goodput and throughput,” Journal of Communications and Networks, vol. 12, no. 5, pp. 499–509, Oct. 2010.
[27] H. Jung, S. Kim, H. Yeom, S. Kang, and L. Libman, “SNU technical report: Adaptive delay-based congestion control for high bandwidth-delay product networks,” 2010. [Online]. Available: http://dcslab.snu.ac.kr/acp/tr-dcs-0901.pdf
[28] R. Jain, S. Kalyanaraman, and R. Viswanathan, “The OSU scheme for congestion avoidance in ATM networks: Lessons learnt and extensions,” Performance Evaluation, vol. 31, no. 1, pp. 67–88, 1997.
[29] “The network simulator — ns-2.” [Online]. Available: http://www.isi.edu/nsnam/ns
[30] T. V. Lakshman and U. Madhow, “The performance of TCP/IP for networks with high bandwidth-delay products and random loss,” IEEE/ACM Transactions on Networking, vol. 5, no. 3, pp. 336–350, 1997.
[31] L. Rizzo, “Dummynet.” [Online]. Available: http://info.iet.unipi.it/~luigi/ip_dummynet