Study of MultiPath TCP: Experimental
Performance Evaluation of Simultaneous Data
Transmission over Multiple Wireless Interfaces
Cañizares Andrés, Guillem
Curs 2016-2017
Director: Boris Bellalta Jimenez
GRAU EN ENGINYERIA TELEMÀTICA
Treball de Fi de Grau
Study of MultiPath TCP: Experimental Performance
Evaluation of Simultaneous Data Transmission over
Multiple Wireless Interfaces
Guillem Cañizares Andrés
TREBALL FI DE GRAU
GRAU EN ENGINYERIA TELEMÀTICA
ESCOLA SUPERIOR POLITÈCNICA UPF
2017
DIRECTOR: Boris Bellalta Jimenez
For Laura, who has always been there
Acknowledgements
I would like to thank Professor Boris Bellalta for giving me this opportunity, his warm
encouragement and guidance.
Special thanks to Albert Bel and Maddalena Nurchis for their advice whenever I
needed it, and for making me feel one more member of their team. Thanks as well to
the whole Wireless Networking Group, who have always been receptive and affable
with me.
I would like to express my deep appreciation to everyone I have met during the
degree, both professors and colleagues; without them, nothing would have been
possible. Especially to my friend Alex, for his inestimable help in this project.
My deepest heartfelt appreciation goes to my family, for their unconditional support.
Abstract
It is known that there are more mobile devices than people in the world. Considering
the number of wireless gadgets, such as smartphones and tablets, and the huge amount
of data we currently produce, telecommunication science is working to provide new
solutions for a better and more efficient user experience.
Our proposal in this dissertation is to test and analyse the advantages and disadvantages
of transmitting data over multiple wireless interfaces, more precisely through the study
of the Multipath Transmission Control Protocol (MPTCP), a new Transport Layer
protocol that allows mobile devices to establish multiple connections simultaneously.
This could provide the user with a robust and seamless connection with built-in
redundancy, and better load balancing in the network, by spreading the data among
various information flows.
We set up a physical WLAN client-server connection -composed of two computers and
multiple access points- and developed a Java application to test simultaneous
transmissions in different network scenarios, with the objective of emulating the
performance of MPTCP in an offline, controlled environment for later study.
The implementation of this technique in today's networks could mean a step
forward for the telecommunications world, as it would fully exploit wireless links and
change our concept of connectivity with the arrival of 5G in the coming years.
Preface
Based on Cisco analyses [1], global mobile data traffic grew 63 percent in 2016,
reaching 7.2 exabytes (ten to the power of eighteen bytes) per month, of which 69%
was generated by 4th generation (4G) connections, although these represented only 26
percent of mobile connections. Sixty percent of total mobile data traffic was offloaded
onto the fixed network through Wi-Fi or femtocells, and the average network
downstream speed was 6.8 Megabits per second (Mbps).
Additionally, 429 million mobile devices and connections were added last year; smart
devices represented 46% of the total and accounted for 89% of the mobile data traffic.
On average, a smart device generated 13 times more traffic than a non-smart device.
Smartphones represented only 45 percent of total mobile devices and connections, but
generated 81% of total traffic, while the number of mobile-connected tablets and
computers increased by 26% and 8% respectively, generating 3,392 MB per month,
compared with 1,614 MB per smartphone. 43% of these devices were potentially
IPv6-capable.
Statistics predict that in 2021 the monthly global mobile data traffic will be 49
exabytes, and annual traffic will exceed half a zettabyte (ten to the power of twenty-one
bytes), representing 20 percent of total IP traffic, while the average global mobile
connection speed will surpass 20 Mbps. It is also thought that the number of mobile-
connected devices per capita will reach 1.5 by that year (around 11.6 billion devices),
of which smartphones will account for over 50 percent, generating 6.8 GB per month
and representing 86% of the data traffic. 4G technologies will represent 53% of
connections but 79% of total traffic, while an emerging 5th communications generation
will account for 0.2 percent of connections but 1.5% of total traffic, generating 4.7
times more traffic than the average 4G connection. There will be 8.4 billion IPv6-
capable devices, representing 73% of all global mobile devices.
Of all IP traffic (fixed and mobile) in 2021, 50% will be Wi-Fi, 30% will be wired, and
20% will be mobile, and 78% of the world's mobile data traffic will be video.
Contents
Abstract ....................................................................................................... vii
Preface ........................................................................................................ ix
1. Introduction ..........................................................................1
1.1. Motivation ............................................................................ 1
1.2. Objectives ............................................................................................ 2
1.3. Structure .............................................................................................. 3
2. MultiPath TCP ......................................................................5
2.1. Definition .............................................................................. 5
2.2. Connection Establishment .................................................................... 7
2.3. Operational .......................................................................................... 8
2.3.1. Scheduling Policies........................................................................ 9
2.3.2. Congestion Control ...................................................................... 11
2.3.3. Packet Reordering Recovery Mechanisms .................................. 13
3. State of the Art ...................................................................17
3.1. Theoretical Researches and Experimental Methods .......................... 17
3.2. Future Implementation Ideas and Uses .............................................. 19
4. Networking through Multiple Interfaces ...........................21
4.1. Testbed Setup .................................................................... 21
4.2. MPTCP Implementation ..................................................................... 22
4.3. Application Development.................................................................... 23
5. Performance Evaluation ....................................................27
5.1. Test Bench ......................................................................... 27
5.2. Software Implementation.................................................................... 28
5.2.1. MPTCP ........................................................................................ 28
5.2.2. Java App ..................................................................................... 29
5.3. Results ............................................................................................... 32
5.3.1. Basic Upload Test ....................................................................... 32
5.3.2. Heterogeneous Transmission-Rate Test...................................... 34
5.3.3. Background Traffic Test ............................................................... 36
6. Conclusions .......................................................................39
6.1. Discussion.......................................................................... 39
6.2. Future Work ....................................................................................... 40
References ..................................................................................................41
List of Figures
Figure 1: Download time over Multipath TCP vs Single Path (SP) standard connection ............ 6
Figure 2: Traditional WAN backup solution with two networks available .................................. 6
Figure 3: WAN backup solution using Multipath TCP ............................................................... 6
Figure 4: Multiple connections establishment for MPTCP .......................................................... 8
Figure 5: Multipath TCP protocol stack ...................................................................................... 9
Figure 6: Round Robin scheduling mechanism ..........................................................................10
Figure 7: TCP Slow-Start and Congestion Avoidance stages .....................................................11
Figure 8: Out-Of-Order packet algorithm ...................................................................................14
Figure 9: Packet reordering example ..........................................................................................15
Figure 10: Testbed setup ............................................................................................................21
Figure 11: Routing IP rules and routes for MPTCP configuration. ............................................23
Figure 12: Wireshark capture of an MPTCP packet ...................................................................28
Figure 13: Multi-interfaced file transmission system flux diagram ............................................29
Figure 14: Capture of a connection simulation using our Java App ...........................................30
Figure 15: Connection capture using ifstat .................................................................................31
Figure 16: Link monopole effect ................................................................................................31
Figure 17: File downloading times expressed in logarithmic scale ............................................32
Figure 18: First set of results for Basic Upload Test ..................................................................33
Figure 19: Second set of results for Basic Upload Test. .............................................................33
Figure 20: MP/SP Download Time Ratio ...................................................................................34
Figure 21: First set of tests for 8 MB sized file in Heterogeneous Transmission-Rate Test .......35
Figure 22: Second set of tests for 64MB sized file in Heterogeneous Transmission-Rate Test ..35
Figure 23: Third set of tests for 512 MB sized file in Heterogeneous Transmission-Rate Test ..35
Figure 24: First set of tests for 8 MB sized file in Background Traffic Test ..............................37
Figure 25: Second set of tests for 64 MB sized file in Background Traffic test .........................38
List of Tables
Table 1: Theoretical researches and experimental methods of MPTCP. ....................................18
Table 2: Future implementation ideas and uses for MPTCP. .....................................................20
Table 3: Commands for MPTCP rules and routing configuration. .............................................23
Table 4: Results for the Basic Upload Test. ...............................................................................33
Table 5: Results for the Heterogeneous Transmission-Rate Test. ..............................................36
Table 6: Results for the Background Traffic Test for 8 MB file. ................................................37
Table 7: Results for the Background Traffic Test for 64 MB file. ..............................................38
Section 1
1. Introduction
In the technology era, the world is becoming increasingly complex and
interconnected. Information has become a basic element of our daily life: from the
moment we get up in the morning until we go to sleep at night, we are connected to the
internet through smartphones, tablets or computers. Everything is going digital, and
having internet access will become a basic need in the not-so-distant future.
The way we interact with each other is evolving. In fact, the way we interact with
everything is in continuous evolution. Services, companies and even cities are
becoming digital, into the cloud. In a few years, we will communicate with every object
that surrounds us and get information from it. For this to happen, technology has to
continue growing, investigating new systems and improving the telecommunication
infrastructures. Either in big crowded environments like cities or in small and familiar
spaces like homes, technology will have a huge impact in the way we live today.
The internet, in its origins, was conceived to be carried over wires, where data was
bound to fixed cables; however, the emergence of wireless technologies has altered this
order, turning it into the chaotic freedom of data floating at will. Technology is moving
to the wireless world, and the Internet of Things (IoT) will open a new window of
possibilities, changing the way we conceive everything and building a fully
communicated world.
We already live surrounded by devices that generate and spread data into the air, where
other devices receive it to process and send it on again. Even so, we do not take the
maximum profit from it that we could. Nowadays, wireless terminals are already
equipped with multiple network interfaces that allow the user to connect to different
technologies such as WiFi or 4G LTE, but we can only use one at a time. We think
that using multiple interfaces simultaneously to perform data transmissions would not
only greatly improve the user experience, achieving a better connection with higher
throughput and resilience, but would also mean a step forward towards this globally
connected world.
1.1. Motivation
We live in a period of time where technology is evolving faster and faster.
It is thought that within a few decades network topologies will be extended and cities
will have a wide deployment of access points to which we will be able to connect,
getting information from everything and everywhere using our wireless devices at high
download capacities of around a gigabyte per second.
Today, mobile devices such as laptops and smartphones are already equipped with
multiple wireless interfaces belonging to different access technologies. Considering that
cities tend to evolve into fully technological systems based on communication, we
will be surrounded by access points -among other infrastructure- to which people will
be able to connect. Taking advantage of these facts, the implementation of a protocol
such as Multipath TCP (MPTCP) becomes crucial, in order to exploit the possibilities
of the network and provide a better user experience.
With this dissertation, we would like to contribute a step forward to telecommunication
science by testing and experimenting with an internet protocol whose impact and use
we believe will be significant in future networks, as people will be able to
establish several connections simultaneously, obtaining a seamless connection with
higher download speed and connection redundancy.
1.2. Objectives
The main goal of this project is to analyse and observe the performance of MPTCP and
to validate the following hypothesis: a device connected to different links
simultaneously using multiple interfaces will have a better connection performance than
a single interfaced terminal.
To achieve this objective, we first need to study Multipath TCP, the flagship protocol
in this field, and inquire into its implementation as well as the development and research
already done on this protocol. How it works, how connections are established and
how traffic is spread over the subflows are the main questions to answer before testing
MPTCP. Solid background information is needed at this point, as we want to understand
the protocol theoretically in its entirety.
Once we understand MPTCP, we will go on with the following steps:
- Design and configure the Wireless Local Area Network (WLAN) where the tests
will be performed. This network will be based on a client-server connection,
built over a mesh of access points that connects both terminals. Our network
will operate offline, so that we can control all the transmitted traffic without
interference, for further analysis.
- Installation and configuration of Multipath TCP on both client and server. The
protocol will be installed on the machines with the aim of observing its
performance in our network.
- Development of a Java application that reproduces, in a basic way, the
behaviour of MPTCP. Our App will connect client and server using two
interfaces simultaneously and split the traffic over both paths, as it will be
built on top of the test network and configured in the same way as the protocol. Our
intention is to validate our hypothesis and control the performance of
the transmission in its entirety.
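The traffic-splitting idea behind the Java application can be sketched as follows. This is a minimal illustration, not the actual application code; the class and method names are assumptions made for the example.

```java
import java.util.Arrays;

// Illustrative sketch (not the actual App code): when a file of `total`
// bytes is sent in `chunk`-sized pieces assigned round-robin over `paths`
// interfaces, this computes how many bytes each path ends up carrying.
public class ChunkSplitter {
    public static long[] bytesPerPath(long total, int chunk, int paths) {
        long[] out = new long[paths];
        int p = 0;
        for (long sent = 0; sent < total; sent += chunk) {
            out[p] += Math.min(chunk, total - sent); // last chunk may be short
            p = (p + 1) % paths;                     // next interface in turn
        }
        return out;
    }

    public static void main(String[] args) {
        // 10 000 bytes in 1460-byte chunks over two interfaces.
        System.out.println(Arrays.toString(bytesPerPath(10_000, 1460, 2)));
        // -> [5620, 4380]
    }
}
```

With equal-rate links this keeps both interfaces loaded almost evenly, which is the behaviour the tests aim to compare against a single-path transfer.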
1.3. Structure
This document is organized as follows: Section 2 introduces the MPTCP protocol; here
we show its main characteristics and how it connects and works on the network, as well
as the different schedulers, congestion controllers and packet reordering mechanisms
proposed. Section 3 contains a summary of other scientific investigations that not only
have been important for our research and comprehension, but also propose further
implementations and uses of the protocol. In Section 4 we introduce the network we
developed specifically for the experiments, and how MPTCP and the Java App were
installed and configured. The test bench and the results of the tests performed are
explained extensively in Section 5. Finally, we conclude our research in Section 6.
Section 2
2. MultiPath TCP
The Transmission Control Protocol (TCP) [2] has been the main transport protocol
since the beginnings of the internet. We understand the internet as it is today thanks to
the TCP/IP architecture, in which some of the main connection characteristics belong to
TCP; these are the features that have made this protocol fundamental in the model:
- Connection setup handshake and state machine
- Reliable transmission and acknowledgment of data
- Congestion control
- Flow control
- Connection teardown handshake and state machine
However, it is now starting to show some weaknesses in the face of wireless networks
and the mobility of their devices. TCP's design was oriented towards wired networks;
thus, it did not anticipate the IP address changing during a connection, an event that
occurs frequently in wireless networks [3]. Whenever this address is modified, the
running application must be made aware that the network has changed and re-establish
the connection with the server to obtain a new IP address and recover the connection.
In order to avoid these problems, we turn to Multipath TCP [4], a new protocol
built on top of its predecessor, which offers the user the ability to use multiple
interfaces simultaneously.
MPTCP is defined in RFC 6824 [5], as an Internet Engineering Task Force (IETF)
experimental standard.
2.1. Definition
Multipath TCP [5] is a set of extensions to standard TCP that provides multiple
end-to-end connections simultaneously. The process in which the device takes different
information flows or paths -also known as subflows- and combines them at the
Transport Layer is called multihoming. For this protocol to work, at least one end node
of the connection must have two IP addresses of its own.
Each flow behaves like a regular TCP connection, so MPTCP can work in any of
today's networks where standard TCP operates [4], [6]; even applications that use the
common TCP API can support this protocol without needing to be modified, since the
Transport Layer does this work internally and transparently for the rest of the layers.
The protocol must also function through most existing internet middleboxes, such as
firewalls or NATs.
Figure 1: Download time over Multipath TCP vs a Single Path (SP) standard connection. MPTCP shows a shorter
download time because of its higher throughput; theoretically, the MP protocol clearly outperforms SP.
Figure based on the works of Ericsson [6].
Multiple alternative routes give redundancy to the established connection; moreover,
using path diversity, the protocol can redirect the traffic and remove or add subflows
in case of network congestion, providing better load balancing. It has been shown [7]
that MPTCP improves network resilience and increases throughput, and that it can
also benefit the efficiency of multi-homed servers and data centres.
Figure 2: Traditional WAN backup solution with two networks available. In a standard SP connection, when there is a
failure on the established link, we are offline for a few seconds until the connection is re-established. Figure based on
the works of Ericsson [6].
On one hand, when the ongoing session breaks down in a regular TCP connection,
service will be interrupted for several seconds while the client tries to reconnect to the
original server. Several networks may be available at the same time; in this case, the
client could connect to a different one while it waits to reconnect to the network it was
originally connected to.
Figure 3: WAN backup solution using Multipath TCP. In a Multipath (MP) connection with multiple active links, if
one connection goes down we still have the others online. This mechanism can keep the user connected without them
being aware that a link has suffered a failure. Note that the bandwidth is higher because of the aggregate throughput.
Figure based on the works of Ericsson [6].
On the other hand, if it is possible to connect to multiple networks simultaneously using
MPTCP, when one connection crashes the client remains connected through the second
one, slowing down the link but staying online. This way, when connection 1 is
re-established, the client recovers the optimal speed. Moreover, while the client is
connected to both links, the total bandwidth is the combination of the two of them [8],
improving the connection throughput.
WiFi and cellular data networks such as 3G or 4G are usually the most common choices
when establishing a connection. Even so, from an engineering point of view, there are
significant differences between these two technologies.
Cellular networks provide the user with broader signal coverage than WiFi. Moreover,
cellular data carriers have improved their systems with extensive local retransmission
mechanisms, which mitigate TCP retransmissions and optimize network resources,
offering more reliable connectivity under mobility. These mechanisms reduce the
impact of losses and improve TCP throughput; however, they come at the cost of
increased rate variability and delay.
WiFi exhibits higher loss rates than cellular networks, but shorter packet Round-Trip
Times (RTTs). Studies [9] have shown that loss rates over 3G/4G networks are
generally lower than 0.1%, while those of WiFi vary from 1% to 3%. The average RTT
for WiFi networks is about 30 ms, while 4G cellular carriers usually have base RTTs of
60 ms, which can increase fourfold to tenfold in a single 4G connection -depending on
the carrier and the flow sizes- and up to 20-fold in 3G networks.
Although cellular networks in general have larger packet RTTs, WiFi is generally no
longer faster than 4G LTE, and this fact provides greater incentive to use MPTCP for
robust data transport and better throughput.
2.2. Connection Establishment
The MPTCP connection establishment procedure follows the same steps as a standard
TCP connection.
Multiple subflows are not all added at the same time. The client establishes the
connection with the server through the regular three-way handshake, using an additional
MP_CAPABLE option in the SYN segment. With this option, the client notifies the
server that it supports multipath connections. If the server also supports this protocol,
it replies with a SYN/ACK that contains the MP_CAPABLE option. Finally, the client
confirms the multipath connection by sending an ACK with this option again,
generating the MPTCP meta-socket, which contains the specific variables of the
protocol, and the master subsocket, corresponding to the first subflow established.
During this process, server and client exchange randomly chosen security keys used
to generate the tokens that will authenticate the addition of new subflows.
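RFC 6824 derives each authentication token as the most significant 32 bits of the SHA-1 hash of the peer's 64-bit key. A minimal sketch of that derivation (the class name is an assumption for the example):

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;

// Sketch of MPTCP token derivation (RFC 6824): the token that later
// authenticates MP_JOIN subflows is the most significant 32 bits of the
// SHA-1 hash of the 64-bit key exchanged in the MP_CAPABLE handshake.
public class MptcpToken {
    public static int tokenFromKey(long key) throws Exception {
        byte[] keyBytes = ByteBuffer.allocate(8).putLong(key).array();
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(keyBytes);
        return ByteBuffer.wrap(digest, 0, 4).getInt(); // top 32 bits
    }

    public static void main(String[] args) throws Exception {
        // Both ends compute the same token from the same key, so the
        // server can match an incoming MP_JOIN to an existing connection.
        System.out.printf("token = 0x%08x%n", tokenFromKey(0x0102030405060708L));
    }
}
```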
Figure 4: Multiple connections establishment for MPTCP. The figure shows a client-server communication, and the
process while the main subflow is connected using the three-way handshake. After that a second subflow can be
added using another IP address.
Once the connection is established, it is possible to add new subflows using another
wireless interface and IP address. This time, again in the three-way handshake, the
client adds the MP_JOIN option in the SYN packet to let the server know that it is an
MPTCP subflow. During this process, the tokens created in the first connection are
exchanged to secure the establishment. This generates a slave subsocket associated with
the added subflow. The process is the same for every added subflow. For mobility
support, the additional ADD_ADDR and REMOVE_ADDR options allow a host to
inform the other about IP address changes.
2.3. Operational
When the communication starts and the host sends the MP_CAPABLE option, the
Application Layer generates a regular TCP socket. When the server replies with the
ACK + MP_CAPABLE option, the Meta-Socket is generated between the App and the
MPTCP subflows. This Meta-Socket contains specific information and variables needed
to establish the multipath connection, and it also acts as a delayer that reorders the
incoming packets from the different subflows. The Application Layer talks to the
MPTCP Meta-Socket and sees it as a regular TCP connection; the multipath process is
transparent to the top layer, which perceives a standard single-flow establishment. The
data created by the App socket is distributed by the scheduler in the Meta-Socket over
the different subflows, so the traffic can be spread across all of them.
Figure 5: Multipath TCP protocol stack.
2.3.1. Scheduling Policies
Multipath TCP was designed with two major objectives [3], [4]: to increase
throughput by bandwidth aggregation through multiple paths, and to improve
resilience by providing traffic switching upon path failure. Thus, choosing the best path
on which to send the data is a determinant factor for an optimal performance of
MPTCP.
It has been proven [10] that choosing a wrong path at the beginning of the connection
can have a negative impact on its performance, since it will take longer for the optimal
path to grow its congestion window (CWND) to the maximum possible value.
Moreover, MPTCP uses a connection level receiver buffer, where segments are stored
until they are placed in the correct order, before being sent to the application. The
receiver buffer should be large enough to be able to contain all incoming data until the
Out-Of-Order or missing packets arrive at the destination. The recommended buffer
size [11], [12] is given by the following equation:

\( \mathrm{BufferSize} = 2 \cdot \sum_{i} BW_{i} \cdot RTT_{Max} \)    (Equation 1)

Let us denote \(BW_i\) as the bandwidth of each subflow \(i\) and \(RTT_{Max}\) as the largest RTT
across all subflows. Since the required receive buffer can be very large when a high-
throughput path is used in conjunction with a slow path, it is recommended to use the
slow path for backup only, rather than increasing the buffer. Otherwise, the slow path
will be a source of packet loss events.
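As a numeric illustration of Equation 1 (the class name, units and example values are assumptions, not measurements from the thesis):

```java
// Sketch of Equation 1: recommended receive buffer = 2 * (sum of subflow
// bandwidths) * (largest RTT across subflows).
public class MptcpBuffer {
    // bwBytesPerSec: bandwidth of each subflow in bytes/s;
    // maxRttSec: largest RTT across all subflows, in seconds.
    public static double bufferSize(double[] bwBytesPerSec, double maxRttSec) {
        double sum = 0;
        for (double bw : bwBytesPerSec) sum += bw;
        return 2 * sum * maxRttSec;
    }

    public static void main(String[] args) {
        // Example: a 2 MB/s WiFi path plus a 1 MB/s cellular path,
        // worst-case RTT of 60 ms -> roughly 360 kB of buffer.
        System.out.println(bufferSize(new double[]{2e6, 1e6}, 0.060));
    }
}
```

Because the formula uses \(RTT_{Max}\) rather than each path's own RTT, a slow high-latency path inflates the buffer requirement for the whole connection, which motivates the backup-only recommendation.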
When a segment is lost during the transmission, the packet is retransmitted on the
original path [11]. Furthermore, MPTCP proposes, as a retransmission strategy,
sending the lost segment through a different subflow, ensuring improved resilience in
case the original subflow fails. Duplicates are detected at the receiver.
The component mainly responsible for coordinating the spreading of packets over the
multiple subflows is the Meta-Socket scheduler [13], whose function is to decide and
manage which subflow the next segment should be sent on.
In this section, we introduce three schedulers proposed for MPTCP:
• Low-RTT Scheduler
RTT-Aware, or Lowest-RTT-First, gives priority to paths with lower RTT. When a
segment is ready for transmission, the assigned path will be the one with minimum
RTT out of all the subflows whose sending congestion window is not already
full. If there is more than one such path, the scheduler makes a choice with a
systematic preference towards one of them, and continues to favour this path
until its congestion window becomes full, returning to it once space is available
again. A modification of this scheme can be applied to eliminate this fixed-path
effect, using a random tie-breaker in cases where different subflows have the same
RTT.
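The selection rule can be sketched as follows; the Subflow record, its fields and the example values are illustrative assumptions, not part of any MPTCP implementation:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of Lowest-RTT-First: among subflows whose congestion window
// still has room, pick the one with the smallest RTT.
public class LowestRttScheduler {
    record Subflow(String name, double rttMs, int cwnd, int inFlight) {
        boolean hasRoom() { return inFlight < cwnd; }
    }

    static Optional<Subflow> pick(List<Subflow> subflows) {
        return subflows.stream()
                .filter(Subflow::hasRoom)
                .min(Comparator.comparingDouble(Subflow::rttMs));
    }

    public static void main(String[] args) {
        // WiFi has the lower RTT but its window is full, so LTE is chosen.
        List<Subflow> flows = List.of(
                new Subflow("wifi", 30, 10, 10),
                new Subflow("lte", 60, 10, 4));
        System.out.println(pick(flows).get().name()); // prints "lte"
    }
}
```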
• Round Robin Scheduler
The scheduler chooses paths using the Round-Robin algorithm out of those whose
congestion window is not yet full. In the case of bulk data transmission, the scheduling
is not really round-robin, since the application is able to fill the congestion windows
of all subflows, after which packets are scheduled as soon as space becomes available
again in each subflow's congestion window; this effect is commonly known as the
ack-clock. Such an approach may guarantee that the capacity of each path is fully
utilized, as the distribution across all subflows is equal.
Figure 6: Round Robin scheduling mechanism. The scheduler chooses one flow per turn to send the incoming
segment, out of those whose CWND is available.
• Random Scheduler
Packets may be assigned to subflows randomly, without any further consideration,
all of them having the same probability of receiving the next data segment.
Although Round Robin and Random schedulers are oblivious to RTT differences, their
more evenly distributed transmission decisions make for a steadier transmission rate,
while benefiting from the fact that the window control mechanism is already accounting
for RTT differences [12].
On the other hand, the RTT-Aware scheduler minimizes the use of the slower path, which
reduces the number of packets the application has to wait for before reading from the
receive buffer. However, RTT awareness in scheduling can amplify path heterogeneity
[13], and therefore offers limited benefits unless path heterogeneity is substantial.
Furthermore, this scheduling algorithm uses the network resources efficiently.
In general, the optimal scheduling strategy depends on the characteristics of the paths
being used.
2.3.2. Congestion Control
Considering that the purpose of MPTCP is to outperform standard TCP by using multiple
information paths, congestion control (CC) algorithms become crucial to reach this
objective.
Unlike a single-path TCP connection, which uses one congestion window, an MPTCP
sender uses as many CWNDs as active subflows, one per path, in order to control the
load on all of them, while the MPTCP receiver has a single global receive window
shared between all subflows.
Each subflow in the connection behaves as a regular TCP flow; in other words, each path
transmits data independently from the others and goes through the regular congestion
control stages. When the connection is established after the three-way handshake, each
path starts transmitting in the Slow-Start phase, doubling the CWND every RTT (the
window grows by one segment per acknowledgement (ACK) received), before entering the
Congestion Avoidance phase when an error occurs. Since the sender cannot distinguish a
delayed packet from a lost one, either case makes the subflow window shrink and the
subflow enter the Congestion Avoidance phase. Each path maintains its corresponding
CWND and retransmission scheme until the end of the data transfer.
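The per-subflow window dynamics just described can be sketched as a toy model at per-RTT granularity. The initial ssthresh of 64 segments is an arbitrary illustrative value, not taken from the MPTCP implementation:

```java
// Simplified per-RTT evolution of one subflow's CWND (in segments):
// Slow-Start doubles the window each RTT until ssthresh or a loss event,
// then Congestion Avoidance grows it by one segment per RTT.
// Loss handling follows the classic halving described in the text.
class CwndModel {
    int cwnd = 1, ssthresh = 64;   // ssthresh value is illustrative
    boolean slowStart = true;

    void onRttElapsed() {
        if (slowStart) {
            cwnd *= 2;                          // exponential growth
            if (cwnd >= ssthresh) slowStart = false;
        } else {
            cwnd += 1;                          // linear growth
        }
    }

    void onLoss() {                             // delayed or lost packet: halve
        ssthresh = Math.max(cwnd / 2, 2);
        cwnd = ssthresh;
        slowStart = false;                      // continue in Congestion Avoidance
    }

    public static void main(String[] args) {
        CwndModel m = new CwndModel();
        for (int rtt = 0; rtt < 5; rtt++) m.onRttElapsed(); // 1->2->4->8->16->32
        System.out.println("cwnd after 5 RTTs: " + m.cwnd);
        m.onLoss();
        System.out.println("cwnd after loss: " + m.cwnd);
    }
}
```

In a multipath connection, one such state machine runs independently per subflow, which is exactly why an Out-Of-Order event on one path only halves that path's window.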
Figure 7: TCP Slow-Start and Congestion Avoidance stages. Source:
https://commons.wikimedia.org/wiki/File:TCP_Slow-Start_and_Congestion_Avoidance.svg
The goals [13] that CC tries to reach in MPTCP are the following:
- Improve throughput: a multipath connection should perform at least as well as a
single-flow TCP connection, leveraging path diversity by adding or removing
subflows to make the best of them.
- Balance congestion: data traffic through the subflows must be managed; in case
of network congestion, the affected paths should be removed and their traffic
moved to optimal subflows.
- Fairness condition: each subflow must be restricted in capacity so as not to
harm other active subflows by taking more network resources than it should,
affecting the wellness of the connection.
- Quick adaptation: a multipath flow must adapt quickly to congestion changes in
the network, changing data paths with stability and without flapping, i.e.
constantly changing paths.
For this dissertation, we present three congestion controllers that have already been
tested and analysed in other studies [3], [14]: the Uncoupled, Fully Coupled and
Coupled CC algorithms.
Let us now denote by W the total congestion window size and by W_i the congestion
window of each path, where i indexes the corresponding subflow.
• Uncoupled-CC
Increase:  W_i = W_i + 1/W_i    (Equation 2)
Decrease:  W_i = W_i / 2        (Equation 3)

Uncoupled-CC (Un-CC) applies the Additive-Increase/Multiplicative-Decrease (AIMD)
congestion control used in regular TCP on each path independently. We consider
this CC invalid, as it does not satisfy the fairness condition.
• Fully Coupled-CC
Increase:  W_i = W_i + 1/W              (Equation 4)
Decrease:  W_i = max(W_i - W/2, 1)      (Equation 5)

Fully Coupled-CC (FC-CC) takes the total CWND of all paths into consideration,
coupling both the increase and decrease cases for each path using the above set
of equations. The connection suffers from flappiness when using this algorithm.
• Coupled-CC
Increase:  W_i = W_i + min(α/W, 1/W_i)   (Equation 6)
Decrease:  W_i = W_i / 2                 (Equation 7)

Finally, the Coupled-CC (Co-CC) solves the previous problems because it deals
with the different RTTs of the different paths. Co-CC couples only the increase
case for each path and keeps the decrease as in regular TCP. The algorithm
increases and decreases the CWND of each path according to the above equations,
where α is calculated as:

α = W · max_i (W_i / RTT_i²) / (Σ_i W_i / RTT_i)²   (Equation 8)
Although Co-CC adjusts the CWND size of each path taking RTT measurements into
consideration, MPTCP cannot saturate the link with the higher RTT, because
Out-Of-Order data arrival at the receiver endpoint, at the connection level, causes a
bottleneck in the data re-sequencing process.
Studies [14] show that, whilst the Coupled Congestion Control algorithm provides
robust data transmission and solves the fairness and flappiness problems of the other
MPTCP CC methods, it sends most of the data over the best path and uses the other
subflows ineffectively, limiting the aggregate throughput to a small improvement over
the best path's throughput. Nevertheless, it is a valid solution, as it slightly
outperforms standard TCP while adding redundancy and robustness, until a more
suitable algorithm is designed for this specific protocol.
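To make Equations 6 and 8 concrete, the following Java sketch computes α and the per-ACK increase for a set of subflows. Windows are expressed in segments and all RTTs share one time unit; the class and method names are our own, for illustration only:

```java
// Sketch of the Coupled-CC window increase, following the simplified
// Equations 6 and 8 above. Names and units are our own choices.
class CoupledCc {

    // Equation 8: alpha = W * max_i(W_i / RTT_i^2) / (sum_i W_i / RTT_i)^2
    static double alpha(double[] w, double[] rtt) {
        double total = 0, maxTerm = 0, sum = 0;
        for (int i = 0; i < w.length; i++) {
            total += w[i];
            maxTerm = Math.max(maxTerm, w[i] / (rtt[i] * rtt[i]));
            sum += w[i] / rtt[i];
        }
        return total * maxTerm / (sum * sum);
    }

    // Equation 6: on each ACK on subflow i, W_i <- W_i + min(alpha/W, 1/W_i).
    // The 1/W_i cap ensures the subflow is never more aggressive than a
    // regular TCP flow on the same path.
    static double increase(double[] w, double[] rtt, int i) {
        double total = 0;
        for (double wi : w) total += wi;
        return w[i] + Math.min(alpha(w, rtt) / total, 1.0 / w[i]);
    }

    public static void main(String[] args) {
        double[] w = {10, 10}, rtt = {1, 1};
        System.out.println("alpha = " + alpha(w, rtt));         // 0.5 in this symmetric case
        System.out.println("new W_0 = " + increase(w, rtt, 0)); // grows by 0.025, not 0.1
    }
}
```

In the symmetric two-path case above, the coupled increase (α/W = 0.025 segments per ACK) is four times smaller than the uncoupled 1/W_i = 0.1, which is how the algorithm keeps the aggregate fair towards a single TCP flow sharing the bottleneck.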
2.3.3. Packet Reordering Recovery Mechanisms
One of the most important contributions of MPTCP is its ability to redirect traffic to
the most optimal paths, as well as to remove paths or add new ones in situations of
network congestion, using CC. Sending data through multiple flows of information
improves the connection quality, leveraging path diversity to exploit the network
resources, increasing the bandwidth and balancing the load over the different paths.
However, it also increases the probability of several errors appearing, such as packet
loss, packet duplication and, above all, Out-of-Order (OOO) events.
The sequence of packets received at the destination may differ from the original order
in which the sender transmitted them. Router forwarding lulls, Link-Layer
retransmissions, route fluttering and the inherent parallelism of modern high-speed
routers are the most common causes of this event.
To avoid this effect, MPTCP was designed [3] with two different levels of sequence
spacing: a connection-level sequence number and another sequence number associated
with each path, called the subflow-level sequence number. The key to success in this
system is the mapping between the two sequence spaces, as it correlates the logical
order of a data packet with its subflow carrier; the Data Sequence Mapping (DSM)
converts between the two sequence spaces.
The connection-level sequence number is the same used in regular TCP, which is the
data sequence seen by the Application Layer. Moreover, each subflow has its own
sequence number to keep the packets in the correct order when arriving at the receiver.
Figure 8: Out-Of-Order packet algorithm. The algorithm shows the procedure that uses the buffer to classify an
incoming packet according to its sequence numbers. Figure based on the works of [13].
The MPTCP destination node examines each arriving packet to decide whether to store it
in the receive buffer, treating it as an in-order packet, or in the OOO buffer, as an
Out-Of-Order packet. Otherwise the packet is rejected, being most likely a duplicate.
The receiver first checks the subflow sequence number and then the connection-level
sequencing. If the subflow sequence number of the received packet (SF_RecSeq) equals
the expected subflow sequence number (SF_ExpectSeq), and the connection or Data
sequence number (D_RecSeq) equals the expected Data sequence number (D_ExpSeq), the
packet is considered in-sequence. However, the received packet is considered
Out-Of-Order if a sequence number is greater than expected; otherwise it is rejected.
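The classification rule above can be sketched as follows. This is a simplification: real MPTCP also handles sequence-number wrap-around and window checks, which we omit here:

```java
// Sketch of the receive-side classification described above: a packet is
// in-sequence, Out-Of-Order, or rejected as a (likely) duplicate, based on
// its subflow-level and connection-level (Data) sequence numbers.
class OooClassifier {
    enum Verdict { IN_SEQUENCE, OUT_OF_ORDER, REJECT }

    static Verdict classify(long sfRecSeq, long sfExpectSeq,
                            long dRecSeq, long dExpSeq) {
        if (sfRecSeq == sfExpectSeq && dRecSeq == dExpSeq)
            return Verdict.IN_SEQUENCE;      // goes to the receive buffer
        if (sfRecSeq > sfExpectSeq || dRecSeq > dExpSeq)
            return Verdict.OUT_OF_ORDER;     // stored in the OOO buffer
        return Verdict.REJECT;               // behind the expected numbers
    }

    public static void main(String[] args) {
        System.out.println(classify(5, 5, 100, 100)); // in sequence
        System.out.println(classify(7, 5, 102, 100)); // ahead: out of order
        System.out.println(classify(3, 5, 98, 100));  // behind: reject
    }
}
```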
In Figure 9 we can see an example of multipath communication, and how an Out-of-Order
packet can make the destination node respond with a duplicate ACK, wrongly inducing
the sender to think that the sent packet has been lost. As a result, the path enters
the Congestion Avoidance stage, harming the whole connection unnecessarily.
In this scenario, we exemplify an end-to-end connection with two subflows, SF1 and
SF2, in normal conditions. The data packets are transmitted mostly in order; however,
at some point of the connection, one of the two subflows (SF2 in this case) may suffer
delay due to a larger RTT. Data packets then have a higher probability of arriving out
of sequence at the connection level, even though they are in the correct subflow-level
order. Packets 3 and 4 are in the correct order within their corresponding subflow;
however, as SF1 is now faster than SF2, these packets arrive later than packets 5 and
6, which have already been sent. Packets 5 and 6 are considered OOO, because the
re-assembler is still missing packets 3 and 4. This situation forces the sender to
reduce SF2's CWND and enter the Congestion Avoidance phase.
Figure 9: Packet reordering example. Packets sent through the slowest path have not arrived yet to the receiver. This
causes an Out-Of-Order event, and the reordering necessity at the buffer. Figure based on the works of [13].
Diverse reordering mechanisms have been proposed to avoid retransmission ambiguity and
solve the performance problems caused by spurious retransmissions. We discuss three of
them: D-SACK, TCP-DOOR and F-RTO.
• D-SACK
“This mechanism is an extension of the selective acknowledgment SACK option
implemented for standard TCP that it is based on the use of duplicate selective
acknowledgement (D-SACK) to detect segment reordering and retracts the
associated spurious congestion response. When congestion is detected, the current
value of CWND is saved before reduction and when a sender finds that it has made
a spurious congestion response based on the arrival of a D-SACK it performs Slow-
Start to increase the current CWND to the stored value before entering in
congestion avoidance.” [12]
• TCP-DOOR
“TCP-DOOR (Detection of Out-of-Order and Response) uses the TCP timestamp
option to insert the current timestamp into the header of each outgoing segment to a
destination. The receiver copies the timestamps in the corresponding ACKs. When a
packet loss is assumed, the sender retransmits the lost segment and always uses the
stored timestamp of the first retransmission in addition to the Slow-Start threshold
(SSThreshold) and the CWND. Upon receiving the ACK of the corresponding
segment, the sender compares the timestamp of the arrived ACK with the stored one.
If the ACK’s timestamp is smaller, then the retransmission was spurious.
Subsequently, the sender simply restores the SSThreshold and the CWND to the
stored values. Once the OOO is detected the TCP-DOOR responds by temporarily
disabling the congestion control and instant recovery during congestion avoidance.
The sender keeps its state variables constant for a time period, such as RTO and
CWND, and then recovers immediately to the state before congestion avoidance
action was invoked.” [12]
• F-RTO
“The Forward RTO Recovery (F-RTO) algorithm is a TCP sender method that does
not require any TCP options to operate. After retransmitting the first
unacknowledged segment triggered by a timeout, the F-RTO algorithm at a TCP
sender monitors the incoming ACKs to determine whether the timeout was spurious
or not and also to decide whether to send new segments or retransmit
unacknowledged segments. However, if packet reordering or packet duplication
occurs on the segment that triggered the timeout, the F-RTO algorithm may not
detect the spurious timeout due to incoming dupACK.” [12]
It is commonly known that the TCP protocol performs poorly in wireless networks, since
it assumes all packet losses are due to congestion. Different studies [13], [14] have
tested and evaluated the previous packet reordering recovery methods, concluding that
by performing Slow-Start during state restoration, D-SACK allows TCP to reacquire
ACK-clocking and avoid injecting traffic bursts into the network. However, its
response is slower than that of the other algorithms.
Whilst TCP-DOOR can increase path throughput by 50%, it may lead to congestion
collapse from undelivered packets, since it disables congestion control for a period
when an OOO event is detected. Thus, TCP-DOOR does not perform well in a very
congested network. TCP-DOOR and D-SACK utilize both paths effectively. MPTCP using
D-SACK is less sensitive to path delay differences (up to 200 ms), independently of
which CC algorithm is used. Furthermore, MPTCP should use F-RTO as a packet reordering
(PR) solution if memory is a constraint.
The works in [13] show that packet reordering solutions bring a substantial performance
improvement for MPTCP, increasing the aggregate throughput as well as the path
utilization, particularly when the delay difference between subflows is less than
200 ms.
Section 3
3. State of the Art
In this section, we briefly review the work and proposals of other researchers. The
chapter is clearly divided in two parts: the first focuses on theoretical research and
experimental methods for testing MPTCP mechanisms, while the second is oriented to
future implementation ideas and uses of the protocol.
3.1. Theoretical Researches and Experimental Methods
The following papers present theoretical and experimental research done to test
different properties or mechanisms that Multipath TCP incorporates in its
implementation. Furthermore, other techniques are proposed to obtain better
performance or to expand its functionality. These articles have been useful for a
wider understanding and comprehension of the protocol and its operation.
• A Measurement-based Study of MultiPath TCP Performance over Wireless Networks [9]
Contribution: Analyses the benefits that MPTCP can provide to a wireless connection in
the wild, with special interest in testing SP, 2-MP and 4-MP connections and observing
the impact of flow size on average latency.
Methodology: Download time, RTT and out-of-order delay are the measurement metrics.
Using a University of Massachusetts server, a bench of different-sized files is
downloaded over WiFi and three cellular providers.
Results: The latency achieved by MPTCP is comparable to the smallest latency produced
by either WiFi or LTE single-path connections, except for small files under megabyte
size.

• Evaluation of Throughput Optimization and Load Sharing of Multipath TCP in
Heterogeneous Networks [11]
Contribution: Experimental study of MPTCP behaviour across different networks such as
WiFi, cellular and Ethernet, without using Congestion Control algorithms.
Methodology: Three scenarios are set up (Eth+Eth, Eth+WiFi and WiFi+3G) with traffic
generated using iperf, studying their characteristics in terms of bandwidth and delay.
Results: The performance evaluation shows that MPTCP requires special mechanisms to
balance the load, as its performance is worse than single TCP over the best path.

• An Analysis of the Impact of Out-Of-Order Recovery Algorithms on MPTCP Throughput
[13]
Contribution: Evaluation of the impact of OOO events on end-to-end throughput using
different MPTCP congestion controllers in conjunction with different TCP packet
reordering recovery algorithms.
Methodology: Compares three MPTCP congestion control algorithms (Un-CC, FC-CC and
Co-CC) combined with four recovery algorithms (D-SACK, Eifel, TCP-DOOR and F-RTO) in
NS-3 (https://www.nsnam.org/) simulations under symmetrical network conditions.
Results: Co-CC outperforms the other CCs, although it is not optimal when balancing
the load. All four PR mechanisms are effective in particular situations, but D-SACK
stands out.

• Impact of Path Characteristics and Scheduling Policies on MPTCP Performance [10]
Contribution: Experimental study providing evidence that throughput can be improved by
slight modifications to the buffers and path selection components of the
implementation.
Methodology: Path characteristics, configuration parameters and scheduling policies
are the main aspects studied. A network is emulated using NS-3 and a varying bench of
variables is tested in different data transmission scenarios.
Results: Concludes that Co-CC is fundamental to MPTCP performance, while schedulers,
path election and buffer sizes also have a significant impact; without them, MPTCP
would perform worse than TCP.

• MPTCP Is Not Pareto-Optimal: Performance Issues and a Possible Solution [15]
Contribution: The researchers argue that MPTCP is not Pareto-optimal, as it fails to
satisfy fairness between paths, blaming the Linked-Increases Algorithm (LIA) for
allowing an excessive amount of traffic over congested paths.
Methodology: Three testbed topologies representing client-server scenarios are
simulated. Emulation software provides links with configurable bandwidth and delay and
RED queuing, and iperf generates the traffic for the experimental performance tests.
Results: The Opportunistic LIA (OLIA) algorithm is proposed. Theoretical results show
that OLIA is Pareto-optimal and satisfies the three design goals of MPTCP, and the
simulation results show that it solves the problems caused by LIA.

Table 1: Theoretical research and experimental methods of MPTCP.
3.2. Future Implementation Ideas and Uses
The following table summarises proposals by different groups of researchers to apply
MPTCP functionalities to existing or new technologies, in order to reach further
objectives and take the maximum possible profit from the protocol.
• Towards Dynamic MPTCP Path Control Using SDN [8]
Contribution: Uses Software-Defined Networking (SDN) to track the available capacity
of connected paths and pick the most appropriate ones for MPTCP depending on varying
network conditions.
Methodology: A WiFi wireless client-server connection is set up with the SDN
application running on the end-user devices and edge nodes. Download times are
compared while the client fetches several files of various sizes from a Web server.
Results: MPTCP performance can be improved in an SDN-enabled network, which can
dynamically control subflows based on the available capacity of connected paths in
order to maximize download rates.

• Improving Datacenter Performance and Robustness with MPTCP [7]
Contribution: Proposes Multipath TCP as a replacement for TCP in data centres, to
effectively and seamlessly use the available bandwidth, giving improved throughput and
better fairness on many network topologies.
Methodology: The researchers run a Linux MPTCP implementation on a small cluster and
on Amazon EC2, but most of the results come from NS-2 simulations of complex
datacentre topologies with numerous nodes at high speeds.
Results: MPTCP can change the way we think about data centre design, as it makes
effective use of parallel paths in modern data centre topologies. Results show that 8
simultaneous paths are needed to achieve good values in FatTree and BCube topologies.

• Optimal Collaborative Access Point Association in Wireless Networks [16]
Contribution: Optimizes the high density of APs by associating them to enable
bandwidth-sharing collaboration while mitigating the negative impact of overlapping
coverage.
Methodology: To accurately emulate real-world WiFi deployment, residential WiFi data
was collected at more than 100 locations. This information sets up the evaluation
scenarios in numerical analysis and in 2D and 3D simulations.
Results: The proposed solution is a centralized optimization based on proportional-
fair access point association that maximizes the experience of individual end users.
The simulation results confirm its effectiveness, with considerable throughput gains.

• Seamless TCP Mobility Using Lightweight MPTCP Proxy [17]
Contribution: Presents a Linux implementation, built on a lightweight MPTCP proxy,
that enables seamless session end-point migration across multi-provider network
environments, conducting network selection and dynamic spectrum sharing to facilitate
service migration across IP domains.
Methodology: Using the proxy implementation, trials are conducted on a lab bench and
on live networks, interconnecting two interfaces of a multi-homed host, where two
different configurations are evaluated to compare TCP with MPTCP performance.
Results: The trials demonstrate the proper operation of the proxy design. However,
further investigation of the lightweight proxy's portability to actual mobile
platforms would be needed to enhance MPTCP mobility support.

• Saving Mobile Energy with Multipath TCP [18]
Contribution: Based on energy models for the different radio interfaces, schedulers
are computed for different applications in order to save energy by continuously
shifting active connections to the most energy-efficient network path.
Methodology: Simulations of a stationary application over 3G and WiFi state machines
are used for the evaluation, where random traffic patterns are performed to test the
models.
Results: A large number of different schedulers have to be deployed to cover all
circumstances, since many factors influence the application model in a dynamic way.

Table 2: Future implementation ideas and uses for MPTCP.
Section 4
4. Networking through Multiple Interfaces
In this section, we introduce the experiments performed to validate the initial
hypothesis. The objective is to determine whether a multiple-interface client-server
connection outperforms the standard single-interface method, and to analyse how much
it can benefit or harm the data transmission.
Our first proposal was to test the protocol itself in our network. However, many
difficulties appeared with this option, so a Java application was developed with the
intention of extrapolating the protocol's behaviour; this way, we were aware of and in
control of everything happening in the wireless system.
This chapter is divided in three parts: the first introduces the testbed setup
designed for the tests; the second shows the installation and configuration of
Multipath TCP in our network; and the third describes the development of the Java
application and its function.
To perform the experiments, we mainly focused on the download time of files of
different sizes. We designed a test bench of ten files of increasing size, to observe
the behaviour of the protocol in different situations. Several metrics and aspects are
important in the experimentation: download time, RTT, aggregate throughput and link
utilization.
4.1. Testbed Setup
Figure 10 shows our testbed setup. The WLAN consists of a wired server (HP Compaq 600
Pro) linked to an access point (ASUS AC2400), AP1, which is configured as the main
router, and a client (Dell OptiPlex 790) connected through two wireless adapters
(Qualcomm Atheros AR922X and TP-LINK TL-WN822N) to two wireless access points, AP2 and
AP3, considered secondary routers (Linksys WRT54G and Linksys WRT54GL). Each wireless
access point is located 1 meter away from the client, with the same separation between
them.
Figure 10: Testbed setup.
MPTCP v0.91 is installed on Ubuntu 14.04.5 LTS (Trusty Tahr) on both client and
server. The client has two wireless interfaces, 802.11g and 802.11n, activated to
access both access points simultaneously, while the server uses a single Ethernet
interface.
The Wi-Fi access points operate at 2.4 GHz but are configured on channels 1 and 11
respectively, to avoid interference between them. The transmission rate is set to
automatic for both AP2 and AP3, reaching an average transmission speed of 1 megabit
per second (Mbps), with peaks of 2 Mbps. The signal strength measured at the client's
device is -45 dBm over 802.11g and -38 dBm over 802.11n, and the RTT between the
client and the server is less than 5 ms on both paths.
Our network is not connected to the Internet, with the intention of controlling all
the traffic on it and operating without background traffic that could have an impact
on the results.
This network was designed and developed based on our needs, and all the experiments in
this dissertation have been performed over it.
4.2. MPTCP Implementation
Once we have studied and understood the implementation and behaviour of Multipath
TCP theoretically, we are ready to test the protocol in our network.
The MPTCP version used is v0.91, the latest release developed by the IP Networking
Lab (INL) of the Department of Computing Science and Engineering at Université
Catholique de Louvain (UCL) in Louvain-la-Neuve, Belgium². MPTCP v0.91 can be
downloaded from its official site³ and installed automatically on both server and
client using their apt repository. This version is based on the Linux kernel
Long-Term Support release v4.1.x.
Once the protocol is installed in our WLAN, we set up the proper protocol
configuration for our system and define the routing paths. For the protocol to work,
the kernel has to be able to split the traffic over the client's different interfaces.
Each interface is defined by an IP address and bound to a given access point. To
achieve this, we configure one routing table per outgoing interface, each routing
table being identified by a number. The route selection process then happens in two
phases: first the kernel does a lookup in the policy table, configured using ip rules;
then the corresponding routing table is examined to select the gateway based on the
destination address.
The following commands have been introduced as superuser on both connection-end
terminals to configure the routing scheme:
2 https://inl.info.ucl.ac.be/
3 http://multipath-tcp.org/
Client:
ip rule add from 192.168.1.5 table 1
ip route add 192.168.1.0/24 dev wlan0 scope link table 1
ip route add default via 192.168.1.2 dev wlan0 table 1
ip rule add from 192.168.1.6 table 2
ip route add 192.168.1.0/24 dev wlan1 scope link table 2
ip route add default via 192.168.1.3 dev wlan1 table 2
ip route add default scope global nexthop via 192.168.1.2 dev wlan0

Server:
ip rule add from 192.168.1.4 table 1
ip route add 192.168.1.0/24 dev eth0 scope link table 1
ip route add default via 192.168.1.2 dev eth0 table 1
ip rule add from 192.168.1.4 table 2
ip route add 192.168.1.0/24 dev eth0 scope link table 2
ip route add default via 192.168.1.3 dev eth0 table 2
ip route add default scope global nexthop via 192.168.1.1 dev eth0
Table 3: Commands for MPTCP rules and routing configuration.
As a result of these commands, we can now check the tables to observe the new rules
and routes established.
Figure 11: Routing IP rules and routes for MPTCP configuration.
Multipath TCP is now installed and configured in our system.
4.3. Application Development
The main objective we pursue in our project is to analyse data transmission
performance over multiple wireless interfaces. Although the most important protocol
being studied in this field is Multipath TCP, we want to perform the experiments in a
more controlled environment where we know exactly what is happening during the test.
For that reason, we have developed a Java application and deployed it in our network
(figure 10).
With it, we try to extrapolate the behaviour of MPTCP. The communication is
established using a multi-socket connection between the client and the server. To
perform simultaneous connections, a thread is spawned for each subflow, with each
interface bound to its IP address and the server port number.
Put simply, our software splits the file to be transmitted in the proportion the user
demands, generating two data chunks and sending one per interface. Since we do not
need to monitor the network state, because ours is the only traffic on it, dynamic
schedulers are not needed, which is why the file is divided at the beginning of the
connection. This is the biggest difference between the App and Multipath TCP, which
performs this process dynamically during the connection.
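The static split can be sketched as follows. The class and method names here are our own; the App's actual splitting logic appears in the ClientLauncher listing later in this section:

```java
import java.util.Arrays;

// Sketch of the static split the App performs before transmission: the file
// is divided once, at the chosen proportion, into one chunk per interface.
class FileSplitter {
    // proportion = fraction of the file sent over the first interface (0..1)
    static byte[][] split(byte[] file, double proportion) {
        int cutPoint = (int) (file.length * proportion);
        return new byte[][] {
            Arrays.copyOfRange(file, 0, cutPoint),          // first interface
            Arrays.copyOfRange(file, cutPoint, file.length) // second interface
        };
    }

    public static void main(String[] args) {
        byte[] file = new byte[1000];
        byte[][] chunks = split(file, 0.5);   // a 50/50 split
        System.out.println(chunks[0].length + " + " + chunks[1].length + " bytes");
    }
}
```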
The App is clearly divided in two parts: Server and Client.
• Server:
The server side is composed of a single class, Server.java. The server socket is
initialized and, for each incoming connection, an instance handling the client socket
is created. The server remains active from the moment it is run: it stays listening
for all incoming subflows and accepts them all until the process is terminated.
public static void main(String args[]) throws Exception {
    ServerSocket servsock = new ServerSocket(serverPort);
    System.out.println("Server Listening");
    while (true) {
        Socket clientSocket = servsock.accept();
        new Thread(new Server(clientSocket)).start();
        System.out.println("Connection Established");
    }
}
When the server starts a communication with the client, the file transmission begins.
Using an input buffer, the server downloads the file and stores it in the selected
directory. Once the file is completely received, the server socket closes the
connection with the client socket and terminates the link. The server side remains
active, listening for the next incoming client socket connection request, until its
process is terminated.
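The thesis does not reproduce the server's receive routine, so the following is only a minimal sketch consistent with the description above. The 4 KB buffer size and the way the target file is chosen are our assumptions, not taken from the actual Server.java:

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Minimal sketch of the server-side receive loop described above.
// Buffer size and file handling are illustrative assumptions.
class ReceiveLoop {
    static long receiveToFile(Socket client, File target) throws IOException {
        long total = 0;
        try (InputStream in = client.getInputStream();
             OutputStream out = new BufferedOutputStream(new FileOutputStream(target))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) { // until the client closes the socket
                out.write(buf, 0, n);
                total += n;
            }
        } finally {
            client.close(); // finish this subflow's link
        }
        return total; // bytes stored in the selected directory
    }
}
```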
• Client:
The client side is responsible for sending the data files to the server. It creates
the binding between the sockets and the physical interfaces, splits the file and
transmits it.
This part of the code is composed of two Java classes: ClientLauncher.java and
MultiThreadClient.java.
- Client Launcher is responsible for handing the data to the Multi-Thread
Client. In this class, the IP addresses of the server and the client interfaces
are initialized, and a list of threads is declared; these threads will be
launched towards the other client class. The software lets the user choose
between performing either a Single Path or a Multipath connection.
The Launcher gets the file from the client directory and calculates the split
proportion introduced by the user. If the user has chosen the MP option, the
splitting is performed when the file is launched to the Multi-Thread class, by
establishing limits on the file: a function reads all the bytes of the file,
calculates its length and divides it by the desired proportion, obtaining the
cut point of the file length. Then, the data launching starts:
1) If the user has chosen the SP option, the launcher will generate a
client instance containing the IP address and port number of the
server, the IP address of one interface and the whole file to the Multi-
Thread Client.
case 1:
    MultiThreadClient client = new MultiThreadClient(serverIP, serverPort,
            interfaceIP1, mybytearray, "Single Path");
    threadList.add(new Thread(client));
    startThreads();
    waitThreads();
    clearThreadList();
    break;
2) If the user has chosen the MP option, the launcher will generate two
client instances, both containing the IP address and port number of the
server and the IP address of one interface, as well as half of the file
for each corresponding client. The file is sent using both interfaces:
the first sends from byte 0 to the cut-point byte, and the second from
the cut-point byte to the last one, the end of the file.
case 2:
    byte[] firstChunk = Arrays.copyOfRange(mybytearray, 0, cutPoint);
    byte[] lastChunk = Arrays.copyOfRange(mybytearray, cutPoint, fileLength);
    MultiThreadClient client1 = new MultiThreadClient(serverIP, serverPort,
            interfaceIP1, firstChunk, "Multipath");
    MultiThreadClient client2 = new MultiThreadClient(serverIP, serverPort,
            interfaceIP2, lastChunk, "Multipath");
    threadList.add(new Thread(client1));
    threadList.add(new Thread(client2));
    startThreads();
    waitThreads();
    clearThreadList();
    break;
In both options, a thread is created for each client instance and stored
temporarily in the threadList, in order to calculate how long the
communication has lasted. The list is then cleared to restart the timer for
the next connection.
- Multi-Thread Client receives the inputs from the Launcher and runs its code
for each arrival, whether it belongs to an SP or an MP connection.
public void run() {
    try {
        Socket socket = new Socket();
        socket.setReuseAddress(true);
        socket.bind(new InetSocketAddress(interfaceIP, serverPort));
        socket.connect(new InetSocketAddress(serverIP, serverPort));
        OutputStream os = socket.getOutputStream();
        BufferedOutputStream bos = new BufferedOutputStream(os);
        long tStart = System.currentTimeMillis();
        bos.write(fileBytes, 0, fileBytes.length);
        long tEnd = System.currentTimeMillis();
        long totalTime = tEnd - tStart;
        System.out.println(type + " client " + interfaceIP);
        System.out.println("Time: " + totalTime + " ms\n");
        bos.close();
        socket.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
A socket instance is created for each connection request sent by the
Launcher. This socket binds to its corresponding interface and connects
to the server. Once the server accepts the connection, the client starts
transmitting the file by writing it into the output buffer. Just before
that, the timer is started and counts until the transfer ends.
Once the file transmission is finished, the software prints out the
elapsed time.
Section 5
5. Performance Evaluation
In this section, we introduce the performance evaluation of our experiments. The
chapter is divided into three parts. Section 5.1 describes the test bench designed for
the scenario explained in section 4.1. Section 5.2 then explains the procedure carried
out. Finally, the results obtained are analysed and explained in section 5.3.
5.1. Test Bench
Our main goal in this project is to measure the download time of a single path
connection and compare it with a multipath establishment in different network
scenarios, using the resources explained before.
To achieve this, we use a set of dummy binary files (.bin) of different sizes to
measure the download time. Every file download is repeated ten times in order to
obtain reliable results. The sizes of the testing files are the following:
64 KB 512 KB 1 MB 4 MB 8 MB 32 MB 64 MB 256 MB 512 MB 1 GB
For the Multipath TCP testing, an FTP server containing all the previous files was
configured on the server terminal, for the client to download them.
For the Java App, the client is the terminal that contains the files, and it uploads
them to the server computer using the code explained above. Three different tests are
performed in the Java App scenario:
• Basic Upload Test: testing files are uploaded to the server under fair and equal
network conditions for both access points. Files are split 50/50 in the multipath
connection, and the total transmission time is compared to the single path time.
• Heterogeneous Transmission-Rate Test: testing files are uploaded to the
server under unequal network conditions, where one of the two paths transmits at a
lower rate. Files are first split 50/50 in the multipath connection, and then a
fairer split is applied in order to mitigate this imbalance. Results
are compared and analysed.
• Background Traffic Test: testing files are uploaded to the server under unequal
network conditions, where one of the two paths carries background traffic. Files
are first split 50/50 in the multipath connection, and then a fairer split is
applied in order to mitigate this imbalance. Results are compared and analysed.
5.2. Software Implementation
We describe here the procedures followed to perform the tests explained in
section 5.1, using the MPTCP protocol and our software.
5.2.1. MPTCP
Unfortunately, we were not able to test Multipath TCP in our system.
Once the protocol was installed and configured, analysis with tools like
Wireshark4 and ifstat5 showed that only a single path was being used, so we
could not perform the tests designed for this case.
We still do not know why the protocol failed to work on our network.
Different configurations (including routing parameters, physical setup changes and
operating system versions) and different protocol releases were tested, and none
operated properly. We followed the installation and configuration guide provided
by INL, but it did not work in our system. Notably, on one occasion we achieved
good performance results with an old release configuration against an MPTCP
online server. However, we could not reproduce this scenario in our network, and we
were not able to achieve that performance again with the same configuration.
Our supposition for such a strange event is a mismatch in the software
versions. We think the main reason for this failure lies in an incompatibility
between the operating systems of the computers.
Figure 12 shows an example of what should be seen when using MPTCP:
Figure 12: Wireshark capture of an MPTCP packet. We can see the “Multipath TCP” option in the details of the
packet captured. Source: http://blog.multipath-tcp.org/blog/html/2016/08/23/mptcp_analyzer.html
4 https://www.wireshark.org/
5 https://linux.die.net/man/1/ifstat
Using Wireshark, we can sniff the traffic and capture the packets that we want to
study. Figure 12 shows a transmission capture, more specifically the packet
description details, where the "Multipath TCP" option is present, meaning that this
packet belongs to an MPTCP connection flow.
This protocol has been proved to work elsewhere. However, we have not been able to
make it operate in our network, for reasons we still do not know.
5.2.2. Java App
Conversely, the results of the Java simulation are highly positive.
We first show how the tests are performed; results are analysed afterwards. Our App
runs over the same WLAN setup; however, only the client keeps the routing
configuration shown in table 3, while the server uses the default configuration.
Prior to the tests, we configure the options that set the file to be transmitted
between the client and server, as well as the origin and destination folders for the
transfer. To start the simulation, the first step is to run the Server class on the
server computer (1). With this, the server is able to accept data transfer requests
from the client, and the text "Server Listening" appears in the server console. Once
we see this, we can run the ClientLauncher class on the client side (2) to establish
a connection with the server. The software then offers the user two options: to
perform either a single path or a multipath connection. Once the user makes the
choice, the client software gets the selected file to be transmitted (3) and, in the
MP case, divides it with the desired proportion on the client computer before it is
sent. The resulting data chunks are transmitted over their corresponding paths.
A folder on the server computer is the destination of both chunks (4), where they are
stored when the transmission is completed. Since the files are only for measuring
transmission time and do not contain information, file reconstruction is not needed;
we leave file merging for future work.
Number references are taken from figure 13.
Figure 13: Multi-interfaced file transmission system flow diagram. The figure shows the steps taken to perform a
simulation in our network using the Java App.
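Although reconstruction is out of scope here, the merging left for future work could look roughly as follows. This is a hypothetical sketch, not part of the App: since the MP client splits the file at a single cut point, the server could rebuild the original by concatenating the chunks in path order.

```java
// Hypothetical sketch (not part of the App) of the file merging left for
// future work: concatenate the two received chunks in path order.
public class ChunkMerger {
    static byte[] merge(byte[] firstChunk, byte[] lastChunk) {
        byte[] merged = new byte[firstChunk.length + lastChunk.length];
        System.arraycopy(firstChunk, 0, merged, 0, firstChunk.length);
        System.arraycopy(lastChunk, 0, merged, firstChunk.length, lastChunk.length);
        return merged;
    }

    public static void main(String[] args) {
        byte[] first = {0, 1, 2};
        byte[] last = {3, 4};
        System.out.println(ChunkMerger.merge(first, last).length); // 5
    }
}
```

In a real deployment the server would also need to know which subflow carried the first chunk, e.g. by tagging each chunk with its byte offset.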
We now show an example of a typical run for both the single path and multipath
options. The single path connection is referenced as number 1 in figure 14. With the
server already listening, the client sends the connection request for file uploading
and the connection is established, as seen on the server side. Option 1 (SP) is
chosen in the client console in order to perform a single path connection. The client
remains silent until the transfer is completed. Once it is over, the client console
shows the details of the connection: the kind of connection, the IP address of the
client interface that performed it, and the file transmission time. At the same time,
the server notifies that the connection is over with "Connection Closed".
There is no need to restart the software to perform another test; the code was
developed to perform as many repetitions as the user wants without running it again.
For the second example, we select option 2 (MP), which corresponds to the multipath
connection; it is referenced by number 2 in figure 14. Two "Connection Established"
messages are shown this time in the server console, as it has received two subflows
through different threads. As in the previous scenario, the client remains silent
until one of the two connections finishes transmitting. We can see that the interface
with IP 192.168.1.6 transferred its chunk first; then, a second later, interface
192.168.1.5 finished sending the remaining one. The total transmission time is equal
to the time of the longer subflow transmission. The server notifies the end of the
connection as each client confirms its transmission.
Figure 14: Capture of a connection simulation using our Java App. The left and right images belong to the client and
server consoles respectively. Two different connection examples are performed: number 1 refers to the single path
connection, while number 2 corresponds to the multipath establishment. The IP addresses of the client and the
connection time can be seen on the client console.
Figure 15 shows a capture of the ifstat tool, which gives us information about the
traffic transmitted on each interface. In our case, we focus on the wireless
interfaces only. Following the same numbering as before, number 1 refers to the SP
connection while number 2 references the MP establishment. Single path traffic is
transmitted using interface wlan0; for the multipath transfer, traffic is detected
on both interfaces.
Figure 15: Connection capture using ifstat. Two connections can be seen. Number 1 refers to single path connection
while number 2 is related to multipath establishment. ifstat shows the traffic transmitted through each interface.
Note that in the second case, when both paths transmit simultaneously, one always
prevails over the other. In this situation, wlan1 sends data faster than wlan0. As a
result, the multipath connection ends at barely half of the single path connection
time: one of the paths always lasts a little longer, which harms the total time,
because the faster subflow absorbs most of the throughput. Figure 16 shows a clear
example of this event, which we named the "link monopole effect".
Figure 16: Link monopole effect. This ifstat capture shows how one interface absorbs the network resources,
preventing the other interface from transmitting at its maximum capacity. Once the wlan1 interface has finished
transmitting, wlan0 starts operating normally.
Many tests were performed trying to figure out the reason for this event. The routing
configuration and testbed setup were changed, and the experiments were reproduced
with different APs, with the same effect observed every time. We finally conclude
that there is always one access point with higher capacity that consumes more
resources and predominates over the other, harming the total transmission time in
multipath connections. This event may be due to the drivers or to OS scheduling.
5.3. Results
Finally, we can analyse the results obtained. We have achieved what we were looking
for: the tests performed with our App demonstrate that our hypothesis was correct,
as the download time is reduced by using multiple wireless interfaces.
The full analysis follows.
5.3.1. Basic Upload Test
For the first test, files are uploaded to the server under equal network conditions,
with the access points set to automatic transmission rate, which means that the
transmission speed adapts to the network conditions.
The single path connection is always performed first, in order to take a time
reference. Then, for the multipath connection, files are split 50/50 and the total
transmission times are compared and analysed. The following figures show the results
obtained.
Experimental results are represented in figures 18 and 19 for a better resolution of
the download times: since larger files imply longer download times, small-file
results are otherwise difficult to visualize. Figure 17 gives an overview,
representing the whole experimental test bench on a logarithmic scale. It can be
observed that, as the file size grows, the difference between the total SP and MP
times increases progressively.
Figure 17: File downloading times expressed in logarithmic scale.
Figure 18: First set of results for Basic Upload Test. The figure shows the download time of 64 KB, 512 KB, 1 MB,
4 MB and 8 MB files.
Figure 19: Second set of results for the Basic Upload Test. The figure shows the download time of 32 MB, 64 MB,
256 MB, 512 MB and 1 GB files.
To analyse these results, we plot the evolution of the ratio between the MP and SP
times. Figure 20 shows how the trend line marks the reduction behaviour: the longer
both paths are simultaneously active, the more benefit the connection obtains, so
the impact of MP grows with file size. The optimal MP/SP ratio is 0.5, meaning that
MP lasts half of the SP time, thanks to the two subflows each carrying half of the
file.
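The 0.5 bound follows from a simple idealised model (our notation, not from the measurements: two interfaces of equal rate R, a file of size S, and a fraction p of the file sent over the first path):

```latex
T_{SP} = \frac{S}{R}, \qquad
T_{MP}(p) = \max\!\left(\frac{pS}{R},\; \frac{(1-p)S}{R}\right)
\quad\Longrightarrow\quad
\frac{T_{MP}(p)}{T_{SP}} = \max(p,\, 1-p) \;\ge\; \frac{1}{2}
```

with equality only at p = 1/2, i.e. the 50/50 split; real measurements stay above this bound because the two paths are never perfectly symmetric.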
In no experiment could we reach the optimal value. However, as table 4 shows, very
good results are obtained. Even for small files, the transmission time is reduced by
almost 30%. As the file grows in size, this improvement increases until it reaches
nearly half of the total single path transmission time.
File Size         64 KB  512 KB    1 MB     4 MB     8 MB     32 MB   64 MB     256 MB     512 MB     1 GB
Single Path (ms)   39.7   400.4   850.8   3424.4   7374.3   29702.4   62307   253000.0   487259.4   967579
Multipath (ms)     28.9   289.4   606.8   2494.5   5160.0   20424.0   41544   159972.6   307703.8   608793
Reduction         27.2%   27.7%   28.7%    27.2%    30.0%     31.2%   33.3%      36.8%      36.9%    37.1%
Table 4: Results for the Basic Upload Test.
Figure 20: MP/SP download time ratio. As the file size increases, the ratio between the MP and SP download times
becomes progressively smaller, as shown by the trend line.
With this first experiment, our hypothesis is already validated. The following tests
reproduce different common scenarios that could be experienced in the network.
5.3.2. Heterogeneous Transmission-Rate Test
For the second test, files are uploaded to the server under unequal network
conditions, where one of the two access points transmits at a lower rate. This time
we only test MP performance, as the key factor is that one of the paths is slower
than the other. Secondary Router 2 is set at 1 Mbps, while Secondary Router 1 is set
at 2 Mbps. However, because of the link monopole effect (see figure 16), path 2
transmits below its capacity until path 1 finishes; only then can the access point
with the lower rate transmit at its maximum, but limited, rate.
Files are first split 50/50 to take a reference and see how the different
transmission rates harm the connection. Then a fairer splitting ratio is applied in
order to even out the link performance.
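As a sketch of the idea behind such a fairer split (a hypothetical helper, not part of our App), the cut point can be made proportional to the path rates, so that both subflows would ideally finish at the same time:

```java
public class SplitCalculator {
    // Cut point proportional to path rates: path 1 gets rate1/(rate1+rate2)
    // of the file, so both subflows ideally finish together.
    static int cutPoint(int fileLength, double rate1, double rate2) {
        double p = rate1 / (rate1 + rate2); // fraction sent over path 1
        return (int) Math.round(fileLength * p);
    }

    public static void main(String[] args) {
        // 2 Mbps vs 1 Mbps suggests a ~67/33 split.
        System.out.println(cutPoint(8 * 1024 * 1024, 2.0, 1.0));
    }
}
```

Note that for 2 Mbps against 1 Mbps this rule suggests roughly 67/33, whereas our experiments settled empirically on 75/25 (splitting ratio 4); a plausible explanation is the link monopole effect of figure 16, which further penalises the slow path while both transmit.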
Three files are tested in this scenario: 8 MB, 64 MB and 512 MB.
Figure 21 shows the first set of tests for the Heterogeneous Transmission-Rate Test.
The client transmits the 8 MB file using both paths, sending 4 MB over each, i.e. a
50/50 split. The results show that the slower path takes more than twice the
duration of the first path. To solve this inequality, another splitting ratio is
applied: this time we divide the file with a splitting ratio of 4, sending 75%
through the fastest path while the remaining 25% goes through the slow path. A
better balance clearly benefits the connection time, as the total time is reduced by
37% over many repetitions of this experiment. Note that, although the first path
carries three times more traffic than the second one, it still finishes first. It
should also be noted that the time of the fastest path increases by 20%; however,
this increase does not affect the total connection time.
Figure 21: First set of tests for the 8 MB file in the Heterogeneous Transmission-Rate Test. The left bars show a
heterogeneous connection with a 50/50 split, while the right ones represent the same connection with splitting ratio
4. A time improvement can be observed thanks to the better splitting ratio.
Figure 22: Second set of tests for the 64 MB file in the Heterogeneous Transmission-Rate Test. The left bars show a
heterogeneous connection with a 50/50 split, while the right ones represent the same connection with splitting ratio
4. A time improvement can be observed thanks to the better splitting ratio.
Figure 23: Third set of tests for the 512 MB file in the Heterogeneous Transmission-Rate Test. The left bars show a
heterogeneous connection with a 50/50 split, while the right ones represent the same connection with splitting ratio
4. A time improvement can be observed thanks to the better splitting ratio.
Similar results are obtained for the 64 MB and 512 MB files: in both cases we
observe the same behaviour as in the 8 MB experiment. Figures 22 and 23 show the
results. As in the previous case, the fairer factor-4 re-split provides a more
balanced connection by leveraging the fastest path. The total time improves by 39%
in the 64 MB case and by 34% in the 512 MB case. The fastest paths' times also
increase; however, this does not affect the total connection time.
Table 5 summarizes the results of the second test. In all cases the balance clearly
improves, as the time of the fastest path approaches the slower path's time, giving
a higher percentage between them. This reduces the time of the second subflow at the
cost of increasing the first subflow's time.
File Size                    8 MB        64 MB       512 MB      Average
Balance (50/50 → ratio 4)    44% → 83%   32% → 87%   36% → 92%   37% → 87%
SF 1 Increment               20%         66%         72%         53%
SF 2 Reduction               37%         39%         34%         37%
Table 5: Results for the Heterogeneous Transmission-Rate Test.
5.3.3. Background Traffic Test
For the third and last test, files are uploaded to the server under unequal network
conditions, where one of the two paths carries background traffic. This time we only
test MP performance, as the key factor is that one of the paths is busier than
desired.
Files are first split 50/50 to take a reference and see how the additional traffic
harms the connection. Then a fairer splitting factor is applied in order to mitigate
the unbalanced data load.
Background traffic is generated using the iperf6 tool. We set up UDP traffic with
different throughput values through the interface linked to AP3. The commands used
on the server and client sides are the following:
- Server: iperf -s
- Client: iperf -c 192.168.1.4 -u -b 1M -t 3600 -B 192.168.1.6
In the client command, -u selects UDP, -b sets the target bandwidth (here 1 Mbps),
-t the duration in seconds, and -B binds iperf to the chosen local interface.
Figure 24 shows the first set of experiments for the 8 MB file in this scenario.
Three background traffic throughputs are used for the tests, 0.25 MBps, 0.5 MBps
and 0.75 MBps, in all cases through the second path. In each situation, two
splitting ratios are applied in order to see the network behaviour under this
unequal load.
As represented in figure 24, the bars show the transmission times; each coloured
bar refers to one of the two paths, and the total time equals the longer path's
time.
6 https://iperf.fr/
Figure 24: First set of tests for the 8 MB file in the Background Traffic Test. The test includes three background
traffic rates (0.25, 0.5 and 0.75 MBps) and their corresponding splitting ratios (3, 4 and 5) for each affected path.
Percentages of the total time are indicated above the bars, and the path time variations are shown between each
pair of bars.
Odd pairs of bars correspond to the unfair splitting ratio, while the even pairs to
their right represent the same network scenario with a more balanced load, i.e. a
different splitting ratio. For example, the first pair shows a situation where the
client sends the 8 MB file over both paths with a 50/50 split while 0.25 MBps of
background traffic is sent over path 2. This gives some advantage to path 1, which
can transmit faster, without additional traffic. As represented in figure 24, the
second path defines the total connection time, and the bigger the percentage of the
first path, the better the traffic load during the connection. For the second pair,
a split ratio of 3 is set: 66.66% of the file is sent over the path without
background traffic, while the other 33.33% is transmitted through the busy path.
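Reading the splitting ratios used in this test (3, 4 and 5 giving 67/33, 75/25 and 80/20), a ratio r appears to send (r-1)/r of the file over the free path. This small hypothetical helper (our reading of the terminology, not part of the App) makes the mapping explicit:

```java
public class SplitRatio {
    // Our reading of the thesis' terminology: a "splitting ratio" r sends
    // (r-1)/r of the file over the free path and 1/r over the busy path.
    static double fastShare(int r) {
        return (r - 1) / (double) r;
    }

    public static void main(String[] args) {
        for (int r : new int[] {3, 4, 5, 6, 8}) {
            System.out.printf("ratio %d -> %.1f/%.1f split%n",
                    r, 100 * fastShare(r), 100.0 / r);
        }
    }
}
```

This reproduces the splits quoted in the text: ratio 3 gives 66.7/33.3, ratio 4 gives 75/25, and ratio 8 gives 87.5/12.5 (rounded to 88/12).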
Split Ratio                  67/33       75/25       80/20       Average
Balance (50/50 → fair)       64% → 94%   56% → 94%   45% → 83%   55% → 90%
SF 1 Increment               36%         56%         72%         55%
SF 2 Reduction               8%          5%          7%          7%
Table 6: Results for the Background Traffic Test for 8 MB file.
Table 6 shows the results for this test. We can see that the total time barely
improves. Contrasting these results with the other experiments, the same event is
observed: we performed similar tests with different background traffic and splitting
ratio parameters, and no connection time improvement was observed in any case.
However, we think the important conclusion of this test is the better load balance.
The load is better redistributed over the paths, since the durations of both paths
are more homogeneous, which means the network resources are better balanced.
Figure 25: Second set of tests for the 64 MB file in the Background Traffic Test. The test includes three background
traffic rates (0.5, 1 and 1.5 MBps) and their corresponding splitting ratios (4, 6 and 8) for each affected path.
Percentages of the total time are indicated above the bars, and the path time variations are shown between each
pair of bars.
Furthermore, when we perform similar tests on a 64 MB file, the results are more
conclusive. Figure 25 not only shows a better load balance, but also a higher
improvement in the total connection time. This time we injected more background
traffic through the second path, as the file was also heavier; for the same reason,
the splitting ratios are larger too: a 75/25 split is used for 0.5 MBps of
background traffic, 83/17 for 1 MBps and 88/12 for 1.5 MBps.
Split Ratio                  75/25       83/17       88/12       Average
Balance (50/50 → fair)       57% → 89%   44% → 84%   31% → 83%   44% → 85%
SF 1 Increment               35%         64%         74%         58%
SF 2 Reduction               14%         14%         34%         21%
Table 7: Results for the Background Traffic Test for 64 MB file.
As shown in figure 25 and table 7, the results improve greatly in comparison with
the 8 MB file. For the first and second tests, the total time was reduced by 14%
(to 86% of the reference), saving more than 5 seconds. More notably, the third
test, with a factor-8 splitting ratio, reduced the total time by 34%, approximately
20 seconds less. Although the fast path's time increased, the overall connection
performance was highly improved.
Section 6
6. Conclusions
Although mobile devices are already equipped with multiple wireless interfaces, we
are not taking full advantage of what these systems can provide. Given the future
situation predicted by the Cisco analysis, Multipath TCP could mean a step forward
in telecommunications, since it would exploit the network's capabilities.
In this project, we have studied this new protocol in depth and performed several
tests in different network scenarios to validate our original hypothesis: data
transmission performance can be optimized by using multiple wireless interfaces.
Our experimental results show that transmitting data over multiple interfaces can
greatly benefit the connection by reducing transmission times and achieving a better
data load balance. Moreover, protocols that implement these mechanisms, such as
Multipath TCP, provide further advantages. The use of several paths to send and
receive data improves the user experience, since it can increase both the
reliability of the connection and the end-to-end throughput, while dynamically
adding and removing paths to obtain a better and fairer global connection that
avoids congested network situations. Furthermore, this methodology can improve the
device mobility experience, thanks to the redundancy of paths that gives the
connection a robust and seamless character.
6.1. Discussion
We are aware that our software does not perform the same way MPTCP does. However,
our intention in this dissertation was to extrapolate its behaviour, and with this
we demonstrated that multiple wireless interfaces outperform the standard
single-interface connection.
Our results show how connection times are generally shortened. Using several paths,
as MPTCP does, we obtain at worst the same download times as single path TCP, and
usually shorter ones. The impact of this improvement grows with file size: the
longer the connection can operate with multiple interfaces, the better the results.
Furthermore, data load balancing is an important factor that helps the network
redistribute its resources, yielding a fairer connection that benefits all connected
users.
Despite everything, we know that our results could be better. The root of some of
the problems encountered during the experiments could lie in the quality of the
devices: we used domestic access points and ordinary computers, which could explain
some of the rare events experienced.
6.2. Future Work
For future work, we are mainly focused on testing genuine Multipath Transmission
Control Protocol. More precisely, we are interested in testing the protocol over
different access technologies; new standards such as 802.11ac and 802.11ad could be
appealing options. Furthermore, combining different access technologies in hybrid
performance tests, using different WiFi standards and cellular networks, would be an
interesting research path.
Mobility and network selection are further fields in which MPTCP should be
researched, as it would be able to switch connections for economic, coverage or
battery consumption reasons.
Additionally, we could also improve our network setup and software. Better
performance would be reachable if dynamic scheduling, i.e. an adaptive splitting
ratio, could be applied.
References
[1] T. Cisco, “Cisco Visual Networking Index : Global Mobile Data Traffic Forecast
Update , 2016 – 2021,” Growth Lakel., vol. 2016, no. 4, 2016.
[2] J. Postel, Ed., "Transmission Control Protocol," RFC 793, 1981.
[3] S. Barré, C. Paasch, and O. Bonaventure, “MultiPath TCP: From theory to
practice,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell.
Lect. Notes Bioinformatics), vol. 6640 LNCS, no. PART 1, pp. 444–457, 2011.
[4] O. Bonaventure, M. Handley, and C. Raiciu, “An Overview of Multipath TCP,”
USENIX login, pp. 17–23, 2012.
[5] A. Ford, C. Raiciu, and O. Bonaventure, “TCP Extensions for Multipath
Operation with Multiple Addresses,” RFC Editor, 2013.
[6] Barclay’s, “The last mile,” pp. 1–15, 2015.
[7] C. Raiciu, S. Barre, C. Pluntke, A. Greenhalgh, D. Wischik, and M. Handley,
“Improving datacenter performance and robustness with multipath TCP,” Proc.
ACM SIGCOMM 2011 Conf. SIGCOMM - SIGCOMM ’11, p. 266, 2011.
[8] H. Nam, D. Calin, and H. Schulzrinne, “Towards dynamic MPTCP Path control
using SDN,” IEEE NETSOFT 2016 - 2016 IEEE NetSoft Conf. Work. Software-
Defined Infrastruct. Networks, Clouds, IoT Serv., pp. 286–294, 2016.
[9] Y.-C. Chen, Y. Lim, R. J. Gibbens, E. M. Nahum, R. Khalili, and D. Towsley,
“A Measurement-based Study of MultiPath TCP Performance over Wireless
Networks Categories and Subject Descriptors,” Internet Meas. Conf., pp. 455–
468, 2013.
[10] B. Arzani, A. Gurney, S. Cheng, R. Guerin, and B. T. Loo, “Impact of path
characteristics and scheduling policies on MPTCP performance,” Proc. - 2014
IEEE 28th Int. Conf. Adv. Inf. Netw. Appl. Work. IEEE WAINA 2014, pp. 743–
748, 2014.
[11] S. C. Nguyen, X. Zhang, T. M. T. Nguyen, and G. Pujolle, “Evaluation of
throughput optimization and load sharing of multipath TCP in heterogeneous
networks,” 8th IEEE IFIP Int. Conf. Wirel. Opt. Commun. Networks,
WOCN2011, vol. 6, 2011.
[12] C. Paasch, S. Ferlin, O. Alay, and O. Bonaventure, “Experimental evaluation of
multipath TCP schedulers,” Proc. 2014 ACM SIGCOMM Work. Capacit. Shar.
Work. - CSWS ’14, pp. 27–32, 2014.
[13] A. Alheid, D. Kaleshi, and A. Doufexi, “An Analysis of the impact of out-of-
order recovery algorithms on MPTCP throughput,” Proc. - Int. Conf. Adv. Inf.
Netw. Appl. AINA, pp. 156–163, 2014.
[14] D. Wischik and C. Raiciu, “Design, implementation and evaluation of congestion
control for multipath TCP,” … Implement., pp. 1260–1271, 2011.
[15] R. Khalili, N. Gast, M. Popovic, and J. Y. Le Boudec, “MPTCP is not pareto-
optimal: Performance issues and a possible solution,” IEEE/ACM Trans. Netw.,
vol. 21, no. 5, pp. 1651–1665, 2013.
[16] O. B. Karimi, J. Liu, and J. Rexford, “Optimal collaborative access point
association in wireless networks,” Proc. - IEEE INFOCOM, pp. 1141–1149,
2014.
[17] G. Hampel, A. Rana, and T. Klein, “Seamless TCP mobility using lightweight
MPTCP proxy,” Proc. 11th ACM Int. Symp. Mobil. Manag. Wirel. access -
MobiWac ’13, pp. 139–146, 2013.
[18] C. Pluntke, L. Eggert, and N. Kiukkonen, “Saving mobile device energy with
multipath TCP,” Proc. sixth Int. Work. MobiArch - MobiArch ’11, p. 1, 2011.