
IMPLEMENTATION AND EVALUATION OF A PERFORMANCE

ENHANCING PROXY FOR WIRELESS TCP

Master Thesis

by

Dennis Dungs


Department of Communication Technology

Wireless Networking Group / IP Lab

Aalborg University

Niels Jernes Vej 12

9220 Aalborg

IMPLEMENTATION AND EVALUATION OF A PERFORMANCE

ENHANCING PROXY FOR WIRELESS TCP

Project Period: September 1st 2003 - March 30th 2004

Author: Dennis Dungs

Supervisor: Assoc. Prof. Hans-Peter Schwefel

Number of pages: 138

Number of copies: 7

Abstract:

Future mobile networks are expected to support heterogeneous wireless access technologies allowing

vertical handover between these technologies. Different access technologies have been developed

and optimized for maximum throughput, and mobility support mechanisms have been deployed to

prevent breakdowns of the communication between distributed systems. The Transmission Control

Protocol (TCP) is known to potentially show performance degradations in wireless settings, and

TCP proxies modifying the TCP congestion control behavior are a common method to overcome

these inefficiencies.

This thesis will examine whether these assertions hold when Wireless LAN or Bluetooth is used as the wireless link in an experimental network. Based on these results, an implementation of a TCP

proxy is presented and evaluated.


Dedicated to my family


Table of Contents

Table of Contents
List of Tables
List of Figures
Abstract
Acknowledgements
1 Introduction
2 Background
  2.1 ISO/OSI Model
  2.2 The TCP Specification
    2.2.1 Goals of TCP
    2.2.2 The TCP Header
    2.2.3 Connection Setup
    2.2.4 Data Communication
    2.2.5 Connection Termination
  2.3 TCP Flavours
    2.3.1 Common TCP Flavours
    2.3.2 TCP Flavours using TCP Header Options
    2.3.3 Wireless TCP Flavours
    2.3.4 Other TCP Flavours
  2.4 Wireless Access Technologies
    2.4.1 Introduction to Wireless LAN
    2.4.2 Introduction to Bluetooth
3 Evaluation Method
  3.1 Scenarios
    3.1.1 Network Architecture
    3.1.2 Mobility
    3.1.3 Traffic Model
  3.2 Measurement procedure
  3.3 Performance Metrics
    3.3.1 Instantaneous Throughput
    3.3.2 Instantaneous Averaged Throughput
    3.3.3 Transmission Throughput
    3.3.4 Round-Trip Times
    3.3.5 Handover delay
    3.3.6 Other Performance Metrics
  3.4 Conclusion
4 Evaluation of standard TCP
  4.1 TCP over Ethernet and Serial Links
    4.1.1 Influence of Test-Setup
    4.1.2 Influence of Serial links on Performance
  4.2 TCP over Wireless LAN
    4.2.1 General Performance of TCP over WLAN
    4.2.2 Curve fitted Transmission Throughput
    4.2.3 Influence of Bit Error Rates on TCP Performance
    4.2.4 Influence of Cross-Traffic on TCP Performance
    4.2.5 Influence of Handovers on TCP Performance
  4.3 TCP over Bluetooth
    4.3.1 Performance of a Bluetooth link in adhoc scenario
    4.3.2 Throughput of a Bluetooth link using an access point
  4.4 Conclusions
5 Implementation of the TCP Proxy
  5.1 Proxy Terminology
  5.2 The Split TCP Approach
  5.3 Security
  5.4 Proxy Location
    5.4.1 Meta-Model
    5.4.2 TCP proxy with handover support in intrasubnet handovers
    5.4.3 TCP proxy with handover support in intersubnet handovers
    5.4.4 TCP proxy without proxy handover
    5.4.5 TCP proxy with Mobile IP
    5.4.6 Conclusion about the Proxy Location
  5.5 Network Implementation
    5.5.1 Header Option Approach
    5.5.2 IP Tunneling Approach
    5.5.3 In-Path Approach
    5.5.4 ARP Approach
    5.5.5 Routing Approach
    5.5.6 Conclusion
  5.6 Proxy Functionality and Software Architecture
    5.6.1 Mirroring Proxy Implementation
    5.6.2 Split TCP Implementation
    5.6.3 Mobile IP Daemon Implementation
6 Evaluation of the TCP Proxy in Wireless Scenarios
  6.1 Influence on RTTs
  6.2 Influence of Delayed ACKs on Throughput
  6.3 Influence of the Delay-ACK Timer on Throughput
  6.4 Further Evaluations and Future Work
7 Conclusion
A TCP
  A.1 The TCP Header
  A.2 TCP Flavours
    A.2.1 Vegas
    A.2.2 Forward Acknowledgements
    A.2.3 Total Acknowledgements
    A.2.4 Header Checksum Option
    A.2.5 Transactional TCP
    A.2.6 Multicast TCP
    A.2.7 Smooth-Start + Dynamic Recovery
    A.2.8 TCP Pacing
B Hardware Specification
C Additional Related Measurement Results
D Performance Test Tools
  D.1 IPerf
  D.2 Ethereal
  D.3 tcptrace
  D.4 UDPBurst
Bibliography


List of Tables

3.1 Overview of different Access Technology Parameters
4.1 Comparison of transmission times with Ethereal disabled/enabled
4.2 UDP Throughput over 8 MBit/s serial links
4.3 Statistical Performance Parameters of UDP over Wireless LAN
4.4 Statistical Performance Parameters of TCP over Wireless LAN
4.5 Statistical Comparison between UDP and TCP over Wireless LAN
4.6 Curve fitted Transmission Throughput parameters
4.7 Transmission throughput of a TCP stream competing with a 500 kBit UDP stream over Wireless LAN
4.8 Cumulative Transmission Throughput of TCP over Wireless LAN with different constant packet rate UDP streams
4.9 Handover delays of TCP in a WLAN handover scenario
4.10 Average Transmission Throughput of UDP after 10s in a Bluetooth Adhoc Scenario
B.1 Hardware specifications of used routers
B.2 Hardware specifications of used switches
B.3 Hardware specifications of used WLAN access point
B.4 Hardware specifications of used PAN access point
B.5 Hardware specifications of used fixed hosts
B.6 Hardware specifications of the proxy hosts
B.7 Hardware specifications of Mobile Node 1
B.8 Hardware specifications of Mobile Node 2
B.9 Hardware specifications of the 3Com WLAN adapter card
B.10 Hardware specifications of the Belkin Bluetooth adapter
C.1 Throughput of two competing UDP streams over Wireless LAN
C.2 Throughput of a TCP stream competing with a 3 MBit UDP stream over Wireless LAN
C.3 Throughput of two competing TCP streams over Wireless LAN
C.4 Throughput of a TCP stream competing with a 1 MBit burst UDP stream over Wireless LAN
C.5 Transmission Throughput of a TCP stream over a Bluetooth link using an access point


List of Figures

2.1 ISO/OSI-Model
2.2 The TCP Header
2.3 TCP Connection Setup
2.4 TCP Data Communication
2.5 Duplicate Acknowledgement Situations in case of packet loss (left figure) or reordering (right figure)
2.6 TCP Connection Termination
2.7 TCP Tahoe's slow start and congestion avoidance algorithm
3.1 Meta-model of the considered infrastructure
3.2 Fully wired network architecture
3.3 Single Access Point Network Architecture
3.4 Mobility Support Network architecture
4.1 UDP Throughput over 8 MBit/s serial links at sender (left figure) and receiver side (right figure)
4.2 TCP Transmission Throughput over an 8 MBit/s serial link
4.3 Instantaneous Averaged TCP Throughput over an 8 MBit/s serial link
4.4 TCP RTTs over 8 MBit/s serial links
4.5 Performance of UDP over Wireless LAN
4.6 TCP Performance over Wireless LAN
4.7 Curve fitted Transmission Throughput and Residuals
4.8 Round-Trip Times of TCP over Wireless LAN
4.9 Single RTT Measurement over WLAN
4.10 WLAN Location Scenario
4.11 Influence of distances and obstacles on TCP's Performance over Wireless LAN
4.12 Network model to measure the influence of cross-traffic on TCP's performance over Wireless LAN
4.13 TCP Performance over Wireless LAN in presence of a competing 500 kBit/s UDP stream
4.14 Cumulative Throughput of TCP over Wireless LAN with different constant packet rate UDP streams
4.15 Instantaneous Transmission in a WLAN handover situation
4.16 A Problem in the DHCP client of the mobile nodes causes long network layer disconnection
4.17 Network model of Bluetooth adhoc scenario
4.18 UDP performance over a Bluetooth link in adhoc scenario
4.19 TCP Performance over a Bluetooth link in Adhoc Scenario
4.20 Single RTT measurement over Bluetooth in adhoc scenario
4.21 TCP performance over a Bluetooth link in adhoc scenario
4.22 Single UDP Performance over Bluetooth Run using an access point
4.23 UDP Transmission Throughput over Bluetooth using an access point
4.24 TCP Transmission Throughput over Bluetooth using an access point
4.25 TCP RTTs over a Bluetooth link using an access point
5.1 The Split TCP Approach
5.2 Meta-Model for network integration of a TCP proxy
5.3 Model for Network Implementation of a TCP proxy supporting proxy Handover in intrasubnet Handover Scenarios of a Mobile Node
5.4 Model for Network Implementation of a TCP proxy supporting proxy Handover in intersubnet Handover Scenarios of a Mobile Node
5.5 Model for Network Implementation of a TCP proxy without proxy Handover in Handover Scenarios of a Mobile Node
5.6 Model of Mobile IP Data-Flow
5.7 IP Header Modification model
5.8 IP Tunneling Model
5.9 In-Path Model
5.10 Model of the ARP Approach
5.11 TCP proxy implementation using policy-based routing
5.12 High-Level-Design of the mirroring TCP Proxy
5.13 High-Level-Design of the Split TCP Proxy
5.14 The qualitative impact of different proxy functionalities on connection setup delays in TCP
5.15 High-Level Design of the Mobile IP Module
6.1 Influence of the TCP Proxy on RTTs
6.2 Transmission throughput with different number of ACKs transmitting 30 MByte of data
6.3 Transmission Throughput with different delayed ACK timeout values
A.1 TCP State Machine


Abstract

The difference between theory and practice is a lot bigger in practice than in theory.

(Peter van der Linden in "Expert C Programming: Deep C Secrets")

Future mobile networks are expected to support heterogeneous wireless access technologies allow-

ing vertical handover between these technologies. Different access technologies have been developed

and optimized for maximum throughput, and mobility support mechanisms have been deployed to

prevent breakdowns of the communication between distributed systems. The Transmission Control

Protocol (TCP) is known to potentially show performance degradations in wireless settings, and

TCP proxies modifying the TCP congestion control behavior are a common method to overcome

these inefficiencies.

This thesis examines whether these assertions hold when Wireless LAN or Bluetooth is used as the wireless link in an experimental network. Based on these results, an implementation of a TCP proxy

is presented and evaluated.

Chapter 1 presents a general introduction to the project and states its goal.

Chapter 2 introduces the specification of the Transmission Control Protocol, its goals, concepts and

flavours.

Chapter 3 describes the experimental approach to measure the performance of TCP and the con-

sidered scenarios.

Chapter 4 evaluates the performance of the Transmission Control Protocol in the defined scenarios.

Chapter 5 discusses the different implementation possibilities of a TCP proxy and presents the ap-

proach followed in this thesis.

Chapter 6 evaluates the basic functionality of the TCP proxy.

Chapter 7 ends with the conclusion about the performance of TCP over wireless links.


Acknowledgements

First, I would like to thank Assoc. Prof. Schwefel, Prof. Jessen, Dr. Jobmann and Prof. Prasad for

making this thesis possible.

Special thanks to my supervisor, Assoc. Prof. Hans-Peter Schwefel, for his guidance, support,

hints and ideas. This thesis is a direct consequence of his supervision.

Many thanks to my parents, Angelika and Werner, for supporting and believing in me through-

out my whole studies.

Many thanks also to my parents, my sister Janine and her boyfriend, Alex, and my sister Carolin for giving me "nutritional" support during my thesis in Denmark.

Thanks also to Markus, Janine and Martin for commenting on my thesis.

Thanks to the people from Winglab, namely Rui, Ezequiel, Martin, Sergio, Imad, Suvra, Yaoda,

Lars, Daniel, Basak, Homare and Witold for some amazing months in Denmark. Also thanks to all

the people I met during my stay in Denmark.

And last, but not least, I want to thank my friends in Germany, Quirin, Markus, Martin,

Christoph, Carmen and Mike for always being there when needed.


Chapter 1

Introduction

In today's networks, TCP/IP has become the most widely used communication protocol. It is not only used in the world's largest network, the Internet, but also more and more in all kinds of computer networks, such as measurement and control networks. Nowadays, wireless access technologies and mobility support have been developed to allow network hosts, particularly end systems, to move. People are able to read their e-mails everywhere and to do their work from any place in the world with their own laptop.

When the congestion control mechanisms of the Transmission Control Protocol (TCP) were introduced by Van Jacobson in 1988, some assumptions were made. The Bit Error Rate (BER) at the link layer and disconnections on the link layer were considered negligible. For Ethernet, the BER is typically in the range of 10^-6 and disconnections on the link layer are rather unlikely. It was therefore assumed that packet loss only occurs in case of congestion in the network. Packets are dropped when a node on the path from sender to receiver cannot accept a packet due to a full packet buffer. Van Jacobson developed a mechanism for detecting the maximum transfer rate available in the current network [26]. This mechanism is divided into two parts: flow control and congestion window adjustment.

mechanism is divided into two parts: flow control and congestion window adjustment.

Flow control prevents the receiver from being flooded with more packets in a short time than it can process. Congestion window adjustment is a "continuous probing" algorithm that tries to determine the currently available maximum bandwidth by observing packet loss: the sender's packet rate is increased until packet loss occurs, then set back to a start value and increased again.

Different implementations showing better performance than the original implementation have been


proposed, for example NewReno, SACK and Vegas. These implementations are still based on the assumptions from Van Jacobson and perform well in wired scenarios. As more and more mobile hosts are integrated into current networks, these assumptions seem to be unsustainable for wireless links. Wireless links suffer from a higher Bit Error Rate (BER), reported to be around 10^-5 or more. Times of physical disconnection, particularly when a mobile node switches its point of attachment to a new network and is handed over from one access point to another, may degrade the performance of TCP.

This thesis first describes the TCP specification as it is currently standard in the Internet. Second, different TCP implementations for wired links and their enhancements, as well as proposed solutions for a wireless TCP implementation, are presented. The main part investigates the TCP performance over wireless technologies in an experimental setting. The investigation covers different wireless scenarios over different access technologies, in particular Wireless LAN and Bluetooth. Based on the identification of performance problems of TCP over the wireless links, a possible improvement that is based on a split connection approach, its integration into the network and its software design will be shown. The evaluation of the proposed solution concludes the work.


Chapter 2

Background

TCP uses a congestion control algorithm to transmit data at the maximum available bandwidth.

To analyze the performance of TCP in wireless settings, a deep understanding of the dynamics of

TCP is required. The specification of TCP is therefore described in this chapter. The specification basically describes the functional requirements on an implementation of TCP, but it does not describe the algorithm used to perform congestion control and to send data at the maximum available bandwidth. Researchers have therefore developed different approaches to implement this algorithm. These approaches are called flavours and are also described in this chapter.

The performance of TCP over wireless links is evaluated using two wireless access technologies,

namely Wireless LAN and Bluetooth. An introduction to Wireless LAN and Bluetooth is given in

this chapter.

2.1 ISO/OSI Model

The International Organization for Standardization (ISO) has developed a layered structure to describe the different tasks in a networking operating system. The structure is called the Open System Interconnection (OSI) model, which is often referred to as the ISO/OSI model. This model consists of 7 layers. Every layer has its specific functions and goals and uses the functions provided by the underlying layer to fulfill its goals.

The 7 layers are, from top to bottom: Application Layer, Presentation Layer, Session Layer, Transport Layer, Network Layer, Link Layer and Physical Layer. Figure 2.1 shows the structure of the


ISO/OSI-Model.


Figure 2.1: ISO/OSI-Model

The Application Layer defines the topmost layer in the model. Its purpose is to provide function-

ality for the end user, for example writing e-mails or browsing web pages. Common Application Layer protocols are HTTP, SMTP and POP3. These protocols use byte streams to communicate with

each other.

The Presentation Layer translates the system-dependent byte stream (for example ANSI-Code) to

a system-independent stream (for example UNICODE).

The Session Layer provides functionality to use sessions for user authentication and for splitting the byte stream into logical tokens.

The Transport Layer provides a logical end-to-end connection for an application. The tokens delivered to the Transport Layer are split into so-called segments and sent through the Network Layer. The end-to-end connection might be reliable, as in TCP, or unreliable, as in UDP. Units of data including the transport layer headers will be called segments in this thesis. (Some papers distinguish between TCP segments and UDP datagrams; in order to avoid confusion, the term segment will be used for both UDP and TCP here.)

The Network Layer delivers the segments from the Transport Layer from one end to the other. The goal of the Network Layer is to find a path from the sender to the receiver and to send the packet along this path. A widespread example of a Network Layer protocol is IP. Units of data at the

network layer will be termed datagrams.

The Link Layer provides a connection between two neighboring nodes in a network. The Link Layer can be divided into two sub-layers: medium access control and link-layer control. The medium access control sub-layer controls which node is currently allowed to use the medium. The link-layer control adds error correction code to the bit stream to provide a reliable transmission of the bits. Examples of Link Layer protocols are Ethernet and Token Ring. Units of data sent at this layer will be called frames.

Last, but not least, the Physical Layer defines the electrical specifications of a network including medium, voltage levels and termination. Examples of Physical Layers are 10BaseT twisted pair and FDDI.

The lower four layers (Physical, Data Link, Network and Transport Layer) are often called the transport system, whereas the upper three layers (Session, Presentation and Application Layer) are often implemented in one application on the target system. The term packet is used in the general case for units of data on any layer.

2.2 The TCP Specification

TCP [41] resides on the Transport Layer and was designed to realize a reliable, bidirectional end-to-

end connection for data transfer on top of the Internet Protocol (IP). When IP is mentioned here,

it always refers to IP version 4 [40]. The next section will describe the basic concepts of TCP and the requirements the specification places on implementations.

2.2.1 Goals of TCP

IP provides a basic service for sending data over a network. Every host in an IP-Network gets a

specific address, called an IP address. The IP address is a unique number for an interface of a host to a network. Data sent over an IP network is routed through the network from the source

host to the destination host at best effort. There is no guarantee that a datagram sent over an

IP-network is received at the destination host. It may be lost in case of a full buffer at a routing

point. If more than one packet is sent, the packets do not necessarily arrive in the order they were


sent. There is also no support for any kind of multiplexing. TCP tries to solve these problems and

therefore, its goals are:

• Byte oriented data transfer

• Reliable in-order delivery

• Bidirectional data transfer

• Multiplexing

• Flow Control

Byte-oriented data transfer

TCP transfers a continuous stream of bytes between its attached processes by fragmenting it into packets and sending them over an IP network.

Reliable in-order delivery

Reliable transmission of a data stream includes not only the arrival of all data, but also the arrival

of data in the order it was sent. IP itself does not take care of the order of packets or of packet loss. TCP introduces two mechanisms to achieve reliability: sequence numbers and acknowledgements. Additionally, reliability also ensures that the TCP packets are received correctly, without any bit or byte errors. TCP adds a checksum to achieve this aim.

Sequence numbers are consecutive numbers representing the position of a byte in the data stream. If packets are received out of order, they can be reassembled based on their sequence numbers. Acknowledgement numbers represent the sequence number the receiver is expecting next. The receiver implicitly acknowledges all previous packets as successfully received. As the sender receives the acknowledgements of the receiver, it can determine whether packets were lost and react accordingly by retransmitting possibly lost packets.


Bidirectional

Because of the bidirectional nature of TCP, data can be exchanged in both directions using the same

TCP connection.

Multiplexing

In an IP network, only one connection is possible between two end hosts. If two concurrent processes on the two hosts wish to communicate, IP would not be able to distinguish between the processes. To solve this problem, TCP introduces a new mechanism: ports. A port represents a unique number on one host to which the communicating process is attached. The sender process as well as the receiver process is bound to a port.

Flow Control

The mechanism that prevents the receiver process from being flooded by too many packets is called flow control. (The term congestion control, which will be described in detail later, is sometimes also used for the combination of congestion and flow control. Since flow control is a requirement of the TCP specification and congestion control is a feature provided by the TCP flavours, these two schemes will be explicitly distinguished in this thesis.) To achieve flow control, the receiver of data advertises the number of bytes it can process. This number is called the receiver advertised window (RWND) and is transmitted in the TCP header. (The expression advertising window (AWND) is sometimes also used in the literature; in this thesis, the term receiver advertised window will be used.) The RWND describes the number of bytes the process can handle after the reception of the last packet. It limits the amount of data the sender may send before further permission is received.

2.2.2 The TCP Header

TCP uses its own header to communicate protocol specific data between the two endpoints. The

TCP header immediately follows the IP header and is shown in Figure 2.2. Its main fields are as

follows:

• Source Port: The source port carries the port number the sending communication process

is attached to.




Figure 2.2: The TCP Header

• Destination Port: The destination port carries the port number the destined communication

process is attached to.

• Sequence Number: The Sequence Number contains the position of the first byte of payload

in the data stream. If a SYN is present, the sequence number contains the initially chosen sequence number, so that the first byte of the data stream will be numbered with the initial sequence number plus one.

• Acknowledgement Number: The acknowledgement number contains the sequence number

of the byte that is expected to be received next. In case the TCP packet is a pure SYN-packet, no packet can be acknowledged and thus the acknowledgement number is set to zero.

• Data Offset: The number indicates the length of the TCP header. Since different options

may be added to the TCP header, its length is variable for different packets. Data offset

indicates the position of the payload.

• SYN, ACK, RST, FIN Control Bit: These Bits signal different flags to the communication

partner. They are used in connection setup (SYN) and connection termination (RST, FIN).

Packets with ACK enabled indicate a valid acknowledgement number. TCP packets with SYN

enabled will be referred to as SYN-packet, with SYN and ACK enabled as SYNACK-packet.

• Window: The window value is used in the reverse communication direction to indicate the

number of bytes that the receiver is willing to accept.


• Checksum: A checksum over a pseudo IP header, the TCP header and the payload is stored in this field. (The pseudo header consists of the source and destination IP addresses, the TCP protocol identifier and the size of the TCP segment.) The checksum is computed as the 16-bit ones' complement of the ones' complement sum of all 16-bit words.

• Options: Additional options can be carried in the options field. For instance, the maximum segment size is negotiated during connection setup using such an option.
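To make the header layout concrete, the following minimal sketch unpacks the fixed 20-byte part of a TCP header with Python's struct module and decodes the flag bits. The field names follow the description above; the example segment is a hypothetical SYN and is not taken from any of the measurements in this thesis.

    import struct

    TCP_HEADER_FMT = "!HHIIBBHHH"  # fixed 20-byte TCP header, network byte order

    def parse_tcp_header(raw: bytes) -> dict:
        """Unpack the fixed part of a TCP header (options are not parsed)."""
        (src_port, dst_port, seq, ack,
         offset_reserved, flags, window, checksum, urgent) = struct.unpack(
            TCP_HEADER_FMT, raw[:20])
        return {
            "src_port": src_port,
            "dst_port": dst_port,
            "seq": seq,
            "ack": ack,
            # the upper 4 bits of this byte give the header length in 32-bit words
            "data_offset_bytes": (offset_reserved >> 4) * 4,
            "flags": {
                "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
                "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
                "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
            },
            "window": window,
            "checksum": checksum,
            "urgent_pointer": urgent,
        }

    # Hypothetical SYN segment: ports 40000 -> 80, initial sequence number 10000
    syn = struct.pack(TCP_HEADER_FMT, 40000, 80, 10000, 0, 5 << 4, 0x02, 65535, 0, 0)
    print(parse_tcp_header(syn))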

2.2.3 Connection Setup

As TCP is a connection-oriented transport protocol, an explicit connection setup is needed to ensure that the destination process is reachable. This may not be the case if the corresponding IP address is not found, no path to the destination IP address is found or no process is listening on the specific

destination port. In general, a communication partner can set up or close one side of the connection.

The destination process has to set up or terminate its corresponding side of the connection.

Figure 2.3: TCP Connection Setup (the sender sends SYN with Seq=10000, ACK=0; the receiver answers with SYNACK, Seq=60000, ACK=10001; the sender completes the handshake with ACK=60001)

The connection setup is a so-called 3-way handshake, meaning that at least 3 packets have to be exchanged before the connection is usable for data transfer. (The specification is unclear on whether the sender is allowed to send data within the third packet; in all cases observed in the evaluation, the sender did not use the third packet to send data.) The initiating process starts the setup by sending a specific SYN packet. A SYN packet is a TCP packet with the SYN flag enabled in the TCP header. It also contains a clock-driven initial sequence number and may advertise a maximum


segment size (MSS). The MSS is used to indicate the maximum number of bytes that can be sent over the IP network without being fragmented.

When the destination process receives the SYN-packet, it sends a SYN-packet back to the sender, explicitly acknowledging the SYN packet. This packet is often called a SYNACK-packet. If the advertised MSS is higher than the destination can handle, it also sends a new (lower) MSS to the sender. The SYNACK-packet explicitly acknowledges the received SYN-packet by inserting the sequence number of the SYN-packet in the acknowledgement field of the TCP header and incrementing it by one. SYN-packets, as well as SYNACK-packets and FIN-packets, are considered as one-byte packets. The SYNACK-packet also contains a clock-driven initial sequence number.

After the sender process receives the SYNACK-packet, it acknowledges the reception of the packet by sending an empty ACK-packet. The connection is then set up and usable for any kind of data transfer. The connection setup is visualized in Figure 2.3.
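At the application level this handshake is hidden inside the socket interface: connect() on the client and accept() on the server only return once the exchange above has completed. The following minimal Python sketch illustrates this; the loopback address and OS-chosen port are illustrative choices and not part of the evaluation setup of this thesis.

    import socket
    import threading

    # The kernel performs the SYN, SYNACK and ACK exchange described above inside
    # connect() and accept(); the application never sees the individual packets.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    srv.listen(1)

    def accept_one():
        conn, peer = srv.accept()     # returns once the 3-way handshake is complete
        print("handshake completed with", peer)
        conn.close()

    t = threading.Thread(target=accept_one)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())    # triggers SYN -> SYNACK -> ACK
    t.join()
    cli.close()
    srv.close()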

2.2.4 Data Communication

As soon as a TCP connection is established, data can be exchanged via segments. TCP uses retrans-

missions to ensure that every packet is delivered, since packets may be lost due to errors indicated

via a bad checksum or network congestion. Sequence numbers that are assigned consecutively to

every byte of the data stream indicate the position of the bytes in the overall data stream. In the

following section, only the data transfer of a unidirectional flow will be described for the sake of clarity. The bidirectional data flow follows the same rules, but would complicate the description.

After the TCP connection is established, the TCP sender can start sending data by chopping the

data into different chunks, adding the TCP header to each chunk, and sending the segments to the IP layer. The sequence numbers of the different packets are set in the TCP header according to the position of the data chunk in the data stream plus an initial offset that was negotiated in the connection

setup phase. An example of a data communication can be found in Figure 2.4.

In this example, the sender starts to send a data packet containing 1460 bytes of payload. The

initial sequence number was negotiated to 10000. After receiving the packet, an ACK expecting a

sequence number of 11461 is sent back. After the reception of the ACK, the sender starts sending another data packet with 1460 bytes of payload with a sequence number of 11461. The receiver


Figure 2.4: TCP Data Communication (the sender sends Seq=10000 with 1460 bytes of data; the receiver answers with ACK 11461; the sender sends Seq=11461 with 1460 bytes; the receiver answers with ACK 12921)

responds to the data packet with an adequate ACK packet expecting byte 12921 next. In the example, every TCP packet, regardless of whether it is a data or an ACK packet, triggered a new packet. One data packet triggered one new ACK packet and vice versa. As described in the specification of TCP, an ACK need not necessarily be triggered immediately after the reception of a data packet. This technique is often called "delayed ACK". Reference [11] states that a TCP implementation should not excessively delay an ACK, but send an ACK at least after every second received TCP segment or after 500 ms. Common TCP implementations use timers of 200 ms. Due to the coarse-grained implementation of timers in a kernel, the timer may fire after an interval of 201 - 400 ms.

In common TCP implementations, duplicate ACKs (DUPACK) are used as an indication of network

congestion. DUPACKs are triggered in case of packet loss or reordering of packets. Figure 2.5 shows

two standard situations for DUPACKs.

In the first case, the sender injects one packet into the network, which is received and acknowledged by the receiver. After the first packet, the receiver has 1460 bytes in its local buffer and expects byte number 11461. In the example, the next packet that is sent by the sender is lost for some reason. If the sender sends the following packet and the receiver gets this packet, the receiver still expects byte number 11461, since the local buffer cannot be filled with more in-order data. It thus sends a DUPACK. In the same manner, DUPACKs are created in case of packet reordering.
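The generation of duplicate ACKs can be illustrated with a small sketch of the receiver's cumulative acknowledgement logic. This is a simplified model assuming fixed 1460-byte segments and ignoring the one sequence number consumed by the SYN, so its numbers differ slightly from those in Figure 2.5.

    # Minimal sketch of a receiver's cumulative acknowledgement logic.
    def receiver_acks(segment_seqs, isn=10000, mss=1460):
        expected = isn            # next byte the receiver is waiting for
        buffered = set()          # out-of-order segments held back
        acks = []
        for seq in segment_seqs:
            if seq == expected:
                expected += mss
                # deliver any buffered segments that are now in order
                while expected in buffered:
                    buffered.remove(expected)
                    expected += mss
            elif seq > expected:
                buffered.add(seq) # gap detected: this arrival is out of order
            acks.append(expected) # cumulative ACK; a repeated value is a DUPACK
        return acks

    # Loss of the second segment: the next arrival triggers a duplicate ACK.
    print(receiver_acks([10000, 12920]))        # -> [11460, 11460]
    # Reordering: the late segment fills the gap and the ACK jumps forward.
    print(receiver_acks([10000, 12920, 11460])) # -> [11460, 11460, 14380]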


Figure 2.5: Duplicate Acknowledgement Situations in case of packet loss (left figure) or reordering (right figure)

2.2.5 Connection Termination

Analogous to the connection setup, the connection termination can only be performed for one side

of the connection. The connection partner also has to close its side of the connection. There are

two ways of closing a connection. In the normal case, one side of the connection sends a FIN-packet

to indicate that it wants to terminate the connection. In any error case, the connection can be reset

by sending a RST-packet.

Figure 2.6: TCP Connection Termination (one side sends FIN, Seq=10000 and receives ACK 10001; the other side later sends FIN, Seq=60000 and receives ACK 60001)

In the standard case, one process closes the connection if it has no more data to send. It starts


with sending a FIN-packet. If the corresponding process acknowledges the packet by sending an ACK-packet, one side of the connection is closed and cannot send any more data. As one side of the connection is still open, this state is often called half-closed. If the second process has also finished sending its outstanding data, it closes its side of the connection in the same way. Figure 2.6 shows a typical connection termination.

Based on the requirements regarding connection setup, data communication and connection termination, a state machine can be built. This state machine is described in Appendix A.1.

2.3 TCP Flavours

Based on the requirements demanded by the TCP specification, several flavours have been proposed. A selection of these flavours will be described next. The flavours can be divided into the

following categories:

• Common TCP flavours: This category covers the TCP flavours that are mainly used in the Internet.

• Flavours using TCP header options: TCP flavours that use additional TCP header options to enhance the TCP performance fall into this category.

• Wireless TCP flavours: Flavours optimized for wireless settings to overcome the problems introduced by wireless links are listed in this category.

• Other TCP flavours: Flavours that cannot be classified in one of the categories above are listed in this class.

2.3.1 Common TCP Flavours

Since the beginning of research on TCP and its congestion control algorithm, different flavours have

been proposed. The most common flavours that have been deployed in different operating systems

are:

• Tahoe


• Reno

• NewReno

• Vegas

TCP Tahoe, Reno and NewReno are currently deployed in operating systems, while TCP Vegas will be included in an upcoming Linux kernel release. Since only implementations of TCP Reno

and NewReno have been used in the evaluations, they will be described next. A description of TCP

Vegas can be found in Appendix A.2.1.

Tahoe

In 1988, Van Jacobson proposed an enhanced implementation of TCP [26]. This version is often called "TCP Tahoe". (Literature sometimes refers to TCP Tahoe as the first implementation of TCP, since it was the first implementation using a congestion control algorithm; the implementation proposed in [26] was in fact an enhancement of a first "straightforward" implementation of TCP.) After several performance collapses of the Internet had been experienced, it was found that the performance breakdown was mainly caused by the initial implementation of TCP. Several algorithms have been added to the implementation, namely:

• slow-start

• dynamic window sizing (congestion avoidance)

• round-trip-time variance estimation

• exponential retransmit timer backoff

• more aggressive receiver ACK policy

• Karn’s retransmit backoff

TCP is a self-throttling protocol that adapts its sending rate to the maximum available bandwidth. TCP uses a second window, called the Congestion Window (CWND), to determine the maximum number of packets that can be sent without running into packet loss due to congestion. Thus, the CWND indicates the sending rate of TCP measured in packets. (Some TCP implementations measure the CWND in bytes rather than in packets; for the sake of clarity, the CWND will be measured in packets here.)


The mechanism to adjust the sending rate automatically makes TCP stable, but also hard to start: to reach this self-clocked state, ACKs have to be present in order to send data, but ACKs are only generated after data has been sent. The initial startup of a TCP connection is therefore an important phase of TCP. To achieve good performance in the startup phase, an algorithm called slow-start was implemented in TCP Tahoe. The main idea behind slow-start is to start with the smallest possible sending rate, i.e. CWND = 1 packet, and increase the rate exponentially.

In contrast to the slow-start phase, the Congestion Avoidance phase increases the sending rate linearly per RTT. This means that the CWND has to be increased by 1/CWND per received ACK. This behavior is often called "linear (additive) increase and multiplicative decrease".

Figure 2.7: TCP Tahoe's slow start and congestion avoidance algorithm (TCP sending rate in packets/RTT over successive RTTs)

Both algorithms, slow-start and congestion avoidance, affect the sending rate of TCP, but they are

used in different situations. In order to signal the end of the slow-start phase and to enter Congestion

Avoidance, a new parameter called slow-start threshold (ssthresh) is used. Every TCP Connection

starts with a slow-start and increases its CWND exponentially per RTT until ssthresh is reached.

It then enters the Congestion Avoidance phase and increases its sending rate linearly per RTT. If a timeout occurs and signals a packet loss, the ssthresh is set to half of the current CWND, while the CWND is set to 1 to enter slow-start again. An example run of both algorithms can be found in Figure 2.7.
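The interplay of slow-start, ssthresh and congestion avoidance can be summarized in a short sketch of the window evolution per RTT. The initial ssthresh of 8 packets and the loss in round 10 are hypothetical values, chosen only to reproduce the qualitative shape of Figure 2.7.

    # Sketch of Tahoe-style congestion window growth per RTT.
    def tahoe_cwnd_trace(rounds, ssthresh=8, loss_rounds=(10,)):
        cwnd = 1
        trace = []
        for r in range(rounds):
            trace.append(cwnd)
            if r in loss_rounds:          # timeout: loss detected
                ssthresh = max(cwnd // 2, 2)
                cwnd = 1                  # re-enter slow-start
            elif cwnd < ssthresh:
                cwnd *= 2                 # slow-start: exponential growth
            else:
                cwnd += 1                 # congestion avoidance: linear growth
        return trace

    print(tahoe_cwnd_trace(20))
    # -> [1, 2, 4, 8, 9, 10, 11, 12, 13, 14, 15, 1, 2, 4, 8, 9, 10, 11, 12, 13]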


Since the retransmission timer is used to detect packet loss, the value of this timer plays a very

important role. A value that is too short will unnecessarily trigger retransmissions and enter slow-start too often. A degradation of throughput and performance will be the result. On the other hand,

a too big value for the timer will cause a late detection of packet loss and thus give a bad estimation

of the maximum available bandwidth. Again, a degradation of throughput and performance would

be the result. The TCP standard suggests estimating the mean round-trip time via a low-pass filter:

R ← αR + (1 − α)M   (2.1)

where R is the estimated average RTT, M is the RTT measurement for the most recent ACK and α is a filter constant that is suggested to be set to 0.9. The retransmission timeout (RTO) is then set to

RTO = βR   (2.2)

where β accounts for the variance of the RTT and is suggested to be set to 2 by the TCP standard. Van Jacobson presented a modified scheme of calculating the RTO that also takes the variance of the RTT into account:

Err = M − A
A ← A + g · Err
D ← D + h · (|Err| − D)
RTO = A + 4D   (2.3)

where A is the smoothed RTT, D is the smoothed deviation and M is the measured RTT. The gain g for the average is usually set to 1/8 and the gain h for the deviation is usually set to 1/4. The new slow-start and congestion avoidance algorithms as well as the modified RTT estimator improved the initial TCP implementation dramatically.
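The estimator of equation (2.3) can be written down directly; the following minimal sketch applies it to a few hypothetical RTT samples (in seconds) with the usual gains g = 1/8 and h = 1/4. The initialisation of the smoothed values is a simplification and not prescribed by the algorithm.

    # Sketch of the Jacobson mean/deviation RTO estimator described above.
    def rto_estimates(rtt_samples, g=1/8, h=1/4):
        a = rtt_samples[0]        # smoothed RTT (A)
        d = rtt_samples[0] / 2    # smoothed mean deviation (D)
        rtos = []
        for m in rtt_samples:
            err = m - a
            a += g * err
            d += h * (abs(err) - d)
            rtos.append(a + 4 * d) # RTO = A + 4D
        return rtos

    print(rto_estimates([0.100, 0.120, 0.300, 0.110]))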

TCP Tahoe also addressed the problem of multiple retransmissions on one packet. Reference [26]

stated that an exponential timer backoff is the only reasonable algorithm for multiple retransmis-

sions. Thus, after a timeout, the RTO is doubled in case of a second retransmission of the same

packet. The proposed algorithm tries to handle DUPACKs. DUPACKs occur either in case of a lost

packet or in case of reordered packets. The algorithm assumes that reordered packets will only be received one or two packets out of order. Thus, the algorithm waits for the next two packets after receiving the DUPACK to determine whether the packet was lost or reordered. If three or more DUPACKs are received, a lost packet is assumed, the apparently lost packet is retransmitted and slow-start is entered. If fewer than three DUPACKs are received, TCP Tahoe continues to transmit data via the

Congestion Avoidance algorithm.
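The sender-side reaction to duplicate ACKs can be sketched as a simple counter: the third duplicate for the same acknowledgement number is interpreted as a loss and triggers a retransmission (after which Tahoe falls back to slow-start). The sequence numbers in the example are hypothetical.

    # Sketch of duplicate-ACK counting and the fast retransmit trigger.
    def fast_retransmits(acks, dupack_threshold=3):
        last_ack = None
        dupacks = 0
        retransmitted = []
        for ack in acks:
            if ack == last_ack:
                dupacks += 1
                if dupacks == dupack_threshold:
                    retransmitted.append(ack)  # retransmit the missing segment
            else:
                last_ack, dupacks = ack, 0
        return retransmitted

    # One original ACK followed by three duplicates for byte 11460 signals a loss.
    print(fast_retransmits([11460, 11460, 11460, 11460, 14380]))  # -> [11460]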

Reno

TCP Reno [47] is a modified version of TCP Tahoe. It adds the Fast Recovery algorithm to the

TCP implementation. When TCP Reno receives 3 or more DUPACKs and enters Fast Retransmit, this information also indicates that there is still data flowing between the sender and the receiver. Thus, there is still a moderate bandwidth available for the TCP connection and the connection need not be restarted with a slow-start. The Fast Recovery algorithm retransmits the lost packets and

performs the congestion avoidance algorithm.

NewReno

The NewReno Implementation [24] is a slight modification of Reno to overcome the problems of

multiple packet losses in one window of the Reno implementation. TCP Reno retransmits only one

packet per window in case packet loss is indicated via three DUPACKs. If multiple packets are lost in

one window, a performance degradation would be the result. TCP NewReno uses some information,

that is available in order to make a retransmission decision during Fast Recovery.

After receiving three DUPACKs, NewReno retransmits the missing packets, but additional DU-

PACKs could be received, since the receiver acknowledges data packets, that were injected to the

network before the sender entered Fast Retransmit. In case of a single packet loss, the ACK for the retransmitted packet will acknowledge all packets that have been sent before the sender entered Fast Retransmit.

In case of multiple losses, an ACK for some, but not all packets will be received assuming, that there

was no reordering. This ACK is called partial ACK.

NewReno changes the Reno behavior in case of the reception of a partial ACK. NewReno does


not leave Fast Recovery to wait for the next three DUPACKs in case of multiple packet losses. Instead, it decreases its congestion window by the amount of newly acknowledged data and retransmits the packet indicated by the partial ACK. If subsequent partial ACKs are received, TCP NewReno remains in Fast Recovery and retransmits the lost packets.

2.3.2 TCP Flavours using TCP Header Options

In addition to the common TCP implementations, some flavours have been developed that use

header options to transmit additional information from the sender to the receiver. This information

can be used to estimate the current available bandwidth more precisely or to react more precisely

to lost packets. The following TCP flavours are described next:

• Selective Acknowledgements (TCP SACK)

• Forward Acknowledgements (TCP FACK)

• Total Acknowledgements (TCP TACK)

• Header Checksum Option (TCP HACK)

TCP SACK is currently implemented in most operating systems while TCP FACK, TCP TACK

and TCP HACK are only partially available in the operating system protocol stacks and have not

been used in the evaluation. Hence, TCP SACK will be described next, while a description of TCP

FACK can be found in Appendix A.2.2, TCP TACK in Appendix A.2.3 and TCP HACK in Appendix A.2.4.

Selective Acknowledgements

Selective Acknowledgements (SACK) [33, 27] are used to inform the sender about the current state

of the receiving buffer in case of multiple packet losses. SACK introduces a new TCP header option,

that carries the information about the successfully received bytes beyond the in-order byte stream. On receiving SACK information, the sender can skip those sequence numbers and

thus reduce the number of bytes, that have to be retransmitted. SACK blocks can also be used

to adapt congestion control more adequately, since the number of selectively acknowledged bytes


provide information about the maximum available bandwidth.
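As a hedged illustration of how a sender can exploit SACK information, the sketch below computes the byte ranges that still need to be retransmitted given the cumulative ACK and the selectively acknowledged blocks; the function name and the representation of SACK blocks as (start, end) tuples are assumptions made only for this example.

def missing_ranges(cum_ack, sack_blocks, snd_nxt):
    """Return byte ranges [start, end) that are not yet acknowledged."""
    holes = []
    pos = cum_ack
    for start, end in sorted(sack_blocks):
        if start > pos:
            holes.append((pos, start))   # hole before this SACK block
        pos = max(pos, end)
    if pos < snd_nxt:
        holes.append((pos, snd_nxt))     # tail beyond the last SACK block
    return holes

# Example: cumulative ACK 1000, SACK blocks 2000-3000 and 4000-4500,
# highest sent sequence number 5000.
print(missing_ranges(1000, [(2000, 3000), (4000, 4500)], 5000))
# -> [(1000, 2000), (3000, 4000), (4500, 5000)]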

2.3.3 Wireless TCP Flavours

The next section describes TCP implementations that have been proposed to perform substantially better over wireless links and to overcome the problems that are introduced by these links.

I-TCP

Indirect TCP (I-TCP, [4]) follows a split-connection approach to overcome the shortcomings of TCP in wireless settings. The connection between the mobile node and the fixed host is split into two sepa-

rated connections. I-TCP terms the connection splitting node ”mobile support router” (MSR). This

approach results in two benefits: First, the TCP endpoint is moved from the wireless part to the

wired MSR, so that all the characteristics of wireless links are hidden from TCP. Second, the connection between the MSR and the mobile node can be optimized for the wireless link. The connection between the MSR and the mobile node need not necessarily be a full TCP/IP implementation, but can be

a small protocol stack, e.g. implemented on a small palm top. The palm top would still be able to

use services from the fixed network. If both connections use a full TCP to communicate, I-TCP can

be seen as a TCP Proxy.

I-TCP proposes, that the connection splitting point should be implemented at the base station or

the access point of a wireless network. Thus, the MSR is changed during a handoff and the logical

split connection state has to be transferred to the new MSR.

The evaluation showed that the performance gain of I-TCP grows, the more the wireless link characteristics come into play. In a local area scenario, where the different cells of the different

base stations are overlapping, the performance gain was around 4.5%, whereas the performance gain

in a scenario with non-overlapping cells and a 1 sec cell distance was around 53%. The performance

gains in a wide area network were even better. These gains result mainly from the fact, that the

TCP sender is hidden from the handover process and thus does not see any ACK losses.

A drawback of I-TCP is the loss of the true end-to-end semantics attributed to TCP as a transport layer protocol in the ISO/OSI model (Section 2.1), as it splits the TCP connection into two different connections. Another drawback is that I-TCP maintains a hard state that also has to be handed over


during a cell switching process of a mobile node. This causes additional network traffic and delays

the traffic between the MSR and the mobile node. [4] showed, that even in a static scenario, where

no handover takes place, the performance was improved due to the lower variation of the RTT.

Multiple TCP

[55] suggests a new session layer protocol (Mobile Host Protocol, MHP), implemented in the Base

Stations of a wireless network and the mobile node, to overcome the problems of wireless links.

MHP assumes, that all routing for mobile hosts is done via MobileIP and that MobileIP can provide

information about a pending handover to the upper layers.

When an application on the mobile node requests a connection to a host, MHP intercepts this request and sets up a TCP connection to its base station. The base station in turn receives the request, sets up an MHP agent and establishes a connection to the desired host. These two independent TCP

connections can be optimized for their respective parts of the path with regard to their parameters.

M-TCP

Mobile TCP (M-TCP, [13]) was designed to improve standard TCP's poor performance in handover

situations, while keeping up the end-to-end semantics of TCP.

Mobile TCP uses a split-connection approach between the mobile host and the fixed host. The

TCP connection is split into two parts at the Supervisor Host (SH). The SH is an entity in the

network, that controls several Mobile Support Stations (MSS) and handles routing and Quality of

Service (QoS) tasks. The TCP client at the SH connected to the fixed host sender uses a modified

version of TCP, called SH-TCP, while the TCP client connected to the mobile node uses a M-

TCP implementation. Thus, the implementation that is provided in the fixed host can further be

used without any changes. When the SH-TCP receives data from the fixed host, it passes it to the

M-TCP to send it to the mobile node. To maintain end-to-end semantics, ACKs are not generated

independently at the SH-TCP, but they are sent when a corresponding ACK is received from the mobile

node.

SH-TCP's strategy to cope with TCP's problems in handover situations is to acknowledge one byte less to the fixed host sender than the mobile node has acknowledged to the SH-TCP. (The term M-TCP is used in [13] for the complete split-connection concept as well as for the TCP implementation between the SH and the mobile node.) The

last unacknowledged byte is then used to freeze the sender into a persistent state by sending a Zero

Window Advertisement (ZWA) in case the mobile node gets disconnected. M-TCP is responsible

for notifying the SH-TCP client about the disconnection and reconnection events. After receiving

the reconnection event, SH-TCP sends a new ACK with a non-zero Receiver Window Advertisement

to open up the window again quickly.

On the mobile node's side, a modified TCP implementation ”M-TCP” was used. M-TCP was built on the assumption that the Bit Error Rate (BER) is rather low and retransmission timeouts only

occur due to disconnection. The low BER can be justified by the use of link-layer techniques like

Forward Error Correction (FEC) or Automatic Repeat Request (ARQ). On the SH side of the M-

TCP connection, the retransmission timeout is used to indicate the disconnection and thus informs

SH-TCP, that freezes the fixed host sender. If the mobile node gets informed by the communication

hardware, that the mobile node got disconnected, it freezes all timers and enters the persistent

state. After receiving the reconnection notification from the communication hardware, M-TCP

sends a specially marked ACK providing the sequence number of the last successfully received byte. This specially marked ACK is used to inform SH-TCP to unfreeze the fixed host sender.

The evaluation of M-TCP showed that TCP's performance loss due to frequent disconnections could be reduced by using M-TCP's split-connection approach. The impact of M-TCP rises with the influence of the wireless link on the whole path from the sender to the receiver.

FreezeTCP

FreezeTCP ([25]) addresses TCP's problems in handover situations without breaking the end-to-end semantics of TCP. TCP assumes that packets that were lost due to the handover were lost due to congestion. TCP reacts by inappropriately slowing the sending rate. This leads to an underutilization of the wireless link. Furthermore, TCP uses an exponential backoff mechanism

to retransmit packets. After a packet is lost, while the mobile node is disconnected, TCP tries to

retransmit the lost packet and backs off exponentially. During this backoff time, the mobile node can

be reconnected again. The mobile node will thus idle until the retransmission timer fires, although

it is able to send data again.


FreezeTCP uses two mechanisms to avoid this performance degrading behavior of TCP. It uses Zero Window Advertisements (ZWA) to ”freeze” the complete state of the sender's TCP and 3 DUPACKs

to recover from the frozen state and continue sending at the rate that was determined before the

handover took place [11].

A Zero Window Advertisement (ZWA) is a TCP packet that offers an Advertised Receiver Window (AWND) of 0. The sending TCP, which receives the ZWA, assumes that there is no buffer space left in the receiver and therefore immediately stops sending new packets. Since a ZWA does not

provide any information or state change of the intermediary nodes of the network, TCP assumes,

that the maximum available bandwidth stays constant and freezes its congestion window as well

as its timers and enters a persist state. Thus sending a ZWA before the handover event occurs,

keeps up the congestion window and therefore results in a better performance. FreezeTCP suggests,

that the ZWAs have to be sent one RTT before the handover takes place. If the time, before the

ZWA is sent is too long, the connection would be unnecessarily idle for some time resulting in a

bad performance. On the other hand, if the time is too short, the mobile node might not be able to

send the ZWA before the handover event occurs, resulting in an idle time after reconnection and a

decreased congestion window.

To recover from this frozen state, the standard TCP sends Zero Window Probes (ZWPs) to the

receiver until the receiver opens up the window again. The ZWPs are also sent in an exponential-

backoff manner, so using ZWPs to recover from the frozen state could also lead to an idle time.

FreezeTCP sends 3 copies of the ACK for the last segment, that was received before the handover.

In a TCP Implementation with Fast Recovery algorithm implemented, the 3 DUPACKs result in

transmitting the packet expected by the 3 DUPACKs and TCP continues sending data without

shrinking the congestion window.
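A minimal, hypothetical sketch of the receiver-side behaviour described above is given below: a Zero Window Advertisement is sent shortly before a predicted handover, and three duplicate ACKs of the last in-order segment are sent after reconnection. The callback send_ack and its parameters are assumptions; real packet construction and transmission are omitted.

def on_handover_predicted(send_ack, last_in_order_seq):
    # Zero Window Advertisement: freezes the sender's congestion window and timers.
    send_ack(ack_no=last_in_order_seq, window=0)

def on_reconnected(send_ack, last_in_order_seq, receive_window):
    # Three DUPACKs trigger Fast Retransmit/Recovery at the sender and re-open
    # the advertised window, so sending resumes without shrinking the CWND.
    for _ in range(3):
        send_ack(ack_no=last_in_order_seq, window=receive_window)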

The evaluation of FreezeTCP showed, that FreezeTCP enhances the performance of TCP in wireless

settings in handover scenarios, especially in cases, where the influence of the wireless link is big (like

in single or few-hop scenarios). FreezeTCP also does not need any changes in the sending TCP and

does not break up the end-to-end semantics of TCP.

The drawbacks of FreezeTCP are the small overhead, that is produced by sending the ZWA and the

3 DUPACKs to recover, but this overhead has almost no influence on the throughput in cases, where


the wireless link has almost no influence on the overall path. Secondly, a FreezeTCP sender has to predict a handover in order to send the ZWA early enough. This requires the exchange of some cross-layer information or may even not be possible in certain cases (e.g. RTTs in GPRS are measured

around 700ms to 1s, that requires the mobile node to send the ZWA 700ms to 1s before the handover

event occurs).

Snoop

The snoop protocol [5, 50, 51] was designed to maintain the end-to-end semantics of TCP while keeping the TCP implementations on the mobile node and the correspondent node unchanged.

The snoop module is implemented as an agent on the base station in order to eavesdrop and cache

all TCP packets flowing from and to the mobile node. Depending on the kind of TCP packets that

arrive at the base station and are received by the snoop module, the snoop agent triggers different

actions:

• A new data packet is received. In a regularly flowing TCP connection, receiving a new

data packet is the standard case. The snoop module caches this packet in its local data cache

and places a timestamp on the packet in order to calculate RTTs for the packet.

• An out-of-order data packet arrives, that was cached before. An already cached,

out-of-order packet is received, when a dropped packet causes a timeout at the sender, which

is a less common case. Depending on whether the sequence number of the packet is higher than the last acknowledgement seen so far, further actions are triggered. If the sequence number is higher,

then it is very likely, that the packet did not reach the mobile node and is thus forwarded. In

the second case, if the sequence number is lower than the last acknowledged sequence number

seen so far, the packet was already received by the mobile node, so it could be discarded. Since

such an out-of-order packet is mainly generated by a lost ACK packet, the snoop module sends

an ACK to the correspondent host.

• An out-of-order data-packet arrives, that was not cached before. This situation

indicates, that either a packet was lost due to congestion or has been delivered out-of-order.

Since reordering by the network layer is rather unlikely, congestion is assumed and the packet

is forwarded to the mobile node, while it is marked as retransmitted locally.


• A new ACK is received. On receiving a new ACK, the successful reception of new data

is signalled. Thus, the ACK is forwarded and the local cache of unacknowledged data packets is cleared up to the acknowledged sequence number. The ACK packet is also used to update the

estimation of the current RTT.

• An old ACK is received. In this normally rare situation, the ACK is dropped by the

snoop module.

• A DUPACK is received. Depending on the type and current state of the snoop module,

different actions are taken. If the DUPACK indicates the expectation of a data packet, that was

not saved in the local cache, the packet was lost between the sender and the snoop module. Thus, the data packet has to be resent by the sender, possibly triggering congestion control at the sender side, which is achieved by forwarding the DUPACK to the sender. If a DUPACK is received that indicates the

expectation of a data packet, that is stored in the local cache, the stored data is retransmitted

and the ACK is discarded.
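The following Python sketch condenses the case distinctions listed above into two hypothetical dispatch functions, one for data packets flowing towards the mobile node and one for ACKs flowing from it; packet fields, the cache layout and the callbacks are assumptions, and the real snoop module keeps additional state such as per-packet timestamps for local RTT estimation.

from dataclasses import dataclass, field

@dataclass
class SnoopState:
    last_ack: int = 0                           # highest ACK seen from the mobile node
    cache: dict = field(default_factory=dict)   # seq -> cached data packet

def handle_data(state, pkt, forward_to_mobile, ack_to_sender):
    if pkt["seq"] not in state.cache and pkt["seq"] > state.last_ack:
        state.cache[pkt["seq"]] = pkt           # new data: cache and forward
        forward_to_mobile(pkt)
    elif pkt["seq"] in state.cache:             # out-of-order data, cached before
        if pkt["seq"] > state.last_ack:
            forward_to_mobile(pkt)              # probably never reached the mobile node
        else:
            ack_to_sender(state.last_ack)       # mobile node already has it; regenerate lost ACK
    else:                                       # out-of-order data, not cached: assume congestion
        pkt["retransmitted"] = True
        forward_to_mobile(pkt)

def handle_ack(state, ack_no, forward_to_sender, retransmit_to_mobile):
    if ack_no > state.last_ack:                 # new ACK: clean the cache and forward
        for seq in [s for s in state.cache if s < ack_no]:
            del state.cache[seq]
        state.last_ack = ack_no
        forward_to_sender(ack_no)
    elif ack_no == state.last_ack:              # DUPACK
        if ack_no in state.cache:
            retransmit_to_mobile(state.cache[ack_no])  # local retransmission, ACK discarded
        else:
            forward_to_sender(ack_no)           # loss before the snoop module: let the sender react
    # ACKs below last_ack are old and silently dropped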

To improve the performance of TCP over the wireless link, the snoop protocol implements a NACK scheme that requires changes in the TCP implementation of the mobile node. Additionally, a mechanism to support mobility based on the approach of Mobile IP was implemented.

The evaluation of the Snoop protocol showed that the influence and performance gain of snoop rises with the probability of packet errors.

2.3.4 Other TCP Flavours

Other TCP flavours have been proposed that do not fit in one of the categories above. Four TCP

flavours are categorized in this class:

• Transactional TCP (T/TCP): Transactional TCP enhances the TCP performance by reducing

the overhead introduced by connection setup and connection termination. A description of

T/TCP can be found in Appendix A.2.5

• Multicast TCP: Multicast TCP addresses the problems introduced when multicast IP addresses

are used rather than unicast. A description of Multicast TCP can be found in Appendix A.2.6


• Smooth start + dynamic recovery: Smooth start is proposed to replace slow start to provide a better estimation of the maximum bandwidth during slow start, and dynamic recovery intends to estimate the available bandwidth more accurately in multiple-loss scenarios. Smooth start and dynamic recovery are described in Appendix A.2.7

• TCP Pacing: TCP Pacing's approach spreads data packets equidistantly over one RTT rather than sending a packet burst. A description of TCP Pacing can be found in Appendix A.2.8

2.4 Wireless Access Technologies

In the latter evaluation of TCP, two wireless link technologies have been used, namely Wireless LAN

802.11b and Bluetooth. A short introduction to both technologies can be found in the next section.

2.4.1 Introduction to Wireless LAN

Since the beginning of mobile computing and wireless networking, different wireless technologies

have been developed. The Institute of Electrical and Electronics Engineers (IEEE) has developed

a standard for Wireless Local Area Networks (WLAN) that operates in the license-free industrial, scientific and medical (ISM) radio band (2.4 GHz or 5 GHz) or in the infrared range. This standard is

known as IEEE 802.11 standard [1]. IEEE 802.11 defines the physical and medium access (MAC)

layer.

IEEE 802.11 defines two different types of network nodes: A mobile node equipped with a WLAN

NIC and access points which act as a bridge between a wireless and a wired network. Two different

modes are supported by 802.11: Infrastructure mode and adhoc mode. In infrastructure mode, one

or more fixed central nodes are used as attachment to a wired network. An adhoc network is formed

by several nodes without any central administration.

The MAC layer of WLAN 802.11 is formed by two coexisting coordination functions: the Distributed

Coordination Function (DCF) is used for asynchronous data transfer and the Point Coordination

Function (PCF) is used for synchronous, time-bounded data transfer. WLAN 802.11 defines three

different interframe spaces (IFS): short IFS (SIFS), point coordination IFS (PIFS) and distributed

IFS (DIFS). The IFS are mandatory and control different priorities of events. The DCF is the


basic function to access the channel and implements a Carrier Sense Multiple Access with Collision

Avoidance (CSMA/CA) scheme. In CSMA/CA, a sender of a packet senses the channel and starts

to send a frame, if the channel is idle. Otherwise, the transmission is delayed until the channel is

sensed idle again. To reduce the probability of collisions, the sender has to wait for a randomly

generated time after the channel is sensed busy for the first time. The back off time is reduced at

every transmission attempt. After transmitting a packet, an ACK is sent back by the receiver. If the

ACK is not received at the sender, a link layer retransmission is triggered. A second, but optional

method to share the channel with other nodes is RTS/CTS. A Request-to-Send (RTS) packet is sent

from the sender to the receiver using CSMA/CA. After receiving the RTS, a Clear-to-Send (CTS)

is sent back to the sender. After a CTS, only the requesting sender is allowed to send data. Since the RTS packet contains information about the length of the packet, other nodes can

eavesdrop on this information and backoff for the time of transmission.

The PCF uses a point coordinator which usually resides on the access point. The point coordinator

organizes the traffic flow between the nodes and the access point using a polling scheme. The PCF

is optional, whereas the DCF has to be implemented by every WLAN node.

2.4.2 Introduction to Bluetooth

Bluetooth [46] was developed to provide a cheap, short range, low power, wireless connection sys-

tem, so that small and cheap devices can exchange data easily. A Bluetooth network is formed out

of several devices that are grouped into piconets and scatternets. A piconet is formed out of one

master device that controls the traffic on the physical channel by a polling scheme. A master can communicate with at most 7 slave devices in an adhoc fashion. A scatternet is formed out of several collocated piconets that are interconnected to each other via devices that are slaves in more than one piconet.

Bluetooth specifies a synchronous, connection oriented (SCO) data transfer mode that can be com-

pared to circuit switched networks and is mainly used for transferring speech data. A maximum

full duplex data transfer rate of 64 kBit/s is achievable in SCO mode. Asynchronous connection-

less (ACL) data transfer can be used to transfer data over a packet-switched network. A nominal

symmetrical data rate of 433.9 kBit/s or a nominal asymmetric data rate of 723.2 / 57.6 kBit/s is


achievable in ACL mode.

Bluetooth uses a Frequency Division Multiple Access (FDMA) with frequency hopping to multiplex

different links over the air interface. The frequency hopping sequence is negotiated at the beginning

of a connection. A Time Division Multiple Access (TDMA) scheme is used to send packets over the

connection in a constant time slot. Packets can be sent in 1, 3 or 5 time slots.

Different packet types for the ACL link are defined in the specification of Bluetooth. The packet

types differ mainly in their time slot length and their error protection. Packets can use Forward Error Correction (FEC) and a Cyclic Redundancy Check (CRC) to detect bit errors. In case of bit

errors, Bluetooth uses an ARQ scheme to retransmit packets on link layer. Depending on the num-

ber of used time slots and the amount of FEC/CRC, a payload between 17 and 339 bytes can be

carried in one packet.


Chapter 3

Evaluation Method

The evaluation of the performance of TCP in different scenarios is done using an experimental network. This chapter gives a definition of a scenario and describes the different parameters contained in a scenario as well as the limitations of the scenarios with regard to the evaluation.

Different performance metrics to quantify the performance of TCP are also introduced in this chap-

ter.

3.1 Scenarios

TCP can be evaluated in many cases and in different scenarios. The performance of TCP might

be considered in scenarios with a moving mobile node, with different kinds of traffic or different kinds of link types. In this thesis, a description of a scenario contains a description of the network

architecture, a description of the mobility model and a description of the traffic model.

These different models and their limitations within this thesis are described in detail next.

3.1.1 Network Architecture

The network architecture basically describes the network nodes and link types used in the path

between the communication partners. The different parameters described in the network topology

can be divided into several descriptions:

• Access technology of mobile node: A mobile node might use different wireless access

technologies to access a wired network. The evaluation is limited to the access technologies


IEEE 802.11 [1] and Bluetooth [46]. Only one access technology will be used concurrently. Multi-homed scenarios are not considered. Although only one wireless technology at a time is used, the level of interference is unknown.

• Number of hops: The number of hops describes the number of single links a packet has to pass until it reaches the receiver. The number of hops can be divided into the number of wired hops and wireless hops. The number of hops and hence the number of intermediary network

nodes influences the round trip time (RTT). The more network nodes have to be passed, the higher the RTT is expected to be. In this thesis, only a single last wireless hop is considered. The

number of wired hops is limited to a small known number depending on the scenario.

• Link layer types of intermediary wired links: Each network node might be connected

with different types of link technologies to a neighboring network node. Besides the wireless

technologies, only Ethernet and serial links are used between the wired networks.

• Network configuration: The network configuration contains information about the routing

process used to forward packets from the sender to the receiver as well as the configuration

of the different links, for example, the serial links may run at different clock rates, the access

point might use special channels or transmitting power.

• Proxy location: The proxy can be inserted in different locations in between the logical path

from the sender to the receiver. The implications of different proxy locations are described in

Section 5.4. In the considered scenarios, the proxy was located on a fixed position.

A meta-model for the network topology that is used for the evaluation, can be found in Figure

3.1.

The meta-model consists of three routers, that form a backbone of the network. Specifications

for the three routers can be found in Appendix B.1. The routers are interconnected via an 8 MBit/s

serial link. On both ends of the backbone, a subnet (whenever subnet is mentioned, an IP subnet is referred to) for the communicating end systems was

installed. In case a proxy was installed into the network, it was added to the intermediate router

via a FastEthernet link. The specifications of the proxy host can be found in Appendix B.6. It has

been observed that the routers send infrequent background traffic caused by routing algorithms or MAC address updates.


Figure 3.1: Meta-model of the considered infrastructure

Based on the meta model three different network architectures are considered in this thesis:

• Fully wired architecture

• Single access point architecture

• Mobility support architecture

These network models are described next.

Fully Wired Network architecture

The first test setup used for an evaluation of the performance of the backbone was built without

any wireless link. The network architecture can be found in Figure 3.2.

One fixed host was installed on each edge router with a 100 MBit/s FastEthernet link over a

switch. The specifications of the fixed hosts are described in Appendix B.5, the specification of the

switch in Appendix B.2. No Proxy was installed in this network architecture.

Single Access Point Network architecture

To measure the influence of a wireless link on the performance of TCP, one wired host was replaced

by a wireless access point and a mobile node. The network architecture can be found in Figure 3.3.


Figure 3.2: Fully wired network architecture

A laptop is used as mobile node equipped with wireless adapters. The specifications of the mobile

node can be found in Appendix B.7. The used Wireless LAN adaptor is specified in Appendix B.9,

the Bluetooth adaptor in Appendix B.10. For an evaluation of multiple data streams over an access

point, a second mobile node (MN2) was used. The specification of MN2 is described in Appendix

B.8.

Two different access technologies have been used as wireless links. In case of a Wireless LAN

link, a CISCO access point was installed as a bridge between the mobile node and the wired network.

The standard configuration, that was used for the access point, can be found in Appendix B.3. In

case of a Bluetooth link, a Bluetooth access point bridging between the mobile node and the wired

network was installed. The specifications of the Bluetooth access point and the Bluetooth link can

be found in Appendix B.4.

Mobility Support Network architecture

To measure the influences of handover delays on the TCP performance, a network architecture

supporting mobility was used. A second Wireless LAN cell was added to the previous architecture

and mobility support was added by integrating Mobile IP into the network architecture. An overview

can be found in Figure 3.4.

The second Wireless LAN cell was built with a second access point attached to the wired network.

The specification of the second access point can be found in Appendix B.3. Two non-adjacent cells

with different SSIDs have been chosen to minimize the influence of interference and overlapping cells


Figure 3.3: Single Access Point Network Architecture

have been used.

A Mobile IP implementation provided by [37] was used to support IP mobility for the mobile node.

A Home Agent has been installed in the subnet of the first Wireless LAN cell. The subnet containing the Home Agent will be referred to as the home network. A Foreign Agent was installed into the second Wireless LAN cell. Accordingly, this subnetwork will be referred to as the foreign network.

3.1.2 Mobility

The second part of a scenario description will cover the mobility. Mobility itself contains a wide

range of mobile objects.

Mobility categories

Mobility can be divided into 4 categories:

• user mobility

• application mobility

• host mobility


• network mobility

Figure 3.4: Mobility Support Network architecture

User mobility means that a person may use a particular service from every computer he has access to. Reading your e-mails with a web interface would be a typical example of user mobility. You can

read your mails from every terminal, which has access to the internet independent from the web-

browser, the operating system, the computer architecture or even network technology you use, as

long as they meet the different standards. You are able to read your mails at your Unix System with

Mozilla Browser and an Ethernet Local Area Network (LAN) connected to the internet at work.

You can use your home PC using Windows and Windows Explorer and a dial-up connection to an Internet Service Provider (ISP), or you may even use your laptop with a Wireless LAN card

connected to an Access Point at the airport.

Application mobility is the ability to move applications between different kinds of hosts regardless of which operating system or hardware architecture the source or destination host is using. A presenter might move with his presentation from a crowded small room to a bigger room. The associated application will move with him to a new host, if the presentation application is mobile.


Host mobility describes the movability of computers, mainly notebooks or nowadays IP-aware mobile phones, in a network without losing their connectivity and without the breakdown of the current applications and connections running on the mobile host. For instance, mobile networks like GPRS, HSCSD and the upcoming UMTS are examples of networks supporting host mobility. One can use his host (that is, his phone) nearly everywhere, as long as network coverage is available.

Other techniques, like WLAN (Wireless Local Area Network, IEEE 802.11), ”Bluetooth” and In-

frared make computers and computer peripherals mobile in a short range. They communicate via

the air-interface based either on radio or infrared signals, but have no built-in support for handovers.

Network mobility outlines the movability of small networks within the encapsulating network. Wireless Personal Area Networks (WPANs) connected to the internet via WLAN or UMTS could be an example of network mobility. A WPAN might be, for example, a wireless system generating an Electrocardiogram (ECG). The single nodes capture different views of the heartbeat, while the patient is free to move around.

Mobility considered in the observed scenarios will cover only the mobility of hosts, in particular the

mobility of one or two hosts.

Host mobility categories

Host Mobility itself can be distinguished into several categories:

• Fixed position

• Fixed access point

• Handover in same subnet

• Handover to different subnet

• Handover to different access technology

Fixed position

A host could be installed at a fixed position. At a fixed position, a wireless connection may justify the lower maximum bandwidth compared to the wired solution, if the installation of a cable might


be difficult for some reason. At a fixed position, the installation of a wireless link can be optimized to

the needs of the fixed host, e.g. by optimizing the position and transmitting power of antennas and

shielding the link to background noise and interference.

The fixed host will thus have a fixed IP-Address and the link quality is expected to be the same

most of the time. Thus TCP is expected to perform well in this scenario.

Fixed access point

When a mobile node moves around a fixed access point in short range, it could stay associated with the access point at all times, if it stays within the coverage of the access point. While moving around, it could experience a varying signal strength of the link and a resulting higher BER. Also noise and interference could be a problem, if the signal between access point and mobile node is not shielded.

This would also affect the BER.

While the mobile node is constantly associated to the access point, its IP-Address will be constantly

assigned. Packet losses will occur due to congestion or a high BER, which is rather likely, especially

on wireless links with bad channel quality.

This scenario is typical for rooms in offices or hotels, where access points, so called hot-spots, are

installed.

Handover in same subnet

If a mobile node gets out of range of the access point into the range of another access point, the

scenario is called handover. One scenario covers the handover within the same subnet. The IP-Address remains constant, but during the handover a disconnection and reassociation time will affect the TCP performance. This time could be influenced by the degree of overlap of the access points and by the type of configuration of the mobile node. If the mobile node is configured dynamically, e.g. via the Dynamic Host Configuration Protocol (DHCP), some information, like its current IP-Address, has to be exchanged to configure the mobile node. Thus, the disconnection time will be longer than in case of a static configuration.

During the time of disconnection and reassociation, TCP is unable to send data over the network.

The standard TCP implementation would interpret this scenario as congestion and perform a


slow-start. This would lead to a performance degradation.

A typical application of this scenario would be a building, that is equipped with several Access

Points on different levels and rooms.

Handover to different subnet

If a mobile node disconnects from one access point into the range of another access point, the new access point need not necessarily be in the same subnet as the access point the mobile node was connected to before. From the correspondent node's point of view, the mobile node is not reachable with its first IP-Address. As the mobile node will not respond to any packet sent to the old IP-Address, this will result in several timeouts and will finally break down the TCP connection. To

overcome this problem, a mechanism to make the mobile node also available in other subnetworks

than the home network was introduced. This approach was termed Mobile IP [38].

During the handover, different components impact the time the mobile node is unreachable. The time of physical disconnection as well as the time of reassociation prevents the mobile node from sending or receiving any packets. The time of reconfiguring the mobile node for the new scenario influences the time of unreachability as described above. The fourth component that contributes to the time of unreachability is the time the mobile node needs to register at the foreign agent and to set up an IP tunnel to the home agent.

Depending on the implementation of the home agent and the foreign agent, packets sent to the old IP-Address might be lost if the agents do not implement a store-and-forward algorithm. TCP itself will then

cause a retransmission timeout and resend the packet again. Another case would be, that the sending

TCP keeps sending packets while the mobile node hands over. After reconnecting, reassociation and

relocating, the receiving TCP will receive some out-of-order data and acknowledge this data with

DUPACKs. In both cases (DUPACK and timeout), TCP wrongly assumes a congested network and unnecessarily lowers the CWND. If the home and foreign agent implementations are aware of the TCP protocol on top, a more intelligent store-and-forward algorithm could be implemented that forwards packets based on the ACKs of the mobile node. This approach would equal a home/foreign agent with an integrated TCP proxy.


Handover to different access technology

A mobile node could also switch to another access technology. Depending on the underlying network

architecture, this handover could be to the same subnet, or to a different subnet. In this case, the

time of physical disconnection, the time of reassociation and relocating will impact the time of

unreachability.

A wired-optimized TCP implementation would react to a packet loss due to a timeout or DUPACKs with a slow-start (depending on the implementation, DUPACKs may cause a slow-start). In case of a handover to a different access technology, this behavior is desirable, since the maximum available bandwidth is unknown and varies strongly between different access technologies. Table 3.1 gives an overview of maximum bandwidths and round-trip times of different

access technologies.

                       Maximum Bandwidth   Round-trip times
GPRS                   196 kBit/s          700 ms
Bluetooth              723 kBit/s          200 ms
Wireless LAN 802.11b   11 MBit/s           20 ms
UMTS                   2 MBit/s            200 ms

Table 3.1: Overview of different Access Technology Parameters

3.1.3 Traffic Model

The traffic model contains information about the network traffic, that is applied to the network

model. In this thesis, only unicast data flow will be observed.

• Type of transport layer protocol: Two distributed applications might use different transport

layer protocols to communicate. The type of transport layer and the transport layer protocol

specific parameters are part of the traffic model.

• Size of transmitted data or transmission duration: Data is sent over a transport layer protocol.

The traffic model contains either a specification of the size of the transmitted data or the

duration, data is transmitted from sender to receiver.

• Number of connections: The number of connections concurrently established over the network

characterizes the traffic model.


• Unidirectional / bidirectional traffic: data might be transmitted only in one direction (unidirec-

tional) or data might be sent in both directions as it appears mainly in interactive applications.

• Constant or bursty traffic: Traffic flowing over a network need not necessarily be of constant

bandwidth or packet rate. In case of variable bit rate video streaming [31], the used bandwidth

will vary over time, and online games usually create traffic patterns characterized as ON/OFF-traffic [45]. Such traffic sends data repetitively in a specific interval (ON period) followed by a

period without sending data (OFF period).

In the evaluation, only three different basic traffic types that can be combined are used:

• Unidirectional TCP connection: A TCP connection is established between two end sys-

tems. Only unidirectional data streams are considered. A MSS of 1460 byte per segment will

be used throughout the evaluation in order to fully utilize an Ethernet frame. A sending and

receiving socket buffer of 85 kbyte is used for a TCP connection.

• Unidirectional constant bandwidth UDP stream: A UDP stream with a constant appli-

cation layer bandwidth is used in some scenarios. A segment size of 1472 bytes and a sending

and receiving socket buffer of 64 kbyte is used for UDP streams, unless otherwise specified.

• Unidirectional bursty UDP stream: A bursty UDP stream is used to simulate variable

data flow as it often occurs in the Internet. The UDP stream sends UDP packets in an ON-period

and idles in an OFF-period. Only ON- and OFF-periods that are exponentially distributed

with the same mean value are considered. During the ON-period, data is sent at a constant

application layer bandwidth.

3.2 Measurement procedure

A measurement consists of three phases. First, the network has to be setup for the measurement

according to the scenario. Secondly, the measurement itself has to be performed according to the

traffic model. The third step is to evaluate the gathered data and to perform statistical analysis.

In order to perform a measurement according to the traffic model, IPerf Version 1.7.0 [49] was used

as traffic generator for UDP and TCP traffic. Iperf is able to generate TCP traffic with specific


buffer sizes and UDP traffic with definable payload sizes and application layer bandwidth. As IPerf

is not capable of generating ON-OFF UDP traffic, a self written traffic generator UDPBurst was

additionally used in some scenarios.
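For illustration, the following Python sketch shows how an ON/OFF UDP source of the kind described in the traffic model could look: exponentially distributed ON and OFF periods with the same mean, and a constant application-layer rate during ON periods. The destination address, rate and function name are hypothetical, and this is not the UDPBurst tool that was actually used in the measurements.

import random
import socket
import time

def udp_onoff(dest=("10.10.1.1", 5001), mean_period=1.0, rate_bps=1_000_000,
              payload=1472, duration=30.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    packet_interval = payload * 8 / rate_bps             # seconds per packet during ON periods
    data = b"x" * payload
    end = time.time() + duration
    while time.time() < end:
        stop_on = time.time() + random.expovariate(1.0 / mean_period)  # ON period
        while time.time() < stop_on:
            sock.sendto(data, dest)
            time.sleep(packet_interval)
        time.sleep(random.expovariate(1.0 / mean_period))               # OFF period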

Ethereal Version 0.10.0 [21] was used to capture the packets on both communication ends, i.e. on

sender and receiver side. Packets leaving or arriving at the network interface card (NIC) are recorded

into a file and timestamps are placed to the packets at the time, the packets have been detected by

Ethereal.

After recording the packets into a file, the files have been analyzed via tcptrace to calculate the

throughput metrics. Matlab [48] has been used to compute the statistical parameters and GNUplot

[42] was used to visualize the data.

3.3 Performance Metrics

Different metrics are needed to quantitatively describe the performance of UDP and TCP. The metrics

used to analyze the performance of TCP are described in the next section.

3.3.1 Instantaneous Throughput

Packets captured with Ethereal at the network interface card (NIC) are marked with a time stamp,

at the time they are arriving or leaving. Thus, Ethereal creates tuples formed by (time, packet),

where time is the time, the packet was captured and packet is the byte sequence of the captured

packet. By ordering the tuples by time, every packet can be given a consecutive ascending number

k. Subsequently, the series of captured packets will be referred to as packetk.

From an application layer point of view, the throughput Λ can be calculated as the amount of data that can be sent over UDP or TCP in a given time:

Λ = amount of data / time    (3.1)

Based on this equation, the throughput is calculated only by packets that carry payload. In case

of UDP, that sends only data packets, the series of data packets is equal to the series of captured UDP

packets. TCP introduces ACK and retransmitted packets, which must not be counted when calculating throughput

values. Hence, the data packet series is extracted out of the captured packet series by leaving out


all ACK and retransmitted packets.

Based on the data packet series, the instantaneous throughput that is achieved for every single

packet can be calculated. It is assumed, that the sender always has enough data to send, so that the

time to transmit a data packet can be calculated as the time difference to the previous data packet:

Λinst(n) = packetsizen / (tn − tn−1)    (3.2)

with

• Λinst(n) the instantaneous throughput of data packet n

• packetsizen the amount of TCP payload in data segment n, measured in bytes

• tn the detection time of data segment n

Additionally, the average Λinst over the instantaneous throughput values can be computed by:

Λinst = ( ∑_{i=1}^{n} Λinst(i) ) / n    (3.3)
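The following Python sketch illustrates equations (3.2) and (3.3) on a list of (timestamp, payload size) tuples as they could be exported from a packet capture; it assumes that pure ACKs and retransmissions have already been filtered out, and the example values are invented.

def instantaneous_throughput(packets):
    """packets: list of (t, size) sorted by time; returns one bytes/s value per packet."""
    values = []
    for (t_prev, _), (t, size) in zip(packets, packets[1:]):
        values.append(size / (t - t_prev))
    return values

def average(values):
    return sum(values) / len(values)

# Example with four data packets of 1460 bytes payload each
caps = [(0.000, 1460), (0.010, 1460), (0.012, 1460), (0.030, 1460)]
inst = instantaneous_throughput(caps)
print(inst, average(inst))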

3.3.2 Instantaneous Averaged Throughput

The instantaneous throughput calculates the throughput for every single packet. Measurements of

the instantaneous throughput showed that throughput values tend to build blocks of high throughput

values. A small group of values gained a very high throughput, while the following packet showed

a very low throughput. Ethereal's method of timestamping the packets can cause this effect. The

packets arriving at the network card are buffered before being read and timestamped by Ethereal. Thus,

while Ethereal gets processing time by the kernel, Ethereal reads all packets stored in the buffer

and places the actual system clock value as timestamp to the packets. Thus, a bundle of packets

gets almost the same timestamp resulting in a high throughput value. After expiring the processing

time, additional packets are buffered and delayed resulting in a low throughput for the first packet.

Other influences may also affect the correct timestamping.

To overcome those effects, a block of instantaneous throughput values is built and the average over


the block is calculated. The throughput values calculated over a block of instantaneous throughput

values are termed instantaneous averaged throughput and can be described as follows:

Λaverage(n) = ( ∑_{l=i+1}^{i+k} packetsizel ) / ( ti+k − ti ),    i = k ∗ (n − 1), for a given k.    (3.4)

The average Λaverage can be computed by:

Λaverage = ( ∑_{i=1}^{n} Λaverage(i) ) / n    (3.5)
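A minimal sketch of the block averaging of equation (3.4) is given below; the capture is cut into blocks of k packets and one throughput value is computed per block, which smooths out the timestamping artefacts described above. The function name and input format are assumptions made for this illustration.

def block_averaged_throughput(packets, k=10):
    """packets: list of (t, size) sorted by time; returns one bytes/s value per block of k packets."""
    values = []
    for i in range(0, len(packets) - k, k):
        total = sum(size for _, size in packets[i + 1:i + k + 1])
        values.append(total / (packets[i + k][0] - packets[i][0]))
    return values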

An important question is how to set the correct value for the block size k. From intuition, some

considerations can be made. The effect of process switching and thus the number of grouped

packets depends on the processing time, that is given to one process. If this time is very long,

many packets will be buffered until they are read and timestamped resulting in a big block of high

throughput values. Thus, the more often processes are switched and the more often the times-

tamping process can read the packet buffer, the more appropriate the throughput values are and

less packets have to be averaged. So the block size is proportional to the processing time given

to one process3. Secondly, the more processes are running on the host, the less often the pro-

cessing time is granted to the timestamping process. Hence, the block size is proportional to the

number of concurrently running process excluding the timestamping process. On the other hand,

the higher the packet rate arriving at the NIC with constant processing time, the more packets

will be buffered resulting in a higher block size. Hence, the block size is proportional to the

packet rate arriving at the NIC. Summing up, these consideration lead to the conclusion, that

block size ≈ packet rate ∗ (number of processes− 1) ∗ processing time .

For simplification, a constant block size of 10 packets has been used throughout the measurements, since it showed a good compromise between time accuracy and accurate throughput.

When calculating statistical throughput values, mean throughput values and confidence intervals are

calculated based on Λaverage(n).

3 It is assumed that the processing time is sliced into equidistant parts and given equally to every process. This round-robin process scheduling scheme can be assumed in common operating systems for non-real-time applications that use the same process priorities.


3.3.3 Transmission Throughput

When transferring objects like files or HTTP objects over a TCP connection, the end-user is mainly

interested in the time it takes to transmit the complete object. This time is often termed transmission

time. Based on the transmission time, the development of the transmission throughput over time

can be calculated by:

Λtransmission(n) = ( ∑_{i=1}^{n} packetsizei ) / ( tn − t0 )    (3.6)

with t0 the detection time of the first detected packet. In single TCP connection scenarios, t0 is

the time of the first SYN packet detected and hence t0 = 0. In multiple connection scenarios, t0 may be the

detection of the first UDP or TCP packet depending on the scenario.

Given these formulas, the average value of the instantaneous throughput over the transmission

time, the average value of the instantaneous averaged throughput over the transmission time and

the transmission throughput after the transmission time can be calculated. Since the packet times

are not distributed in equidistant intervals, those values differ:

Λtransmission(n) ≠ Λaverage ≠ Λinst    (3.7)

3.3.4 Round-Trip Times

Based on the ACK mechanism of TCP, the RTT tRTT (n) of a data packet can be computed. Since

every sent data packet requires a subsequent ACK packet, the time difference between the detection

of a data packet and the detection of an ACK packet can be used to measure the TCP RTTs. Since

data packets can be cumulatively acknowledged, the RTT calculations are only based on non-delayed

ACKs:

tRTT (n) = tACK(n)− tdata(n) (3.8)

with

• tRTT (n) the measured RTT for data packet n


• tACK(n) the detection time of an ACK for data packet n

• tdata(n) the detection time of data packet n that is not delayed ACKed.

In principle, the RTT measurements can be done on senders and receiver side. On receiver side,

the RTT measurement results in a measurement of the processing time, since received data packets

are processed and ACKs generated immediately. The oneway packet delay, i.e. the time it takes the

network to transmit a packet over the network, is not included at the receiver’s side. Hence, RTT

measurements are only made on the sender’s side.

Based on the single RTT measurements, the RTT average tRTT can be computed by:

tRTT = ( ∑_{i=1}^{n} tRTT(i) ) / n    (3.9)

95% confidence intervals are computed as indication of jitter.
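The Python sketch below illustrates how the RTT samples of equation (3.8) can be obtained on the sender side from a capture: each ACK is matched to the data segment whose last byte it acknowledges, and unmatched ACKs are ignored, which is a simplification of the delayed-ACK rule described above. The input format and function name are assumptions made for this example.

def rtt_samples(data_pkts, ack_pkts):
    """data_pkts: list of (t_sent, end_seq) with end_seq the byte following the segment;
    ack_pkts: list of (t_ack, ack_no). Returns the RTT samples in the capture's time unit."""
    pending = dict((end_seq, t_sent) for t_sent, end_seq in data_pkts)
    samples = []
    for t_ack, ack_no in ack_pkts:
        if ack_no in pending:
            samples.append(t_ack - pending.pop(ack_no))
    return samples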

3.3.5 Handover delay

During a handover, the mobile node loses its connectivity to a network until it is reconnected again. To measure the performance of TCP in handover situations, a UDP probing data stream is established concurrently to the TCP connection while the handover is performed. The handover

delay on the network layer can be approximated by:

∆network = tfirst − tlast (3.10)

with

• ∆network the network layer handover delay

• tfirst detection time of first packet seen after handover

• tlast detection time of last packet seen before handover

Accordingly, the TCP handover delay is computed by:

∆TCP = tfirstTCP − tlastTCP (3.11)

with


• ∆TCP TCP handover Delay

• tfirstTCP detection time of first TCP packet seen after handover

• tlastTCP detection time of last TCP packet seen before handover
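Both delays can be extracted from a packet trace by searching for the largest gap between two consecutively detected packets of the respective flow; applied to the UDP probing stream this yields ∆network, applied to the TCP packets it yields ∆TCP. A minimal sketch (hypothetical helper, assuming at least two detected packets):

    def handover_delay(times):
        # largest gap between consecutive packet detection times (Eq. 3.10 / 3.11)
        return max(b - a for a, b in zip(times, times[1:]))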

3.3.6 Other Performance Metrics

Some other metrics are used to quantify the performance of TCP more precisely:

• ηpackets: Number of packets to transmit the given data.

• ηRT : Number of retransmitted packets.

• ϑRTO: Number of retransmission timeouts.
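The retransmission count ηRT can, for example, be derived from a sender-side trace by counting data segments whose sequence number has already been seen; this is only a sketch of one possible extraction method, the thesis does not prescribe a particular one:

    def count_retransmissions(segments):
        # η_RT from a sender-side trace of (sequence number, payload length) tuples
        seen = set()
        retransmissions = 0
        for seq, length in segments:
            if length == 0:
                continue                  # ignore pure ACK segments
            if seq in seen:
                retransmissions += 1
            else:
                seen.add(seq)
        return retransmissions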

3.4 Conclusion

This chapter aimed to introduce the considered scenarios, the evaluation process as well as the

performance metrics.

A definition of a scenario has been given and has been decomposed into a network architecture, a

mobility model and a traffic model. Given the description of the models, the considered scenarios

have been limited. The limitations are as follows:

• Network architecture: Only a single wireless last hop and a fixed proxy location are considered. The thesis focuses on the wireless access technologies Wireless LAN and Bluetooth. Only the three

main architectures ”fully wired architecture”, ”single access point architecture” and ”mobility

support architecture” are evaluated in this thesis.

• Mobility: The different flavours of mobility have been described. In the evaluation, only host

mobility is considered.


• Traffic model: The characteristics of a traffic model have been described. The traffic model used in the evaluation is limited to three basic streams, which can be used combined: a unidirectional TCP stream, a unidirectional constant bandwidth UDP stream and a unidirectional bursty UDP stream.

The experimental approach to evaluate the performance of TCP has been described. To quantify

the performance, different performance metrics have been introduced. The main metrics are as

follows:

• Instantaneous throughput Λinst(n)

• Instantaneous averaged throughput Λaverage(n)

• Transmission throughput Λtransmission(n)

• Network layer handover delay ∆network and TCP handover delay ∆TCP

• RTTs


Chapter 4

Evaluation of standard TCP

TCP is reported to perform poorly under conditions, where the initial assumptions made for wired

TCP flavours cannot be kept up. Wired TCP flavours assume that disconnections are unlikely and

packet loss occurs only in case of congestion in the network. Wireless links introduce packet drops

due to bit errors and times of link disconnection due to handovers of the mobile node.

Different measurements with an experimental network have been made to analyze the behavior of

TCP over two wireless technologies, IEEE 802.11b [1] and Bluetooth [46]. The results of the experiments are illustrated in the following sections.

4.1 TCP over Ethernet and Serial Links

The experimental network used for the experiments consisted of different types of network nodes:

routers, switches and fixed hosts as well as different types of links like wireless IEEE 802.11 links,

wireless Bluetooth links, wired serial links and wired FastEthernet links.

Since the different links and nodes influence the end-to-end connection characteristics, experiments

have been made to get an overview of those influences. In a first experiment, the influence of the

test-setup on the connection performance was measured. Secondly, the performance of an 8 MBit/s serial link, that was established between the routers, was analyzed, since the 8 MBit/s serial link nominally provided a lower bandwidth than an IEEE 802.11 11 MBit/s WLAN link. The results of

those experiments are presented next.


4.1.1 Influence of Test-Setup

The packet capturing program Ethereal was used to capture the packets from the network interface

card in order to perform detailed statistical and analytical evaluation on the traffic flow. Since

capturing packets takes processing time of the CPU of a node, this degradation of available processing

power and time may affect the throughput and RTTs of the TCP connection.

In order to get an impression about the influence of the capturing program, the following experiment

was made. The fully wired network architecture as described in Section 3.1.1 was set up and a TCP

connection between two fixed hosts was established transferring 10 MByte of data. Since the traffic

generating program IPerf is capable of reporting the duration time of the TCP Connection with a

resolution of 100 ms, a comparison between the duration times of a run with Ethereal enabled and

a run with Ethereal disabled could be used as an indicator of the influence of Ethereal on the TCP

performance. Note, that the connection time, that is reported by IPerf, is the connection time seen

from an application's point of view and thus not the time between the first SYN-packet and last

FINACK-packet seen. The packets will be buffered in different levels of the operating system and

additional delay due to process- and kernel-thread-switching will be encountered. Table 4.1 shows

the transmission times for the described scenario with Ethereal enabled and disabled.

            Transmission Time with        Transmission Time with
            Ethereal disabled [s]         Ethereal enabled [s]
Run 1       10.9                          11.0
Run 2       10.9                          11.1
Run 3       10.9                          11.0
Run 4       10.9                          11.1
Run 5       10.9                          11.2
Run 6       10.9                          11.1
Run 7       10.9                          11.2
Run 8       10.9                          11.0
Run 9       10.9                          11.2
Run 10      10.9                          11.2
Average     10.9                          11.11
Std.Dev.    0.0                           0.08756

Table 4.1: Comparison of transmission times with Ethereal disabled/enabled

Performing a Lilliefors-Test [48] on the transmission times with Ethereal enabled to test if the

data follows a normal distribution, leads to the result, that the null hypothesis can be accepted and

the data thus follows a normal distribution. Based on this result, a T-Test can be applied on both


data sets to compare their means and calculate their statistical differences in mean value. Perform-

ing the T-Test, the Null-Hypothesis, that both data sets have the same means, can be rejected at

a significance level of 5%. The confidence interval of the difference between the transmission times

with Ethereal disabled and enabled can be computed to [0,1518s - 0,2682s].
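The comparison can be reproduced with the data from Table 4.1, for example with SciPy; the following sketch uses a standard pooled two-sample t-test and is only meant to illustrate the procedure, not to replicate the exact statistics package used in the thesis:

    from scipy import stats

    # transmission times from Table 4.1 [s]
    disabled = [10.9] * 10
    enabled = [11.0, 11.1, 11.0, 11.1, 11.2, 11.1, 11.2, 11.0, 11.2, 11.2]

    t_stat, p_value = stats.ttest_ind(enabled, disabled)
    print(p_value < 0.05)     # True: equal means rejected at the 5% level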

Although the statistical analysis showed that the influence of Ethereal on the performance is statistically significant, the influence on the overall transmission time and thus the average throughput is

rather low (2% in worst case). It has to be considered, that the measurement procedure is based on

a higher layer in the operating system. Thus, influences of process- and thread-switching are higher

than on lower layers. The measurements made in the following chapters are based on a lower layer

and thus more appropriate.

4.1.2 Influence of Serial links on Performance

The different routers used in the experimental network have been interconnected over serial links

capable of transmitting 8 MBit/s. Since some of the considered access technologies, especially Wireless LAN 802.11b, are nominally capable of transmitting 11 MBit/s of data over the wireless link, the serial link could limit the available bandwidth instead of the wireless link and thus distort the results. The

fully wired network architecture was used to measure the maximum bandwidth with UDP and to

measure the performance of TCP.

UDP Performance over a 8 MBit/s serial link

To measure the maximum bandwidth that can be provided by the serial links, a constant packet rate UDP stream as described in Section 3.1.3 was used. The application

layer bandwidth provided by UDP was set to 8,5 MBit/s. The throughput graphs measured at the

sender and receiver side are shown in Figure 4.1.

The throughput graph shows, that the instantaneous throughput as well as the instantaneous

averaged throughput reach nearly a maximum of 1 MByte/s, which equals 8 MBit/s. In constant

time intervals of around 5s, several dropdowns of the throughput value can be observed.

These dropdown periods can be caused by Ethereal: Since Ethereal records packets at Ethernet level, packets with a packet size of 1514 bytes including all headers are recorded and written



Figure 4.1: UDP Throughput over 8 MBit/s serial links at sender (left figure) and receiver side (right figure)

into a file, that is buffered in the local RAM. As packets arrive at an application layer bandwidth of 1 MByte/s, after 5 sec the buffer in memory contains 1 MByte/s ∗ 5 s ∗ (1514 byte / 1472 byte) ≈ 5.4 MByte.

This amount of data is then transferred from the local RAM to the physical memory on the hard

disk. The writing process is CPU intensive on a low power PC, so that less CPU processing time is

left to record the packets in real-time in Ethereal. Secondly, occupied CPU resources by the kernel

due to disk writes reduce the available CPU processing time for IPerf. Hence, IPerf is not able

to generate UDP Packets at the given bandwidth resulting in a lower throughput while the kernel

is writing the Ethereal trace file to the hard disk. IPerf uses an adaptive method to compute the

delay times between two packets before they are sent. Hence, the time it takes the operating system

to send a packet to NIC is taken into account. Since the new computed delay time is based on

the previous sending time, an oscillating sending time causes an inappropriate inter packet delay

resulting in oscillating throughput.

A second effect, that influences the throughput, is generated by the packet-capturing method Ethereal is using. The operating system copies all packets that are arriving at and leaving the NIC into a buffer. Ethereal reads the packets from this buffer and assigns a timestamp to every packet at the time the packet was read from the buffer. In high CPU load situations, more packets are

delayed in the buffer before a timestamp is placed. Thus, packets will be timed too late resulting

in a lower instantaneous throughput. In contrast, packets, that are timestamped in time after a


delayed packet, will show an incorrectly high throughput.

            Average Throughput    Standard Deviation       95% Confidence Interval     Λtransmission(10s)
            Λaverage [kByte/s]    of Λaverage(n)           of Λaverage(n)              [kByte/s]
                                  [kByte/s]                [kByte/s]
Sender      1050,6                245,6                    [1039,8 - 1061,4]           981,8
Receiver    954,3                 103,8                    [949,6 - 959,0]             922,6

Table 4.2: UDP Throughput over 8 MBit/s serial links

Despite the observed throughput dropdowns, an estimate for the maximum achievable throughput over the serial link can be given by Λaverage and the confidence interval of Λaverage(n). The results of this computation are given in Table 4.2. Since the throughput dropdowns cause a lower average throughput compared to the real throughput value, this measured value can be used as a lower bound for the real throughput. Later evaluation showed that Wireless LAN achieves a UDP throughput of at most 850 kByte/s, and thus the serial link is able to provide sufficient bandwidth not to limit the bandwidth of Wireless LAN.

TCP Performance over a 8 MBit/s serial link

In a second experiment, the performance of TCP over the serial links was evaluated. A single

TCP connection as described in Section 3.1.3 was used to measure the TCP performance. The

transmission throughput graphs measured at sender and receiver side can be found in Figure 4.2.


Figure 4.2: TCP Transmission Throughput over a 8 MBit/s serial link


From the graphs it can be observed that Ethereal affects the transmission throughput mostly on the sender's side, whereas the transmission throughput on the receiver side is less influenced by Ethereal. Since IPerf creates high load on the sender CPU, Ethereal is granted less processing time, resulting in inappropriate timestamps and thus in throughput dropdowns. Plotting Λaverage and the confidence intervals of Λaverage(n) leads to Figure 4.3.


Figure 4.3: Instantaneous Averaged TCP Throughput over a 8 MBit/s serial link

The average throughput achieved over the serial link varies only slightly, except for the first run. In order to calculate the throughput degradation caused by TCP's dynamics, a t-test was performed over Λaverage(n) of TCP of each run compared with the corresponding UDP values. The t-test

calculates that the TCP throughput is in a 95% confidence interval of [ 30,0 - 40,6 ] kbytes/s higher

than the UDP throughput.

These results are counter-intuitive to the expectation. TCP should degrade the maximum application

layer throughput compared to a UDP stream due to its increased overhead and its congestion

control and slow start algorithm. Since the throughput values were used as indication for maximum


achievable throughput, no further investigation was made. The later evaluation showed, that the

serial link provided sufficient bandwidth not to limit the bandwidth of the wireless link, when both

are part of an end-to-end connection.

TCP Round-trip times over a 8 MBit/s serial link

Based on the ACK-packets, the RTTs for data packets have been measured. For every single

run, the average and 95% confidence intervals have been calculated and are visualized in Figure 4.4.


Figure 4.4: TCP RTTs over 8 MBit/s serial links

Calculating the average over tRTT of each single run leads to an average RTT of 64.1 ms. Conceptually, the RTT can be decomposed into different delays:

• Propagation delay is the time it takes to transport a bit over the physical medium. Since only link lengths up to 40 m have been used, the propagation delay can be assumed to be negligible.

• Serialization delay is the time it takes to serialize a data packet and send it onto the medium.


• Queuing delay is the time packets are buffered at intermediate routers.

• Processing delay is the time it takes the receiver to process a packet.

Based on the specification of the serial link and the FastEthernet throughput, the serialization

delay of a 1460 byte segment and a following ACK-packet in reverse direction over 2 serial links

can be computed to 6,2 ms. Measurements at the receiver side showed, that the processing time

for generating an ACK is in the interval [ 0.3 - 0.7 ] ms. Hence, the additional delay added to the

RTT is caused by queuing. Since TCP (TCP Reno in Linux) estimates the maximum available bandwidth by successively doubling the sending rate in slow start, packets will be queued when the maximum bandwidth is reached.

4.2 TCP over Wireless LAN

After measurements over the serial links of the experimental network, measurements were extended to evaluate the TCP performance over Wireless LAN (WLAN 802.11b). A mobile node equipped with a WLAN card and a WLAN access point attached to the experimental network provided the basis for the following experiments.

4.2.1 General Performance of TCP over WLAN

In order to get an overview of the performance and throughput of a wireless link, two experiments

have been made. First, a constant packet rate UDP stream was used to measure the maximum

throughput, that is achievable on link-layer. Secondly, the same experiment was repeated with a

TCP flow and compared with the UDP stream.

In both experiments, the single access point network architecture as described in Section 3.1.1 has

been used. The mobile node stayed at a distance of 2m from the access point.

UDP Performance over WLAN

For the first experiment, the traffic model consisted only of a constant bandwidth UDP stream of

7,5 MBit/s at application level. Thus, a packet is sent every 1,57 ms. Taking the overhead of

UDP, IP and Ethernet into account, the bandwidth at link layer can be computed to 7,7 MBit/s,


which equals 960 kbyte/sec. Each measurement lasted 30 sec and was performed in upstream and

downstream direction1. The results for downstream and upstream are shown in Figure 4.5.


Figure 4.5: Performance of UDP over Wireless LAN

To evaluate the throughput of UDP over Wireless LAN, Λaverage as well as the Λtransmission

at the end of the transmission can be considered. Table 4.3 shows the throughput statistics for

the UDP connection over Wireless LAN. The mean throughput as well as the confidence interval

calculations are based on Λaverage. The confidence level was chosen to be 95%.

              Λaverage [kByte/s]    Confidence Interval Boundaries     Λtransmission [kByte/s]
                                    of Λaverage(n) [kByte/s]
Downstream    857,8                 ± 3,5                              850,6
Upstream      479,5                 ± 0,8                              479,0

Table 4.3: Statistical Performance Parameters of UDP over Wireless LAN

The values for mean throughput and transmission throughput differ slightly, since the packets do

not arrive in equidistant time intervals.

The evaluation shows that a UDP application layer throughput of about 850 kbytes/s (6,8 MBit/s),

is achievable in downstream direction. Comparing this value to the UDP throughput achievable over

the serial link (954 kbyte/s = 7,6 MBit/s), it can be concluded, that the wireless link provides less

1 Upstream and downstream direction are seen from the mobile node's point of view. Thus, in downstream direction, data is sent to the mobile node, while data is sent from the mobile node in upstream direction.


throughput than the 8 MBit/s serial link, although the nominal throughput of 802.11b is 11 MBit/s.

In upstream direction, a low throughput of 480 kbytes/s (3.8 MBit/s) is achieved, although WLAN

provides a symmetric 11 MBit/s wireless link. Further evaluation showed, that on sender as well as

on receiver side the number of captured packets and hence the throughput was equal. Measurements

with different mobile nodes with the same WLAN NIC card as well as measurements with a different

WLAN NIC card of the same model and different operating systems on the mobile node showed

the same UDP performance. This leads to the conclusion, that the WLAN NIC adapter was not

able to send at higher data rates than the observed throughput. A measurement with a Cisco 350

WLAN NIC adapter instead showed a performance in upstream direction, that is comparable to the

downstream performance of the 3Com card. This effect may be caused by compatibility, since the access point used was also a product of Cisco.

TCP Performance over WLAN

To measure the performance of TCP and to be able to compare the performance of the wireless link, the experiment was repeated with a TCP connection between the mobile node and the fixed host.

Figure 4.6 shows the transmission throughput for every run in downstream and upstream direction.


Figure 4.6: TCP Performance over Wireless LAN

Different observations can be made from the graphs. In downstream direction, TCP adapts

accurately to the maximum available bandwidth. The transmission throughput after 10s can be


observed to be 700 kbytes/s. Over ten runs, TCP's behavior seems very stable. In contrast, in upstream direction TCP seems to adapt more slowly to the maximum available bandwidth. The

maximum available bandwidth can be observed to be lower than in downstream direction. Over ten

runs, the maximum available bandwidth varies more than in downstream direction. To prove those

observations, a statistical analysis of the throughput values has been performed. The analysis can

be seen in Table 4.4.

           Downstream                                                 Upstream
           Λaverage     Confidence Interval     Λtransmission(10s)    Λaverage     Confidence Interval     Λtransmission(10s)
           [kByte/s]    Boundaries of           [kByte/s]             [kByte/s]    Boundaries of           [kByte/s]
                        Λaverage(n) [kByte/s]                                      Λaverage(n) [kByte/s]
Run 1      717,9        ± 77,0                  700,7                 654,6        ± 111,6                 609,8
Run 2      721,5        ± 74,0                  717,2                 618,8        ± 113,7                 608,5
Run 3      719,4        ± 84,1                  708,3                 600,4        ± 113,8                 560,9
Run 4      706,2        ± 80,7                  697,0                 672,7        ± 110,6                 546,6
Run 5      718,5        ± 77,3                  691,0                 677,7        ± 19,9                  631,5
Run 6      726,5        ± 73,1                  704,3                 672,9        ± 110,1                 646,0
Run 7      727,0        ± 73,2                  713,6                 655,9        ± 111,0                 639,5
Run 8      724,5        ± 74,0                  713,7                 664,3        ± 111,1                 617,3
Run 9      715,9        ± 80,52                 710,8                 635,5        ± 112,1                 623,5
Run 10     728,6        ± 68,2                  701,3                 651,9        ± 11,7                  590,7
Average    720,6                                705,8                 650,5                                607,4

Table 4.4: Statistical Performance Parameters of TCP over Wireless LAN

Assuming, that the channel quality did not change significantly during the measurement of UDP

and TCP flows and further assuming, that the UDP throughput measured from above can be used

as metric for the maximum achievable throughput over a wireless link connection, the performance

degradation due to TCP slow-start and congestion algorithm can be calculated as shown in Table

4.5.

              UDP                                     TCP                                     Degradation
              Λaverage     Λtransmission(10s)         Λaverage     Λtransmission(10s)         Λaverage    Λtransmission(10s)
              [kByte/s]    [kByte/s]                  [kByte/s]    [kByte/s]                  [%]         [%]
Downstream    857,8        850,6                      720,6        705,8                      15.9        17.0
Upstream      479,5        479,0                      650,5        607,4                      -35.7       -26.8

Table 4.5: Statistical Comparison between UDP and TCP over Wireless LAN

The statistical evaluation confirms the visual observations. In downstream direction a TCP

throughput of 700 kbyte/s (equals 5.6 MBit/s) can be achieved. Compared to the UDP flow,


around 16% of throughput is lost due to TCP slow-start, congestion control and additional over-

head. In upstream direction a throughput of 600 kbyte/s (which equals 4.8 MBit/s) can be achieved

in contrast to a low UDP throughput in upstream direction. An explanation for this behavior could

be that overloading the card with packets leads to a lowered throughput.

4.2.2 Curve fitted Transmission Throughput

Based on the form of the transmission throughput graph and the consideration that such a curve

could be compared to saturation curve, an approach to fit the measured values to a saturation

curve was applied to the values measured above. The time coherence used for a fitted transmitted

throughput is expressed by the following expression:

Λtransmission(n) = Λmax ∗ (1 − e^(−(t_n − t_s)/t_r)) + εn    (4.1)

with the measured parameters

• tn packet detection time

• Λtransmission(n)

and the fitting parameters

• Λmax maximum achieved throughput

• ts setup time of connection

• tr relaxation time

The estimation error εn expresses the degree of error made in the fitting process.

The data series calculated by the transmission throughput were fitted to this equation. Since an individual Λtransmission(n) value contains information about the history of the transmission throughput, the transmission throughput values are highly correlated. Thus, the estimation error εn will also be highly correlated. This consideration leads to the result, that a standard least squares method will produce wrong results, thus a generalized non-linear least squares method [22] has to be applied.

This method is provided by the software R [52].
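For illustration only, the saturation model of Equation 4.1 can be fitted with an ordinary non-linear least squares routine such as scipy.optimize.curve_fit; unlike the generalized non-linear least squares method used in the thesis, this ignores the correlation of the errors εn, so the sketch (with synthetic data roughly matching the fitted values of Table 4.6) only demonstrates the model, not the actual estimation procedure:

    import numpy as np
    from scipy.optimize import curve_fit

    def saturation(t, lam_max, t_s, t_r):
        # Eq. 4.1 without the (correlated) error term
        return lam_max * (1.0 - np.exp(-(t - t_s) / t_r))

    t = np.linspace(0.1, 9.0, 400)                            # packet detection times [s]
    thr = saturation(t, 693109.0, 0.0593, 0.1412) \
          + np.random.normal(0, 5000, t.size)                 # synthetic Λ_transmission(n)

    (lam_max, t_s, t_r), _ = curve_fit(saturation, t, thr, p0=(6e5, 0.05, 0.1))
    print(lam_max, t_s, t_r)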


An analysis of a single TCP measurement over Wireless LAN using a generalized non-linear least

squares approximation method on the transmission throughput data led to the following results pre-

sented in Table 4.6.

Maximum Throughput       693109 Bytes/s
Connection Setup Time    0.0593 s
Relaxation Time          0.1412 s

Table 4.6: Curve fitted Transmission Throughput parameters

Comparing the maximum transmission throughput (693,1 kByte/s) computed by approximation to the transmission throughput gained at the end of the transmission (697,0 kByte/s) leads to the conclusion, that the approximation at the end of the connection can be seen as appropriate. Plotting

the results and calculating the residuals of the estimation leads to Figure 4.7.


Figure 4.7: Curve fitted Transmission Throughput and Residuals

The residuals show, that the approximation in the beginning of the connection is not accurate

due to an inappropriate model. Also a steady TCP state and thus a steady transmission throughput

can not be assumed. In the single TCP connection scenario, the approximation did not converge in

most cases, so that this approach was not followed further.


TCP Round-trip times over WLAN

To conclude the evaluation of a single TCP connection over a WLAN link, the RTTs have been

analyzed. RTTs were measured at the sender side based on the measurements from above. For

every single run, the average RTT and its 95% confidence interval was computed. The results can

be found in Figure 4.8.


Figure 4.8: Round-Trip Times of TCP over Wireless LAN

In downstream direction, the RTTs are distributed in a small interval of [ 22 - 23 ] ms. In upstream direction, the RTTs vary in a greater interval of [ 17 - 24 ] ms. Comparing these values to the RTTs of 60 ms measured in the serial link scenario, the RTTs are lowered, although a WLAN link was inserted instead of a wired FastEthernet link.

The measurements of the RTTs showed a greater variation in upstream direction than in downstream direction. Plotting the RTTs of measurement run five in upstream direction leads to Figure 4.9.

A dropdown of the RTTs after approximately 2.5s can be observed. Further evaluation showed,

that those dropdowns are distributed between 1s and 5s. Hence, the average RTT shown in Figure 4.8

depends on the time of the dropdown. To measure the individual delays in downstream direction,

the measurement nodes must be high precision time-synchronized.



Figure 4.9: Single RTT Measurement over WLAN

4.2.3 Influence of Bit Error Rates on TCP Performance

The influence of a high BER in wireless environments is reported to be significant, resulting in poor TCP performance, since packet loss is used as an indicator for congestion. Thus, TCP reduces its sending rate, although the packet was lost due to bit errors.

To measure this influence on TCP's performance and behavior over Wireless LAN, a new experiment was set up. Since it was not possible to measure the BERs, the BER was assumed to be higher, the greater the distance between the mobile node and the access point. Also obstacles like walls

should increase the BERs. An experiment with different locations of the mobile node compared to

the access point was made. In a first scenario, the mobile node was situated in a direct line-of-sight

location in 1 m distance of the access point. The channel conditions are expected to be ideal in this

setting and thus the BER should be near to 0. In a second scenario, the mobile node was placed in

a direct line-of-sight connection 5 m away from the access point. The third position of the mobile

node was chosen to be 5 m distant from the access point with a wall in between. Finally, a fourth


position was chosen as a 20 m out-of-sight location. An overview of the different positions can be

found in Figure 4.10.


Figure 4.10: WLAN Location Scenario

The single access point network architecture, shown in Section 3.1.1, was used except that the

transmission power of the access point was lowered from 100mW to 30mW. The lower transmission

power was chosen in order to achieve poorer transmission quality. The signal strength and link

quality as reported by the WLAN card driver in the mobile node was very low in position 4, whereas

on the other 3 positions, the link quality was reported to be good.

The traffic model contained a TCP stream sent over 10s. The experiments were repeated in upstream

and in downstream direction.

The results of the experiments are shown in Figure 4.11.

The graphs show that in the first 3 positions, where the channel quality and signal strength was

reported to be good, the impact on the throughput can almost be neglected. In Position 4, with

bad channel quality reported, the throughput drops heavily to a very low level. Looking at the TCP

traces, recorded at the sender and the receiver, it can be observed that on TCP layer no packet was

dropped and thus no retransmission was triggered. Wireless LAN uses link layer retransmissions to

overcome the problems of bit and packet errors thus preventing any packet loss due to bit errors.

With link layer retransmissions the link layer can be seen as a reliable link with the drawback of

a lowered bandwidth. Therefore, also the IP layer can be assumed to be reliable, if there is no



Figure 4.11: Influence of distances and obstacles on TCP's Performance over Wireless LAN

congestion on one of the links or intermediary systems. So, from TCP's point of view, if there is no congestion, the IP layer provides a reliable datagram service with a specific bandwidth, that is lowered the more bit errors are observed. The interference between link layer retransmissions

and TCP retransmission scheme is left for future work, since it was not possible to deactivate the

link layer retransmission scheme at the access point in order to make comparable experiments.

Rapidly changing conditions, that can occur when a mobile node moves in a Wireless LAN cell,

are not observed here and also left for future work. If channel conditions change rapidly, link

layer retransmissions may occur with rapidly changing frequency resulting in widely varying packet

delays. It is expected, that TCP may not be able to adjust fast enough to those conditions resulting

in lowered throughput.

4.2.4 Influence of Cross-Traffic on TCP Performance

Single UDP or TCP Connections over Wireless LAN in different scenarios have been observed so far

and the measurements indicate that TCP is performing well under these conditions. The next part

of the evaluation will investigate TCP's performance over the Wireless LAN 802.11b link in presence of multiple network flows competing for bandwidth.

The single access point scenario was used and two mobile nodes have been associated to the access

point. The specification of Mobile Node 1 (MN1) can be found in Appendix B.7, of Mobile Node 2


(MN2) in Appendix B.8. In upstream direction, both mobile nodes sent data to the server, whereas

in downstream direction, both mobile nodes received data from the fixed host. Figure 4.12 shows

the architecture for crosstraffic measurements.


Figure 4.12: Network model to measure the influence of crosstraffic on TCP's performance over Wireless LAN

In a first scenario, a constant bandwidth UDP stream at an application layer bandwidth of 500 kBit/s was established between the fixed host and MN2. After approximately 2 sec, a TCP connection was established for 10 sec between the fixed host and MN1. The experiment was repeated for upstream and downstream and the transmission throughput development of TCP can be observed

in Figure 4.13.

Performing a t-test leads to the result, that the difference between the transmission throughput

of a single TCP connection and Λtransm.,UDP + Λtransm.,TCP in case of a 500 kBit/s UDP stream

lies in a confidence interval of [ 58,6 - 112,4 ] kByte/s ([ 468,8 - 898,4 ] kBit/s).

Repeating the experiments with two UDP connections, with a UDP bandwidth of 3 MBit/s and

with two TCP connections that compete for the available bandwidth, leads to the results shown in

Table 4.8. The table shows the cumulative transmission throughput of both streams. See Table C.1

to Table C.3 in the Appendix for the detailed results. Using a WLAN adapter card from CISCO led



Figure 4.13: TCP Performance over Wireless LAN in presence of a competing 500 kBit/s UDP stream

to the result, that in a UDP/UDP and a TCP/TCP scenario, almost all traffic generated or received

by the 3COM card was blocked out. An interpretation of this behavior could be, that the adaptor

card from CISCO uses a more aggressive policy to access the channel and send data packets over

the link.

In this section only UDP streams with a constant packet rate have been observed. This model is

not very appropriate for the Internet, where the available bandwidth changes rapidly in short times.

In that case, TCP should adapt to the available bandwidth as fast as possible without overestimating

the bandwidth and therefore losing some packets. To evaluate the effectiveness of TCP's adaptation mechanism, the following experiment was made.

The network model with two mobile nodes connected over Wireless LAN to an access point from

above (Figure 4.12) was reused. The traffic model consisted of a TCP stream, that was established

between MN1 and the fixed host and a burst UDP stream flowing between the MN2 and the fixed

host. The UDP sender sent packets at a constant packet rate for a specific time (ON-Time), then

backs off for a while (OFF Time) and starts sending packets again, after the OFF-Time expired. The

ON- and OFF-Times were chosen to be distributed exponentially with a mean value of 100 packets

at an application layer bandwidth of 1 MBit/s. The packets contained 1472 bytes of payload. Given

these numbers, the mean ON- and OFF-Time can be computed to be 1.1 s.

Since ON- and OFF-Times are identically distributed the average bandwidth used by the burst UDP


           Upstream                                                        Downstream
           Λtransm.,UDP    Λtransm.,TCP    Λtransm.,UDP + Λtransm.,TCP     Λtransm.,UDP    Λtransm.,TCP    Λtransm.,UDP + Λtransm.,TCP
           [kByte/s]       [kByte/s]       [kByte/s]                       [kByte/s]       [kByte/s]       [kByte/s]
Run 1      54,3            489,8           544,2                           62,5            655,5           718,0
Run 2      61,5            490,9           552,3                           62,3            652,7           715,1
Run 3      56,9            494,8           551,7                           62,5            656,0           718,4
Run 4      58,2            476,9           535,2                           62,8            662,2           725,0
Run 5      55,5            491,2           546,8                           62,4            649,5           711,9
Run 6      56,3            481,9           538,1                           62,5            647,6           710,0
Run 7      59,4            496,3           555,8                           62,4            652,1           714,5
Run 8      59,2            435,7           495,0                           62,5            654,7           717,2
Run 9      59,3            490,5           549,8                           62,4            642,8           705,2
Run 10     60,6            430,6           491,2                           62,4            663,6           726,0
Average    58,1            477,9           536,0                           62,5            653,7           716,1
Std.Dev    2,3             24,3            23,5                            19,8            18,8            6,0

Table 4.7: Transmission throughput of a TCP stream competing with a 500 kBit UDP stream over Wireless LAN

              UDP/UDP    500 kBit/s UDP Crosstraffic    3 MBit/s UDP Crosstraffic    TCP/TCP
Upstream      659586     564976                         570523                       447047
Downstream    665180     716137                         650742                       670537

Table 4.8: Cumulative Transmission Throughput of TCP over Wireless LAN with different constant packet rate UDP streams

stream can be set to half of the bandwidth used in the ON-Times. Thus, the 1 MBit/s burst UDP

stream has the same long term average rate as the 500 kBit/s constant packet rate UDP stream.

Compared to a 500 kBit UDP stream competing with TCP, a TCP stream competing with a 1 MBit

burst UDP stream gains a higher throughput, which seems counter-intuitive. Since TCP uses an

adaptive method to adjust to the maximum throughput, it takes time to adjust to the new available

bandwidth when the UDP burst sender changes from an ON-period to an OFF-period. When chang-

ing from an OFF-period to an ON-period, it was expected, that TCP encounters packet loss due to

congestion and hence halves its congestion window. No retransmissions have been observed in the

traces, since the receiving window limited the sending rate of TCP instead of the congestion window.

Plotting the results that have been observed so far leads to Figure 4.14. The values used for the

graphs are the sum of the transmission throughputs of both streams.

The graphs show, that the maximum throughput achieved over WLAN, is lower in upstream



Figure 4.14: Cumulative Throughput of TCP over Wireless LAN with different constant packet rate UDP streams

direction than in downstream direction in every case. This could be a direct effect of the overloading

problem of the 3COM card.

In downstream direction, it was expected, that the throughput degradation in the UDP/UDP sce-

nario compared to the UDP scenario is negligible, since only traffic sent by the access point arrives on the wireless

links. Influences due to collision avoidance and exponential back off on link layer should not occur,

since only the access point sends packets over the wireless link. In contrast to the expectation, the

throughput degrades severely. In scenarios combining a TCP connection with a UDP stream, the

throughput was expected to degrade, since the mobile node receiving data over TCP sends ACK

packets. Thus, collisions while accessing the wireless medium resulting in lowered throughput were

expected in downstream. The tendency to lowered throughput should fall with higher bandwidth

UDP streams, since TCP should adapt to a lower throughput resulting in fewer ACK packets and

thus in less collisions. In contrast to this expectation, two concurrent TCP connections gain the

same throughput compared to the UDP/UDP scenario.


In upstream direction, the graph meets the expectations regarding collisions on link layer. A TCP

connection with a concurrent 500 kBit/s UDP stream gains slightly less throughput than a TCP

connection with a 3 MBit UDP stream. Since the overhead added by UDP is less than the overhead

added by TCP, the given result can be expected. Thus, two concurrent TCP connections gain least

throughput. Considering that the access point and two mobile nodes are accessing the wireless link

and two nodes send long data packets of 1514 bytes (including all headers), confirms the observation

of a lowered throughput.

4.2.5 Influence of Handovers on TCP Performance

The last experiments made with Wireless LAN links considered the issue of handovers. The mobil-

ity support network architecture was set up for handover experiments. The mobile node was first

associated to the access point in the home network. Approximately 5s after establishing the data

flows, the mobile node was handed over to the foreign network by changing the SSID. Thus, the

handover was mobile node initiated.

Two simultaneous flows were established before the mobile node was handed over. A low bandwidth

UDP flow was used as probing flow to measure the time of reestablished network layer connectivity.

UDP packets with a payload size of 50 bytes were sent at an application layer bandwidth of 50

kbyte/s. Hence, nominally one UDP packet was sent every 1ms, so the precision of measuring net-

work layer connectivity can be assumed to be 1 ms. A TCP connection was established simultaneously

to the UDP flow.

The instantaneous throughput, that shows graphically every detected packet, is shown in Figure

4.15.

The figure shows, that TCP restarts transmitting data only after approximately 30s, although network layer connectivity is recovered after approximately 15s. To prove this behavior, the measurement

was repeated 10 times. The results of 10 runs can be found in Table 4.9. The handover delays have

been calculated as presented in Section 3.3.5.

The results raise three questions:

• The handover delays are rather long. What is the main influence that leads to this result?



Figure 4.15: Instantaneous Throughput in a WLAN handover situation

• TCP almost doubles the handover delays. What mechanism causes this effect?

• Run 5 did not suffer from TCP delays as long as the others. What has changed compared to the

other runs?

To answer the first question, the message flow while performing a handover was investigated.

The evaluation showed that assigning a new IP address to the mobile node via DHCP [20] causes

network layer disconnection delays of over 11s. The problem is visualized in Figure 4.16.

After the mobile node is handed over and link layer connectivity is reestablished, the mobile

node broadcasts a DHCP-Request to find the new DHCP server and to ask for a new IP address.

The IP address, that was assigned to the mobile node before the handover was performed, is inserted

as source address into the broadcast packet. The new DHCP server does not respond to this re-

quest, since the node that is asking for a new IP address does not belong to its subnet. After 1s, the

DHCP client on the mobile node times out and requests to release its IP address, again with its old

IP address inserted as source IP address. Since the DHCP server does not respond to this request,


            Handover Delay ∆network [s]    TCP Delay ∆TCP [s]    TCP Gap ∆TCP − ∆network [s]
Run 1       15,91                          29,046                13,13
Run 2       14,65                          29,16                 14,51
Run 3       15,12                          29,22                 14,10
Run 4       15,68                          29,22                 13,54
Run 5       15,02                          16,39                 1,38
Run 6       15,65                          29,06                 13,41
Run 7       16,19                          29,22                 13,03
Run 8       14,90                          29,05                 14,15
Run 9       14,92                          29,04                 14,12
Run 10      14,88                          29,05                 14,17
Mean        15,29                          27,85                 12,55
Std.Dev     0,52                           4,02                  3,96
95%-Conf.   0,010                          0,080                 0,078

Table 4.9: Handover delays of TCP in a WLAN handover scenario

the DHCP client on the mobile node times out again after 10s. At this point, the normal method

of obtaining a new IP address is performed and the mobile node achieves network connectivity in

short time. The remaining 5s are caused by Mobile IP and its message signaling.

To answer the second question, the packet traces at the sender side were analyzed. Since the sender

does not receive any ACK packet for the last few packets, the retransmission timer fires and a packet is retransmitted. For Run 3, the measured retransmission timeout is 230 ms. Due to the handover delay, the sender does not receive an ACK for 15s; the retransmission timer is doubled on every failed retransmission (x times within 15s) until the value for the retransmission timer reaches 14,72s and an ACK is received from the mobile node. Summing up the individual retransmission

timeout values leads to a value of 29,21s, which corresponds to the value measured as TCP Delay.

Analyzing Run 5 leads to the result, that the initial retransmission timeout value was computed

to 260ms. Thus, the retransmission timeout needed to double only until it reached 8320ms, since

after that amount of time an accumulated time of 16,38s had expired. From the measurement, a

handover delay of 15s can be observed. So, the retransmission timer did not need to double again

to retransmit a new data packet.
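The accumulated timeout values of both runs can be reproduced with a small sketch that simply doubles the retransmission timer until the accumulated waiting time exceeds the network layer outage (a simplification that ignores any upper bound on the retransmission timer):

    def accumulated_backoff(initial_rto, outage):
        # sum of exponentially backed-off retransmission timeouts until the
        # accumulated waiting time exceeds the network layer outage [s]
        total, timer = 0.0, initial_rto
        while total < outage:
            total += timer
            timer *= 2
        return total

    print(accumulated_backoff(0.230, 15.0))    # ≈ 29.21 s, as measured in Run 3
    print(accumulated_backoff(0.260, 15.0))    # ≈ 16.38 s, as measured in Run 5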

4.3 TCP over Bluetooth

In a second series of experiments, the influence of a Bluetooth wireless link on TCP was evaluated.

Since Bluetooth encounters different link characteristics than WLAN due to different access and



Figure 4.16: A Problem in the DHCP-Client of the mobile nodes causes long network layer disconnection

data transfer techniques, the influences of these characteristics have been evaluated.

4.3.1 Performance of a Bluetooth link in adhoc scenario

In a first scenario, the maximum throughput that is achievable over a Bluetooth link was measured using a constant packet rate UDP stream in an adhoc scenario. Two mobile nodes were

equipped with a Belkin Class 2 USB Bluetooth adapter. A Pentium4 2.66 GHz laptop with 512 MB

RAM running WindowsXP (referred as MN1) acted as master device in the piconet. A Pentium

266MHz laptop with 32MB RAM running WindowsXP (referred as MN2) acted as slave device. A

wireless network connection was established using the Personal Area Network (PAN) application

profile. The mobile nodes were situated at a distance of 1 m. The network model is shown in Figure

4.17

Throughput measurements have been done for UDP and TCP in upstream and downstream



Figure 4.17: Network model of Bluetooth adhoc scenario

direction2 and are described in the following section.

UDP throughput of a Bluetooth link in adhoc scenario

In the first experiment, a constant packet rate UDP stream was applied to analyze the maximum achievable throughput.

A constant packet rate UDP stream was established between both mobile nodes to measure maximum

throughput. UDP packets with a payload size of 1472 bytes per packet were sent at an application

layer bandwidth of 700 kBit/s. Taking the different headers of the different layers into account, the

network layer bandwidth can be computed to 720 kBit/s. The transmission throughput developments

of every single run are graphically shown in Figure 4.18.


Figure 4.18: UDP performance over a Bluetooth link in adhoc scenario

Computing the average transmission throughput after 10s over the 10 runs leads to Table 4.10.

Performing a t-test on both data series with a significance level of 5% leads to the result, that

2 Upstream and downstream direction are seen from the slave device's point of view.


            Downstream [kByte/s]    Upstream [kByte/s]
Run 1       80,5                    80,1
Run 2       80,4                    79,9
Run 3       69,3                    78,4
Run 4       80,8                    79,9
Run 5       80,6                    78,2
Run 6       80,9                    80,5
Run 7       80,4                    81,0
Run 8       80,2                    80,9
Run 9       81,0                    80,3
Run 10      80,7                    80,6
Average     80,6 (3)                80,0
Std.Dev.    0,24 (3)                0,955

(3) Run 3 was left out when calculating the average and standard deviation.

Table 4.10: Average Transmission Throughput of UDP after 10s in a Bluetooth Adhoc Scenario

the UDP transmission throughput after 10s is not significantly different in upstream and downstream

direction. Since the effect that caused the throughput degradation in Run 3 in downstream direction

was observed more frequently in a scenario using an access point, an explanation for this effect will

be given there.

TCP throughput over a Bluetooth link

The experiment from above was repeated with a TCP stream instead of a UDP stream to measure

the performance of TCP over a single Bluetooth hop. A sender and receiver buffer of 85 kByte was

used for the TCP connection. The measured transmission throughput developments over time can

be found in Figure 4.19.

The graphs show that in upstream direction a maximum transmission throughput of 75 kByte/s is achieved after 10 s in most cases. In two cases the throughput drops to a lower level, resulting in reduced transmission throughput. In downstream direction, only the low throughput level is achieved by TCP. A conspicuous sawtooth pattern can be observed on the downstream graphs. Analyzing the packet flow on the receiver side shows regular idle times of the connection. These idle times are caused by ACK packets that encounter long RTTs. Since the receiver advertised an RWND of 17520 bytes for every arriving data packet, the sending rate was limited by the RWND once the CWND had reached the RWND. Thus, a new packet is only injected into the network upon reception of an ACK. Hence, an ACK with a long delay causes a delayed data packet injection and thus a lowered throughput.
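A rough bound illustrates why the advertised window dominates here: a window-limited TCP connection can transfer at most one RWND per RTT,

\text{throughput} \le \frac{RWND}{RTT}, \qquad \frac{17520\,\text{bytes}}{0.2\,\text{s}} \approx 88\,\text{kByte/s}, \qquad \frac{17520\,\text{bytes}}{0.3\,\text{s}} \approx 58\,\text{kByte/s},

using the RTT baselines of roughly 200 ms (upstream) and 300 ms (downstream) reported in the next section. This is consistent with the upstream runs approaching the link capacity while the downstream runs stay on the low throughput level.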


Figure 4.19: TCP performance over a Bluetooth link in adhoc scenario (downstream and upstream transmission throughput in bytes/sec over time for Runs 1-10)

The characteristics of the RTTs over Bluetooth are discussed next.

RTTs over a Bluetooth link

Using the measurements from above, the RTTs have been calculated. The RTTs over time of one

single run in downstream and upstream direction are shown in Figure 4.20.

Figure 4.20: Single RTT measurement over Bluetooth in adhoc scenario (RTT in ms over time, downstream and upstream)

The upstream RTT graph initially shows an increase of the RTT until it reaches a value of 200 ms after almost 1 s. After the increase, a relatively constant value of 200 ms can be observed, with additional peaks at equidistant intervals being conspicuous. In downstream direction, an


initial peak, caused by a delayed first data packet, is visible. After this first peak, the RTT increases until it reaches 200 ms. After the increase, a baseline of 200 ms with two additional peaks is visible in the time interval [1.5 s - 4 s]. The RTT then increases again until it reaches 300 ms; in this second increase phase, an additional peak can be observed. Until the connection is terminated, an RTT baseline of 300 ms with additional peaks is visible.

Since no possibility was given to analyze the dynamics of the Bluetooth link and its baseband, a proven explanation for this behavior cannot be given. It is assumed that the phases of linear increase of the RTT are the result of buffering on baseband level. The peaks could be an indicator of Bluetooth's flow control scheme at link layer. If the Bluetooth node buffers ACKs before sending them in the low bandwidth direction, the buffering adds time to the RTT and thus a high RTT would be visible. Since ACKs are sent bundled, the following ACKs would lower the RTT and a falling edge of the peak would be visible. On the other hand, sending ACKs in the low bandwidth direction blocks data packets from being transmitted, resulting in a rising edge of the peak.

Calculation of the mean RTT and its confidence interval in upstream and downstream direction

leads to Figure 4.21.

Figure 4.21: TCP performance over a Bluetooth link in adhoc scenario (per-run mean RTTs with confidence intervals, downstream and upstream)

4.3.2 Throughput of a Bluetooth link using an access point

The single access point network architecture was changed to support Bluetooth. The WLAN access

point was replaced by a BLIP Bluetooth access point. The Bluetooth piconet was formed with the


access point as master and the mobile node as slave device. The distance between the mobile node

and the access point was 1 m. To support sending and receiving Ethernet packets over Bluetooth, the PAN application profile was used. The mobile node did not move from its location, nor did any mobile-node-initiated handover take place.

UDP Throughput over a Bluetooth link using an access point

In the first experiment, a constant packet rate UDP stream was sent in downstream and in upstream

direction for 30 s. The UDP payload size was set to 1472 bytes at an application layer bandwidth

of 700 kBit/s. The result of one single run in downstream direction is shown in Figure 4.22.

Figure 4.22: Single UDP performance run over Bluetooth using an access point (instantaneous, instantaneous averaged and transmission throughput over time, downstream)

It can be observed that the throughput is almost constant at 75 kByte/s for approximately the first 15 s and then drops to a lower level of 60 kByte/s. In order to measure the dynamics of the

throughput dropdown, the measurements have been repeated 10 times in upstream and downstream

direction. The results are graphically shown in Figure 4.23.


Figure 4.23: UDP transmission throughput over Bluetooth using an access point (downstream and upstream, Runs 1-10)

It can be seen that in downstream direction the time of the throughput dropdown is distributed over an interval of [3 s - 30 s]. In upstream direction, some streams initially achieve a transmission throughput of 60 kByte/s, but drop down to 50 kByte/s. Other streams send only at a transmission throughput of 50 kByte/s.

Two possible reasons can be given here:

• Additional traffic in low bandwidth direction

• Interference

To achieve maximum bandwidth in a unidirectional data flow, Bluetooth will adjust its wireless link to maximum asymmetry, providing maximum bandwidth in the data flow direction while lowering the provided bandwidth in the reverse direction to a minimum. If packets are queued for sending in the low bandwidth direction, Bluetooth adjusts the link towards more symmetry to send those

packets. This behavior ensures fairness between flows competing for bandwidth in downstream and

upstream direction. To realize this behavior in Bluetooth, the slave can send longer packets in low

bandwidth direction. Instead of sending 1-slot packets, the node would send 3-slot or even 5-slot

packets to transfer the queued packets. Bluetooth also defines support for basic flow control over a

Bluetooth wireless link. A node can transmit a flow control flag that advises the communication partner to stop sending further data packets until the flag is cleared. Both options are

defined in [46], but each node can implement its own packet scheduling and flow control algorithms.


Since the probability of a throughput dropdown has been observed to be higher in the access point scenario compared to the adhoc scenario, this could be an indication for the effect described above, since more background traffic is generated by the routers. On the other hand, the appearance of background traffic could not be correlated in time with the throughput dropdowns.

Interference on the wireless link can cause a dropdown of the maximum achievable throughput. Bluetooth introduces an adaptive scheme that chooses packet types, and thus the level of error correction and link layer retransmission, according to the observed channel quality. This scheme is termed "channel quality driven data rate" in Bluetooth. The overhead introduced by error correction codes and potential retransmissions lowers the maximum achievable throughput. A change in interference could therefore cause the dropdown effect.

The algorithms for packet type choice and packet scheduling are usually implemented in the Bluetooth device's hardware, so that the behavior of the baseband cannot be measured and analyzed. A detailed analysis of these effects can hence not be given here.

TCP Throughput over a Bluetooth link using an access point

The network architecture was reused to measure TCP's behavior over a Bluetooth access point. A TCP connection was set up from the server to the mobile node. Bulk data was transferred over a period of 30 s. Ten measurements of the data transfer were made for upstream as well as downstream

direction. The results are shown in Figure 4.24.

Figure 4.24: TCP transmission throughput over Bluetooth using an access point (downstream and upstream, Runs 1-10)


The results show that in downstream direction some TCP connections start to approach the high throughput level, but fall down to the low throughput level after a short time. The other TCP connections do not even start to reach the high throughput level and converge to the low level directly. In upstream direction, no throughput drop is encountered and all runs show relatively constant results. A conspicuous break after approximately 1 s is visible in the upstream graphs. An analysis of the packet flows showed that in TCP's slow-start phase packet loss is encountered due to a high initial value of ssthresh. TCP reacts to packet loss in the slow-start phase by retransmitting the lost packets, halving ssthresh and restarting slow-start by setting the CWND to 1. Thus the packet rate is set to 1 packet per RTT and doubled every RTT. This leads to low instantaneous throughput and a drop in transmission throughput.
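As a brief illustration of this reaction (a minimal sketch of the Reno-style behavior on a retransmission timeout, not the TCP stack used in the measurements):

// Reno-style reaction to a retransmission timeout during slow-start.
#include <algorithm>

struct CongestionState {
    unsigned cwnd;      // congestion window in segments
    unsigned ssthresh;  // slow-start threshold in segments
};

void onRetransmissionTimeout(CongestionState &s, unsigned flightSize) {
    s.ssthresh = std::max(flightSize / 2, 2u);  // halve the outstanding window
    s.cwnd = 1;                                 // restart slow-start at one segment
}

void onAckDuringSlowStart(CongestionState &s) {
    if (s.cwnd < s.ssthresh)
        s.cwnd += 1;  // one additional segment per ACK, i.e. doubling per RTT
}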

TCP RTTs over a Bluetooth link using an access point

The RTTs of Bluetooth observed when using an access point showed similar behavior compared to the adhoc scenario. The average values and their confidence intervals are visualized in Figure 4.25.

Figure 4.25: TCP RTTs over a Bluetooth link using an access point (per-run mean RTTs with confidence intervals, downstream and upstream)

The RTTs measured using an access point are consistent with the considerations and results about additional network traffic and interference: a low throughput value and throughput drops caused by network traffic or interference lead to a high RTT, and vice versa.


4.4 Conclusions

The chapter aimed to evaluate the performance of TCP in the considered scenarios using the given

performance metrics.

An evaluation of the backbone showed that the serial links provided sufficient bandwidth not to limit the bandwidth of Wireless LAN 802.11b or Bluetooth. The influences of Ethereal and the operating system on the measurements have also been presented.

The performance of TCP over WLAN and the influence of BERs, crosstraffic and handover situations

have been evaluated. The evaluation showed the following results:

• Single TCP connection: A single TCP connection in downstream direction reduced the throughput by about 15% compared to UDP. In upstream direction, influences of the WLAN adapter potentially caused UDP throughput dropdowns when the card is overloaded.

• Influence of BERs: The influence of BERs was measured using different distances between the access point and the mobile node. The evaluation showed that with higher expected BERs the throughput drops to a lower level, but TCP retransmissions are not triggered thanks to link layer retransmissions. Hence, link layer retransmissions are an effective method to hide link layer packet losses from TCP.

• Influence of crosstraffic: Different concurrent crosstraffic has been applied while sending data over a TCP connection. The evaluation showed that TCP performs quite efficiently in downstream direction in the presence of competing streams. In upstream direction, influences of the WLAN adapter card might cause throughput degradation when the card is overloaded.

• Influence of handovers: The influence of handover events on the performance of TCP has been evaluated. The evaluation showed that TCP can potentially double the network handover delay. It was also found that a problem with the IP address assignment via DHCP adds 10 s to the network layer handover delay. A proxy might improve the performance of TCP in handover situations.

The performance of TCP over a Bluetooth link was investigated in an adhoc scenario and an

access point scenario. The evaluation showed the following results:


• Adhoc scenario: The evaluation showed that a throughput of approximately 80 kByte/s

can be gained with UDP and a TCP throughput of around 75 kByte/s in upstream direction.

In downstream direction, a lower throughput has been observed, since the average RTT was

higher compared to the upstream case and the CWND hit the RWND before running into

congestion.

• Access point scenario: In an access point scenario, throughput dropdowns have been observed when sending UDP datagrams and TCP segments. These dropdowns could be caused by either interference or additional traffic in the low bandwidth direction.


Chapter 5

Implementation of the TCP Proxy

An integration of a TCP Proxy into a wireless supporting network covers considerations about

the location of the proxy in the logical path from the sender to the receiver, the actual integration of the TCP proxy into the network, as well as the software design.

5.1 Proxy Terminology

The concept of a proxy is well known and widespread in the Internet. Proxies are mostly used to cache frequently needed data in order to reduce response times for requests. For example, HTTP proxies are used in larger networks to cache websites on local permanent storage. A client does not send its request directly to the host containing the website, but to the HTTP proxy. The HTTP proxy looks up the website in its local storage and delivers the page to the client in case the proxy has cached the page before. Otherwise, the proxy forwards the request to the server, stores the reply of the server in its own storage and forwards it to the client. Performance improvements are

gained due to the shorter path from the client to the proxy, but depend on the size of the memory

and the strategy of keeping the cached websites up to date.

A proxy does not necessarily only enhance performance; it can also enhance security. A proxy might filter certain requests in order to protect the internal network. Such proxies are typically combined with a firewall.

There are many different types of proxies [8] that are intended to enhance performance. Performance

enhancement does not necessarily mean enhancing only throughput. For example, M-TCP tries to

keep up the TCP connection over a wireless link while a handover is in progress. Proxies can be


described and categorized in different classes:

• Layer: Application Proxies like an HTTP-Proxy or a Mail Transfer Agent (MTA) are based on

the application layer and try to improve performance and reliability. Transport Layer Proxies

are mainly used to improve the throughput and round trip time (RTT) of data packets sent over an IP network. They mainly modify packet sequences and acknowledgements.

They are not aware of the application protocol and its payload content transported in their

packets.

• Distribution: Integrated proxies consist of one node, where the performance improvements

are applied. Distributed proxies consist of two or more nodes enhancing the performance. For

example, distributed proxies could be used every time a wireless link meets a wired link to

overcome the performance problems of the different medium types.

• Symmetry: Symmetric proxies behave the same way on both sides, to the client and to the

server. Asymmetric proxies behave differently towards the two sides. For example, a proxy between a wireless network and a wired network would typically be asymmetric, whereas an HTTP proxy would be symmetric. Symmetry is independent of the type of distribution.

An asymmetric proxy could be integrated, for example, when a wireless link meets a wired

link, but could also be distributed, with different implementations on both ends.

• Transparency: The proxy may be transparent on network layer, on transport layer, on

application layer or even on user level.

5.2 The Split TCP Approach

The split TCP approach is based on the consideration that two separate TCP connections can be optimized individually for two different kinds of links. Figure 5.1 shows the

approach from a protocol stack point of view.

A TCP connection established between a fixed host and the mobile node would be intercepted and broken into two parts by the TCP proxy. An application on the fixed host that uses a TCP connection to communicate with an application on the mobile node would not notice any changes to the functionality of the underlying TCP layer.


Figure 5.1: The Split TCP Approach (protocol stacks of the fixed host, the TCP proxy with the split TCP daemon, and the mobile node)

A split TCP daemon running on top of the TCP layer in the proxy would watch for incoming data on one TCP connection and forward

it to the associated TCP connection. The strength of this approach is the ability to optimize the

TCP implementation over the wireless link, while the TCP connection to the fixed host can remain

a standard implementation.

For the remainder of this thesis, it is assumed that the communication partner that initiates the TCP connection is also the sender of data in the scenario. The TCP connection from the sender and initiator to the TCP Proxy will be termed "left connection", whereas the TCP connection from the TCP Proxy to the receiver of data will be called "right connection".
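To illustrate the split idea, the following is a minimal sketch using ordinary POSIX sockets rather than the packet-level engine described later in this chapter; the addresses and ports are placeholders and error handling is omitted:

// Minimal sketch of the split-TCP idea: accept the "left" connection from the
// sender and open an independent "right" connection towards the receiver.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // Listen for the left connection (sender -> proxy).
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in laddr{};
    laddr.sin_family = AF_INET;
    laddr.sin_addr.s_addr = INADDR_ANY;
    laddr.sin_port = htons(5000);                       // placeholder proxy port
    bind(lsock, (sockaddr*)&laddr, sizeof(laddr));
    listen(lsock, 1);
    int left = accept(lsock, nullptr, nullptr);

    // Open the right connection (proxy -> receiver) independently.
    int right = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in raddr{};
    raddr.sin_family = AF_INET;
    raddr.sin_port = htons(5001);                       // placeholder receiver port
    inet_pton(AF_INET, "192.0.2.10", &raddr.sin_addr);  // placeholder receiver address
    connect(right, (sockaddr*)&raddr, sizeof(raddr));

    // Forward data from the left to the right connection. Each connection runs
    // its own congestion control, which is the core of the split approach.
    char buf[4096];
    ssize_t n;
    while ((n = read(left, buf, sizeof(buf))) > 0)
        write(right, buf, n);

    close(left); close(right); close(lsock);
    return 0;
}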

5.3 Security

When using a proxy as an intermediary network node to enhance the performance of TCP, some

implications and limitations to security have to be considered.

Security can be applied on different layers. Applications using security on application layer can

benefit from the use of a TCP proxy, since the encrypted payload of TCP is not modified. Security

applied on transport layer, like Transport Layer Security (TLS, [19]) can be used in conjunction with

a TCP Proxy. In contrast, network layer security like IPSec [28] cannot be used with an intercepting proxy, since the TCP packets are encrypted and thus not accessible to the proxy. In case the proxy

is non-transparent, IPSec can be used between the proxy and the mobile node and an additional

IPSec connection between the correspondent node and the proxy. In most cases, this behavior is


not desired as the end system cannot trust the proxy in general. Additionally, if different levels of

security are applied from the proxy to the end system, the end system with the higher security level

might not be aware of the lower security level on the other end system. This can lead to false security assumptions. The proxy could prevent such a scenario by ensuring the same security levels

on both sides.

5.4 Proxy Location

Before integrating the proxy as a node into an existing network and implementing the software on

the node, some considerations about the location of the proxy in the network and its implications

will be presented.

5.4.1 Meta-Model

As a general meta model, the network can be divided into two parts. One part of the network

enables the mobile node to be wirelessly connected to the rest of the network. This part is called

”wireless supporting network”, while the counterpart of the network is termed ”wired network”.

By that definition, the border between the wireless supporting network and the wired network is not clearly defined, since the wireless supporting network could on the one hand consist of a single access point, while on the other hand multiple access points or base stations could be connected via a wired backbone to the wired network. This definition gets more precise by introducing the TCP proxy into the network. The TCP proxy simply resides between the wireless supporting network and the

wired counterpart. An overview of the meta-model is given in Figure 5.2.

Figure 5.2: Meta-model for the network integration of a TCP proxy (mobile node - wireless supporting network - TCP proxy - wired network - correspondent host)

Note that the part of the network that provides wireless access need not necessarily provide


mobility in that network.

5.4.2 TCP proxy with handover support in intrasubnet handovers

Dimensioning the size of the wireless supporting network to optimize the performance is the main

issue and covers a lot of aspects. In TCP's case it would be desirable to implement the TCP proxy as

close as possible to the wireless access point or integrate it directly into the access point, so that the

TCP connection over the wired part of the network to the correspondent host is as long as possible.

The standard TCP implementations are well known to perform well under such conditions. At the same time, the influence of the wireless link on the wireless TCP connection is maximized and

can be optimized best. Figure 5.3 shows one possible scenario with TCP proxies implemented as

close to the wireless access points as possible.

Figure 5.3: Model for network implementation of a TCP proxy supporting proxy handover in intrasubnet handover scenarios of a mobile node

In a quasi-static scenario, where the mobile node is always connected to one access point during the lifetime of the TCP connection, this scenario is expected to perform best. When mobility comes into play and the mobile node is handed over from

one access point to another access point, the associated proxy will be changed. Thus, the connection

state information maintained at the proxy has to be transferred to the new proxy (it is assumed that the TCP proxy maintains a hard state, meaning that a TCP connection cannot persist if the proxy's state is lost or the proxy stops its service; a full definition of the terms hard state and soft state can be found in [16]). This approach would follow the model of a distributed proxy. The drawback of this location design - besides the


hardware effort - would be frequent proxy handovers that can lead to enormous traffic if the number

of users and access points is very high.

5.4.3 TCP proxy with handover support in intersubnet handovers

The number of proxy handovers could be reduced by grouping all access points that belong to the

same subnetwork and assigning them to the same TCP proxy. Figure 5.4 shows a possible scenario

with access points in the same subnetwork assigned to one TCP proxy.

Figure 5.4: Model for network implementation of a TCP proxy supporting proxy handover in intersubnet handover scenarios of a mobile node

This approach would reduce the number of handovers, since handovers in the same subnet do not

need to change the TCP proxy. In cellular networks, access points in the same subnet normally cover neighboring cells, so that a mobile node would stay connected to the same TCP proxy within some larger geographical area. Drawbacks of this approach are that the TCP proxy has to be handed over if the

mobile node moves to another subnet, and that the TCP proxy is further away from the access point.

5.4.4 TCP proxy without proxy handover

In order to remove the necessity of TCP proxy handovers when the mobile node hands over to a new access point, support for intersubnet handover of the mobile node has to be provided by the TCP proxy. Thus, all access points providing wireless access to a network have to be assigned to

the TCP proxy. A model for this scenario is shown in Figure 5.5.


Figure 5.5: Model for network implementation of a TCP proxy without proxy handover in handover scenarios of a mobile node

The TCP proxy would reside at a single point of attachment of the wireless supporting network

to the wired network. A mobile node would always be connected to this single TCP proxy, regardless of whether an intrasubnet or intersubnet handover is performed. Figure 5.5 shows the model of a single TCP proxy supporting a wireless supporting network. Drawbacks are the again increased distance of the TCP proxy to the wireless link and scalability. Scalability could be a problem for a TCP proxy in case many users are attached to the wireless supporting network: the TCP proxy has to maintain every TCP connection that is established over the wireless supporting network, which can easily

lead to memory shortage or increased processing time of the TCP proxy.

5.4.5 TCP proxy with Mobile IP

Mobile IP [38] introduces two entities into the logical path between the mobile node and the correspondent host to support mobility: a Home Agent and a Foreign Agent. In case the mobile node is located in a visited network, the data flows from the correspondent node to the Home Agent, which passes the packets through an IP tunnel to the Foreign Agent, which forwards the decapsulated IP packets to the mobile node. An overview of the Mobile IP data flow can be found in Figure 5.6.

A TCP proxy could in principle be inserted on any of the subpaths between the mobile node and the correspondent node, resulting in different implications and drawbacks.


Figure 5.6: Model of the Mobile IP data flow (correspondent host - Home Agent - IP tunnel - Foreign Agent - mobile node)

In order to locate the TCP proxy close to the wireless link, it could be inserted between the Foreign Agent and the mobile node. The TCP proxy would eavesdrop on the TCP flow as well as on the Mobile IP requests and replies between the Foreign Agent and the mobile node and thus learn about mobile nodes that register in the new network. Once the mobile node is registered, its layer 3 connectivity is restored and the TCP connection could start sending data immediately instead of waiting for a retransmission timeout. Mobile nodes that have left the network can be identified by expired Mobile IP registration lifetimes. When a mobile node is handed over to a new subnet and the new TCP proxy learns about it, the TCP connection has to be handed over from the old TCP proxy to the new one. Since the Mobile IP messages contain no information about the previous location of the mobile node, some information has to be exchanged between the Home Agent and the Foreign Agent to overcome this problem. Once the TCP connection is handed over, it could immediately proceed with transmitting data.

Depending on the exact location of the TCP proxy between the Foreign Agent and the mobile node, frequent handovers of the TCP proxy, implying a state transfer from the old to the new TCP proxy, would be necessary (assuming that the TCP proxy maintains a hard state). As a lower bound for the number of handovers, it has to be considered that at least for every intersubnet handover the

state of the proxy has to be transferred.

Inserting the TCP proxy between the Home Agent and the Foreign Agent would lead to some other

implications on the functionality of the TCP proxy. The proxy would then need to intercept the

IP tunnel that is set up between the Foreign Agent and the Home Agent as well as the Mobile IP


requests and replies. Another drawback of this approach is that Mobile IP does not specify that

the communication from the mobile node to the fixed host has to pass the Home Agent. Thus,

packets may not pass the tunnel from the Home Agent to the Foreign Agent and hence the TCP

proxy could not eavesdrop on the packets. The TCP proxy could thus not provide its service.

Inserting the TCP proxy between the Home Agent and the fixed host would eliminate the need for

decoding the IP tunnel, but the distance between the TCP proxy and the wireless link would be increased further.

In this work, the TCP proxy was located in the same subnet as the Home Agent intercepting the TCP

connection between the correspondent node and the Home Agent. The interception was achieved using policy-based routing. Mobile IP registration messages have also been intercepted using policy-based routing. To ensure that the TCP packets on the path from the mobile node to the correspondent node are also intercepted by the TCP proxy, the Home Agent and Foreign Agent have been set

up to use reverse tunneling.

5.4.6 Conclusion about the Proxy Location

Concluding the considerations about the location of the proxy in a network, a trade-off has to be found individually for every network: a highly distributed TCP proxy implies a high number of proxy handovers and thus additional network traffic, but only modest hardware requirements for every single TCP proxy, whereas an integrated TCP proxy causes few or no proxy handovers, but possibly high hardware requirements for the single TCP proxy.

In this thesis, the routing-based approach (Section 5.5.5) was chosen.

5.5 Network Implementation

Depending on the different demands on the proxy and the underlying network structure, several solutions

to integrate the TCP Proxy into the network are possible. The solutions can be divided into two

categories, depending on the transparency to the end-user.

Non-transparent proxies are known to the end-user. The end-user has to specify the IP address of

the TCP proxy on the mobile node in order to take advantage of the TCP proxy. The TCP

packets are sent directly to the TCP proxy using the standard routing algorithms. Two approaches


for non-transparent proxy implementations will be introduced:

• Header option approach

• IP Tunneling approach

Intercepting and thus transparent proxies are hidden from the end-user. The end-user does not need to specify any use of the proxy and automatically takes advantage of the functionality of the

TCP proxy. The proxy itself has to find a solution to get into the logical packet path between the

communicating nodes. Three approaches implementing a transparent proxy will be presented in the

following sections:

• In Path approach

• ARP approach

• Routing approach

5.5.1 Header Option Approach

A first approach to integrate a non-transparent TCP proxy is derived from the architecture of an HTTP proxy. With an HTTP proxy, the request for an object is not sent directly to the destination host, but the HTTP request is sent to the HTTP proxy. The HTTP request itself contains the

requested HTTP object and the IP address of the destination host. The HTTP proxy forwards the

request to the destination host and sends the reply back to the originating host.

Based on this concept, a TCP connection has to be established to the TCP proxy carrying the information about the destination host. The proxy itself would establish a standard TCP connection to the destination host using the information carried in the header. The destination IP address could be carried in the IP or TCP header. A mobile node that wants to use the TCP proxy has to specify the proxy's IP address, as it has to be done for an HTTP proxy.

So, the source host would be able to send the IP packet with the piggybacked destination address

directly to the proxy. The proxy would extract the destination address and send the packet to the

right destination host. Figure 5.7 shows a typical data flow with the IP-Header Option implemented.


Figure 5.7: IP header modification model (the destination address of the correspondent node is carried in an IP option on the path from the mobile node to the proxy)

Some problems arise with this approach. First, the IP or TCP layer has to be changed and the

new header option has to be implemented. If a host does not support this IP option, it is not possible to use the TCP proxy. Secondly, the TCP proxy has to be specified in the IP layer of the

source host. As every TCP/IP-packet has to carry the IP-Address of the destination host, at least

6 bytes (1 byte for the option type, 1 byte for the option length and 4 byte for the destination IP

address) are needed. Given a fixed maximum frame length on network layer, the maximum segment

size would be reduced by 6 bytes. In standard Ethernet, this would mean a reduction of the MSS

from 1460 bytes to 1454 bytes. Given the assumptions that

• the network link provides constant bandwidth B and is fully utilized

• the maximum frame length (FL), measured in bytes, is constant

• every data packet transports the maximum amount of data

• every second data packet is acknowledged

• TCP is in a constant state and sending as much data as possible

the application layer throughput (ALT) can be computed by

ALT = \frac{2 \cdot MSS}{2 \cdot FL + AL} \cdot B \qquad (5.1)


where AL is the length of the acknowledgement. The accumulated header length of IP and TCP (without options) would be 40 bytes, so that a data packet can carry 1460 bytes and an ACK would be 40 bytes long. Given these numbers, the throughput would be

ALT_{st} = \frac{2 \cdot 1460\,bytes}{2 \cdot 1500\,bytes + 40\,bytes} \cdot B = \frac{2920\,bytes}{3040\,bytes} \cdot B = 0.9605 \cdot B.

Implementing the header option would cause an additional 6 bytes per packet, resulting in data packets that carry 6 bytes less data and ACKs that are 6 bytes longer. Thus the throughput would be

ALT_{impl} = \frac{2 \cdot 1454\,bytes}{2 \cdot 1500\,bytes + 46\,bytes} \cdot B = \frac{2908\,bytes}{3046\,bytes} \cdot B = 0.9547 \cdot B.

The performance degradation would be

\frac{ALT_{st} - ALT_{impl}}{ALT_{st}} = \frac{0.9605 \cdot B - 0.9547 \cdot B}{0.9605 \cdot B} = 0.006 = 0.6\,\%.

In case the TCP proxy only runs on one specific port, in order to allow more services to be run on the TCP proxy host, the destination port has to be added to the IP or TCP header, causing an additional 4 bytes of overhead.

A problem with this solution is encountered when the mobile node is not the connection initiator. Since only the stack of the mobile node could be changed, the initiating fixed host would send its connection request directly to the mobile node instead of the proxy. The same problem arises when two mobile nodes using a TCP proxy establish a TCP connection.

5.5.2 IP Tunneling Approach

The problem of changing the IP protocol by introducing a new IP option in the IP header can be avoided by using an IP tunnel [14] from the mobile node to the TCP proxy.

If one communication partner wants to send a packet to the receiver, the original IP packet containing the TCP packet is encapsulated into a second IP packet destined for the TCP proxy. This IP packet would then be forwarded to the TCP proxy by the standard routing algorithms. The TCP proxy receives the IP packet, decapsulates the inner IP packet and uses it for the TCP proxy algorithm. The TCP proxy itself establishes a normal TCP connection to the destination host. Figure 5.8 shows a model of a TCP proxy implementation using IP tunneling.

The drawback of this solution would be the additional amount of header data introduced by the second IP header. This further reduces the maximum segment size that can be used for data over IP. A standard additional IP header would increase the accumulated header size by 20 bytes (to 60 bytes in total) and reduce the theoretical maximum throughput to

ALT_{IPIP} = \frac{2 \cdot 1440\,bytes}{2 \cdot 1500\,bytes + 60\,bytes} \cdot B = \frac{2880\,bytes}{3060\,bytes} \cdot B = 0.9412 \cdot B.


Figure 5.8: IP tunneling model (the original IP packet is encapsulated in a second IP packet addressed to the proxy)

The relative reduction would be

\frac{ALT_{st} - ALT_{IPIP}}{ALT_{st}} = \frac{0.9605 \cdot B - 0.9412 \cdot B}{0.9605 \cdot B} = 0.020 = 2\,\%.

A consideration that could invalidate the equation is whether the additional IP header added by the IP tunnel causes fragmentation of each individual packet. Since the mobile node and the TCP proxy are endpoints of the IP tunnel, they both know about the reduced MSS and can thus reduce the segment size accordingly. Secondly, approaches like Path MTU Discovery [35] prevent those problems.

The amount of additional header information can be reduced by deleting all the unneeded header

information of the encapsulated IP header. This technique is called ”Minimal Encapsulation” and

is described in detail in [15].

The problem of TCP connections not initiated by the mobile node will not be solved using an IP

tunnel.
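For reference, the following small sketch (hypothetical helper names, not part of the proxy implementation) recomputes the throughput fractions of equation (5.1) for the standard case, the header option approach and the IP tunneling approach; its output matches the 0.6% and 2% degradations derived above:

// Recompute the application layer throughput fractions from equation (5.1).
#include <cstdio>

// ALT / B according to equation (5.1): (2 * MSS) / (2 * FL + AL)
double altFraction(double mss, double frameLen, double ackLen) {
    return (2.0 * mss) / (2.0 * frameLen + ackLen);
}

int main() {
    const double fl = 1500.0;                        // Ethernet frame payload length
    double st   = altFraction(1460.0, fl, 40.0);     // plain TCP/IP headers
    double opt  = altFraction(1454.0, fl, 46.0);     // 6-byte destination option
    double ipip = altFraction(1440.0, fl, 60.0);     // additional 20-byte IP header
    std::printf("standard:      %.4f of B\n", st);
    std::printf("header option: %.4f of B (-%.1f%%)\n", opt, 100 * (st - opt) / st);
    std::printf("IP tunneling:  %.4f of B (-%.1f%%)\n", ipip, 100 * (st - ipip) / st);
    return 0;
}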

5.5.3 In-Path Approach

An integration of an intercepting TCP proxy into a given network can be achieved by directly integrating the TCP proxy into the physical path from the mobile node to the correspondent host. A model of this approach is shown in Figure 5.9. The TCP proxy would be equipped with two network interface cards and used as a packet forwarder in the network. Packets that arrive at one network interface card are decoded and inspected for the TCP protocol.


Figure 5.9: In-path model

TCP packets would be sent to a running TCP proxy daemon, while all other packets would be forwarded directly to the

second interface card. The same logic applies to the reverse direction. A drawback of this scenario

is that breakdowns of the TCP proxy would directly lead to a disconnection of the complete wireless

supporting network. Secondly, not only the TCP traffic itself but all traffic would be sent over the

TCP proxy. Depending on the implementation of packet decoding and encoding, this could lead to

performance problems.

5.5.4 ARP Approach

In Ethernet networks, the Address Resolution Protocol (ARP, [23, 39]) is used to map IP addresses

to corresponding MAC addresses in a subnet. If a node in an Ethernet network has to send a packet

to a specific IP address in its subnet, the IP address is looked up in a local ARP table and the packet

is sent to that MAC address if an ARP entry is available in the table. If no entry matches the IP address, an ARP request is broadcast over the network to determine the mapping between

IP and MAC address. The requested node in the network sends a unicast ARP reply back to the

sender. If the IP address is outside the subnet, the packet is forwarded to the IP address of the

gateway.

The idea behind the ARP approach is to shadow the mobile node from the correspondent node and

vice versa. Since ARPs are only generated within a subnet, the IP address of the gateway that forwards the IP packets to the correspondent node is spoofed. For the gateway, the TCP proxy would appear at the IP address of the mobile node, and for the mobile node at the IP address of the gateway. Thus, the "trick" of the TCP Proxy would be to spoof the IP address of the mobile node as well as that of the gateway. This approach can be seen as a "controlled" man-in-the-middle attack. A model of

this approach can be found in Figure 5.10.


Figure 5.10: Model of the ARP approach

To realize the spoofed IP address, the TCP Proxy would send a spoofed ARP reply containing

the TCP Proxy’s MAC address on every ARP request. Thus, the mobile node would add an entry

to its local ARP table containing the IP address of the gateway and the MAC address of the TCP

Proxy. The mobile node would then send every packet to the TCP Proxy. The same scheme is

applied to the ARP table of the gateway. Since ARP entries time out after a certain amount of

time, ARP requests are sent to keep the ARP table up to date. Due to this behavior of ARP, an automatic failover - assuming a soft-state proxy - would be possible. In case of a defective TCP Proxy that no longer responds to ARP requests, the mobile node or gateway would reply to the ARP request itself and thus report being available again.
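As an illustration of such a spoofed ARP reply, the following is a hedged sketch using a Linux raw packet socket; the MAC addresses, IP addresses and the interface index are placeholders, and this is not the implementation used in the thesis:

// Send a crafted ARP reply that maps the gateway's IP address to the proxy's MAC address.
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/ethernet.h>
#include <net/if_arp.h>
#include <netinet/if_ether.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main() {
    unsigned char proxyMac[6]  = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};  // placeholder proxy MAC
    unsigned char targetMac[6] = {0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb};  // placeholder mobile node MAC
    in_addr gatewayIp{}, targetIp{};
    inet_pton(AF_INET, "192.0.2.1",  &gatewayIp);   // spoofed sender IP (the gateway)
    inet_pton(AF_INET, "192.0.2.50", &targetIp);    // mobile node IP

    // Ethernet header followed by the ARP reply payload (14 + 28 = 42 bytes).
    unsigned char frame[42];
    std::memcpy(frame, targetMac, 6);               // destination MAC
    std::memcpy(frame + 6, proxyMac, 6);            // source MAC (the proxy)
    frame[12] = 0x08; frame[13] = 0x06;             // EtherType: ARP

    struct ether_arp arp{};
    arp.ea_hdr.ar_hrd = htons(ARPHRD_ETHER);
    arp.ea_hdr.ar_pro = htons(ETHERTYPE_IP);
    arp.ea_hdr.ar_hln = 6;
    arp.ea_hdr.ar_pln = 4;
    arp.ea_hdr.ar_op  = htons(ARPOP_REPLY);
    std::memcpy(arp.arp_sha, proxyMac, 6);          // "gateway" now resolves to the proxy MAC
    std::memcpy(arp.arp_spa, &gatewayIp, 4);
    std::memcpy(arp.arp_tha, targetMac, 6);
    std::memcpy(arp.arp_tpa, &targetIp, 4);
    std::memcpy(frame + 14, &arp, sizeof(arp));

    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETHERTYPE_ARP));
    sockaddr_ll addr{};
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = 2;                           // placeholder interface index
    addr.sll_halen   = 6;
    std::memcpy(addr.sll_addr, targetMac, 6);
    sendto(sock, frame, sizeof(frame), 0, (sockaddr*)&addr, sizeof(addr));
    close(sock);
    return 0;
}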

The drawback of this approach is a potential timing problem. Since the TCP Proxy and the com-

munication hosts are both replying to ARP requests, both network nodes race for the entry in the

ARP table. The implementation has to ensure that the ARP entry containing the TCP proxy's address is always the most recent one in every network node that uses the TCP proxy. The stability of the TCP proxy will mostly depend on how effectively this problem can be solved.

5.5.5 Routing Approach

To intercept TCP connections between the mobile node and the correspondent node, the TCP Proxy

does not necessarily need to be located on the physical path between them, but in the logical path.

This fact leads to a last solution that is based on an adapted routing process.

The standard routing process is based on destination based routing. With destination based routing,

an incoming IP packet is routed by its destination IP address and sent accordingly to the interface,


that is specified in the routing table. The routing table itself can either be set up with static

routes manually entered into the router or dynamically with routing protocols like Border Gateway

Protocol (BGP, [43]), Routing Information Protocol (RIP, [32]) or Open Shortest Path First (OSPF,

[36]).

Assuming that all wireless supporting networks are connected via a central router to the wired network, the routing tables could be altered in such a way that the packets would be forwarded

to the TCP proxy located on one interface of the central router. Thus, the router has to support

three interfaces: one for the wireless supporting network, one for the TCP Proxy and one for the

wired network. In a first configuration, static routes could be used to route every packet to the

TCP Proxy. Every packet coming from the wireless supporting network as well as every packet from

the wired network would be sent to the TCP Proxy. However, packets that are sent from the TCP Proxy would also be sent back to the Proxy. Assuming that the TCP Proxy mirrors all packets that cannot be processed, the packets would oscillate between the router and the TCP Proxy and be discarded after the time-to-live of the IP packets expires.

A routing process that can also route a packet based on the source IP address or on the interface on which it has been received would be needed to avoid this oscillation. Policy-based routing [18] is the technology that can be used to achieve this behavior. Policy-based routing can be applied on different interfaces of the router individually and is performed before the standard destination-based routing on incoming packets. It allows rerouting of individual packets matching specific criteria to different interfaces. In this case, policy-based routing is applied on the interfaces connecting to the wireless supporting network and to the wired network. Every packet that is identified as a TCP packet is forwarded to the TCP Proxy. If the TCP Proxy now sends a TCP packet back to the

router, this packet is processed by the destination based routing and forwarded to the destination

network. Figure 5.11 shows a model of the implementation with altered routing tables.
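The decision logic can be summarized in the following sketch; the names are illustrative only, since the actual behavior is configured in the router rather than implemented in code:

// Model of the routing decision: policy-based routing on the two edge
// interfaces redirects TCP packets to the proxy, everything else (including
// packets coming back from the proxy) follows destination-based routing.
#include <cstdint>

enum class Interface { WirelessNet, WiredNet, ProxyIf };

struct Packet {
    Interface ingress;     // interface on which the packet arrived
    uint8_t   ipProtocol;  // 6 == TCP
    uint32_t  dstAddr;     // destination IP address
};

// Placeholder for the normal destination-based (longest prefix match) lookup.
Interface destinationBasedRoute(uint32_t /*dstAddr*/) {
    return Interface::WiredNet;
}

Interface route(const Packet &p) {
    // Policy rule applied on the two edge interfaces: TCP packets are
    // redirected to the TCP proxy before normal routing takes place.
    bool fromEdge = (p.ingress == Interface::WirelessNet ||
                     p.ingress == Interface::WiredNet);
    if (fromEdge && p.ipProtocol == 6)
        return Interface::ProxyIf;

    // Packets coming back from the proxy (and non-TCP traffic) follow the
    // standard destination-based routing, which avoids the oscillation.
    return destinationBasedRoute(p.dstAddr);
}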

Automatic failover can be achieved by combining policy-based routing with periodic ARP requests on the TCP Proxy. If the TCP Proxy goes down, the policy-based routing rules will be disabled and destination-based routing ensures the correct routing to the corresponding networks.

Policy-based routing requires a full inspection of the arriving packets and therefore needs more processing power in the routers.


Figure 5.11: TCP proxy implementation using policy-based routing

The routers used in the experiments were reported to be capable of processing 70,000 packets per second with destination-based routing, whereas the performance drops to 1,000 to 10,000 packets per second with policy-based routing [17].

Since this approach offered the most flexibility in most scenarios and did not need any changes in

the IP-stack of the mobile node, this implementation option was chosen.

5.5.6 Conclusion

The considerations about the network implementation of a TCP proxy result mainly in a tradeoff between the advantages and disadvantages of transparent and non-transparent proxy implementations. A non-transparent implementation would give the end-user control over whether to trust and use the TCP proxy, but using a TCP proxy in case the mobile node is not the connection initiator seems to be difficult if not impossible. The individual non-transparent implementations differ mainly in their amount of additional overhead.

Transparent proxies move the decision whether to use a proxy or not to the network operator. Thus, end users who are not willing to use the proxy or who need unmodified end-to-end connections are not able to disable the proxy functionality. The same problem is encountered when specific TCP connections want to use a TCP proxy while other connections need unmodified end-to-end connections. On the other

hand, TCP connections established to the mobile node could also benefit from the use of a TCP

proxy. The different transparent implementations differ mainly in their approach to get into the

logical path between the two end systems.


5.6 Proxy Functionality and Software Architecture

The TCP Proxy was implemented in three phases. In a first phase, a TCP Proxy that simply mirrors all incoming packets and sends them back to the network was implemented. In a second

phase the full split TCP connection approach was designed and implemented into the TCP Proxy.

Finally, a module to eavesdrop on Mobile IP messages was added to the TCP Proxy to provide a

performance enhancement option in handover situations.

5.6.1 Mirroring Proxy Implementation

In a first implementation, the functionality of a mirroring proxy was implemented. The aim of

the mirroring proxy was on the one hand to figure out which interfaces and possibilities for packet capturing, decoding, encoding and sending are provided by the Linux operating system, and on the other hand to develop a flexible and stable packet capturing, decoding, encoding and sending engine.

The basic high-level design is shown in Figure 5.12 and consists of three main entities:

• Packet listener module: This module provides functionality for capturing packets from the

network interface card (NIC) and storing the decoded packet into a buffer.

• Mirror Daemon: The daemon is responsible for mirroring the packets. It thus has to watch

for incoming packets that are stored in the buffer of the packet listener module and forward

the packets to the packet sender.

• Packet sender module: The packet sender provides a buffer to the mirror daemon to send

packets to the NIC. The task of the packet sender was to encode the packets and send them

to the NIC.

To achieve better performance, the three modules have been implemented in three different threads. The reason for choosing a threaded model came from the consideration that packets might get lost by the packet listener if the mirror daemon (or especially the later split-TCP daemon) uses too much processing time and thus the application could not start listening fast enough for the next packet. Since the modules are not independent of each other, but share common resources (the packet buffers), each packet buffer has been protected by a semaphore that counts the

number of packets currently stored in the buffer.

Figure 5.12: High-level design of the mirroring TCP Proxy (packet listener, mirror daemon and packet sender connected via packet buffers)
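A minimal sketch of such a protected packet buffer is shown below (the counting semaphore is built from a mutex and a condition variable here; the actual proxy classes may differ):

// Packet buffer shared between the listener, daemon and sender threads.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

using Packet = std::vector<unsigned char>;

class PacketBuffer {
public:
    // Called by the producing thread (e.g. the packet listener).
    void push(Packet p) {
        std::lock_guard<std::mutex> lock(mtx_);
        queue_.push(std::move(p));
        available_.notify_one();              // "semaphore up": one more packet stored
    }
    // Called by the consuming thread (e.g. the mirror or split-TCP daemon);
    // blocks until a packet is available ("semaphore down").
    Packet pop() {
        std::unique_lock<std::mutex> lock(mtx_);
        available_.wait(lock, [this] { return !queue_.empty(); });
        Packet p = std::move(queue_.front());
        queue_.pop();
        return p;
    }
private:
    std::mutex mtx_;
    std::condition_variable available_;
    std::queue<Packet> queue_;
};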

Since the packet sender and the packet listener modules are independent of each other, the implementation is flexible with respect to the kind and number of interfaces that are used for sending or receiving. Thus, these two particular modules could be changed to support multiple interfaces or to use the mirror as a switching entity in the network.

5.6.2 Split TCP Implementation

After implementing the mirroring TCP proxy, a design for the Split TCP proxy was developed. The

design of the split TCP daemon was adopted from the mirroring Proxy, so that the packet listener

and packet sender modules could be reused. Therefore, only the mirror daemon was replaced by

the split TCP daemon. The architecture of the split TCP proxy is visualized in Figure 5.13. The

split TCP daemon itself is decomposed into different subcomponents:

• Connection Pool

• Packet Router

• TCP Connection

• Split TCP Connection


Get data

establishclose

Process known TCP packet

Push packet(s)

send data

TCP ConnectionData buffer

Split-TCP Connection

Notify(old state, new state)

TCP ConnectionData buffer

Packet Router Connection PoolFind

Connection

Pop packetForward unknown packet

Split TCP Daemon

NIC

Packet Listener

NIC

Packet sender

Capture and decode Encode and send

packet buffer packet buffer

Figure 5.13: High-Level-Design of the Split TCP Proxy

The connection pool acts as a container for all established TCP connections. It mainly provides functionality for looking up TCP connections by different properties.

The packet router is responsible for identifying a packet and forwarding it to the right TCP connection. In case a TCP SYN packet is received, the packet router creates a new split connection, including a TCP connection (comparable to a TCP Control Block (TCB)), and adds the newly created connection to the connection pool. If no active connection is found or the packet cannot be identified, the packet is forwarded to the packet sender, so that it is mirrored. This behavior ensures that previously established data flows are not interrupted and that packets misrouted to the proxy are sent back.
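The dispatch rule of the packet router can be summarized by the following self-contained sketch. The types and helper functions are simplified stand-ins and do not correspond to the thesis code; only the decision logic described above is reproduced.

    /* Self-contained sketch of the packet router's dispatch rule
     * (types and helpers are simplified stand-ins). */
    #include <stdint.h>
    #include <string.h>

    #define TCP_SYN   0x02
    #define TCP_ACK   0x10
    #define MAX_CONNS 64

    struct four_tuple { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; };
    struct tcp_packet { struct four_tuple id; uint8_t flags; };
    struct connection { struct four_tuple id; int in_use; };

    static struct connection pool[MAX_CONNS];

    static struct connection *pool_find(const struct four_tuple *id)
    {
        for (int i = 0; i < MAX_CONNS; i++)
            if (pool[i].in_use && memcmp(&pool[i].id, id, sizeof *id) == 0)
                return &pool[i];
        return NULL;
    }

    static struct connection *pool_add(const struct four_tuple *id)
    {
        for (int i = 0; i < MAX_CONNS; i++)
            if (!pool[i].in_use) { pool[i].id = *id; pool[i].in_use = 1; return &pool[i]; }
        return NULL;  /* pool exhausted: fall back to mirroring */
    }

    static void tcp_process(struct connection *c, const struct tcp_packet *p) { (void)c; (void)p; }
    static void mirror(const struct tcp_packet *p) { (void)p; }

    /* Dispatch rule: SYNs create a new split connection, packets of known
     * connections are processed, everything else is mirrored unchanged. */
    void route_packet(const struct tcp_packet *pkt)
    {
        struct connection *conn = pool_find(&pkt->id);

        if (conn == NULL && (pkt->flags & TCP_SYN) && !(pkt->flags & TCP_ACK))
            conn = pool_add(&pkt->id);

        if (conn != NULL)
            tcp_process(conn, pkt);   /* known TCP flow */
        else
            mirror(pkt);              /* unknown flow or misrouted packet */
    }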

The TCP connection implements the TCP specification using the TCP Reno algorithm and offers an interface to the packet router and an interface to the split TCP connection handler. For the packet router, an interface to receive the TCP packet is provided. The interface offered to the split TCP connection component is similar to the socket interface implemented in operating systems: methods for opening, closing and resetting the connection as well as methods for reading and sending data are provided. The data is read from and written to a connection-local data buffer that is used for sending and receiving. After a TCP connection has processed a new packet, it informs its split connection about its state before and after processing the packet. The split TCP connection is thus notified when the TCP connection is set up, terminated, reset or has received new data. Since only two interfaces are needed to communicate with the packet router and the split connection component, the implementation of the TCP connection can be exchanged without affecting the rest of the proxy.

The split TCP connection component acts as a daemon object that interconnects two TCP connections. Data received on one connection is forwarded to the associated connection via its in- and out-buffers.
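The following sketch illustrates the socket-like interface and the resulting relay between the two halves of a split connection. Buffer sizes, names and the omission of flow control are simplifications for illustration; the real TCP connection component additionally performs the full Reno processing described above.

    /* Sketch of the split connection relay between its two TCP connection
     * halves, using a minimal socket-like interface (illustrative names). */
    #include <stddef.h>
    #include <string.h>

    #define CONN_BUF 4096

    /* Connection-local data buffers, as offered to the split connection. */
    struct tcp_conn {
        unsigned char in_buf[CONN_BUF];   /* data received from the peer */
        size_t        in_len;
        unsigned char out_buf[CONN_BUF];  /* data waiting to be sent     */
        size_t        out_len;
    };

    /* Socket-like read: drain the receive buffer of one connection. */
    static size_t conn_read(struct tcp_conn *c, unsigned char *dst, size_t len)
    {
        size_t n = c->in_len < len ? c->in_len : len;
        memcpy(dst, c->in_buf, n);
        memmove(c->in_buf, c->in_buf + n, c->in_len - n);
        c->in_len -= n;
        return n;
    }

    /* Socket-like send: append to the send buffer of the other connection. */
    static size_t conn_send(struct tcp_conn *c, const unsigned char *src, size_t len)
    {
        size_t room = CONN_BUF - c->out_len;
        size_t n = len < room ? len : room;
        memcpy(c->out_buf + c->out_len, src, n);
        c->out_len += n;
        return n;
    }

    /* Invoked when one half notifies the split connection about new data:
     * forward as much received data as fits to the associated connection. */
    static void split_relay(struct tcp_conn *from, struct tcp_conn *to)
    {
        unsigned char tmp[CONN_BUF];
        size_t n = conn_read(from, tmp, CONN_BUF - to->out_len);
        if (n > 0)
            conn_send(to, tmp, n);
    }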

Depending on the detailed implementation, different behaviors regarding performance and safety of the TCP proxy are possible. The basic decision that has to be made is whether the TCP proxy should acknowledge data packets before a corresponding ACK is received from the correspondent host or not. The same decision has to be made for connection setup and termination. In case the TCP proxy sends ACKs before a corresponding ACK is received, both TCP connections are completely independent and follow a true split-connection approach. The throughput of this solution is expected to be higher, since there is less dependency on the second TCP connection with its RTTs and throughput. The impact on TCP performance during connection setup is visualized in Figure 5.14.

If the TCP proxy delays ACKs until it receives the corresponding ACK from the correspondent host, the safety in case of failures increases. In case no listening process on the correspondent host is attached to the requested port, a SYN would be rejected with a RST packet. In a true split-connection approach, the left connection would already have been set up and maybe even have started to transfer data before the right connection sent the RST packet. Thus, the communication partner on the left connection believes that the connection was set up correctly. In this implementation, the performance argument led to the choice of a true split-connection approach.

Figure 5.14: The qualitative impact of different proxy functionalities on connection setup delays in TCP (message sequence charts of the three-way handshake between sender, TCP proxy and receiver for the two acknowledgement strategies)

In the design of the mirroring proxy, threads were used for the packet listener, for the packet sender and for the mirror daemon. This three-thread model could be adopted for the split TCP proxy as well: the complete split TCP daemon and every TCP connection would then run in one common thread. Alternatively, every pair of TCP connections, or even every single TCP connection, could run in a separate thread. A per-connection-threaded model mainly affects the performance in multi-user and multi-connection scenarios. On the one hand, a per-connection-threaded implementation would improve the processing fairness of the TCP connection threads, since processing time would be divided nearly equally (depending on the process scheduling algorithm implemented in the operating system) and every active TCP connection would be granted some processing time. In a single-threaded implementation, a TCP connection that is currently processing a packet would block all other TCP connections.

On the other hand, a per-connection-threaded implementation increases the number of threads and consequently the number of thread switches that have to be performed by the operating system and the processor. The effects of thread switching could change the throughput significantly under high load of the proxy. Secondly, an increased number of threads would also reduce the processing time that is given to the packet listener. In general, assuming a fair operating system that assigns processing time equally to every thread, the packet listener's share of processing time would fall from 1/3 to 1/(n+2), where n is the number of TCP connections and the two additional threads are the packet listener and the packet sender. For example, with n = 10 connections the listener's share drops to 1/12, i.e. roughly 8%. For a large number of TCP connections, and hence a large n, little processing time is assigned to the packet listener, potentially resulting in packet loss.

5.6.3 Mobile IP Daemon Implementation

To overcome the problem of long handover delays, a module was implemented that eavesdrops on incoming and outgoing Mobile IP messages in order to use them as an indication of a reconnected mobile node. All TCP connections that were established to the mobile node can then immediately resume sending data instead of waiting for a retransmission timeout. This required an enhancement of the packet listener, packet sender and packet router as well as an implementation of the Mobile IP daemon itself. The extended part of the split TCP architecture is shown in Figure 5.15.

Figure 5.15: High-Level Design of the Mobile IP Module (the packet router forwards Mobile IP packets to the Mobile IP daemon, which finds all connections of the mobile node in the connection pool and triggers their recovery, while known TCP packets and unknown packets are handled as before)

The packet listener and packet sender were extended to support the decoding and encoding of Mobile IP requests and replies. The packet router was extended to forward all Mobile IP messages to the Mobile IP daemon.

The task of the Mobile IP daemon is to inform all TCP connections that the mobile node is available again. Thus, the daemon inspects the Mobile IP message for the advertised home address and performs a lookup of that IP address in the TCP connection pool. All matching TCP connections are then informed about the reconnected mobile node.
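A minimal sketch of this lookup is shown below. The connection pool is reduced to a flat array and the names are illustrative; the point is merely that one Mobile IP reply triggers the recovery of every connection towards the advertised home address.

    /* Sketch of the Mobile IP daemon's reaction to a Mobile IP reply
     * (simplified stand-in types, not the thesis code). */
    #include <stdint.h>

    #define MAX_CONNS 64

    struct connection {
        uint32_t peer_addr;   /* IP address of the mobile node            */
        int      in_use;
        int      recovering;  /* set instead of waiting for the RTO       */
    };

    static struct connection pool[MAX_CONNS];

    static void tcp_conn_recover(struct connection *c)
    {
        c->recovering = 1;    /* in the proxy: resume sending immediately */
    }

    /* Called for every decoded Mobile IP registration reply. */
    void mip_daemon_handle_reply(uint32_t advertised_home_addr)
    {
        for (int i = 0; i < MAX_CONNS; i++)
            if (pool[i].in_use && pool[i].peer_addr == advertised_home_addr)
                tcp_conn_recover(&pool[i]);
    }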


Chapter 6

Evaluation of the TCP Proxy in Wireless Scenarios

In this chapter, the implemented TCP proxy is evaluated. First, the influence of the TCP proxy on RTTs is investigated, since the additional processing, serialization and possibly queuing delay is expected to increase the RTTs of TCP. Secondly, the influence of different ACK handling on the performance over a wireless link is investigated, since the number of data packets covered by a delayed ACK and the value of the delayed ACK timer are not specified in [41].

6.1 Influence on RTTs

In a first experiment, the influence of the mirroring proxy as well as of the split TCP proxy was investigated over wireless LAN. The single access point network model described in Section 3.1.1 was reused, except that the Cisco 350 WLAN access point was replaced by a Cisco 1100 WLAN access point, since only that access point was available at the time this evaluation was made. A TCP connection was set up from the mobile node to the correspondent node and data was sent upstream from the mobile node to the correspondent node for 10 s. The mobile node was located 1 m away from the access point and did not move for the duration of the measurements. Running a TCP connection over the wireless link yields the results shown in Figure 6.1.

Figure 6.1: Influence of the TCP Proxy on RTTs (average RTT in ms for the cases No Proxy, Mirror and Split TCP)

The graphs show that integrating the TCP proxy into the network and running the proxy in mirroring mode adds approximately 1.5 ms to the RTT. This additional RTT budget is generated by the additional queuing and processing in the TCP proxy. The main delay added by the TCP proxy is caused by the decoding, buffering and subsequent encoding of packets.

Integrating the split TCP proxy into the network adds a further 1.5 ms to the RTT compared to the mirroring proxy. This effect is caused by the split TCP daemon, which adds additional processing delay to the proxy: the TCP segment has to be assigned to its TCP connection and processed by the TCP algorithms.

6.2 Influence of Delayed ACKs on Throughput

In a second experiment, the influence of the number of data packets covered by one delayed ACK over the wireless link was evaluated. The single access point network architecture was used to measure this influence. In a first set of measurements, a TCP connection was set up in upstream direction and data was sent over a period of 10 s. The mobile node was situated 1 m away from the access point and stayed at that location. Measurements were made for different numbers of delayed ACKs: in the first measurement every data packet was acknowledged, in the further measurements every third and every fifth data packet was acknowledged. The delayed ACK timer was set to 500 ms. The measurements have been compared to the mirroring proxy scenario. Plotting the transmission throughput leads to Figure 6.2.

Figure 6.2: Transmission throughput with different numbers of delayed ACKs, transmitting 30 MByte of data (receiver throughput in bytes/s over time for ACK every packet, ACK every 3 packets, ACK every 5 packets, and the mirroring proxy)

It can be observed that acknowledging every data packet leads very quickly to an approximately constant transmission throughput, while acknowledging only every fifth packet suffers from a long initial slow-start phase. After approximately 25 s, its transmission throughput exceeds the throughput of the mirroring proxy. In case every third data packet is acknowledged, the initial setup time is lower compared to acknowledging every fifth packet and the throughput exceeds that of the mirroring proxy earlier, after approximately 15 s, but the average throughput gain is smaller.

Two effects cause these results. First, the more packets are acknowledged cumulatively, and hence the more ACKs are delayed, the less additional overhead is caused by the ACK packets. This leads to a higher throughput when TCP is in the congestion avoidance phase. In contrast, when TCP is in slow-start, an immediate ACK would be needed to open up the congestion window quickly. Since slow-start sends only one packet in the first RTT, a delayed ACK mechanism will delay the ACK for this packet, causing the sender to wait for a retransmission timeout. The more data packets are covered by one delayed ACK, the more retransmissions are triggered, resulting in a lowered throughput.
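The receiver-side policy used in these measurements can be sketched as follows (illustrative names; the timer infrastructure of the proxy is not shown). An ACK is generated either after every k-th in-order segment, with k = 1, 3 or 5 as in the experiments, or when the delayed ACK timer fires.

    /* Sketch of the delayed ACK policy: acknowledge every k-th in-order
     * segment or when the delayed ACK timer (e.g. 500 ms) expires. */
    #include <stdbool.h>

    struct ack_state {
        int unacked_segments;   /* in-order segments not yet acknowledged */
        int ack_every;          /* k = 1, 3 or 5 in the measurements      */
    };

    /* Returns true if an ACK should be sent immediately for this segment. */
    bool on_in_order_segment(struct ack_state *s)
    {
        s->unacked_segments++;
        if (s->unacked_segments >= s->ack_every) {
            s->unacked_segments = 0;
            return true;            /* cumulative ACK covers k segments     */
        }
        return false;               /* wait for more data or for the timer  */
    }

    /* Called when the delayed ACK timer expires. */
    bool on_delack_timer(struct ack_state *s)
    {
        if (s->unacked_segments > 0) {
            s->unacked_segments = 0;
            return true;            /* do not delay the ACK any longer      */
        }
        return false;
    }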

6.3 Influence of the Delay-ACK Timer on Throughput

To overcome the problem of delaying ACKs for too long, TCP defines that an ACK must not be delayed by more than 500 ms. In common operating system implementations, the delayed ACK timer has a value of 200 ms. Timers in common operating systems like Windows and Linux are usually implemented coarse-grained, since fine-grained timers are costly. A 200 ms timer is therefore usually fired between 201 ms and 400 ms after the timer was set.

In the proxy implementation, the timer was implemented with a finer granularity. Experiments showed that under average load the implemented timer algorithm had a constant offset of approximately 5 ms with a resolution of 5 ms.

In order to evaluate the TCP performance for different delayed ACK timer values, the measurement from above was repeated, acknowledging every third packet, with a measurement duration of 10 s. The results are visualized in Figure 6.3.

It can be observed that the shorter the timer value, the faster the connection reaches the congestion avoidance phase and thus a steady throughput state. On the other hand, a low timer value potentially leads to unnecessary ACKs, since data packets may not arrive fast enough before the timer fires.

Reference [11] states that an acknowledgement must be triggered at least after every second packet or after 500 ms, whatever comes first. It is also stated that ACKs have to be sent immediately when a data packet carries less data than the MSS. The exact threshold below which an ACK must be triggered immediately is not defined explicitly. The TCP implementations used in the operating systems (see Appendix B.5 and B.7) exploit this requirement to force an ACK in the first RTT of slow start: TCP sends a small data packet (payload sizes of 24 bytes have been observed in Windows XP and Linux kernel 2.4.18-3) although the initial congestion window would allow a fully utilized packet. The receiving host then has to ACK this packet immediately, resulting in a doubled congestion window at the sender side. In the second RTT, the sender is allowed to send two data packets, which are now fully utilized and acknowledged with a delayed ACK by the receiver.

Figure 6.3: Transmission throughput with different delayed ACK timeout values (receiver throughput in bytes/s over time for timeout values of 10 ms, 50 ms, 100 ms and 500 ms, compared to the mirroring proxy)

6.4 Further Evaluations and Future Work

The TCP proxy was evaluated in scenarios with different BERs. The experiments were performed concurrently with the evaluation shown in Section 4.2.3. The evaluation did not show any new results: the performance of TCP measured with the proxy enabled was comparable to the performance of TCP with the proxy disabled.

Other scenarios and evaluations of the proxy have not been performed. Coupling Mobile IP with the TCP proxy in order to use the Mobile IP signaling to restart the TCP connection to the mobile node has not been evaluated.

Another approach using a TCP proxy could improve the performance of TCP in case different MSSs are used for the two TCP connections. The TCP proxy could then reassemble smaller packets received on the low-MSS TCP connection and send full-MSS packets on the high-MSS TCP connection. The idea for implementing this in the current architecture is to introduce two thresholds in the socket buffer. The first threshold is used to trigger data packets to be sent once the socket buffer fills up to it. The second threshold is used to stop TCP from sending more data and to wait for new data from the application. An additional timer could be used to ensure that data is sent at least within a certain interval.
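One possible reading of this two-threshold idea is sketched below. The design is hypothetical and was not implemented in the thesis: the first threshold triggers transmission of full-MSS segments, while the second threshold stops further transmission until new data arrives, with a timer flushing any remainder.

    /* Sketch of the proposed double-threshold socket buffer for re-packing
     * small segments into full-MSS segments (hypothetical design). */
    #include <stdbool.h>
    #include <stddef.h>

    struct repack_buffer {
        size_t bytes_buffered;  /* data collected from the low-MSS side     */
        size_t send_threshold;  /* e.g. one MSS of the high-MSS connection  */
        size_t stop_threshold;  /* below this, wait for new data instead of
                                   sending an under-sized segment           */
    };

    /* First threshold: enough data for a full-sized segment is available. */
    bool may_send_full_segment(const struct repack_buffer *b)
    {
        return b->bytes_buffered >= b->send_threshold;
    }

    /* Second threshold: too little data left, stop sending and wait for new
     * data; a timer would flush the remainder after a certain interval.   */
    bool should_wait_for_data(const struct repack_buffer *b)
    {
        return b->bytes_buffered < b->stop_threshold;
    }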

The influence of transparency has not been evaluated. If an ACK is not sent to the sender by the proxy until a corresponding ACK is received from the receiver, the throughput and RTTs might be influenced, as well as the initial connection setup time, as shown in Figure 5.14.


Chapter 7

Conclusion

The thesis aimed to verify whether the initial assumptions of TCP hold when TCP is used over Wireless LAN 802.11b and Bluetooth.

The evaluation of Wireless LAN showed that high BERs did not trigger TCP retransmissions, resulting in good TCP performance. It can be concluded that link layer retransmissions seem to be an effective method to hide link layer BERs from TCP. Even with competing cross-traffic flows, TCP performs well.

However, handovers in Wireless LAN, and hence a period of disconnection, degrade the TCP performance and result in additional TCP handover delays. It has been observed that dynamic IP assignment caused long handover times on the network layer. A TCP proxy could solve this problem in case Mobile IP is used to support IP mobility.

The evaluation over a Bluetooth link showed that different influences can cause throughput dropdowns of UDP or TCP. These dropdowns can be caused by interference or additional traffic. A detailed evaluation could not be done, since a sniffing tool on baseband level was not available.

The concept of a TCP proxy has been introduced, and the various options for integrating the proxy into an existing network as well as implementation options have been discussed. The different parameters of the acknowledgement mechanism of TCP have been discussed. The Mobile IP coupling approach has not been successfully implemented.

For future work, a more detailed analysis has to be done to evaluate the influence of BERs on the performance of TCP. It has not been proven that a different implementation of TCP or a TCP proxy can perform better than the current TCP implementations in Windows or Linux combined with link layer retransmissions. An evaluation of TCP Vegas over a wireless link as well as TCP over W-CDMA or GPRS is also considered future work.


List of Abbreviations

ACK Acknowledgement

ARP Address Resolution Protocol

ARQ Automatic Repeat Request

AWND Advertised Receiver Window

BER Bit Error Rate

CWND Congestion Window

DHCP Dynamic Host Configuration Protocol

FDMA Frequency Division Multiple Access

FEC Forward Error Correction

GPRS General Packet Radio Service

GSM Global System for Mobile Communication

HTML Hypertext Markup Language

HTTP Hypertext Transfer Protocol

IP Internet Protocol

MAC Medium Access Control

MN Mobile Node

NIC Network Interface Card

PAN Personal Area Network

QoS Quality of Service

RTO Retransmission Timeout

RTT Round trip time

ssthresh slow-start-threshold

TCP Transmission Control Protocol

TDMA Time Division Multiple Access

WLAN Wireless Local Area Network

WPAN Wireless Personal Area Network

ZWA Zero Window Advertisement

ZWP Zero Window Probes


Appendix A

TCP

A.1 The TCP Header

Figure A.1: TCP State Machine (states LISTEN, SYN_SENT, SYN_RCV, ESTABLISHED, FIN_WAIT_1, FIN_WAIT_2, CLOSING, TIME_WAIT, CLOSE_WAIT and LAST_ACK with their SYN, FIN and ACK transitions)

The TCP state machine describes the different states of a TCP connection during its lifetime. Briefly, the states can be described as follows:


• LISTEN is the initial state of a TCP connection and represents the state in which TCP is either waiting for a connection request from a remote TCP or for a request from the local port to initiate the connection.

• SYN-SENT describes the state after sending the first SYN-packet. TCP now waits for

receiving a matching connection request.

• SYN-RECEIVED represents the state after both connection partners have signaled the

connection request via a SYN-packet. In this state, TCP waits for the acknowledgement of

the SYN-packet.

• ESTABLISHED describes an open connection, able to send and receive data. The TCP

connection will stay most of the time in this state.

• FIN-WAIT-1 describes TCP’s state after a connection termination request was sent to the

remote TCP. TCP waits for an acknowledgement of the termination request or a termination

request from the remote TCP.

• FIN-WAIT-2 represents TCP’s state waiting for a connection termination request.

• CLOSE-WAIT describes the state waiting for a connection termination by the local port.

• CLOSING represents the state waiting for a connection termination request acknowledgement

from the remote TCP.

• LAST-ACK represents TCP’s state waiting for the connection termination request acknowl-

edgement.

• TIME-WAIT describes a final timeout wait state, that is used to ensure that the remote

TCP received the acknowledgement of its connection termination request.

A.2 TCP Flavours

A.2.1 Vegas

TCP Vegas [12] is an alternative implementation of the TCP specification that increases TCP's throughput compared to the Reno implementation. To achieve this aim, TCP Vegas introduces three new schemes:


• A new retransmission scheme that retransmits dropped packets in a more timely manner

• A modified congestion control algorithm that adapts more adequately to the available bandwidth

• A new slow-start algorithm that avoids packet loss while finding the maximum available bandwidth

In standard operating systems, timers are usually implemented coarse-grained with a granularity of 500 ms, so that the values for the actual retransmission timeout are not very accurate. TCP Vegas overcomes this problem by recording a timestamp for each TCP segment when it is injected into the network. If an ACK is received at the sender's side, a second timestamp is used to calculate the RTT. On the reception of a DUPACK, the difference between the current system clock and the timestamp recorded at sending is computed. If this value exceeds the retransmission timeout, a retransmission is triggered without waiting for three DUPACKs. If an ACK is received after a retransmission, TCP Vegas again compares the elapsed time with the retransmission timeout and retransmits the packet if the timeout is exceeded; this recovers multiple lost packets in one window. In conclusion, TCP Vegas uses ACKs as an indicator for a retransmission timeout, but still uses the coarse-grained timers as a fallback.

TCP Vegas introduces a new congestion control algorithm that is based on a proactive approach. TCP Reno uses a reactive approach that estimates the maximum available bandwidth by artificially creating congestion. TCP Vegas instead uses the following phenomenon to measure the available bandwidth without artificially creating congestion. If the congestion window is increased, the expected sending rate (ESR) increases as well. The actual sending rate (ASR) observed at the receiver's side, however, may not increase if the sending rate is already close to the maximum available bandwidth, since the packets will be queued at one of the intermediate routers. TCP Vegas measures the expected sending rate (ESR) by dividing the window size by the RTT that was measured for the first sent segment. The actual sending rate is calculated for a specific packet from the amount of data that was injected into the network between the time the specific packet was sent and the time its ACK was received; this amount of data is divided by the RTT that was measured via timestamps. TCP Vegas increases its congestion window linearly if ESR − ASR < α and decreases the congestion window linearly if ESR − ASR > β, with α < β. If ESR − ASR lies between α and β, the congestion window remains unchanged. Suggested values are α = 1 and β = 3.
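A minimal sketch of this congestion avoidance decision, following the description above, is given below. Note that in the original Vegas proposal the rate difference is additionally scaled by the base RTT, so that α and β effectively count extra segments buffered in the network; the sketch keeps the simplified rate comparison used in this summary.

    /* Sketch of the Vegas congestion avoidance decision, evaluated once
     * per RTT (rates in segments per RTT; alpha = 1, beta = 3). */
    double expected_rate(double cwnd, double base_rtt) { return cwnd / base_rtt; }
    double actual_rate(double acked, double rtt)       { return acked / rtt; }

    double vegas_update(double cwnd, double esr, double asr,
                        double alpha, double beta)
    {
        double diff = esr - asr;      /* surplus kept in router queues      */

        if (diff < alpha)
            return cwnd + 1.0;        /* link underused: increase linearly  */
        if (diff > beta)
            return cwnd - 1.0;        /* queues building up: decrease       */
        return cwnd;                  /* between alpha and beta: keep cwnd  */
    }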

TCP Vegas also modifies TCP's slow-start algorithm to avoid packet loss while searching for the maximum available bandwidth, and adapts it to the new congestion control algorithm. TCP Vegas increases its sending rate exponentially, but measures the difference of the sending rates meanwhile: every second RTT the congestion window is kept constant to get an accurate measurement of ESR and ASR. If the value of ASR falls below the expected value ESR by more than γ, slow-start is left and the congestion avoidance algorithm is entered; otherwise the congestion window is increased exponentially.

The evaluation in [3] showed that TCP Vegas improves the performance compared to TCP Reno and NewReno by drastically reducing the number of retransmissions. Since TCP Vegas is less aggressive in its use of intermediate router buffers, it performs poorly when competing with a TCP Reno stream. An evaluation of TCP Vegas over networks with a large bandwidth-delay product [30] showed that TCP Vegas performs poorly in its slow-start phase, since it underestimates the maximum available bandwidth, and that it leads to temporary buffer queue buildup.

Modifications that improve the fairness of TCP Vegas have been proposed in [29].

A.2.2 Forward Acknowledgements

Forward Acknowledgements (FACK) [34] try to estimate the data outstanding in the network during error recovery states in a more appropriate way under multiple packet losses. In a standard TCP Reno implementation, multiple packet losses can result in losing the self-throttling behavior of TCP and cause unnecessary retransmission timeouts and a slow-start. During that timeout and the additional slow-start, the network bandwidth is underutilized, resulting in poor TCP performance. Reference [34] pointed out the inability of TCP Reno to perform congestion control in error recovery states.

FACK is an extension to SACK and introduces two new state variables at the sender's side: snd.fack and retran_data. The sender keeps track not only of the first byte of unacknowledged data (snd.una) and the first byte of unsent data (snd.nxt), but also of the highest sequence number of correctly received data plus one (snd.fack). In normal data processing, the snd.fack value equals the snd.una value. In recovery states, snd.fack utilizes the SACK blocks and is updated to the highest sequence number of correctly received data, if necessary. The second variable, retran_data, reflects the amount of retransmitted data that is outstanding in the network.

The FACK implementation uses these new variables to compute the amount of data outstanding in the network, called awnd. In non-recovery states, awnd is computed as usual, i.e. awnd = snd.nxt − snd.fack. Note that in non-recovery states snd.una and snd.fack have the same value, so this is the same equation as in TCP Reno. In recovery states, the amount of outstanding retransmitted data, retran_data, has to be added, which leads to awnd = snd.nxt − snd.fack + retran_data. The sender sends data as long as enough congestion window is available to fill up the outstanding data, i.e. data is sent if awnd < cwnd. The robustness is derived from the fact that the sender either increases retran_data in case of a retransmitted packet or snd.nxt if a new packet was sent, while received ACKs either decrease the value of retran_data or increase snd.fack. If snd.fack acknowledges data beyond snd.una, this indicates that a packet was lost. To improve TCP's behavior in case of packet loss, the recovery condition was changed: recovery is additionally triggered in case more than three packets worth of data have to be reassembled at the receiver. This condition can be computed as (snd.fack − snd.una) > 3 × MSS. Two additional algorithms, Overdamping and Rampdown, were also implemented and evaluated in [34].
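The window accounting described above can be summarized in the following sketch (sequence number wrap-around and the actual SACK processing are omitted for brevity).

    /* Sketch of the FACK estimate of data outstanding in the network. */
    #include <stdbool.h>
    #include <stdint.h>

    struct fack_state {
        uint32_t snd_una;      /* first unacknowledged byte            */
        uint32_t snd_nxt;      /* first unsent byte                    */
        uint32_t snd_fack;     /* highest correctly received byte + 1  */
        uint32_t retran_data;  /* retransmitted bytes still in flight  */
        bool     in_recovery;
    };

    static uint32_t awnd(const struct fack_state *s)
    {
        uint32_t out = s->snd_nxt - s->snd_fack;   /* equals snd_nxt - snd_una
                                                      outside recovery       */
        return s->in_recovery ? out + s->retran_data : out;
    }

    /* The sender may transmit as long as the estimate stays below cwnd. */
    static bool may_send(const struct fack_state *s, uint32_t cwnd)
    {
        return awnd(s) < cwnd;
    }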

The new congestion control scheme showed substantially better performance in case of multiple packet losses in a simulation environment, since it is not as bursty as the common TCP Reno implementation. The additional Overdamping algorithm showed too conservative a performance, whereas Rampdown seemed to perform well.

A.2.3 Total Acknowledgements

Total Acknowledgements (TACK) [53] address the problems of asymmetric links that cause frequent ACK losses. TACK follows the same approach as FACK, i.e. estimating the data outstanding in the network, but suggests negative ACKs (NACKs) to recover from multiple data or ACK losses.

TCP FACK may overestimate the amount of outstanding data in some cases, since it assumes that the SACK blocks beyond the lost packets are contiguous. In a scenario where 6 packets are injected into the network and packets 3 and 5 are dropped due to congestion, FACK would receive an ACK for packet 2, expecting packet 3, and set snd.fack to 7, since packet 6 was the last successfully received packet reported by the SACK blocks. Therefore, the estimated outstanding data is 4 instead of 3. TCP TACK suggests the use of a total acknowledgement tack, which can be computed from the number of cumulatively acknowledged bytes cack and the number of additionally received out-of-order bytes additional, i.e. tack = cack − 1 + additional. This additional value is transmitted by the receiver in an additional option field.

[53] also proposes a scheme for using NACKs instead of positive ACKs as in SACK, since NACKs tend to be more robust against ACK loss than positive ACKs. The authors also showed that the TCP TACK implementation including NACKs enhanced the TCP performance over asymmetric links.

A.2.4 Header Checksum Option

The TCP Header Checksum Option (TCP HACK) [6] uses the premise that in case of packet corruption it is more likely that the data will be corrupted and not the header, since in standard networks the payload is much larger than the header. With this knowledge, the receiving TCP can identify corrupted data and forward this information to the sender. The sender can then retransmit the corrupted packet without unnecessarily lowering the congestion window. The result is a more appropriate bandwidth estimation and a higher throughput.

Two new TCP options were added to the TCP implementation: one option carries the header checksum in a data packet, the second option contains the sequence number of the corrupted packet in an ACK packet.

The data processing algorithm of the TCP implementation was changed at three specific locations. After negotiating the TCP Header Checksum Option in the SYN and SYN/ACK packets, the sending TCP calculates a checksum over the TCP header (including an IP pseudo-header) and adds this value to the TCP options. When the receiving TCP receives this data packet including the option, it first checks the standard TCP checksum. If the checksum fails, it can determine whether the corruption took place in the header or in the data by calculating the header checksum and comparing it to the value transmitted in the header checksum option. If the corruption occurred in the header, the normal TCP processing is applied, i.e. the packet is discarded. In case of corrupted data, a "special ACK" is sent. This special ACK contains the sequence number of the corrupted data packet in the TCP options. When the sending TCP receives an ACK, it first looks for the option containing the sequence number of the corrupted data. If it is not present, the normal TCP processing is applied. If the option is present, the corrupted data is retransmitted in a new packet, while the ACK itself is discarded without further processing at the sender's TCP.

The evaluation of TCP HACK showed that TCP HACK performs much better over lossy links than SACK alone. It also showed that SACK and HACK can benefit heavily from the characteristics of each other, since they are disjoint algorithms.

A.2.5 Transactional TCP

Transactional TCP (T/TCP) [7, 10, 9] was designed to improve TCP's performance in transactional scenarios. Transactional scenarios are mainly characterized by a short request message from the client to the server and a response message from the server. For instance, HTTP is a typical transactional service that runs on top of TCP: the client, i.e. the web browser, sends a small HTTP request to the server, and the server responds by delivering the appropriate HTML document encapsulated in an HTTP response.

TCP performs poorly in those scenarios, since the 3-way handshake to establish a connection and the connection close outweigh the request and the response messages. In typical transaction scenarios, the request message as well as the response message fit into one packet each. A typical TCP flow would then need 3 packets for establishing the connection, 1 packet for the request, one for the response and 4 for the connection termination. In sum, 9 packets would be needed to perform a transaction, of which just two packets are "effective" for the transaction.

T/TCP eliminates the 3-way handshake by introducing a new number called a "connection count" (CC) that is carried in a TCP option in each segment. T/TCP assigns monotonically increasing CC values to successive connections. A request packet carries the SYN and FIN bits enabled, the connection count in an option field and the request itself in its payload. The server then responds with a SYN/FIN packet, acknowledging the received SYN/FIN packet and carrying the response data of the transaction in the data field.

T/TCP is also capable of turning the 3-way-handshake bypass off for the sake of backward compati-

bility. If no connection count option is sent in the SYN packet of a TCP Connection, T/TCP reacts

like a normal TCP implementation. T/TCP was successfully implemented into the Linux Kernel

[7].


A.2.6 Multicast TCP

Multicast TCP [44] proposes a TCP-based congestion control scheme for reliable multicast. Standard TCP implementations suffer mainly from the problem introduced by the multicast tree: a standard TCP implementation that sends data to the multicast group considers every DUPACK generated in one of the branches of the multicast tree as an indication of congestion. Thus, TCP reduces the congestion window on every third DUPACK, regardless of which path along the multicast tree the packet took.

To handle this problem, Multicast TCP introduces sender agents (SA) in every node of the multicast tree from the sender to the receivers. Every SA stores the received packets in a local buffer and sets a retransmission timer for every packet. When the SA has received an ACK from every receiver of the data packet, the packet is discarded. If a packet gets lost, indicated by a negative acknowledgement (NACK) or a timeout, the packet is retransmitted via unicast.

To compute an appropriate congestion window, every ACK from the SA's receivers includes a congestion summary, i.e. a report of the current congestion level at the receiver. The SA then computes its own congestion level by calculating the minimum of all current congestion level reports. This mechanism ensures that the congestion level reported to the sender represents the congestion level at the bottleneck link. The congestion avoidance mechanism used in Multicast TCP was derived from TCP Vegas. Packets that have to be retransmitted due to loss in one of the subtrees of the SAs are not sent with the actual congestion window, but with an additional window called the retransmission window.

As the RTT cannot be measured accurately, and thus a retransmission timer cannot be calculated from the RTT, Multicast TCP introduces the "relative time delay" (RTD), which is computed as the time difference between the departure of a packet at the sender and the arrival of the corresponding ACK. To compute the RTO, a weighted average of the RTD is used.

In different scenarios, [44] showed that Multicast TCP ensures fairness between the different receivers of the multicast group, and that Multicast TCP is interfair. A protocol is interfair if it does not take more bandwidth than a conforming TCP data flow.


A.2.7 Smooth-Start + Dynamic Recovery

[54] proposes two new schemes to improve the performance of TCP: Smooth-Start and Dynamic

Recovery.

Smooth-start is supposed to replace the standard slow-start algorithm in TCP. In slow-start, the sending rate is doubled every RTT until the slow-start threshold (ssthresh) is reached. For good performance, the initial value for ssthresh should be set to the maximum available bandwidth. Since the maximum available bandwidth is unknown to TCP, default values are often used for ssthresh. If ssthresh is overestimated, this can result in multiple packet losses within one window.

Smooth-start resolves this problem by dividing the slow-start phase into two new phases. In the first phase, smooth-start acts like slow-start and increases its sending rate exponentially until ssthresh/2 is reached. Beyond ssthresh/2, smooth-start enters the probing phase and increases its sending rate more slowly than exponentially, but still faster than in congestion avoidance. In the probing phase, the congestion window is incremented every (N + i) received ACKs, where i is the index of the current probing-phase RTT and N controls the number of RTTs it takes to reach ssthresh. Thus, N determines how fast the desired ssthresh is reached. A high value for N results in a very accurate probe for ssthresh, but requires a long probing time. In turn, a small value for N takes a short time to reach ssthresh, but gives a very inaccurate probe for ssthresh. For slow-start, N would be set to 1. In [54], a value of 2 for N is suggested.
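A literal reading of this probing-phase growth is sketched below; the counters and their handling are illustrative and simplified compared to the algorithm in [54].

    /* Sketch of smooth-start window growth, as a literal reading of the
     * description above (cwnd and ssthresh in segments; N = 2 suggested). */
    struct smooth_start {
        unsigned cwnd, ssthresh;
        unsigned n;           /* pacing parameter N                        */
        unsigned rtt_index;   /* i: index of the current probing-phase RTT */
        unsigned acks_seen;   /* ACKs counted since the last cwnd increase */
    };

    void smooth_start_on_ack(struct smooth_start *s)
    {
        if (s->cwnd < s->ssthresh / 2) {
            s->cwnd++;                            /* exponential, as in slow-start */
        } else if (s->cwnd < s->ssthresh) {
            /* probing phase: one increment per (N + i) received ACKs */
            if (++s->acks_seen >= s->n + s->rtt_index) {
                s->cwnd++;
                s->acks_seen = 0;
            }
        }
        /* beyond ssthresh, normal congestion avoidance takes over */
    }

    void smooth_start_on_rtt_end(struct smooth_start *s)
    {
        if (s->cwnd >= s->ssthresh / 2 && s->cwnd < s->ssthresh)
            s->rtt_index++;                       /* advance i once per RTT */
    }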

Dynamic Recovery is suggested as a replacement for the Fast Recovery mechanism. The standard Fast Recovery mechanism ends the recovery phase when an ACK for the first packet that was lost and retransmitted is received. Fast Recovery keeps the congestion window (cwnd) constant during the recovery phase and cuts it in half after the recovery has ended, regardless of how many packets were lost. Dynamic Recovery tries to take advantage of counting the number of lost packets to give a more accurate estimate for cwnd after a multiple packet loss. Dynamic Recovery itself is divided into two phases: the damping phase and the probing phase.

In the damping phase, the number of DUPACKs is counted in a state variable (dupwnd) and for every three DUPACKs, two new data packets are injected into the network. When a new non-duplicate ACK is received, the damping phase ends and Dynamic Recovery enters the probing phase. In the probing phase, the number of packets that are in transit is counted in a new state variable (actnum). In Dynamic Recovery, actnum takes the role of cwnd to determine how many packets can be injected into the network without running into congestion. At the end of the damping phase, actnum is initially set to dupwnd × 2/3, since this is the number of packets that were sent in the damping phase. After entering the probing phase, actnum is increased by one per RTT, similar to the congestion avoidance algorithm in standard TCP. For every further packet loss, Dynamic Recovery is invoked recursively. After recovering the most recent packet, Dynamic Recovery sets cwnd to actnum, sets actnum to 0 and enters congestion avoidance.

The evaluation in [54] shows that implementing smooth-start results in a 25%-200% higher throughput, depending on the underlying TCP version that smooth-start was implemented in. Even if the initial estimate for ssthresh was very poor, smooth-start showed good performance. Dynamic Recovery showed a throughput improvement of 33% compared to NewReno and 5-8% compared to TCP SACK and TCP FACK.

A.2.8 TCP Pacing

As described above, TCP uses a window to determine how much data can be sent over the network and ACKs to signal when data can be injected into the network. In the common implementations of TCP, data, fragmented into several packets, is sent immediately after receiving an ACK, while the sender idles after sending the packets until one RTT expires. This leads to a bursty behavior of TCP. Even if multiple connections share a bottleneck link, the packets sent from one TCP connection remain together. In case of a bottleneck router with a drop-tail queueing strategy, one connection would suffer from multiple packet losses and lower its sending rate dramatically, while the other connections continue sending packets at their higher sending rate.

Using a pacing scheme, the packets sent over the network are spaced evenly over the RTT. Referring to the example above, every connection should then suffer from a single packet loss and adjust its sending rate accordingly.
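The pacing interval itself is straightforward to derive: the sender spreads one congestion window of segments over one RTT, as in the following helper (illustrative, not an implementation from [2]).

    /* Sketch of the pacing idea: instead of sending a whole window back to
     * back when an ACK arrives, space the segments evenly over one RTT. */
    double pacing_interval_seconds(double rtt_seconds, unsigned cwnd_segments)
    {
        if (cwnd_segments == 0)
            return rtt_seconds;
        return rtt_seconds / (double)cwnd_segments;  /* gap between segments */
    }

For example, with a congestion window of 20 segments and an RTT of 100 ms, a paced sender would emit one segment every 5 ms instead of sending 20 segments back to back.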

[2] shows that in a single-TCP scenario, the performance of a standard TCP Reno implementation is comparable to its paced counterpart. The paced TCP implementation performs better in the slow-start phase, since it does not overwhelm the network with too many packets but saturates the bottleneck link. In steady state, the two implementations perform comparably.

In a multi-TCP scenario, paced TCP seems to show synchronization effects. Due to the spaced packets, packets from the different connections are dropped when the bottleneck link is overutilized, lowering the sending rate of all connections and thus underutilizing the bandwidth. With TCP Reno implementations, one randomly chosen flow suffers from multiple losses, while the other connections keep sending at the higher rate. Over time, every TCP connection is likely to suffer from multiple losses, resulting in comparable fairness at higher throughput.

In scenarios where TCP Reno competed with paced TCP Reno for bandwidth, the paced version showed even worse performance.


Appendix B

Hardware Specification

           Aalborg, Frankfurt and Delft   Tokyo and San Francisco
Model      CISCO 3620                     CISCO 3631
IOS        12.2(5d)                       12.2(13)T

Table B.1: Hardware specifications of used routers

           Toronto and Shanghai
Model      CISCO Catalyst 2950
IOS        12.1(11)EA1

Table B.2: Hardware specifications of used switches

                        Dhaka               Istanbul
Model                   CISCO Aironet 350   CISCO Aironet 350
Standard                IEEE 802.11b        IEEE 802.11b
IOS                     12.01T              12.01T
Mode                    Bridge              Bridge
Transmit Power          50 mW               50 mW
Frag. Threshold         2338                2338
RTS Threshold           2339                2339
Max. RTS Retries        32                  32
Max. Data Retries       32                  32
Beacon Period           100                 100
Data Beacon Rate        2                   2
Default Radio Channel   13 (2472 MHz)       1 (2412 MHz)
SSID                    enabled             enabled
WEP Keys                128 Bit             128 Bit

Table B.3: Hardware specifications of the used WLAN access points

                           China
Model                      Ericsson BlipNode
Range                      100 m
Used Application Profile   PAN

Table B.4: Hardware specifications of the used PAN access point

              Fixed Host
CPU           PentiumPro 166 MHz
RAM           32 MB
HDD           2 GB
OS            Redhat Linux 7.3
Kernel        2.4.18-3
TCP Flavour   TCP Reno + TCP SACK

Table B.5: Hardware specifications of used fixed hosts

              Fixed Host
CPU           Pentium III 666 MHz
RAM           512 MB
HDD           20 GB
OS            SuSE Linux 9.0
Kernel        Linux 2.4.21-192
TCP Flavour   TCP Reno + TCP SACK

Table B.6: Hardware specifications of the proxy hosts

                  Mobile Node
CPU               Pentium 4 2.66 GHz
RAM               512 MB
HDD               40 GB
OS                Windows XP
Kernel            2.4.166
WLAN driver       supplied by vendor (2.1.1.3005)
Bluetooth stack   WidComm stack supplied by vendor, 1.4.2.10
TCP Flavour       TCP Reno + TCP SACK

Table B.7: Hardware specifications of Mobile Node 1

                  Mobile Node
CPU               Pentium 4 2.88 GHz
RAM               512 MB
HDD               40 GB
OS                Windows XP
WLAN driver       supplied by vendor (2.1.1.3005)
Bluetooth stack   WidComm stack supplied by vendor, 1.4.2.10
TCP Flavour       TCP Reno + TCP SACK

Table B.8: Hardware specifications of Mobile Node 2

               WLAN Adapter
Model          3Com 3CRWE6209B
Adapter Type   PCMCIA Type II
Standard       IEEE 802.11b
SSID           enabled
WEP Key        128 Bit

Table B.9: Hardware specifications of the 3Com WLAN adapter card

               Bluetooth Adapter
Model          Belkin FT8003
Adapter Type   USB
Range          10 m
Standard       Bluetooth v1.1
Used Profile   PAN

Table B.10: Hardware specifications of the Belkin Bluetooth adapter


Appendix C

Additional Related Measurement Results

             UDP / UDP Upstream            UDP / UDP Downstream
             MN1      MN2      MN1+MN2     MN1      MN2      MN1+MN2
Run 1        486,9    158,5    645,4       435,8    233,4    669,1
Run 2        486,9    180,2    667,0       444,8    230,5    675,3
Run 3        486,4    186,2    672,7       445,0    216,3    661,3
Run 4        484,1    169,7    653,9       432,3    233,5    665,8
Run 5        486,5    182,6    669,1       437,1    223,7    660,8
Run 6        487,5    159,8    647,2       429,7    224,6    654,3
Run 7        487,6    179,8    667,4       446,7    221,3    668,0
Run 8        487,9    179,3    667,1       427,1    252,7    679,8
Run 9        487,7    156,7    644,4       441,8    216,6    658,4
Run 10       487,1    174,6    661,7       440,9    218,0    659,0
Average      486,8    172,8    659,6       438,1    227,0    665,2
Std.Dev.     1,0      10,3     10,2        6,5      69,2     7,6

Table C.1: Throughput of two competing UDP streams over Wireless LAN (transmission throughput Λtransm. of mobile node 1 (MN1), mobile node 2 (MN2) and their sum, in kByte/s)

             3 MBit UDP / TCP Upstream     3 MBit UDP / TCP Downstream
             UDP      TCP      UDP+TCP     UDP      TCP      UDP+TCP
Run 1        157,6    454,6    612,2       118,7    537,3    655,9
Run 2        156,6    387,6    544,2       112,1    537,8    649,9
Run 3        149,5    391,2    540,8       109,5    545,8    655,4
Run 4        151,4    431,4    582,8       117,1    535,9    653,0
Run 5        159,8    475,5    635,4       112,7    531,3    644,1
Run 6        143,2    369,6    512,7       118,0    542,3    660,3
Run 7        151,2    445,0    596,3       116,3    534,7    650,9
Run 8        142,4    378,2    520,5       113,6    535,8    649,4
Run 9        145,7    422,2    567,9       114,6    528,1    642,7
Run 10       144,3    448,2    592,5       107,7    538,2    645,9
Average      150,2    420,3    570,5       114,0    536,7    650,7
Std.Dev.     6,0      34,7     40,6        3,4      4,8      5,3

Table C.2: Throughput of a TCP stream competing with a 3 MBit UDP stream over Wireless LAN (transmission throughput Λtransm. of the UDP stream, the TCP stream and their sum, in kByte/s)

             TCP / TCP Upstream            TCP / TCP Downstream
             MN1      MN2      MN1+MN2     MN1      MN2      MN1+MN2
Run 1        307,6    119,9    427,4       570,0    84,6     654,6
Run 2        344,0    124,9    468,9       574,8    106,8    681,5
Run 3        291,3    130,3    421,6       565,2    106,2    671,5
Run 4        320,4    120,5    440,9       556,0    114,1    670,1
Run 5        274,2    127,0    401,2       567,6    99,1     666,7
Run 6        303,5    139,7    443,1       570,4    99,4     669,8
Run 7        339,8    127,7    467,5       580,9    95,5     676,4
Run 8        311,1    118,7    429,8       561,2    101,9    663,1
Run 9        353,8    122,5    476,2       576,1    98,9     675,0
Run 10       366,1    127,7    493,8       574,5    102,1    676,7
Average      321,2    125,9    447,0       569,7    100,9    670,5
Std.Dev.     27,6     5,9      33,5        171,9    7,4      7,3

Table C.3: Throughput of two competing TCP streams over Wireless LAN (transmission throughput Λtransm. of mobile node 1 (MN1), mobile node 2 (MN2) and their sum, in kByte/s)

             1 MBit Burst UDP / TCP Upstream    1 MBit Burst UDP / TCP Downstream
             TCP      UDP      TCP+UDP          TCP      UDP      TCP+UDP
Run 1        518,6    95,0     613,6            607,4    184,1    791,5
Run 2        435,9    181,0    616,9            636,3    114,0    750,3
Run 3        422,7    137,6    560,2            597,6    235,9    833,5
Run 4        432,4    180,2    612,6            618,1    188,8    806,9
Run 5        412,6    152,1    564,7            609,3    191,4    800,7
Run 6        472,3    130,6    602,9            604,7    187,8    792,5
Run 7        366,3    194,9    561,1            592,4    236,5    828,9
Run 8        491,7    119,2    610,9            578,5    246,4    824,9
Run 9        401,7    220,3    622,0            632,9    142,3    775,2
Run 10       448,1    186,5    634,6            591,0    188,7    779,7
Average      440,2    159,7    599,9            606,8    191,6    798,4
Std.Dev.     42,4     37,1     79,5             17,4     39,4     25,0

Table C.4: Throughput of a TCP stream competing with a 1 MBit burst UDP stream over Wireless LAN (transmission throughput Λtransm. of the TCP stream, the UDP stream and their sum, in kByte/s)

             Upstream    Downstream
Run 1        51,4        59,3
Run 2        50,5        58,5
Run 3        51,4        58,4
Run 4        51,5        59,6
Run 5        50,8        59,1
Run 6        53,2        58,9
Run 7        51,8        59,0
Run 8        52,0        59,3
Run 9        50,7        59,0
Run 10       50,7        58,6
Average      51,4        59,0
Std.Dev.     0,8         0,4

Table C.5: Transmission throughput of a TCP stream over a Bluetooth link using an access point (Λtransm. in kByte/s)


Appendix D

Performance Test Tools

This appendix lists the evaluation tools that were used, together with their descriptions as they currently appear on their websites.

D.1 IPerf

While tools to measure network performance, such as ttcp, exist, most are very old and have confus-

ing options. Iperf was developed as a modern alternative for measuring TCP and UDP bandwidth

performance.

Iperf is a tool to measure maximum TCP bandwidth, allowing the tuning of various parameters and

UDP characteristics. Iperf reports bandwidth, delay jitter, datagram loss.

D.2 Ethereal

Ethereal is a GUI network protocol analyzer. It lets you interactively browse packet data from a

live network or from a previously saved capture file. Ethereal’s native capture file format is libpcap

format, which is also the format used by tcpdump and various other tools. In addition, Ethereal

can read capture files from snoop and atmsnoop, Shomiti/Finisar Surveyor, Novell LANalyzer, Net-

work General/Network Associates DOS-based Sniffer (compressed or uncompressed), Microsoft Net-

work Monitor, AIX’s iptrace, Cinco Networks NetXRay, Network Associates Windows-based Sniffer,

AG Group/WildPackets EtherPeek/TokenPeek/AiroPeek, RADCOM’s WAN/LAN analyzer, Lu-

cent/Ascend router debug output, HP-UX’s nettl, the dump output from Toshiba’s ISDN routers,


the output from i4btrace from the ISDN4BSD project, the output in IPLog format from the Cisco

Secure Intrusion Detection System, pppd logs (pppdump format), the output from VMS’s TCPIP-

trace/TCPtrace/UCX$TRACE utilities, the text output from the DBS Etherwatch VMS utility,

traffic capture files from Visual Networks’ Visual UpTime, the output from CoSine L2 debug, and

the output from Accellent’s 5Views LAN agents. There is no need to tell Ethereal what type of file

you are reading; it will determine the file type by itself. Ethereal is also capable of reading any of

these file formats if they are compressed using gzip. Ethereal recognizes this directly from the file;

the ’.gz’ extension is not required for this purpose.

Like other protocol analyzers, Ethereal's main window shows three views of a packet. It shows a summary line, briefly describing what the packet is. A protocol tree is shown, allowing you to drill down to the exact protocol or field that you are interested in. Finally, a hex dump shows you exactly what the packet looks like when it goes over the wire.

In addition, Ethereal has some features that make it unique. It can assemble all the packets in a TCP conversation and show you the ASCII (or EBCDIC, or hex) data in that conversation. Display filters in Ethereal are very powerful; more fields are filterable in Ethereal than in other protocol analyzers, and the syntax you can use to create your filters is richer. As Ethereal progresses, expect more and more protocol fields to be allowed in display filters.

Packet capturing is performed with the pcap library. The capture filter syntax follows the rules of the pcap library. This syntax is different from the display filter syntax.

Compressed file support uses (and therefore requires) the zlib library. If the zlib library is not present, Ethereal will compile, but will be unable to read compressed files.

The pathname of a capture file to be read can be specified with the -r option or can be specified as a command-line argument.

D.3 tcptrace

tcptrace is a tool written by Shawn Ostermann at Ohio University for the analysis of TCP dump files. It can take as input the files produced by several popular packet-capture programs, including tcpdump, snoop, etherpeek, HP Net Metrix, and WinDump. tcptrace can produce several different types of output containing information on each connection seen, such as elapsed time, bytes and segments sent and received, retransmissions, round trip times, window advertisements, throughput, and more. It can also produce a number of graphs for further analysis.

D.4 UDPBurst

UDPBurst is a self-written UDP packet generator that is compatible with IPerf. It allows sending bursty UDP streams following an ON/OFF traffic model: data is sent at a constant bandwidth during an ON period, and the sender then idles for an OFF period. The ON and OFF periods are exponentially distributed, and their mean values can be specified via command line parameters.
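The UDPBurst source is not reproduced here; the following minimal sketch illustrates the ON/OFF model just described. It is an illustration only, not the UDPBurst implementation, and the host, port, and default parameter values are placeholders.

# Minimal sketch of the ON/OFF traffic model described above; not the
# UDPBurst source. Host, port, and default parameter values are placeholders.
import random
import socket
import time

def udp_burst(host="127.0.0.1", port=5001, rate_bps=1000000,
              payload=1470, mean_on=1.0, mean_off=1.0):
    """Send UDP datagrams at a constant bandwidth during exponentially
    distributed ON periods and stay idle during exponentially distributed
    OFF periods (mean durations in seconds)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = bytes(payload)
    gap = payload * 8 / rate_bps              # inter-datagram gap in the ON phase [s]
    while True:
        on_end = time.monotonic() + random.expovariate(1.0 / mean_on)
        while time.monotonic() < on_end:      # ON period: constant bandwidth
            sock.sendto(data, (host, port))
            time.sleep(gap)
        time.sleep(random.expovariate(1.0 / mean_off))   # OFF period: idle

if __name__ == "__main__":
    udp_burst()

The mean ON and OFF durations and the ON-phase bandwidth in this sketch correspond to the command line parameters mentioned above.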

