Transactions of the Japan Society for Computational Engineering and Science – ISSN 1344-9443
Novel Model to Inculcate Proactive Behaviour in Programmable
Switches for FloodLight controlled Software Defined Network
Mohammed Asif Khan[1], Bhargavi Goswami[2], Joy Paulose[1], Libin Thomas[1]
[1] CHRIST University, Bangalore, India.
[2] Queensland University of Technology, Brisbane, Australia.
Abstract Software Defined Networks have been the subject of focus due to their applicability in fields such as
VANETs, IoT and cloud computing. After an in-depth study of the scalability of various controllers used in diverse
networking scenarios, the authors of this paper present the implementation and testing of a novel model
developed to inculcate proactive behaviour in programmable switches controlled by a software defined network
controller, using REST API, Python, Mininet, iPerf and other research tools. The authors have implemented and
demonstrated a Proactive Static Entry Pusher for Flow Tables over the FloodLight Controller in an SDN
networking environment. With this paper, an opportunity has been created for researchers and industry to
develop technology that controls the behaviour of networks through controllers when a particular type of packet is
encountered. The structure of the article will help readers clearly follow the step-by-step implementation
procedure, which can then be recreated and enhanced further for research and development or industry
solutions.
Keywords: SDN, Floodlight, Mininet, OpenFlow, Reactive entry insertion, Proactive entry insertion, Static entry
pusher, iPerf, RestAPI, gnuplot
1. Introduction
The programmable switches in SDN interact with a controller using the OpenFlow protocol. It is necessary for the
controller to obtain control over switches through flow entries. Through the OpenFlow protocol, operations such as
addition, deletion and update of flow entries can be performed in flow tables by the controller's reactive and proactive
behaviours [1]. Fig. 1 demonstrates how an OpenFlow switch processes a packet. Packets are matched against the
flow entries of a flow table using the match-column values, instructions and counters. Once a match
is found, the list of instructions stored as a set of actions is executed and the packet is forwarded to the next flow table. This
continues until the matching flow entry specifies no next table, at which point table pipeline processing stops.
This is the stage where the packet is modified or forwarded to an output port. Thus, OpenFlow packets are received
at an ingress port, processed through the OpenFlow pipeline's table entries and forwarded to an output port [2].
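The pipeline just described can be sketched in plain Python. This is an illustrative model of the match-and-forward loop only, not Floodlight or switch code, and the table contents are invented for the example.

```python
# Illustrative model (not Floodlight or switch code) of the pipeline described
# above: match the packet against the entries of each flow table, apply the
# stored action set, and continue to the next table until no "goto" is given.

def process_packet(tables, packet):
    """Walk the table pipeline from table 0; return the actions applied."""
    applied, table_id = [], 0
    while table_id is not None:
        entry = next((e for e in tables[table_id]
                      if all(packet.get(k) == v for k, v in e["match"].items())),
                     None)
        if entry is None:             # table miss: stop (drop, in this sketch)
            return applied
        entry["counters"] += 1        # per-entry counters update on each match
        applied.extend(entry["actions"])
        table_id = entry.get("goto")  # None ends pipeline processing
    return applied

# Two-table example: table 0 tags packets from port 1, table 1 forwards them.
tables = {
    0: [{"match": {"in_port": 1}, "actions": ["set_vlan"], "counters": 0, "goto": 1}],
    1: [{"match": {}, "actions": ["output:2"], "counters": 0}],
}
print(process_packet(tables, {"in_port": 1}))  # ['set_vlan', 'output:2']
```

A packet that matches no entry in table 0 simply exits the pipeline with no actions applied, mirroring a table miss.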
The actions performed while processing a flow entry have to be controlled by the controller, where the type of action
the switches take is in accordance with the controller's specification. This can be done with two methods, reactive
and proactive [3] [4]. The purpose of this research is to impose proactive management of flow entries in
SDN OpenFlow-enabled switches by making static flow entries in the flow tables and specifying the set of actions.
Based on the previous work by the researchers, the choice of controllers and platforms includes Ryu
[5], Onos [6], OpenDayLight [7], Beacon [8], Floodlight [9] and cross-platform hybrid SDNs [10]. In this
experiment, however, our research is confined to the Floodlight Controller [11]. Since its inception, Floodlight has been able
to install flow table entries in the switches in a reactive fashion [12]. What is implemented here is proactive
behaviour of the switches using static flow entry modelling in the flow table entries via the REST API. This research is an
enhancement of a detailed literature review and multiple prior successful experiments which have been
conducted before coming up with this model as a profound solution [13][14][15]. With this research implementation, the
authors have opened the door to permitting users to customize network behaviour through
programming and to insert flow entries into an OpenFlow network so as to enjoy proactive network behaviour.
Fig. 1. The flowchart of the packet passing through OpenFlow enabled Switch.
Problem Statement: The aim of this research is to develop a model that represents a proactive method of inserting
flows via the OpenFlow protocol of Software Defined Networks using the REST API; to implement, experiment with, and perform testing
and performance-parameter analysis of a small-to-medium-scale SDN network in a simulation environment based on
the obtained results; and further, to investigate and compare the behavioural changes of Software Defined Networks before
and after the static entry pusher.
Objectives: The main objective of this research is to develop a model that represents the steps for implementing proactive
flow entry pusher functionality. The experiment aims to test the developed model on an SDN network using the Floodlight
Controller, which is exposed via a REST API, by pushing the flow entries between the configured switches and also
by using the ovs-ofctl command line tool, thereby testing predetermined parameters in the simulation
environment. Further analysis is carried out and a comparison is made between the results obtained before and after
the implementation of static entry pusher functionality on SDN networks with the FloodLight controller.
The paper is arranged in the following sections. Section 2 specifies the experimental tools used. Section 3 discusses
the methodology of implementation, which includes the experimental issues faced during the research and their resolution,
performing first reactive, then proactive entry insertion, followed by traffic generation and capturing event logs into
result files. Section 4 specifies how the data source is clustered as per the parameters taken under consideration for
performance evaluation. Section 5 is data interpretation. Section 6 presents the conclusion and future scope, followed by
references.
2. Equipment and Tools:
The platform tools and configuration are provided along with their versions inside Table 1.
Table 1. Tools and Configuration for the platform used during Experiment
Tools & Platform Configuration
Oracle VM Virtual Box Manager [16] 5.2.24 r128163 (Qt5.6.2)
Ubuntu [17] 14.04 64-bit
Mininet [18] 2.2.1
Iperf [19] 2.0.13
Gnuplot [20] 4.6
Openflow [21] [22] 1.4
Processor 2 CPUs
Base Memory 2.5 GB
Floodlight [23] [24] Master version
Python [25] Ver. 3.7
Xterm [26] XFree86 3.1.2B
REST API N.A
The system specifications for the experiment are specified in the above table. The Virtual Machine [16] platform was
created with 2.5 GB of base memory and 2 CPUs, with Ubuntu [17] as the operating system. Among the research tools
installed are Mininet [18], iPerf [19], Gnuplot [20], OpenFlow [22] and the Floodlight Controller [23].
3. Methodology:
The phases of the entire research are distributed as depicted in Fig. 2. One can observe that it starts with modelling,
followed by the design of the implementation. After removal of the reactive behaviour on the Floodlight Controller,
the proactive behaviour is implemented. Once the model is implemented, the testing environment is developed
with topology implementation, traffic generation and logging, followed by rigorous testing. The last stage is result analysis.
Fig. 2 Phases of Implementation of Proactive Static Entry Pusher for Flow Entries.
3.1. Experimental challenges and solutions:
The issues faced during the experiment, and how they were resolved, are described below.
(a) Iperf version issue:
The SDN Hub image ships with only a specific version of iperf, i.e. iperf-2.0.5. This version supports only very basic
commands and options, whereas later versions of the iperf-2 series come with more options for calculating
different network parameters. One cannot simply update the present version of iperf from the terminal; it is necessary
to download the tar.gz file of the specific version from the official website, in this case iperf-2.0.13, and install it
after extracting the downloaded file.
(b) Why should one have this version?
The man page of iperf-2.0.13 lists more new and different options for capturing events based on
different network parameters compared to earlier versions. The reason this version of iperf is needed is that the
experiment measures network performance based on congestion window, round trip time and latency. This
version of iperf offers the option '-e', which generates and displays enhanced output in reports for both TCP
and UDP. One can see this option being used in the later part of this experiment, when event capturing starts
between clients and servers.
(c) Other issues that can be faced while using iperf:
After simulating the topology with Mininet, the Mininet CLI is visible. The next step is to create clients and servers using
xterm commands. When the xterm command is executed for specific hosts, respective consoles are opened
for those hosts. The Linux operating system is usually operated in a bash shell in root mode inside these consoles. One
might nevertheless encounter '-e' as an invalid option for the iperf commands given. To resolve this, execute the
command sudo su, which gives root access, and then execute the iperf commands with the '-e' option.
(d) REST(Representational State Transfer) API
In the later part of the experiment, while executing the staticentry.py script, we tried inserting flow entries between
the switches through the same script. However, the REST API does not work at this level, as flow entries
were eventually only added to those switches which had direct links to the hosts in the network. The ovs-ofctl command
line tool was therefore used to insert flow entries at the switch level to create communication. These are
general OpenFlow commands and are executed in the Mininet CLI.
3.2. Performing Reactive Entry Insertion
Reactive entry insertion is one of the default properties of the Floodlight controller. In this type of insertion, whenever
a new packet arrives at an OpenFlow switch without any matching flow in a flow table, the packet is sent to the
controller for evaluation; the controller then adds the appropriate entries into the flow table and allows the
switches to continue with their forwarding [28]. In order to perform reactive entry insertion on a network, a fat-tree
topology is used, as this is the best network scenario for efficient communication [29] [30]. The data links in the higher
hierarchy are thicker than the data links at the lower hierarchy, which allows more efficient and
technology-specific use.
Fig. 3 Network topology developed using the Floodlight Controller. The blue-coloured switched network is centrally controlled by the
Floodlight Controller and connected to 8 hosts representing the set of end hosts.
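The fat-tree idea, with thicker aggregated links higher in the hierarchy, can be sketched abstractly as follows. The level sizes, node names and bandwidth values are illustrative assumptions, not the exact Fig. 3 layout.

```python
# Hypothetical sketch of the fat-tree described above: a three-level tree in
# which each link one level up carries the aggregated bandwidth of the links
# below it ("thicker" links in the higher hierarchy). Counts and bandwidths
# here are illustrative, not the exact Fig. 3 numbering.

def build_fat_tree(hosts=8, host_bw=10):
    links = []  # (lower_node, upper_node, bandwidth_in_Mbits)
    edges = hosts // 2
    for h in range(hosts):                 # host -> edge-switch links
        links.append((f"h{h+1}", f"edge{h//2+1}", host_bw))
    for e in range(edges):                 # edge -> aggregation links (2x)
        links.append((f"edge{e+1}", f"agg{e//2+1}", 2 * host_bw))
    for a in range(edges // 2):            # aggregation -> core links (4x)
        links.append((f"agg{a+1}", "core1", 4 * host_bw))
    return links

links = build_fat_tree()
```

The doubling per level is what makes upstream links "thicker": a core link here carries four host links' worth of bandwidth.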
Step 1: In a new terminal, start the Floodlight controller from the folder where the
floodlight jar is situated, using the command: java -jar target/floodlight.jar. This starts the
Floodlight controller; in order to avoid any type of packet loss, it is necessary to observe in the terminal logs that LLDP
packets are being sent from the enabled ports.
Fig. 4 Python script for the network topology
Step 2: With the controller up and running, the fat-tree topology Python script shown in Fig. 4 is
executed to create the network scenario shown in Fig. 3. The script is run against the Floodlight
environment, which is exposed via its port number and IP address. The command used is:
sudo mn --custom topology.py --topo mytopo --controller=remote,ip=127.0.0.1,port=6653. The connectivity and
reachability of the hosts within the network can be tested using the command: pingall.
Fig. 5 Real-time topology view of the Floodlight Controller.
Step 3: The topology script executed in Step 2 leads to the real-time network creation shown in Fig. 5. This is
the network scenario as perceived by the Floodlight controller. Now, select hosts and make them client and
server using xterm, in order to check the communication between the selected hosts.
Step 4: In this experiment, our research is confined to defining only two clients and their respective two
servers. The command to do so is: xterm h1 h2 h7 h8. A terminal is opened for each of these hosts. In order
to check the configuration details, type the command: ifconfig.
Step 5: The traffic generated between the client and server is recorded using the iperf tool.
(a) Between Host-1 and Host-7: h7 is made the server '-s', listening for client requests on port '-p' 6653 and
capturing the flow of UDP packets via the '-u' option. The events occurring between the client and server are
captured in a file together with the enhanced '-e' reports. Inside the h7 console, this command
is executed: iperf -s -p 6653 -e -u -i 1 > B-H1. '-i' sets the interval in seconds between periodic bandwidth
reports; 'B-H1' is the name of the file. When the above command is successfully executed, the server, whose IP address
is 10.0.0.7, starts listening on port 6653 for the client '-c' request, where the target bandwidth '-b' is set to 10 Mbits/sec. Now,
inside the h1 console, the command executed is: iperf -c 10.0.0.7 -p 6653 -e -u -b 10m -t 100, where 100 is the time in seconds.
(b) Between Host-2 and Host-8: In order to avoid server conflicts in the next step, the h7 server is stopped. h8 is made
the server '-s', listening for client requests on port '-p' 6653 and capturing the flow of UDP
packets via the '-u' option. Inside the h8 console, this command is executed: iperf -s -p 6653 -e -u -i
1 > B-H2, where 'B-H2' is the name of the file. When the command is executed, the server, whose IP address is 10.0.0.8,
starts listening on port 6653 for the client '-c' request with the target bandwidth '-b' set to 10 Mbits/sec. Now,
inside the h2 console, the command executed is: iperf -c 10.0.0.8 -p 6653 -e -u -b 10m -t 100.
Step 6: The output files generated in the previous step are filtered according to the jitter parameter needed
for this research. a) Filtering H1-to-H7 traffic: cat B-H1 | head -106 | grep sec | awk '{print $3,$9}' > jitter-h1,
where 'jitter-h1' is the filtered result file. b) Filtering H2-to-H8 traffic: cat B-H2 | head -106 | grep sec |
awk '{print $3,$9}' > jitter-h2, where 'jitter-h2' is the filtered result file.
The contents of the above files can be checked using the commands: more jitter-h1 and more jitter-h2. These final
output files will be used later to generate graphs on the selected parameters. The output files generated for this part
of the experiment are shown in Fig. 6.
Fig. 6 Left: A-H1, containing the information about the events that occurred between host 1 and host 7. Right: jitterA-H1, the final output
file for measuring jitter.
As shown in Fig. 6, the result file contains the different log events that have been captured over the course of the time intervals
for UDP transmission between the hosts. Jitter is chosen as the parameter for measuring network performance,
and only that column is filtered out, along with the time interval, from the Fig. 6 (left) log file to obtain the
filtered result file shown in Fig. 6 (right).
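The head/grep/awk pipeline above can equivalently be expressed in Python, which makes the field selection explicit. The sample line below is a made-up iperf-style report row, and the exact column positions depend on the iperf version used.

```python
# A Python equivalent of the shell pipeline used above:
#   cat B-H1 | head -106 | grep sec | awk '{print $3,$9}'
# Keep the first 106 lines, select report lines containing "sec", and emit
# fields 3 and 9 (interval and jitter in the enhanced report; column
# positions vary with the iperf version).

def filter_jitter(lines, keep=106, f1=3, f2=9):
    out = []
    for line in lines[:keep]:
        if "sec" in line:
            fields = line.split()
            if len(fields) >= max(f1, f2):
                out.append(f"{fields[f1-1]} {fields[f2-1]}")
    return out

# Invented sample row in the general shape of an iperf UDP server report.
sample = ["[  3] 0.0-1.0 sec  1.25 MBytes  10.5 Mbits/sec   0.012 ms  0/892 (0%)"]
print(filter_jitter(sample))
```

Passing different field indices (e.g. f2=14) reproduces the latency filtering used later in the paper.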
3.3. Implementing proactive entry insertion
The Floodlight controller provides the ability for the user to manually insert flows into an OpenFlow network,
exposed via a REST API. This insertion module, used for proactive entry insertion, is called the Static Entry Pusher.
Entries can be inserted proactively by the controller into switches before the packets arrive. In this case, a packet is
never sent to the controller for evaluation, since it matches the proactively inserted flows. To use static entries
exclusively in this experiment, a few changes are made to the default properties file of the Floodlight controller,
as shown in Fig. 7.
Fig. 7 Floodlightdefault.properties file
Step 1: Open the file named 'floodlightdefault.properties', found at
/home/ubuntu/floodlight/src/main/resources/floodlightdefault.properties. Inside the file, remove the
'net.floodlightcontroller.forwarding.Forwarding' property, which is the default forwarding property for reactive entry
insertion, at line number 12. Save the file after the changes.
Step 2: In a new terminal, again start the Floodlight controller from the folder
where the floodlight jar is situated, using the command: java -jar target/floodlight.jar.
When the controller starts running, the first thing it does is load all the modules/properties available to it,
one such file being 'floodlightdefault.properties'. While the above command executes, one can see this
file being read in the terminal log. Once the change has been made to this file, the controller no longer knows
how to carry out the packet flows in the network, so none of the hosts will be able to communicate with any other
host.
Step 3: The Python script is executed against the Floodlight environment, which is exposed via
its port number and IP address. The command used is: sudo mn --custom topology.py --topo mytopo
--controller=remote,ip=127.0.0.1,port=6653. The connectivity and reachability of the hosts within the network can be
tested using the command: pingall. When pingall is executed, the result shows that none of the hosts
are reachable or able to communicate with each other. This is because the default forwarding property of the
controller has been removed: the controller has an overview of the network but is unaware of how to forward the
incoming packets within it.
Step 4: Flows are inserted on ports 1 and 2 of switch-1 and switch-4, which are the ports used to connect directly
to the end hosts. This is done by executing a Python script called staticentry.py, which has the flow
rules written in JSON string format for the above switches; the logic is demonstrated in the Fig. 8 code snippet. The
IP address of the system on which the network is being run needs to be specified; this in turn enables the staticentrypusher
module for statistics collection by determining the local server. The command to execute this is: python staticentry.py.
As shown in Fig. 8, to insert flows on a particular switch, appropriate properties such as switch, in_port, active and actions
are selected. These properties are written according to the network requirements inside the variables flow1,
flow2, flow3 and flow4. For example, in flow1 we have selected switch-1, whose DPID is written in the switch field. A name
is given to the flow, i.e. flow_mod_1, and this name must be globally unique. The cookie can be hexadecimal, with a leading
0x, or decimal. Priority is set to the default value. in_port is the packet's ingress port and is either 1 or 2. The active
property is set to true, to show that the particular switch port has been activated. The action field is used in place of
instruction_apply_actions for OpenFlow 1.1+ and is used here to flood packets on that port. All the other
flow rules are written in the same way. These flows are treated as data in the script; the data is sent to the set function,
which sends an HTTP POST to the controller. The IP address provided and the URL given in the path variable
(which also enables the staticentrypusher module) determine where the flows are dumped. The data object is serialized
to a JSON-formatted string using the dumps() method, and an HTTPConnection instance represents one transaction with
an HTTP server.
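Since the Fig. 8 script is reproduced only as an image, the following is a condensed sketch of the kind of logic such a script uses. The DPID, controller address, port and endpoint path below are placeholders for this sketch, not values taken from the paper.

```python
# Condensed sketch of a static-entry push: build the flow rule as a dict,
# serialize it with json.dumps(), and POST it to the controller's static
# entry pusher REST endpoint. DPID, address, port and path are placeholders.
import json
import http.client

flow1 = {
    "switch": "00:00:00:00:00:00:00:01",  # DPID of switch-1 (placeholder)
    "name": "flow_mod_1",                 # must be globally unique
    "cookie": "0",                        # hex with leading 0x, or decimal
    "priority": "32768",                  # default priority
    "in_port": "1",                       # the packet's ingress port
    "active": "true",                     # mark the entry active
    "actions": "output=flood",            # flood packets on match
}

def push(flow, host="127.0.0.1", port=8080, path="/wm/staticentrypusher/json"):
    """POST one flow to the controller; one HTTPConnection instance
    represents one transaction with the HTTP server."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("POST", path, json.dumps(flow),
                 {"Content-Type": "application/json"})
    status = conn.getresponse().status
    conn.close()
    return status

payload = json.dumps(flow1)  # the JSON string actually sent on the wire
```

GET and DELETE requests against the same endpoint would retrieve and clear entries, matching the description below.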
Fig. 8 Code snippet for inserting flows
Further, in the same way, one can use the GET and DELETE functions to retrieve entries from a
switch flow table and to clear entries from a switch flow table, respectively.
Step 5: After successful execution of the staticentry.py script, flow entries are inserted on the switch-1
and switch-4 ports, flooding packets out of all ports except the ingress port and those disabled for flooding. Four messages
will be displayed on the terminal confirming that each entry has been pushed.
Step 6: In the Mininet CLI, type: pingall. Now host-1 is reachable only to host-2, and host-7 only
to host-8; host-1 is still not reachable to host-7, nor host-2 to host-8, because the flow
entries have only been made at switch-1 and switch-4.
Step 7: The flows between the switches are yet to be inserted for communication between the desired hosts. This
is done by adding flows to the switch tables, selecting the respective ports of the switches. The switches and
ports chosen for this communication are: switch-18 ports 1 and 2, switch-10 ports 1 and 3, switch-1 port 4,
switch-22 ports 1 and 3, and switch-4 port 3. To add flow entries through these switches, a series of commands is
executed in the Mininet CLI:
(a) sh ovs-ofctl add-flow s18 in_port=1,action=flood
(b) sh ovs-ofctl add-flow s18 in_port=2,action=flood
(c) sh ovs-ofctl add-flow s10 in_port=1,action=flood
(d) sh ovs-ofctl add-flow s10 in_port=3,action=flood
(e) sh ovs-ofctl add-flow s1 in_port=4,action=flood
(f) sh ovs-ofctl add-flow s22 in_port=1,action=flood
(g) sh ovs-ofctl add-flow s22 in_port=3,action=flood
(h) sh ovs-ofctl add-flow s4 in_port=3,action=flood
sh executes the command read from the terminal, where ovs-ofctl is a command line tool for monitoring and administering
OpenFlow switches. Each of the above commands adds a flow to the table of switch <number> at in_port <number>, whose
action is to flood the packets out of every port except the ingress port and those disabled for flooding. This is how OpenFlow
applies instructions for actions in the switch flow table.
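The eight commands above all follow a single pattern, so they can be generated from the switch-to-ports mapping chosen for the communication path. This is a small convenience sketch, not part of the original experiment.

```python
# Generate the ovs-ofctl add-flow commands above from the (switch, ports)
# mapping chosen in this step; each flow floods packets arriving on in_port.

switch_ports = {"s18": [1, 2], "s10": [1, 3], "s1": [4], "s22": [1, 3], "s4": [3]}

def flood_commands(switch_ports):
    return [f"sh ovs-ofctl add-flow {sw} in_port={p},action=flood"
            for sw, ports in switch_ports.items() for p in ports]

for cmd in flood_commands(switch_ports):
    print(cmd)
```

The generated strings can then be pasted into (or fed to) the Mininet CLI exactly as listed above.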
Step 8: In order to show the current state of a switch, including features, configuration and table entries,
execute the command: sh ovs-ofctl dump-flows s18. In the same way, any switch number selected for the
experiment can be given in the command to display all the flow entries at that switch.
Step 9: This step verifies whether the selected hosts can reach each other, using the Mininet CLI
command: pingall. The result shows that Host-1, Host-2, Host-7 and Host-8 are now all mutually reachable.
3.4. Traffic Generation and obtaining event logs in result files
Step 1: In this experiment, our research is confined to defining only two clients and their respective two
servers. The command to do so is: xterm h1 h2 h7 h8. A terminal is opened for each of these hosts. In order
to check the configuration details, type the command: ifconfig.
Step 2: The traffic generated between the client and server is recorded using the iperf tool.
(a) Between Host-1 and Host-7: h7 is made the server '-s', listening for client requests on port '-p' 6653 and
capturing the flow of UDP packets via the '-u' option. The events occurring between the client and server are
captured in a file together with the enhanced '-e' reports. Inside the h7 console, this command
is executed: iperf -s -p 6653 -e -u -i 1 > A-H1. '-i' sets the interval in seconds between periodic bandwidth
reports; 'A-H1' is the name of the file. When the above command is successfully executed, the server, whose IP address
is 10.0.0.7, starts listening on port 6653 for the client '-c' request, where the target bandwidth '-b' is set to 10 Mbits/sec. Now,
inside the h1 console, the command executed is: iperf -c 10.0.0.7 -p 6653 -e -u -b 10m -t 100, where 100 is the time in seconds.
(b) Between Host-2 and Host-8: In order to avoid server conflicts in the next step, the h7 server is stopped. h8 is made
the server '-s', listening for client requests on port '-p' 6653 and capturing the flow of UDP
packets via the '-u' option. Inside the h8 console, this command is executed: iperf -s -p 6653 -e -u -i
1 > A-H2, where 'A-H2' is the name of the file. When the command is executed, the server, whose IP address is 10.0.0.8,
starts listening on port 6653 for the client '-c' request with the target bandwidth '-b' set to 10 Mbits/sec. Now,
inside the h2 console, the command executed is: iperf -c 10.0.0.8 -p 6653 -e -u -b 10m -t 100.
Step 3: The output files generated in the previous step are filtered according to the jitter parameter needed
for this research. a) Filtering H1-to-H7 traffic: cat A-H1 | head -106 | grep sec | awk '{print $3,$9}' > jitterA-h1,
where 'jitterA-h1' is the filtered result file. b) Filtering H2-to-H8 traffic: cat A-H2 | head -106 | grep sec |
awk '{print $3,$9}' > jitterA-h2, where 'jitterA-h2' is the filtered result file.
The contents of the above files can be checked using the commands: more jitterA-h1 and more jitterA-h2. These final
output files will be used later to generate graphs on the selected parameters.
4. Data Source
Fig. 9 Code snippet for generating Gnuplot graphs.
The filtered results obtained in the sections above, before and after the static entry pusher module, are used
to generate graphs. These graphs are generated using Gnuplot once the simulation data is filtered to the specific
parameters required by the network criteria. Fig. 9 shows the gnuplot file that is used to plot the graphs.
The script in Fig. 9 starts by creating PNG images using libgd, with additional support for viewing on various software
platforms. The PNGs generated can be conveniently viewed by directing output to the 'result.png' file. The X-axis is
defined for the graph, representing time displayed in seconds, and a label is given for it. Using
autoscale, gnuplot adjusts the maximum and minimum of the axis. The Y-axis of the graph is then defined, and the
format in which the values are displayed, such as the number of decimal places, is selected; its range
is set based on the values obtained in the filtered result files. Then the title of the graph is set, along with the grid,
which allows grid lines to be drawn on the plot. Set style sets the display of the data; in this case lines
are drawn in the linespoints plotting style, where points are joined by lines. Finally, using plot, the result files
to be read are loaded and the graphs are generated from them.
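Because the Fig. 9 script follows a fixed template, a variant per parameter can be generated programmatically. This is a convenience sketch; the file names, labels and titles below are placeholders, not the exact Fig. 9 contents.

```python
# Assemble a gnuplot script of the kind shown in Fig. 9, one per parameter;
# file names, labels and titles here are placeholders for illustration.

def make_plot_script(datafile, out_png, ylabel, title):
    return "\n".join([
        "set terminal png",                 # render to PNG via libgd
        f"set output '{out_png}'",
        "set xlabel 'Time (seconds)'",
        "set autoscale",                    # gnuplot picks the axis extrema
        f"set ylabel '{ylabel}'",
        "set format y '%.3f'",              # three decimal places on Y
        f"set title '{title}'",
        "set grid",                         # draw grid lines on the plot
        "set style data linespoints",       # points joined by lines
        f"plot '{datafile}' using 1:2 with linespoints title '{title}'",
    ])

script = make_plot_script("jitter-h1", "result.png", "Jitter (ms)",
                          "Jitter before Static Entry Pusher")
```

Writing `script` to a .plt file and running `gnuplot` on it would reproduce the manual workflow described in the following subsections.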
4.1. Jitter
The first network parameter considered was jitter, which is the variation in the delay of packets flowing from one host
to another in a network. In this experiment, static entries for the switches are performed, so it
is necessary to know the varying delays that occur on the path from Host-1 to Host-7 and from Host-2 to
Host-8 once proactive insertion is performed. This is then compared with the reactive entries made by the controller.
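As a concrete illustration of what the jitter column measures, the following computes a simple jitter figure as the mean absolute difference between successive packet delays. iperf's UDP jitter is in fact an RFC 3550-style smoothed estimate, and the delay values here are invented for the example.

```python
# Simple illustration of delay variation: mean absolute difference between
# successive per-packet delays (iperf itself uses an RFC 3550-style smoothed
# estimator; the millisecond values below are invented).

def jitter(delays_ms):
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

print(round(jitter([10.0, 10.4, 10.1, 10.9]), 3))  # 0.5
```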
(a) To perform the analysis through graphs, the result files generated in sub-sections 3.2 and 3.4 are considered.
These files are jitter-h1, jitter-h2, jitterA-h1 and jitterA-h2, generated before and after the Static Entry Pusher
respectively.
(b) With respect to Fig. 9, the scripts for generating the graphs are written. This is done for the hosts Host-1->Host-7
and Host-2->Host-8, with the script names plot_jit.plt and plot_jit1.plt respectively.
(c) The script files from the above step are executed using the commands: gnuplot plot_jit.plt and gnuplot plot_jit1.plt.
The respective graphs are generated as PNG files and are presented in the following section.
4.2. Latency
It is necessary to know the responsiveness of the network between the selected hosts, Host-1 to Host-7 and Host-2 to
Host-8. It is thus required to understand how quickly the data travels between these hosts before and after the static
entry pusher, observed from the moment of execution and continued for 100 seconds. The commands used are the
same ones used to generate the files A-H1, A-H2, B-H1 and B-H2 in the previous sections. In Fig. 6 there is a column
for latency, which displays the latency value for each time interval; while filtering the results for this parameter, the
latency column is chosen. The steps to be followed therefore start from the filtering steps, as follows:
Step 1: The output files generated previously are filtered according to the latency parameter needed for this research.
a) Filtering H1-to-H7 traffic: cat B-H1 | head -106 | grep sec | awk '{print $3,$14}' > Lat-h1, where 'Lat-h1' is
the filtered result file. b) Filtering H2-to-H8 traffic: cat B-H2 | head -106 | grep sec | awk '{print $3,$14}' > Lat-h2,
where 'Lat-h2' is the filtered result file.
Step 2: The same filtering is applied to the files obtained after static entry insertion. a) Filtering H1-to-H7 traffic:
cat A-H1 | head -106 | grep sec | awk '{print $3,$14}' > LatA-h1, where 'LatA-h1' is the filtered result file.
b) Filtering H2-to-H8 traffic: cat A-H2 | head -106 | grep sec | awk '{print $3,$14}' > LatA-h2, where 'LatA-h2'
is the filtered result file.
The contents of the above files can be checked using the commands: more LatA-h1, more LatA-h2, more Lat-h1
and more Lat-h2.
Step 3: To perform the analysis through graphs, the result files generated in the above steps are
considered. These files are Lat-h1, Lat-h2, LatA-h1 and LatA-h2, generated before and after the Static Entry Pusher
respectively.
Step 4: With respect to Fig. 9, the scripts for generating the graphs are written. This is done for the hosts
Host-1->Host-7 and Host-2->Host-8, with the script names plot_lat.plt and plot_lat1.plt respectively.
Step 5: The script files from the above step are executed using the commands: gnuplot plot_lat.plt and gnuplot
plot_lat1.plt. The respective graphs are generated as PNG files and are presented in the following section.
4.3. Congestion Window
While performing static entries at the switches for the hosts to communicate, it is necessary to understand how
congested the link is. A comparison before and after the static entry pusher is therefore made for the
hosts. In this part of the experiment, unlike earlier, where the capture file name was given on the server side, the
events are captured on the client side by providing the file name there, since we want to find out how the TCP state
variable limits the amount of data that TCP can send into the network before receiving an acknowledgement (ACK)
from the receiver side.
Step 1: In this experiment, our research is confined to defining only two clients and their respective two servers. The command to do so is: xterm h1 h2 h7 h8. A terminal window opens for each of these hosts. To check the configuration details, type the command: ifconfig.
Step 2: The traffic generated between client and server is recorded using the iperf tool.
(a) Between Host-1 and Host-7: h7 is made the server with '-s', listening for client requests on port ('-p') 6653, with the buffer length ('-l') set to 8 KBytes. The flow of events between client and server is captured in a file together with the enhanced ('-e') reports. Inside the h7 console, the command executed is: iperf -s -p 6653 -e -i 1 -l 8K > BCR-H1, where '-i' sets the interval in seconds between periodic bandwidth reports and 'BCR-H1' is the output file. Once this command is running, the server with IP address 10.0.0.7 listens on port 6653 for the client ('-c'). Inside the h1 console, the command executed is: iperf -c 10.0.0.7 -p 6653 -e -i 1 -t 100, where 100 is the duration in seconds.
(b) Between Host-2 and Host-8: To avoid server conflicts, the h7 server is first stopped. h8 is then made the server with '-s', listening for client requests on port ('-p') 6653, again with the buffer length ('-l') set to 8 KBytes and enhanced ('-e') reports captured to a file. Inside the h8 console, the command executed is: iperf -s -p 6653 -e -i 1 -l 8K > BCR-H2, where '-i' sets the reporting interval and 'BCR-H2' is the output file. Once this command is running, the server with IP address 10.0.0.8 listens on port 6653 for the client ('-c'). Inside the h2 console, the command executed is: iperf -c 10.0.0.8 -p 6653 -e -i 1 -t 100, where 100 is the duration in seconds.
Step 3: The traffic between client and server is also recorded while the Static Entry Pusher is in effect, again using the iperf tool. The first 10 steps of sub-section 3.3 are executed, after which the experiment proceeds with the next step.
Step 4: (a) Between Host-1 and Host-7: As before, h7 is made the server with '-s', listening for client requests on port ('-p') 6653, with the buffer length ('-l') set to 8 KBytes and enhanced ('-e') reports captured to a file. Inside the h7 console, the command executed is: iperf -s -p 6653 -e -i 1 -l 8K > ACR-H1, where '-i' sets the reporting interval and 'ACR-H1' is the output file. Once this command is running, the server with IP address 10.0.0.7 listens on port 6653 for the client ('-c'). Inside the h1 console, the command executed is: iperf -c 10.0.0.7 -p 6653 -e -i 1 -t 100, where 100 is the duration in seconds.
(b) Between Host-2 and Host-8: To avoid server conflicts, the h7 server is first stopped. h8 is then made the server with '-s', listening for client requests on port ('-p') 6653, again with the buffer length ('-l') set to 8 KBytes and enhanced ('-e') reports captured to a file. Inside the h8 console, the command executed is: iperf -s -p 6653 -e -i 1 -l 8K > ACR-H2, where '-i' sets the reporting interval and 'ACR-H2' is the output file. Once this command is running, the server with IP address 10.0.0.8 listens on port 6653 for the client ('-c'). Inside the h2 console, the command executed is: iperf -c 10.0.0.8 -p 6653 -e -i 1 -t 100, where 100 is the duration in seconds.
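The server/client pairing used in Steps 2 and 4 can be collected into small helper scripts so the before/after runs stay consistent. The block below only writes the commands out; the script names (run_cwnd_server.sh, run_cwnd_client.sh) are assumptions, and iperf itself would still be run inside the Mininet xterms as described above:

```shell
# h7-side command: pass BCR-H1 (before) or ACR-H1 (after) as the output file.
cat > run_cwnd_server.sh <<'EOF'
#!/bin/sh
OUT=${1:-BCR-H1}
iperf -s -p 6653 -e -i 1 -l 8K > "$OUT"
EOF
# h1-side command: same for both the before and after runs.
cat > run_cwnd_client.sh <<'EOF'
#!/bin/sh
iperf -c 10.0.0.7 -p 6653 -e -i 1 -t 100
EOF
chmod +x run_cwnd_server.sh run_cwnd_client.sh
```

Keeping both invocations identical except for the output file name helps ensure the before/after comparison measures only the effect of the Static Entry Pusher.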
Step 5: The output files generated from Step 2 are used to obtain the results filtered according to the congestion window parameter needed for this research. a) Filtering H1 to H7 traffic: cat BCR-H1 | head -106 | grep sec | awk '{print $3,$12}' > Cwnd-h1. 'Cwnd-h1' is the filtered result file. b) Filtering H2 to H8 traffic: cat BCR-H2 | head -106 | grep sec | awk '{print $3,$12}' > Cwnd-h2. 'Cwnd-h2' is the filtered result file.
Step 6: The output files generated from Step 4 are used to obtain the results filtered in the same way. a) Filtering H1 to H7 traffic: cat ACR-H1 | head -106 | grep sec | awk '{print $3,$12}' > CwndA-h1. 'CwndA-h1' is the filtered result file. b) Filtering H2 to H8 traffic: cat ACR-H2 | head -106 | grep sec | awk '{print $3,$12}' > CwndA-h2. 'CwndA-h2' is the filtered result file.
The contents of the above files can be checked using the commands more CwndA-h1 and more CwndA-h2, and likewise more Cwnd-h1 and more Cwnd-h2.
Step 7: For the graph-based analysis, the resultant files generated in Steps 5 and 6 are considered: Cwnd-h1 and Cwnd-h2 (before the Static Entry Pusher) and CwndA-h1 and CwndA-h2 (after the Static Entry Pusher).
The following are the output files which were generated for this part of the experiment.
Fig 10. Left: BCR-H1 containing the information about the events that occurred between host 1 and host 7. Right: Cwnd-h1 containing the filtered results for the congestion window from BCR-H1.
As seen in Fig. 10, the 12th column contains information for both the Congestion Window (CWND) and the Round Trip Time (RTT). These metrics are used separately and graphs are plotted for each. The filtered results file contains output like that shown on the right of Fig. 10.
Step 8: With respect to Fig. 10, the scripts for generating the graphs are written for the host pairs Host-1->Host-7 and Host-2->Host-8; the script names are plot_cw.plt and plot_cw1.plt respectively.
Step 9: The script files from the above step are executed using the command: gnuplot plot_cw.plt and gnuplot
plot_cw1.plt. The respective graphs are generated in the form of png files and are presented in the following section.
4.4 Round Trip Time
It is an important metric to determine the health of a connection in the network, the time a request traverses from
source to destination and back to source. So it is necessary to know the time delay before static entry pusher and after
static entry pusher is performed for the hosts.
The commands used here are the same ones used to generate the files ACR-H1, ACR-H2, BCR-H1 and BCR-H2 in the congestion window part. As shown in Fig. 10, one column, labelled cwnd/rtt, displays both values for each time interval separated by a forward-slash delimiter. When filtering the results for this parameter, the cwnd values in the 12th column (the values before the forward slash) are discarded and the RTT values retained. The steps followed for filtering are below.
Step 1: The output files mentioned above are used to obtain the results filtered according to the round trip time parameter needed for this research. a) Filtering H1 to H7 traffic: cat BCR-H1 | head -106 | grep sec | awk '{print $3,$12}' > RTT-h1. 'RTT-h1' is the filtered result file. b) Filtering H2 to H8 traffic: cat BCR-H2 | head -106 | grep sec | awk '{print $3,$12}' > RTT-h2. 'RTT-h2' is the filtered result file.
Step 2: Likewise for the files recorded after the Static Entry Pusher: a) Filtering H1 to H7 traffic: cat ACR-H1 | head -106 | grep sec | awk '{print $3,$12}' > RTTA-h1. 'RTTA-h1' is the filtered result file. b) Filtering H2 to H8 traffic: cat ACR-H2 | head -106 | grep sec | awk '{print $3,$12}' > RTTA-h2. 'RTTA-h2' is the filtered result file.
The contents of the above files can be checked using the commands more RTTA-h1 and more RTTA-h2, and likewise more RTT-h1 and more RTT-h2.
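The paper does not show the command that strips the cwnd half of the cwnd/rtt column. One possible way, sketched on synthetic rows (the field layout and the 45K/12000-style values are assumptions), is to split the 12th field on the slash in awk:

```shell
# Synthetic per-interval rows with an assumed cwnd/rtt pair in field 12.
cat > BCR-H1 <<'EOF'
[  3]  0.0-1.0 sec f5 f6 f7 f8 f9 f10 f11 45K/12000 f13
[  3]  1.0-2.0 sec f5 f6 f7 f8 f9 f10 f11 46K/11800 f13
EOF
# split() cuts field 12 at "/"; a[1] is the cwnd part, a[2] is the rtt part.
cat BCR-H1 | head -106 | grep sec \
  | awk '{split($12, a, "/"); print $3, a[2]}' > RTT-h1
cat RTT-h1
# prints: 0.0-1.0 12000
#         1.0-2.0 11800
```

Printing a[1] instead of a[2] would recover the cwnd values for the congestion window files.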
Step 3: For the graph-based analysis, the resultant files generated in the above steps are considered: RTT-h1 and RTT-h2 (before the Static Entry Pusher) and RTTA-h1 and RTTA-h2 (after the Static Entry Pusher).
Step 4: With respect to Fig. 9, the scripts for generating the graphs are written for the host pairs Host-1->Host-7 and Host-2->Host-8; the script names are plot_rtt.plt and plot_rtt1.plt respectively.
Step 5: The script files from the above step are executed using the commands gnuplot plot_rtt.plt and gnuplot plot_rtt1.plt. The respective graphs, generated as png files, are presented in the following section.
4.5. Throughput
Throughput is the average data rate obtained over a specific data link: the total data transferred, divided by the observation time.
Throughput = (Total Transferred Bytes × 8) / Observation Time
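As a numeric check of the formula, assuming an illustrative transfer of 1,250,000 bytes observed over 10 seconds (these figures are made up for the example, not measured values):

```shell
# Throughput = (bytes transferred * 8) / observation time, in bits per second.
awk 'BEGIN { bytes = 1250000; secs = 10; printf "%d bit/s\n", bytes * 8 / secs }'
# prints: 1000000 bit/s
```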
In this part of the analysis, throughput is measured, i.e. the amount of information that can be processed in the given time. The steps to perform this are as follows:
Step 1: In this experiment, our research is confined to defining only two clients and their respective two servers. The command to do so is: xterm h1 h2 h7 h8. A terminal window opens for each of these hosts. To check the configuration details, type the command: ifconfig.
Step 2: The traffic generated between client and server is recorded using the iperf tool.
a) Between Host-1 and Host-7: h7 is made the server with '-s', listening for client requests on port ('-p') 6653. The flow of events between client and server is captured in a file. Inside the h7 console, the command executed is: iperf -s -p 6653 -i 1 > BT-H1, where '-i' sets the interval in seconds between periodic bandwidth reports and 'BT-H1' is the output file. Once this command is running, the server with IP address 10.0.0.7 listens on port 6653 for the client ('-c'). Inside the h1 console, the command executed is: iperf -c 10.0.0.7 -p 6653 -t 100, where 100 is the duration in seconds.
b) Between Host-2 and Host-8: To avoid server conflicts, the h7 server is first stopped. h8 is then made the server with '-s', listening for client requests on port ('-p') 6653, and the flow of events is captured in a file. Inside the h8 console, the command executed is: iperf -s -p 6653 -i 1 > BT-H2, where '-i' sets the reporting interval and 'BT-H2' is the output file. Once this command is running, the server with IP address 10.0.0.8 listens on port 6653 for the client ('-c'). Inside the h2 console, the command executed is: iperf -c 10.0.0.8 -p 6653 -t 100, where 100 is the duration in seconds.
Step 3: The traffic between client and server is also recorded while the Static Entry Pusher is in effect, again using the iperf tool. The first 10 steps of sub-section 3.3 are executed, after which the experiment proceeds with the next step.
Step 4: The traffic generated between client and server is recorded using the iperf tool.
(a) Between Host-1 and Host-7: As before, h7 is made the server with '-s', listening for client requests on port ('-p') 6653, and the flow of events is captured in a file. Inside the h7 console, the command executed is: iperf -s -p 6653 -i 1 > AT-H1, where '-i' sets the reporting interval and 'AT-H1' is the output file. Once this command is running, the server with IP address 10.0.0.7 listens on port 6653 for the client ('-c'). Inside the h1 console, the command executed is: iperf -c 10.0.0.7 -p 6653 -t 100, where 100 is the duration in seconds.
(b) Between Host-2 and Host-8: To avoid server conflicts, the h7 server is first stopped. h8 is then made the server with '-s', listening for client requests on port ('-p') 6653, and the flow of events is captured in a file. Inside the h8 console, the command executed is: iperf -s -p 6653 -i 1 > AT-H2, where '-i' sets the reporting interval and 'AT-H2' is the output file. Once this command is running, the server with IP address 10.0.0.8 listens on port 6653 for the client ('-c'). Inside the h2 console, the command executed is: iperf -c 10.0.0.8 -p 6653 -t 100, where 100 is the duration in seconds.
Step 5: The output files generated from Step 2 are used to obtain the results filtered according to the throughput parameter needed for this research. a) Filtering H1 to H7 traffic: cat BT-H1 | head -106 | grep sec | awk '{print $3,$5}' > BTP-h1. 'BTP-h1' is the filtered result file. b) Filtering H2 to H8 traffic: cat BT-H2 | head -106 | grep sec | awk '{print $3,$5}' > BTP-h2. 'BTP-h2' is the filtered result file.
Step 6: The output files generated from Step 4 are used to obtain the results filtered according to the throughput parameter. a) Filtering H1 to H7 traffic: cat AT-H1 | head -106 | grep sec | awk '{print $3,$5}' > ATP-h1. 'ATP-h1' is the filtered result file. b) Filtering H2 to H8 traffic: cat AT-H2 | head -106 | grep sec | awk '{print $3,$5}' > ATP-h2. 'ATP-h2' is the filtered result file.
The contents of the above files can be checked using the commands more ATP-h1 and more ATP-h2, and likewise more BTP-h1 and more BTP-h2.
Step 7: For the graph-based analysis, the resultant files generated in Steps 5 and 6 are considered: BTP-h1 and BTP-h2 (before the Static Entry Pusher) and ATP-h1 and ATP-h2 (after the Static Entry Pusher).
Step 8: With respect to Fig. 9, the scripts for generating the graphs are written for the host pairs Host-1->Host-7 and Host-2->Host-8; the script names are plot_tp.plt and plot_tp1.plt respectively.
Step 9: The script files from the above step are executed using the commands gnuplot plot_tp.plt and gnuplot plot_tp1.plt. The respective graphs are generated as png files and are presented in the following section.
5. Data Interpretation
Numerous iterations of the experiment were performed before the final graphs were plotted and analyzed. The weighted average of the obtained values was then taken, and a graph of each parameter was plotted for before and after the Static Entry Pusher. The purpose of plotting both sets of values (before and after) was to compare the results for better performance evaluation. Each of the parameters has an impact on the overall performance of the network and therefore plays a vital role in the development of a new technology focused on large-scale future implementation.
Throughout the experiment the duration of the simulation run stays the same while each parameter is observed individually: one minute and forty seconds, which forms the x-axis of all the graphs. The y-axis varies based on the range of values obtained during the experiment run.
5.1. Throughput
This is one of the critical measurements to be tested for any alterations done through research and development on
existing networks. It is the rate of data transfer in unit time across the network. It has to be observed in comparison
with the network conditions before implementing the changes. Therefore, in this section it is observed that the two
scenarios, and the communication between a) near nodes and b) far located nodes. Each graph has two lines where
one is a) before implementing static entry pusher and one is b) after implementing static entry pusher. This brings out
a clear picture of network throughput performance throughout the experimental run.
Fig. 11. Left: Throughput parameter for communication between Host 1 and Host 7 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP). Right: Throughput parameter for communication between Host 2 and Host 8 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP).
The left graph of Fig. 11 depicts the throughput between Host 1 and Host 7 before and after the Static Entry Pusher. It can be observed that, due to the proactive behaviour of the controller, which acts before packets arrive at the switches, the time spent in the queue is reduced, so more packets are transferred in the same simulation time than in the absence of the Static Entry Pusher. Multiple dips are observed in the red line, which represents the transfer before the Static Entry Pusher; these are absent afterwards because the packets are known to the switches and timeout situations are rare. Overall, the throughput remains higher throughout the simulation run after the Static Entry Pusher than before it, as can be clearly observed in Fig. 11.
The right graph of Fig. 11 represents the transfer between Host 2 and Host 8 before and after the Static Entry Pusher. The unstable behaviour of the red line, representing the run before the Static Entry Pusher, shows packets timing out and triggering retransmissions because the switches cannot determine the set of actions to perform when a flow appears for the first time. In contrast, the blue line shows hardly any dips, owing to the proactive behaviour under which hardly any unknown packets wait in the queue. Overall throughput increases after the Static Entry Pusher due to reduced queue congestion: avoiding timeouts reduces the number of retransmitted packets in the network, so more packets are delivered successfully on time.
5.2. Jitter
Jitter is one of the key measurement parameters for evaluating the variance in performance before and after introducing a change into the network. In this experiment, it is necessary to evaluate the difference in network behaviour after implementing the Static Entry Pusher. Jitter permits identification of the variation in delayed packet delivery at the receiver's end. This helps to analyse whether the new behaviour creates additional overhead or reduces the load on the network, and helps to predict its suitability for large-scale networks.
Fig. 12 Left: Jitter parameter for communication between Host 1 and Host 7 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP). Right: Jitter parameter for communication between Host 2 and Host 8 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP).
The left graph of Fig. 12 shows the jitter observed before and after the Static Entry Pusher during the communication between the 1st and 7th host. It can be observed that jitter stays in the range of 20 to 100 milliseconds with no more than 4 steep spikes: one for the run before the Static Entry Pusher and three for the run after it. The weighted average of the obtained values is 0.050 seconds, which appears acceptable for implementation with a large number of nodes.
The right graph of Fig. 12 shows that jitter throughout the experiment likewise stays in the range of 20 to 100 milliseconds, but without steep highs or lows, demonstrating its stability during execution. The behaviour of jitter after the Static Entry Pusher appears more stable than before it.
5.3. Latency
Latency is observed with the purpose of obtaining the time taken by each element to reach the destination in the presence of existing traffic. It is observed alongside throughput to check the behaviour of the network before and after the static entry push: if throughput is the rate of transmission, latency is the time taken for the transmission. Latency is as important as throughput, since every network needs to know the time taken by each flow to transmit its data at the desired throughput rate. From the left graph of Fig. 13 it can be observed that the range of latency before the Static Entry Pusher is larger (70 ms to 160 ms) than afterwards, when it stays between 120 ms and 150 ms, demonstrating the behavioural stability after the Static Entry Pusher. The latency for communication between Host 2 and Host 8, shown in the right graph of Fig. 13, is similar: the range before the Static Entry Pusher (75 ms to 160 ms) is wider than after it (130 ms to 180 ms). Admittedly, the weighted average latency before the Static Entry Pusher is lower than after it, but this can be considered a trade-off for the improved performance observed in both scenarios.
Fig. 13 Left: Latency parameter for communication between Host 1 and Host 7 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP). Right: Latency parameter for communication between Host 2 and Host 8 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP).
5.4. Congestion Window
The stability of the congestion window determines whether the network behaves erratically or predictably, and a stable congestion window also has a positive impact on stabilizing the other parameters. It was therefore necessary to check the behaviour of the network after implementing the novel Static Entry Pusher approach in the SDN environment. As shown in Fig. 14 left and right, the results contrast the unstable highs and lows of the congestion window before the Static Entry Pusher with its stable, low values after it throughout the experiment. The low range of values (40 to 50) of the congestion window in the presence of the Static Entry Pusher also shows that the transmission does not reach a bottleneck, reducing the probability of burst traffic conditions. This assures that the network will behave stably throughout the transmission and will provide stability for the other network parameters, yielding optimum network performance.
Fig. 14 Left: Congestion Window parameter for communication between Host 1 and Host 7 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP). Right: Congestion Window parameter for communication between Host 2 and Host 8 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP).
5.5. Round Trip Time
To determine how much time each flow requires to complete its communication, the round trip time, i.e. the propagation time of each signal there and back, must be calculated. Round Trip Time is the parameter directly responsible for optimal user experience as far as service quality is concerned. The major factor affecting this parameter is the distance between the communicating nodes, so it is especially important when the network supports remote connectivity or is spread geographically apart. As the majority of practical networks built today demand remote connectivity, it was a significant parameter for observation. In the current scenario, the topology is not a large-scale network, nor does it impose significant propagation, processing, queueing or encoding delay; therefore RTT stays stable in this experiment, as can be observed for both scenarios in Fig. 15. Limiting the observation to the stated two scenarios, Round Trip Time stays stable throughout the simulation run, with one major advantage: the optimum congestion window value is achieved in fewer round trips after the Static Entry Pusher than before it.
Fig. 15 Left: Round Trip Time parameter for communication between Host 1 and Host 7 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP). Right: Round Trip Time parameter for communication between Host 2 and Host 8 Before Static Entry Pusher (BSEP) and After Static Entry Pusher (ASEP).
6. Conclusion
In this research work, a novel approach in Software Defined Networks has been successfully implemented by applying the concept of the Static Entry Pusher using the Floodlight controller. The experiment was conducted on a fat tree topology, a crucial topology widely implemented in Software Defined Networks. The paper clearly demonstrates the phases of implementation and the step-by-step procedure followed during the experimentation with the Static Entry Pusher on SDN switches. Packet flows were observed from the starting hosts to the end hosts of the same network, located sparsely across a large geographical area. The analysis of the obtained results is based on graphs of networking parameters, which provide a positive signal to Floodlight researchers improving the SDN architecture. Of the network parameters considered for the experiment, throughput, congestion window and round trip time were observed for TCP-based transmission, while jitter and latency were observed for UDP-based transmission. The results obtained are significantly more stable for the network simulation in comparison with reactive insertion of flow entries. As far as the scope of software defined networks is concerned, there is room for further improvement and additional implementations. The graphs can be seen as a positive sign for the SDN research community working with the Floodlight controller as its base controller; with this experiment the community can contribute further additions and refinements to Floodlight's static entry pusher module API. This work can be extended not only to insertion of entries into switches but also to deleting, listing and clearing flow entries in the switches.