Enhancing Load Balancer for OpenFlow-Compliant SDN Architecture
MIT College of Engineering
By: Pritesh Ranjan, Pankaj Pande, Ramesh Oswal, Zainab Qurani
Contents
• Introduction to SDN
• Project idea
• Load balancing methods
• Our approach
• Controller selection
• Environmental setup
• Reactive/Proactive approach
• Partitioning algorithm
• Transitioning algorithm
• Questions
[Diagram: the traditional model, where every packet-forwarding device bundles its own operating system and applications, versus the SDN model, where the applications run on a single network operating system that controls the packet-forwarding hardware.]
INTRODUCTION TO SDN
1. Open interface to hardware (Southbound API)
2. Operating system (controller platforms)
3. Open API for business applications (Northbound API)
Project Idea
Transform a switch into a load balancer.
The Plan
• An OpenFlow-compatible switch
• A controller
• A load balancing module
That's it!
Our Approach
• For weighted load balancing? → Partitioning Algorithm
• For a non-uniform traffic pattern? → Load Redistribution Algorithm
• To ensure connection persistence? → Transitioning Algorithm
Controller Selection
??
Coming Up Next
Environment Setup
• Set up a network with 10 hosts on 1 switch. Time required?
• Set up a network with 100 hosts on 5 switches. Time required?
Tired / bored? Solution: "Mininet"
[Diagram: default controller connected to switch s1, with hosts h1 (srcip = 10.0.0.10) and h2 (srcip = 10.0.0.20).]
Rule Table
| Source Subnet | Forward to | Priority |
| 10.0.0.10 | Port 2 | Low |
Mininet : Using Inbuilt Wrapper “mn”
Create Custom Network with own Scripts
Controller
Controller Design
Goal of the application: redirect traffic destined for the "Service IP" to one of the backend replica servers, according to the assigned weighted load.
Network Design Decisions
• Distributed or centralized? → Centralized
• Flow based or aggregated? → Both: microflow and wildcard rules
• Reactive or proactive? → Proactive
Rules / Flow Entries
Each flow entry holds a match (exact or wildcard), an action, and statistics:
| Match (exact & wildcard) | Action | Statistics |
| Srcip=10.0.23.23 (microflow rule) | Output port = 2 | No. of packets = 10 |
| Srcip=10.0.0.0/10, Priority=Low (wildcard rule) | Output port = 4 | No. of bytes |
| Srcip=10.0.0.0/10, Priority=High (wildcard rule) | Send to controller | No. of received packets |
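The microflow/wildcard distinction above amounts to a priority-based prefix lookup. The sketch below is an illustrative assumption (the rule layout, the priority values, and the `lookup` helper are ours, not the switch's actual implementation):

```python
import ipaddress

def lookup(rules, src_ip):
    """Return the action of the highest-priority rule matching src_ip.
    A microflow rule is just a /32 prefix carrying a higher priority
    than the wildcard rules, so it wins the lookup."""
    ip = ipaddress.ip_address(src_ip)
    best = None
    for prefix, priority, action in rules:
        if ip in ipaddress.ip_network(prefix):
            if best is None or priority > best[0]:
                best = (priority, action)
    return best[1] if best else "send-to-controller"

rules = [
    ("10.0.23.23/32", 100, "output port 2"),  # microflow (exact) rule
    ("10.0.0.0/10",    10, "output port 4"),  # wildcard rule
]
```

With these rules, traffic from 10.0.23.23 hits the microflow rule, other hosts in 10.0.0.0/10 fall through to the wildcard rule, and everything else goes to the controller (the table-miss behaviour).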
[Diagram: controller and switch, with the load balancer forwarding traffic to replicas R1–R4.]
Rule Table
| Source | Forward to | Priority |
| A | R4 | Medium |
| B | R2 | High |
Reactive Approach
Drawback: High Setup time
[Diagram: the controller configures the switch; the load balancer forwards traffic to replicas R1–R4.]
Table generated:
| Source Subnet | Forward to | Priority |
| 10.0.0.0/11 | R4 | Medium |
| 10.32.0.0/11 | R2 | High |
| 10.64.0.0/11 | R1 | Low |
| … | … | … |
| 10.224.0.0/11 | R4 | Medium |
Proactive Approach
Drawback: wildcard rules are expensive.
Implementation Details
AIM:
• Reduce initial setup time
• Servers get load in proportion to the assigned weights
• Minimum number of wildcard rules
APPROACH:
• Proactively install wildcard rules for smaller sub-subnets
• Assign each server some subnets according to its weighted load
• Minimization technique
Coming Up Next
Partitioning Algorithm
Partitioning Algorithm: Deciding the number of subnets
Total alpha = 2 + 3 + 1 = 6
Nearest power of two (2^n) = 8
Normalization factor = 8/6 = 1.333
Weighted loads:
• Server R1 (alpha = 2): 1.333 × 2 = 2.666 ≈ 3
• Server R2 (alpha = 3): 1.333 × 3 = 3.999 ≈ 4
• Server R3 (alpha = 1): 1.333 × 1 = 1.333 ≈ 1
Number of subnets = 3 + 4 + 1 = 8 → partition the subnet into 8 subgroups
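The weighted-load calculation above can be sketched in a few lines of Python. The function name is ours, and rounding may not always sum exactly to the power of two for other alpha values; the slide's example works out cleanly:

```python
def weighted_loads(alphas):
    """Scale each server's alpha so the total reaches the nearest power
    of two, then round to get each server's number of subnets."""
    total = sum(alphas.values())     # total alpha
    n = 1
    while n < total:                 # nearest power of two >= total
        n *= 2
    factor = n / total               # normalization factor
    return {srv: round(factor * a) for srv, a in alphas.items()}

# slide example: R1 alpha=2, R2 alpha=3, R3 alpha=1
loads = weighted_loads({"R1": 2, "R2": 3, "R3": 1})
```

For the slide's alphas this yields weighted loads 3, 4, and 1, summing to the 8 subgroups the subnet is partitioned into.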
Company network: 10.0.0.0/8, split level by level:
/9:  10.0.0.0/9, 10.128.0.0/9
/10: 10.0.0.0/10, 10.64.0.0/10, 10.128.0.0/10, 10.192.0.0/10
/11: 10.0.0.0/11, 10.32.0.0/11, 10.64.0.0/11, 10.96.0.0/11, 10.128.0.0/11, 10.160.0.0/11, 10.192.0.0/11, 10.224.0.0/11
Servers: R1 (weighted load = 3), R2 (weighted load = 4), R3 (weighted load = 1)
Partitioning Algorithm
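Partitioning the /8 into eight /11 sub-subnets and handing them out by weighted load can be done with Python's standard `ipaddress` module. This is a sketch of the idea, not the project's actual controller code:

```python
import ipaddress

# split the company network into 2^3 = 8 equal /11 sub-subnets
company = ipaddress.ip_network("10.0.0.0/8")
subnets = list(company.subnets(new_prefix=11))

# hand consecutive sub-subnets to each server according to its weighted load
weighted = {"R1": 3, "R2": 4, "R3": 1}
assignment, it = {}, iter(subnets)
for server, count in weighted.items():
    for _ in range(count):
        assignment[next(it)] = server
```

This reproduces the tree above: the first three /11s go to R1, the next four to R2, and 10.224.0.0/11 (prefix 111*) to R3.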
Subnet-to-server assignment (/11 leaves of the binary tree rooted at the /8):
| Prefix | Server |
| 000* | R1 |
| 001* | R1 |
| 010* | R1 |
| 011* | R2 |
| 100* | R2 |
| 101* | R2 |
| 110* | R2 |
| 111* | R3 |
[Binary tree: each level (/8 → /9 → /10 → /11) splits on the next bit, 0 or 1.]
Number of wildcard rules = 8
Partitioning Algorithm – Contd.
Partitioning Algorithm - Analysis
Benefit: reduced initial setup time; minimal involvement of the controller.
Limitation: too many wildcard rules.
Improvement: minimization technique.
Coming Next
Dynamic Load Redistribution
Coming Soon
For a uniform client traffic pattern:
| Prefix | Server |
| 000* | R1 |
| 001* | R1 |
| 010* | R1 |
| 011* | R2 |
| 100* | R2 |
| 101* | R2 |
| 110* | R2 |
| 111* | R3 |
Swap 011* ↔ 111*
Minimization Technique
After the swap:
| Prefix | Server |
| 000* | R1 |
| 001* | R1 |
| 010* | R1 |
| 111* | R2 |
| 100* | R2 |
| 101* | R2 |
| 110* | R2 |
| 011* | R3 |
Aggregated: 00* → R1, 1* → R2 (010* → R1 and 011* → R3 stay as they are).
Number of wildcard rules = 4
Minimization Technique
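The aggregation step of the minimization technique can be sketched as repeatedly merging sibling prefixes that forward to the same server. (The swap that makes siblings match is done by hand in the slides; the function below, an illustrative assumption, only performs the merging.)

```python
def minimize(rules):
    """Merge sibling wildcard prefixes (identical except for the last
    bit) that forward to the same server into one shorter prefix."""
    rules = dict(rules)
    changed = True
    while changed:
        changed = False
        for p in list(rules):
            if p not in rules or not p:
                continue  # already merged away, or root prefix
            sibling = p[:-1] + ("1" if p[-1] == "0" else "0")
            if rules.get(sibling) == rules[p]:
                rules[p[:-1]] = rules[p]   # replace the pair by the parent
                del rules[p], rules[sibling]
                changed = True
    return rules

# the post-swap table from the slides
after_swap = {"000": "R1", "001": "R1", "010": "R1", "111": "R2",
              "100": "R2", "101": "R2", "110": "R2", "011": "R3"}
minimized = minimize(after_swap)
```

On the slide's example this collapses 000*/001* into 00* and all four 1** prefixes into 1*, leaving the 4 wildcard rules shown above.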
Load Shift Operation
Situation: Server R1 needs to be taken down for maintenance.
Goal: Traffic of R1 (old) should be allocated to R2 (new).
Conditions:
• Ongoing connections should be continued with the old server (R1).
• New connections should be forwarded to the new server (R2).
• R1 can be taken down only when all its connections have expired.
Solution:
Transitioning Algorithm
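The SYN-based dispatch that the following slides walk through can be sketched as one decision function. Names and the rule-dictionary shape are illustrative assumptions; a real POX/OpenFlow handler would install actual flow entries rather than return dictionaries:

```python
OLD_SERVER, NEW_SERVER = "R1", "R2"

def transition_rule(src_ip, syn_set):
    """During the transition of a subnet from OLD_SERVER to NEW_SERVER
    the controller sees every packet of that subnet (via a high-priority
    send-to-controller rule) and pins each source IP with a
    highest-priority microflow rule: new connections (SYN set) go to
    the new server, ongoing connections stay on the old one."""
    server = NEW_SERVER if syn_set else OLD_SERVER
    return {"match_src": src_ip, "forward_to": server, "priority": "highest"}
```

For example, a non-SYN packet from 10.0.0.1 gets pinned to R1, while a SYN from 10.100.0.1 gets pinned to R2, exactly as in the rule tables below.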
[Diagram: subnets A and B, servers R1 (old) and R2 (new). Admin: "Server R1 is to be taken down, shift its load to R2." Controller: "OK, let me check the connections for SYN."]
Rule Table
| Source Subnet | Forward to | Priority |
| A | R1 | Low |
| B | R2 | Low |
Transitioning Algorithm
[Diagram: subnets A and B, servers R1 (old) and R2 (new).]
Before:
| Source Subnet | Forward to | Priority |
| A | R1 | Low |
| B | R2 | Low |
The controller then 1. adds a new flow entry and 2. modifies the old flow entry:
| Source Subnet | Forward to | Priority |
| A | R2 | Low |
| B | R2 | Low |
| A | Controller | High |
Transitioning Algorithm
[Diagram: host 10.0.0.1 in subnet A sends a packet with the SYN flag NOT set; the controller adds a microflow rule pinning it to R1 (old).]
Rule Table
| Source Subnet | Forward to | Priority |
| A | R2 | Low |
| B | R2 | Low |
| A | Controller | High |
| 10.0.0.1 | R1 | Highest |
Transitioning Algorithm
[Diagram: host 10.100.0.1 in subnet A sends a packet with the SYN flag SET; the controller adds a microflow rule forwarding it to R2 (new). Host 10.0.0.1's ongoing connection stays on R1.]
Rule Table
| Source Subnet | Forward to | Priority |
| A | R2 | Low |
| B | R2 | Low |
| A | Controller | High |
| 10.100.0.1 | R2 | Highest |
Transitioning Algorithm
[Diagram: hosts 10.0.0.1 and 10.100.0.1 in subnet A, servers R1 (old) and R2 (new). Flow entries get deleted after the idle timeout.]
Rule Table
| Source Subnet | Forward to | Priority |
| A | R2 | Low |
| B | R2 | Low |
| A | Controller | High |
| 10.100.0.1 | R2 | Highest |
Now R1 can be taken down.
Transitioning Algorithm
Uniform client traffic pattern
| Subnet | Load | Server |
| 00* | x | R1 |
| 01* | x | R1 |
| 10* | x | R2 |
| 11* | x | R2 |
Each subnet carries the same number of connections (x), so each server (R1 and R2, both with X = 2) receives a proportional number of connections, 2x each, matching its weighted load.
Non-uniform client traffic pattern
| Subnet | Load | Server |
| 00* | 2x | R1 |
| 01* | x | R1 |
| 10* | x | R2 |
| 11* | 0 | R2 |
Subnets carry unequal numbers of requests, so the load gets unequally distributed among the servers: although each server's weighted share is 2x, R1 receives 3x while R2 receives only x.
R1 (X = 2) carries 3x and is overloaded; R2 (X = 2) carries only x and is underloaded (subnet loads: 00* = 2x, 01* = x, 10* = x, 11* = 0).
Steps:
• Read statistics.
• Find the over- and underloaded servers.
• Shift appropriate load from the overloaded to the underloaded server.
Before: 00* → R1, 01* → R1, 10* → R2, 11* → R2.
After shifting 01* to R2, both servers carry 2x.
Load Redistribution Algorithm
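One redistribution step from the example above, sketched in plain Python. The statistics format, the per-server targets, and the subnet-picking heuristic (choose the subnet whose load is closest to the excess) are our assumptions:

```python
def rebalance(assignment, subnet_load, target):
    """Read per-subnet statistics, find the most over- and underloaded
    servers, and move one subnet from the former to the latter."""
    load = {srv: 0 for srv in target}
    for subnet, srv in assignment.items():
        load[srv] += subnet_load[subnet]
    over = max(load, key=lambda s: load[s] - target[s])
    under = min(load, key=lambda s: load[s] - target[s])
    excess = load[over] - target[over]
    if excess <= 0 or over == under:
        return assignment                  # already balanced
    # shift the subnet of `over` whose load is closest to the excess
    candidates = [s for s, srv in assignment.items() if srv == over]
    subnet = min(candidates, key=lambda s: abs(subnet_load[s] - excess))
    new_assignment = dict(assignment)
    new_assignment[subnet] = under
    return new_assignment

# slide example: loads in units of x, each server's target is 2x
assignment = {"00": "R1", "01": "R1", "10": "R2", "11": "R2"}
balanced = rebalance(assignment,
                     {"00": 2, "01": 1, "10": 1, "11": 0},
                     {"R1": 2, "R2": 2})
```

On the slide's numbers this moves 01* from R1 to R2, leaving both servers at their 2x target.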
Project Demo: Videos
Topology Creation
Partitioning Algorithm- Video 1
Transitioning Algorithm
Load Redistribution Algorithm
Partitioning Algorithm- Video 2
Questions..??
THANK YOU….!!!
Topology
Extra slides
Comparative Study
| Parameter | H/W Load Balancer | POX S/W Load Balancer | Our Solution |
| Methods | IP sticky, round robin, cookie sticky, weighted load | IP sticky, random | IP sticky, persistent, weighted load |
| Layer | Layer 4, Layer 7 | Layer 4 | Layer 4, Layer 7 |
| Server health monitoring | PING, HTTP GET | ARP | ARP, PING |
| Speed | Fast | Slow | Slow |
| Cost | Costly | Free | Free |
[Diagram: default controller connected to default switch s1, with hosts h1 (srcip = 10.0.0.1) and h2 (srcip = 10.0.0.2).]
Rule Table
| Source Subnet | Forward to | Priority |
| 10.0.0.1 | Port 2 | Low |
[Diagram: the controller configures the switch; the load balancer forwards traffic to replicas R1–R4.]
Table generated (number of wildcard rules = 8):
| Source Subnet | Forward to | Priority |
| 10.0.0.0/11 | R1 | Low |
| 10.32.0.0/11 | R1 | Low |
| 10.64.0.0/11 | R1 | Low |
| … | … | … |
| 10.224.0.0/11 | R4 | Medium |
Partitioning Algorithm (Flow Table)
Partitioning Algorithm (Cont.)
| Prefix | Server |
| 000* | R1 |
| 001* | R1 |
| 010* | R1 |
| 011* | R2 |
| 100* | R2 |
| 101* | R2 |
| 110* | R2 |
| 111* | R3 |
[Binary tree: each level (/8 → /9 → /10 → /11) splits on the next bit, 0 or 1.]
Number of wildcard rules = 8
Achieved benefit: reduced initial setup time.
Drawback: very large number of rules installed.
Improvement: minimization techniques.
Minimization Technique(Cont.)
| Prefix | Server |
| 000* | R1 |
| 001* | R1 |
| 010* | R1 |
| 011* | R2 |
| 100* | R2 |
| 101* | R2 |
| 110* | R2 |
| 111* | R3 |
Swap 011* ↔ 111*
Minimization Technique(Cont.)
After the swap:
| Prefix | Server |
| 000* | R1 |
| 001* | R1 |
| 010* | R1 |
| 111* | R2 |
| 100* | R2 |
| 101* | R2 |
| 110* | R2 |
| 011* | R3 |
Aggregated: 00* → R1, 1* → R2 (010* → R1 and 011* → R3 stay as they are).
Number of wildcard rules = 4
[Diagram: the controller configures the switch; the load balancer forwards traffic to replicas R1–R4.]
Table generated:
| Source Subnet | Forward to | Priority |
| 10.0.0.0/11 | R1 | Low |
| 10.64.0.0/11 | R1 | Low |
| 10.96.0.0/11 | R2 | Low |
| 10.128.0.0/9 | R3 | Low |
Minimization Technique (Flow Table)
Scapy
• Scapy is a Python framework for crafting and transmitting arbitrary packets.
• Scapy also performs very well at many specific tasks that most other tools can't handle, like sending invalid frames or injecting your own 802.11 frames.
ARP
[Diagram: ARP broadcast reaching replicas R1–R5.]
Source MAC: 00:00:00:00:00:01, Dest MAC: ff:ff:ff:ff:ff:ff, Source IP: 10.24.24.24, Dest IP: 10.0.0.2
TCP/HTTP
[Diagram: TCP/HTTP requests reaching replicas R1–R5.]
Packet 1: Source IP 10.24.24.24, Dest IP 10.0.0.2, Src Port: random, Dst Port: 80, Protocol: TCP
Packet 2: Source IP 10.134.4.2, Dest IP 10.0.0.2, Src Port: random, Dst Port: 80, Protocol: TCP