Wireless Scheduling Analysis (with ns-3)
By Pradeep Prathik Saisundatr
Slide 2
EFFORT LIMITED FAIR SCHEDULING
Wireless links exhibit substantial rates of link errors, resulting in significant and unpredictable loss of link capacity. The scheduler model distinguishes between effort (air time spent on a flow) and outcome (actual throughput achieved by the flow).
Weighted Fair Queuing (WFQ): assumes an error-free link and distributes effort according to weights provided by an admission module.
Effort-Limited Fair (ELF) Scheduling: an extension of WFQ that limits how much effort is given to any specific flow, so that one flow experiencing very high error rates cannot degrade the performance of the entire link.
Power Factor: in ELF, the scheduler adjusts flow weights in response to errors in order to create a hybrid between effort fairness and outcome fairness; the per-flow parameter that bounds this adjustment is called the power factor.
Slide 3
Power Factor
To characterize the behaviour of an ELF scheduler, we introduce the following notation. Assume N flows share a link with bandwidth B. Each flow i has a weight W_i, a power factor P_i, and an error rate E_i.

The adjusted weight of flow i is defined as:

A_i = min( W_i / (1 - E_i), P_i x W_i )

The throughput T_i for flow i is the product of the transmission time it receives and its success rate:

T_i = ( A_i / Σ_j A_j ) x B x (1 - E_i)

The highest fraction of the link time that flow i can take is:

( P_i x W_i ) / ( P_i x W_i + Σ_{j≠i} W_j )

Thus, an ELF scheduler strives to achieve the outcome envisioned by users (e.g., weighted link sharing or fixed-rate reservations) while limiting the effort spent on a flow using a per-flow parameter called the power factor, which can be used to administratively implement a variety of fairness and efficiency policies.
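The two formulas above can be sketched directly in Python (function and parameter names are my own, not from the slides). With two equal-weight flows, one of which loses half its transmissions, a power factor of 2 is just enough to equalize outcomes:

```python
def elf_throughputs(weights, power_factors, error_rates, bandwidth):
    """Per-flow throughput under Effort-Limited Fair scheduling.

    Adjusted weight: A_i = min(W_i / (1 - E_i), P_i * W_i)
    Throughput:      T_i = (A_i / sum_j A_j) * B * (1 - E_i)
    """
    adjusted = [min(w / (1.0 - e), p * w)
                for w, p, e in zip(weights, power_factors, error_rates)]
    total = sum(adjusted)
    return [(a / total) * bandwidth * (1.0 - e)
            for a, e in zip(adjusted, error_rates)]

# Flow 0 is error-free; flow 1 has a 50% error rate and a power factor
# of 2, so it may receive at most twice its base effort.
t = elf_throughputs(weights=[1.0, 1.0],
                    power_factors=[2.0, 2.0],
                    error_rates=[0.0, 0.5],
                    bandwidth=1.0)
# Both flows end up with throughput B/3: outcome fairness is reached
# exactly at the effort limit.
```

With a power factor of 1 the same call degenerates to plain WFQ effort sharing, and the lossy flow's throughput halves.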
Slide 4
DISTRIBUTED FAIR SCHEDULING
Wireless links exhibit substantial rates of link errors, resulting in significant and unpredictable loss of link capacity. The DFS protocol borrows SCFQ's idea of transmitting the packet whose finish tag is smallest, as well as SCFQ's mechanism for updating the virtual time. A distributed approach is employed for determining the smallest finish tag, using the backoff interval mechanism from the IEEE 802.11 MAC. The essential idea is to choose a backoff interval that is proportional to the finish tag of the packet to be transmitted.
Slide 5
IMPLEMENTATION
Each node i maintains a local virtual clock v_i(t), with v_i(0) = 0. P_i^k denotes the k-th packet arriving at the flow at node i on the LAN. Each transmitted packet is tagged with its finish tag; on hearing a packet with finish tag Z, node i sets its virtual clock to max(v_i(t), Z).

Start and finish tags for a packet are not calculated when the packet arrives. Instead, the tags are calculated when the packet reaches the front of its flow. When packet P_i^k reaches the front of its flow at node i, it is stamped with start tag

S_i^k = v_i(t_i^k)

where t_i^k denotes the real time at which the packet reaches the front of the flow. The finish tag F_i^k is calculated as follows:

F_i^k = S_i^k + ScalingFactor x L_i^k / φ_i = v_i(t_i^k) + ScalingFactor x L_i^k / φ_i

where L_i^k is the length of the packet and φ_i is the weight of the flow.

The objective of the next step is to choose a backoff interval such that a packet with a smaller finish tag is ideally assigned a smaller backoff interval. This step is performed at time t_i^k. Specifically, node i picks a backoff interval B_i for packet P_i^k as a function of F_i^k and the current virtual time v_i(t_i^k), as follows:

B_i = F_i^k - v_i(t_i^k) = ScalingFactor x L_i^k / φ_i

Finally, to reduce the possibility of collisions, the chosen B_i is randomized as B_i = ρ x B_i, where ρ is a random variable with mean 1; in our simulations, ρ is uniformly distributed over an interval. When this step is performed, a variable named CollisionCounter is reset to 0.
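The tag-and-backoff computation can be sketched as follows. The scaling factor and the randomization interval [0.9, 1.1] are assumptions (the slides leave both unspecified; any mean-1 interval fits the description):

```python
import random

SCALING_FACTOR = 0.02  # assumed tuning constant; the slides do not fix a value

def dfs_backoff(pkt_len, weight, virtual_clock, rng=None):
    """Backoff for the packet at the front of a flow, per the DFS steps.

    Finish tag: F = v + SCALING_FACTOR * L / phi, stamped when the packet
    reaches the front of its flow. The base backoff is F - v, then it is
    randomized by a mean-1 factor rho to reduce collisions.
    """
    rng = rng or random.Random()
    finish_tag = virtual_clock + SCALING_FACTOR * pkt_len / weight
    base = finish_tag - virtual_clock       # = SCALING_FACTOR * L / phi
    rho = rng.uniform(0.9, 1.1)             # assumed interval, mean 1
    collision_counter = 0                   # reset whenever a fresh B_i is chosen
    return max(1, round(rho * base)), finish_tag, collision_counter
```

Note that the base backoff is proportional to L/φ: a flow with twice the weight draws roughly half the backoff, which is exactly how the protocol biases channel access toward the smallest finish tag.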
Slide 6
COLLISION HANDLING
If a collision occurs (because the backoff intervals of two or more nodes count down to 0 simultaneously), the following procedure is used. Let node i be one of the nodes whose transmission has collided with some other node(s). Node i chooses a new backoff interval as follows: increment CollisionCounter by 1, then choose a new B_i uniformly distributed in the range [1, CollisionWindow]. This procedure tends to choose a relatively small B_i after the first collision for a packet. The motivation for choosing a small B_i after the first collision is as follows: the fact that node i was a potential winner of the contention for channel access indicates that it is node i's turn to transmit in the near future. Therefore, B_i is chosen to be small to increase the probability that node i wins again soon. However, to protect against the situation when too many nodes collide, the range for B_i grows exponentially with the number of consecutive collisions.
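A minimal sketch of this recovery rule, assuming the range doubles per consecutive collision and a base window of 4 slots (the slides do not fix the base window or the growth constant):

```python
import random

COLLISION_WINDOW = 4  # assumed base window size

def backoff_after_collision(collision_counter, rng=None):
    """Pick a new backoff after a collision.

    The range starts at [1, COLLISION_WINDOW] after the first collision
    and doubles with each consecutive collision for the same packet.
    """
    rng = rng or random.Random()
    collision_counter += 1
    upper = COLLISION_WINDOW * 2 ** (collision_counter - 1)
    return rng.randint(1, upper), collision_counter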
Slide 7
Fair Real-Time Scheduling over a Wireless LAN: Scheduling Parameters
Periodic packets with soft deadlines; packets have a constant bit rate. Each flow i is represented by a tuple f_i consisting of a period v, a deadline D, and a packet loss rate e.
A(P_i^k) = time at which the k-th packet of flow f_i is ready to be transmitted
D_i = deadline of the flow
Slide 8
Earliest Deadline First
A packet P_i^k is scheduled at time t if A(P_i^k) <= t <= d(P_i^k) and d(P_i^k) is the minimum among all the packets, where
d(P_i^k) = A(P_i^k) + D_i
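The EDF rule above amounts to filtering out packets that have not yet arrived and taking the smallest absolute deadline. A minimal sketch (the tuple layout is my own choice):

```python
def edf_pick(ready, t):
    """Pick the packet to schedule at time t under EDF.

    `ready` is a list of (arrival_time, flow_deadline) pairs. A packet is
    eligible if A(P) <= t; among eligible packets, the one with the
    smallest absolute deadline d(P) = A(P) + D_i wins.
    """
    eligible = [(a + d, (a, d)) for a, d in ready if a <= t]
    return min(eligible)[1] if eligible else None

# At t = 5: (0, 10) has deadline 10, (2, 3) has deadline 5, and (11, 1)
# has not arrived yet, so (2, 3) is scheduled.
```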
Slide 9
Greatest Degradation First
At scheduling time t, the packet with the maximum degradation value is scheduled. Scheduling performance is measured through throughput and system degradation.
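The slides leave the degradation value abstract. One plausible reading, used purely for illustration, is how far each flow's observed miss ratio has climbed above its tolerated loss rate e_i:

```python
def gdf_pick(flows):
    """Pick the flow to serve under Greatest Degradation First.

    Assumed degradation measure (not specified by the slides):
        degradation_i = missed_i / sent_i - loss_tolerance_i
    i.e., how far flow i's deadline-miss ratio exceeds its tolerated
    loss rate. Each flow is a dict with 'missed', 'sent', and
    'loss_tolerance' keys.
    """
    def degradation(f):
        return f["missed"] / f["sent"] - f["loss_tolerance"]
    return max(flows, key=degradation)
```

Under this measure, a flow that is already missing more deadlines than it can tolerate is served first, which matches the slide's intent of scheduling the most-degraded flow.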
Slide 10
NS3
- Open-source licensing (GNU GPLv2) and development model
- Python scripts or C++ programs
- Alignment with real systems (sockets, device driver interfaces)
- Alignment with input/output standards (pcap traces, ns-2 mobility scripts)
- Testbed integration is a priority
- Modular, documented core
Slide 11
ns-3 models
Project focus has been on the software core, to date.
Slide 12
The basic model
[Diagram: two Nodes, each running Applications over a Protocol stack and a NetDevice; the Applications use a sockets-like API, and the NetDevices exchange Packet(s) over a shared Channel.]
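The relationships in the basic model can be mirrored with a plain-Python analogy. This is not the ns-3 API, just a sketch of how the abstractions compose (Applications and the protocol stack are collapsed into a `received` list):

```python
class Channel:
    """Carries packets between the NetDevices attached to it."""
    def __init__(self):
        self.devices = []

    def transmit(self, packet, sender):
        # Deliver to every attached device except the sender.
        for dev in self.devices:
            if dev is not sender:
                dev.receive(packet)

class NetDevice:
    """Bound to a Channel of a matching type; belongs to one Node."""
    def __init__(self, node, channel):
        self.node = node
        self.channel = channel
        channel.devices.append(self)

    def send(self, packet):
        self.channel.transmit(packet, self)

    def receive(self, packet):
        self.node.deliver(packet)

class Node:
    """A 'husk' of a computer: applications, stack, and NICs are added."""
    def __init__(self):
        self.devices = []
        self.received = []  # stands in for the protocol stack / sockets API

    def add_device(self, channel):
        dev = NetDevice(self, channel)
        self.devices.append(dev)
        return dev

    def deliver(self, packet):
        self.received.append(packet)

# Two nodes on one channel: a send from node a reaches node b only.
ch = Channel()
a, b = Node(), Node()
dev_a = a.add_device(ch)
b.add_device(ch)
dev_a.send("hello")
```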
Slide 13
Node basics
A Node is a "husk" of a computer to which applications, protocol stacks, and NICs are added.
Slide 14
NetDevices and Channels
NetDevices are similar to NIC cards and are strongly bound to Channels of a matching type (e.g., WifiNetDevice to WifiChannel). Nodes are architected for multiple interfaces.
Slide 15
Node basics
Two key abstractions are maintained:
1) applications use an (asynchronous, for now) sockets API
2) the boundary between IP and layer 2 mimics the boundary at the device-independent sublayer in Linux, i.e., Linux Packet Sockets