7/30/2019 Performance evaluation of DBA Gated allocation in an EPON
Performance evaluation of
DBA Gated allocation in an EPON
Andrea Cicalese, Daniele De Sensi
June 30, 2011
Contents

1 Introduction
  1.1 PON
  1.2 EPON

2 Modelling
  2.1 Model assumptions
  2.2 Fiber links
  2.3 Splitter modelling
  2.4 OLT
    2.4.1 Classifier
    2.4.2 MPCP
    2.4.3 Queue
  2.5 ONU

3 Simulation
  3.1 Packet delay
  3.2 Throughput
1 Introduction
The aim of this project is to analyze a specific upstream Dynamic Bandwidth Allocation (DBA) algorithm in an EPON network. In particular, we evaluated the performance in terms of upstream average packet delay and throughput, as a function of the traffic generated by the ONUs. We tested two different types of traffic generators at the ONUs:
Exponential
Pareto
We constructed a simplified OPNET model of an EPON network containing an Optical Line Terminal (OLT), an optical passive splitter and four Optical Network Units (ONUs). The network model implements EPON's gated Time Division Multiple Access (TDMA) multiplexing scheme. This model can be reused to test different DBAs.
1.1 PON
PON [1] is an emerging access technology that uses no active electronic switches or amplifiers. Instead, it uses passive optical splitters in order to reduce the cost and complexity of deploying and maintaining the network. A PON consists of a single duplex fiber from an OLT (in the Central Office) to a 1:N splitter. Each fiber exiting from the splitter is connected to a subscriber node called an ONU, which serves one or more users. The OLT is responsible for the control and management of the PON and acts as the gateway to the adjacent networks. Information flowing from the ONUs to the OLT is called upstream, while downstream refers to information flowing from the OLT to the ONUs. In the downstream direction, all information is broadcast to all ONUs; each ONU discards the traffic that is not directed to it. Moreover, the traffic can be encrypted in order to ensure its confidentiality.
1.2 EPON
EPON [2, 3] is a PON architecture that uses traditional 802.3 Ethernet framing and allows very high speed communication (1 Gb/s or 10 Gb/s). The protocol itself doesn't limit the number of users; however, up to 16 ONUs can be connected to the splitter (due to the attenuation of the optical signal during the splitting phase). In the downstream direction there is no need for synchronization, so the OLT can send traffic at any time. In the upstream direction, the ONUs must communicate without collisions. To achieve this, the EFM (Ethernet in the First Mile) task force defined the Multi-Point Control Protocol (MPCP), a network management protocol which defines a TDMA multiplexing scheme.
Figure 1: PON topology
The OLT assigns each ONU timeslots in which it is allowed to send traffic. All the ONUs have to synchronize their own clocks with the OLT clock; the messages exchanged between the ONUs and the OLT carry a timestamp used for this purpose.
Bandwidth Allocation in EPON The upstream bandwidth allocation algorithm is not defined by a standard, and there are many possible implementations. The simplest one is static allocation, in which each ONU has a fixed-length timeslot that is reallocated at constant intervals. With this allocation, if an ONU doesn't need to transmit, bandwidth is wasted, with no possibility for the other ONUs to use it. For this reason static allocation is inefficient and is the worst option. Dynamic algorithms have instead been designed to improve the bandwidth allocation efficiency and provide fairness to end-users. The OLT sends grants in MPCP GATE messages, which inform each ONU of the start time and duration of its transmission. The ONUs send MPCP REPORT messages, informing the OLT of their queue status. The specific use of this information is not specified by a standard, so every algorithm uses it in a different way. In any case, at least one slot will be granted for the
transmission of a REPORT message for the next cycle.
Different types of DBAs have been proposed; Interleaved Polling with Adaptive Cycle Time (IPACT) is one of the earliest proposed schemes. For the description of the IPACT schemes we denote by w_i,n the size of the n-th grant to the i-th ONU and by v_i,n the size requested by the i-th ONU for the (n+1)-th cycle. The IPACT schemes are:
Fixed w_i,n+1 = MTW (Maximum Transmission Window). In this case v_i,n is ignored and the granted window size is always the same; therefore the cycle time is constant.

Limited w_i,n+1 = min{v_i,n, MTW}. The OLT grants the requested number of bytes, but no more than MTW.

Gated w_i,n+1 = v_i,n. The OLT always grants the requested number of bytes. The cycle time may grow without bound, but in any case an ONU cannot request more than Q bytes (its output buffer size).

Constant credit w_i,n+1 = min{v_i,n + const, MTW}. The OLT grants the requested number of bytes plus a constant credit, under the assumption that some bytes arrived between the time when the ONU sent its REPORT and the time it received the grant.

Linear credit w_i,n+1 = min{v_i,n · const, MTW}. In this case the size of the credit is proportional to the requested window (more suited for bursty traffic).

Elastic w_i,n+1 = min{v_i,n, N·MTW − Σ_{c=n−N+1}^{n} w_i,c}. This scheme attempts to get rid of a fixed per-ONU MTW limit, the only limiting factor being the maximum cycle time. If only one ONU has data to send, it may get a grant of size up to N·MTW.
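The grant-sizing rules above can be summarized in a few lines of code. The following is a standalone sketch, not code from the OPNET model; the MTW value and the credit parameters are placeholder assumptions.

```c
#include <assert.h>

/* Sketch of the IPACT grant-sizing rules listed above. All sizes are in
 * bytes: `requested` is v_i,n and the return value is w_i,n+1. The MTW
 * value and the credit parameters are placeholders, not values from the
 * report. */
#define MTW 15500L   /* hypothetical maximum transmission window */

static long min_long(long a, long b) { return a < b ? a : b; }

long grant_fixed(void)             { return MTW; }       /* Fixed */
long grant_limited(long requested) { return min_long(requested, MTW); }
long grant_gated(long requested)   { return requested; } /* Gated */

/* Constant credit: requested bytes plus a fixed credit, capped at MTW */
long grant_const_credit(long requested, long credit) {
    return min_long(requested + credit, MTW);
}

/* Linear credit: credit proportional to the requested window */
long grant_linear_credit(long requested, double factor) {
    return min_long((long)(requested * factor), MTW);
}

/* Elastic: limited only by N*MTW minus the last N granted windows */
long grant_elastic(long requested, int n_onus,
                   const long last_grants[], int n_last) {
    long budget = (long)n_onus * MTW;
    for (int i = 0; i < n_last; i++)
        budget -= last_grants[i];
    return min_long(requested, budget);
}
```

Note how Gated, the scheme analyzed in this report, is the only rule with no cap at all: the cycle time is bounded only by the ONU buffer size Q.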
2 Modelling
2.1 Model assumptions
Several simplifications have been made during the modelling phase in order to better focus on the bandwidth allocation analysis:
Fixed number of ONUs In this model we used 4 ONUs and the discoveryand registration procedures are not implemented.
Equidistant ONUs It is assumed that all the ONUs are equidistant from the OLT, to avoid the ranging procedure.
ONUs identifier We didn't implement the point-to-point emulation (P2PE). Instead, each ONU (and also the OLT) has a unique identifier.
REPORT message In the REPORT message only one queue is reported, because each ONU has only one queue instead of 8. Moreover, we used only one queue set, because the OLT always grants the requested bandwidth (gated algorithm).
Clock synchronization We assumed that the ONUs and the OLT are perfectly synchronized; for this reason we don't use the timestamp inside the REPORT and GATE messages.
2.2 Fiber links
The fiber has been modelled using the standard point-to-point link between the OLT and the splitter and between each ONU and the splitter. The link has been modified to model the propagation delay and transmission rate of a 1 Gb/s fiber link. The propagation delay has been set to simulate a 10 km fiber from the OLT to the splitter and another 10 km fiber from the splitter to each ONU.
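As a sanity check on the link parameters above, the one-way propagation delay of a 10 km span can be computed from the speed of light in the fiber. The group refractive index of 1.5 below is an assumed typical value, not a figure taken from the model.

```c
#include <assert.h>

/* Worked check of the link parameters: light in fiber travels at
 * roughly c / n, where n is the group refractive index (assumed 1.5
 * here). A 10 km span then gives a one-way propagation delay of about
 * 50 microseconds. */
#define C_VACUUM 3.0e8   /* speed of light in vacuum, m/s */
#define N_GROUP  1.5     /* assumed fiber refractive index */

double fiber_delay(double length_m) {
    return length_m / (C_VACUUM / N_GROUP);
}
```

With 10 km on each side of the splitter, the OLT-to-ONU one-way delay is therefore about 100 microseconds.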
2.3 Splitter modelling
The splitter has been implemented as a point-to-point transmitter/receiver pair for each ONU and for the OLT. Two process models are used to forward the packets (one for upstream and one for downstream), as shown in figures 2 and 3.
Figure 2: Splitters node model
Figure 3: Splitter downstream and upstream process models
Packets from the OLT are copied and broadcast to all the ONUs, while packets from the ONUs are sent to the OLT. The code of the upstream and downstream functions is shown below.
static void upstream(void)
{
    Packet *pkptr;
    FIN(upstream());
    pkptr = op_pk_get(op_intrpt_strm());
    op_pk_send(pkptr, TO_OLT);
    FOUT;
}

static void downstream(void)
{
    Packet *pkptr;
    FIN(downstream());
    pkptr = op_pk_get(op_intrpt_strm());
    /* broadcast the message to all the ONUs (loop body reconstructed:
       a copy of the packet is sent on each ONU output stream) */
    for (int i = 0; i < NUM_ONUS; i++)
        op_pk_send(op_pk_copy(pkptr), TO_ONU + i);
    op_pk_destroy(pkptr);
    FOUT;
}
2.4 OLT
For the OLT we created a new node model, shown in figure 5.
Figure 5: OLT node model
2.4.1 Classifier
The classifier (figure 6) is in charge of distinguishing between data packets and control packets. Using the forward function shown below, data packets are forwarded to the Sink, while control packets are forwarded to the MPCP process model.
Figure 6: Classifier process model
static void forward(void)
{
    Packet *pkptr;
    OpT_Int64 id = 0, src = 0;
    int type = 0, id_attr = 0;
    double packet_delay = 0;
    FIN(forward());
    pkptr = op_pk_get(op_intrpt_strm());
    op_pk_nfd_get(pkptr, "type", &type);
    op_pk_nfd_get_int64(pkptr, "dst_addr", &id);
    op_pk_nfd_get_int64(pkptr, "src_addr", &src);
    op_ima_obj_attr_get(op_topo_parent(op_id_self()), "identifier", &id_attr);
    if (id == id_attr) {
        if (type == CONTROL_TYPE) {
            /* control message received */
            op_pk_send(pkptr, TO_CORE);
        } else {
            /* data message received */
            packet_delay = op_sim_time() - op_pk_creation_time_get(pkptr);
            switch (src) {
                case 1:
                    op_stat_write(pkt_delay_1, packet_delay);
                    break;
                case 2:
                    op_stat_write(pkt_delay_2, packet_delay);
                    break;
                case 3:
                    op_stat_write(pkt_delay_3, packet_delay);
                    break;
                case 4:
                    op_stat_write(pkt_delay_4, packet_delay);
                    break;
            }
            op_pk_send_forced(pkptr, TO_WORLD);
        }
    } else {
        /* this is not the destination of the packet */
        op_pk_destroy(pkptr);
    }
    FOUT;
}
2.4.2 MPCP
The Multi-Point Control Protocol is used to generate GATE messages based on the REPORTs received from the ONUs. The entire requested size is granted, because the implemented DBA is Gated. This process model (figure 7) is composed of two states: init and loop. In the init state all the variables are initialized and the first 4 GATE messages are sent. In the loop state, for each REPORT received, the corresponding GATE message is generated and sent to the corresponding ONU using the generate_gate function shown below.
Figure 7: MPCP process model
static void generate_gate(void)
{
    Packet *pkptr, *gate, *report, *eth;
    double local_time = 0;
    OpT_Int64 grant_dest = 0;
    int report_size = 0;
    FIN(generate_gate());
    eth = op_pk_get(op_intrpt_strm());
    /* read the report message */
    op_pk_nfd_get_int64(eth, "src_addr", &grant_dest);
    op_pk_nfd_get_pkt(eth, "data", &report);
    op_pk_nfd_get(report, "q_report_0", &report_size);
    op_pk_destroy(report);
    op_pk_destroy(eth);
    /* grant all the requested size (gated DBA) */
    report_size += REPORT_SIZE;
    pkptr = op_pk_create_fmt("ethernet");
    gate = op_pk_create_fmt("gated_pkt");
    op_pk_nfd_set_int64(pkptr, "src_addr", 0);
    op_pk_nfd_set_int64(pkptr, "dst_addr", grant_dest);
    op_pk_nfd_set(pkptr, "type", CONTROL_TYPE);
    op_pk_nfd_set(gate, "num_grants", 1);
    local_time = op_sim_time();
    /* if the gate would arrive after the start time, update the start time */
    if (next_available_ts < local_time + RTT/2 + TPROCESS)
        next_available_ts = local_time + RTT/2 + TPROCESS;
    op_pk_nfd_set(gate, "start_0", next_available_ts);
    op_pk_nfd_set(gate, "length_0", report_size);
    /* compute the next available time slot */
    next_available_ts += TIME_QUANTA * report_size + TGUARD;
    op_pk_nfd_set_pkt(pkptr, "data", gate);
    op_pk_send(pkptr, TO_QUEUE);
    FOUT;
}
To determine the starting point of the transmission window we used a state variable called next_available_ts. If this start time is earlier than the current time, we update it taking into account the transmission and processing delay of the GATE message to the ONU, so that the transmission window starts when the GATE is received by the ONU. Finally, next_available_ts is advanced by the duration of the ONU's transmission plus a guard time. The constant used for the processing time was obtained from experimental simulations; the guard time constant was instead taken from [2], considering the worst case.
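The update rule described above can be sketched as a standalone function. The constants below are illustrative assumptions (chosen to be consistent with a 1 Gb/s link and a 20 km round-trip path), not the values used in the actual model; TIME_QUANTA is the transmission time of one byte at 1 Gb/s.

```c
#include <assert.h>
#include <math.h>

/* Standalone sketch of the next_available_ts update rule. All constants
 * are illustrative assumptions, not the model's actual values. */
#define RTT         0.0002     /* round-trip time, s (assumed) */
#define TPROCESS    0.00001    /* GATE processing time, s (assumed) */
#define TGUARD      0.000001   /* guard time between slots, s (assumed) */
#define TIME_QUANTA 8.0e-9     /* transmission time of one byte at 1 Gb/s */

static double next_available_ts = 0.0;

/* Returns the start time granted to an ONU and advances the shared
 * next_available_ts past its transmission window plus the guard time. */
double schedule_grant(double local_time, long grant_bytes) {
    /* the window cannot start before the GATE reaches the ONU */
    if (next_available_ts < local_time + RTT / 2 + TPROCESS)
        next_available_ts = local_time + RTT / 2 + TPROCESS;
    double start = next_available_ts;
    next_available_ts += TIME_QUANTA * (double)grant_bytes + TGUARD;
    return start;
}
```

Because next_available_ts only ever moves forward, consecutive grants never overlap, which is what keeps the upstream channel collision-free.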
2.4.3 Queue
The queue receives GATE messages from the MPCP and data traffic from the FROM_WORLD receiver. It is built in such a way that control messages have higher priority than data packets, to prevent the GATEs from being delayed by the data traffic. We don't actually send anything downstream, because the aim of the project was to analyze the upstream behaviour.
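The priority rule can be illustrated with a minimal two-FIFO sketch. The fixed-capacity arrays and integer "packets" are simplifications for illustration, not the OPNET queue implementation.

```c
#include <assert.h>

/* Minimal sketch of the OLT queue's priority rule: control (GATE)
 * packets are always served before data packets. */
#define CAP 64

typedef struct { int items[CAP]; int head, tail; } fifo_t;

static void fifo_push(fifo_t *q, int v)  { q->items[q->tail++] = v; }
static int  fifo_empty(const fifo_t *q)  { return q->head == q->tail; }
static int  fifo_pop(fifo_t *q)          { return q->items[q->head++]; }

typedef struct { fifo_t control, data; } olt_queue_t;

/* Serve control packets first, then data; return -1 when both are empty. */
int olt_dequeue(olt_queue_t *q) {
    if (!fifo_empty(&q->control)) return fifo_pop(&q->control);
    if (!fifo_empty(&q->data))    return fifo_pop(&q->data);
    return -1;
}
```

Strict priority is safe here because GATE messages are small and rare relative to data traffic, so they cannot starve the data queue.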
2.5 ONU
The ONU (figure 8) is composed of the same classifier as the OLT and of a queue.
Figure 8: ONU node model
The queue is shown in figure 9 and is implemented as an active queue, in the sense that it can send packets autonomously. In the idle state, three different events can happen:
Arrival of data to send in the upstream direction The packets are inserted into the queue for later forwarding.
GATE message arrival The ONU will schedule a self interrupt for thetime specified in the start_time field of the GATE message.
Figure 9: ONU queue process model
Self interrupt The ONU enters the sending state and sends the data packets and the new REPORT.
The code for the schedule function is shown below.
static void schedule(void)
{
    Packet *gate, *pkptr;
    double start_time = 0;
    FIN(schedule());
    pkptr = op_pk_get(op_intrpt_strm());
    /* receive gate message */
    op_pk_nfd_get_pkt(pkptr, "data", &gate);
    op_pk_nfd_get(gate, "start_0", &start_time);
    length = 0;
    op_pk_nfd_get(gate, "length_0", &length);
    op_pk_destroy(gate);
    op_pk_destroy(pkptr);
    /* if the start time is in the past, start now
       (tail reconstructed: schedule the self interrupt that moves the
       queue to the sending state at the start of the window) */
    if (start_time < op_sim_time())
        start_time = op_sim_time();
    op_intrpt_schedule_self(start_time, 0);
    FOUT;
}
3 Simulation
We expected a throughput of around 1 Gb/s and delays in the order of milliseconds. These results have been achieved, as shown below.
3.1 Packet delay
Global Packet delay
In order to collect the global packet delay statistics, we computed the difference between the reception time of each data packet in the classifier of the OLT and its creation time. These statistics have been collected for different types of traffic:
Exponential interarrival time distribution varying the mean interarrivaltime (figure 10).
Send rate of a single ONU   Global delay
745 Mb/s                    2.1 ms
372 Mb/s                    1.97 ms
248 Mb/s                    1.3 ms
124 Mb/s                    0.6 ms
4 Mb/s                      0.5 ms
Pareto interarrival time distribution, varying the shape parameter from 2 to 64 with a fixed location of 0.000033 (this corresponds to 248 Mb/s for each ONU, for a total rate of about 1 Gb/s to the OLT) (figure 11).
Pareto shape   Global delay
64             0.8 ms
32             0.64 ms
16             0.62 ms
8              0.60 ms
4              0.58 ms
2              0.56 ms
Pareto interarrival time distribution, varying the location parameter with the shape fixed to 2 (figure 12).
Pareto location (rate of a single ONU)   Global delay
745 Mb/s                                 1.8 ms
372 Mb/s                                 0.6 ms
248 Mb/s                                 0.6 ms
4 Mb/s                                   0.5 ms
Figure 10: Packet delay with exponential distribution varying the mean in-terarrival time
Figure 11: Packet delay with Pareto distribution varying the shape parameter
Figure 12: Packet delay with Pareto distribution varying the location param-eter
In all cases the packet delay is lower for greater interarrival times and increases with higher data rates. Nevertheless, the maximum packet delay is about 2 ms, which is a very small value.
Individual Packet delay
In order to collect the packet delay statistics for each ONU, we computed the difference between the reception time of each data packet in the classifier of the OLT and its creation time at the ONU. These statistics have been collected for two types of traffic:
Pareto interarrival time distribution with shape 2 and location 0.000011 for all the ONUs (this corresponds to 745 Mb/s for each ONU, for a total rate greater than 2 Gb/s; in this case some packets are dropped in each ONU queue) (figure 13).
Figure 13: Packet delay using the same Pareto distribution for each ONU
Exponential interarrival time distribution with a different mean for each ONU (figure 14).
ONU     Send rate   Individual packet delay
ONU 1   745 Mb/s    1.45 ms
ONU 2   248 Mb/s    1.09 ms
ONU 3   248 Mb/s    1.09 ms
ONU 4   124 Mb/s    1.016 ms
Figure 14: Packet delay using different exponential interarrival times for eachONU
Both statistics have been collected using ten different seeds for the random generator and averaging the results. For each value in the figures, an error bar representing the 80% confidence interval is also shown.
From these results we can state that the DBA used is fair with respect to the packet delay experienced by each ONU. In fact, in the first scenario, when all the ONUs transmit at the same rate, the packet delay is the same. In the second scenario, the ONUs transmitting at higher data rates experience a slightly higher packet delay than the others, because they have to send more packets, so the packets stay in the queue for a longer time.
3.2 Throughput
Global Throughput
In order to collect the global throughput statistics we used the sink statistic "Traffic received (bit/sec)" at the OLT. The results have been collected for different types of traffic:
Exponential interarrival time distribution varying the mean interarrivaltime (figure 15).
Send rate of a single ONU   Global throughput
745 Mb/s                    1 Gb/s
372 Mb/s                    1 Gb/s
248 Mb/s                    1 Gb/s
124 Mb/s                    500 Mb/s
4 Mb/s                      16 Mb/s
Figure 15: Throughput with exponential distribution varying the mean in-terarrival time
Pareto interarrival time distribution, varying the location parameter with the shape fixed to 2 (figure 16).
Send rate of a single ONU   Global throughput
745 Mb/s                    1 Gb/s
372 Mb/s                    750 Mb/s
248 Mb/s                    500 Mb/s
4 Mb/s                      16 Mb/s
Figure 16: Throughput with Pareto distribution varying the location param-eter
In both cases, the throughput follows the offered load, and for aggregate rates above 1 Gb/s the global throughput saturates at 1 Gb/s. Moreover, we can see that with the Pareto distribution, due to its higher variance, we obtain different results with respect to the exponential distribution.
Individual Throughput
In order to collect the individual throughput statistics we used the link statistic "Throughput (bit/sec)" over the fibers between the splitter and each ONU. The results have been collected for different types of traffic:
Exponential interarrival time distribution with mean interarrival time 0.000033 (corresponding to 248 Mb/s) for each ONU (figure 17).
Figure 17: Throughput with exponential distribution with mean interarrivaltime 0.000033
Exponential interarrival time distribution with mean interarrival time 0.000011 (corresponding to 745 Mb/s) for each ONU (figure 18).
Figure 18: Throughput with exponential distribution with mean interarrivaltime 0.000011
Exponential interarrival time distribution with a different mean interarrival time for each ONU (figure 19).
ONU     Send rate
ONU 1   372 Mb/s
ONU 2   248 Mb/s
ONU 3   248 Mb/s
ONU 4   124 Mb/s
Figure 19: Throughput using different exponential distributions for eachONU
Exponential interarrival time distribution with a different mean interarrival time for each ONU (figure 20).
ONU     Send rate    Effective rate
ONU 1   745 Mb/s     530.6 Mb/s
ONU 2   124 Mb/s     124 Mb/s
ONU 3   124 Mb/s     124 Mb/s
ONU 4   124 Mb/s     124 Mb/s
Total   1.117 Gb/s   902 Mb/s
Figure 20: Throughput using different exponential distributions for eachONU
All the statistics have been collected using ten different seeds for the random generator and averaging the results. Because of the start-up phase, the average value is slightly smaller than the real steady-state value. For each value in the figures, an error bar representing the 80% confidence interval is also shown.
From these results we can state that the DBA used is fair. In the first scenario, when all the ONUs transmit at the same rate, they all have the same throughput. In the second scenario, all the ONUs transmit at the same rate (745 Mb/s), but the total rate is greater than 1 Gb/s; all the ONUs still have the same throughput, which means that packets are dropped equally at each ONU. In the third scenario, each ONU almost reaches the desired throughput and the bandwidth is fairly divided among the ONUs. In the last scenario, all the ONUs transmitting at the lower data rate reach the
desired throughput, while the remaining bandwidth is used by the first ONU.
References
[1] Cedric F. Lam. Passive Optical Networks: Principles and Practice. Aca-demic Press, 2007.
[2] Glen Kramer, Biswanath Mukherjee, and Ariel Maislos. Ethernet passive optical networks, pages 229-275. Wiley-Interscience, New York, NY, USA, 2003.

[3] Gerry Pesavento. Ethernet passive optical network (EPON) architecture for broadband access. Optical Networks Magazine, 4(1):107-113, January 2003.