
Queuing-Theoretic Modeling of a PMU Communication Network

Saranga Menike∗, Pradeepa Yahampath†, Athula Rajapakse‡, Attahiru Alfa§

Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Canada

[email protected]∗, [email protected]†, [email protected]‡, [email protected] §

Abstract—We present a queuing-theoretic approach to modeling a packet-oriented data network that links a set of phasor measurement units (PMUs) to a phasor data concentrator (PDC) in a wide area monitoring, protection and control system (WAMPaCS). The PMU-PDC communication network is approximated as a cyclic polling system and the associated Markov chain is set up. Based on this model, closed-form expressions are derived for important reliability measures such as the packet loss probability and the communication delay. We then demonstrate how the proposed model can be used to predict the impact of the number of data sources in the network, as well as of the buffer capacity of the network switches, on the overall reliability of the communication link. An important property of the proposed model is that its computational complexity is only linear in the number of connected data sources, making it suitable for the study of large systems.

Index Terms—Cyclic polling systems, Markov chains, packet loss probability, phasor measurement units, wide area monitoring, protection and control.

I. INTRODUCTION

Synchrophasor-based wide area monitoring, protection and control (WAMPaC) applications play a vital role in today's power system operations, since they provide a much superior dynamic view of the power system than supervisory control and data acquisition (SCADA) systems. A key requirement for implementing a WAMPaC application is the availability of reliable, high-speed data communication between the phasor measurement units (PMUs) and the phasor data concentrator (PDC) in a control center. With emerging synchrophasor technology, systems with a large number of PMUs will likely be deployed in the near future. However, as the number of data sources in a system increases, congestion in network routers and switches may lead to unacceptable delays and packet losses at the PDC and other control devices. Previous work [1]–[3] presents a comprehensive investigation of the data communication requirements of PMU applications. In [1], the performance of a WAMPaCS is studied using power system simulation in which the communication links are represented by fixed delays. In other related work [4]–[6], communication delays estimated with an off-line communication network simulator are subsequently used in the power system simulation.

This work has been funded by Manitoba Hydro and MITACS.

[Fig. 1. Block diagram of a PMU-PDC communication system: data sources DS 1,1, ..., DS N,nN (PMUs, RTUs, video units) connect to substation switches SW 1, ..., SW N, which link through the control center switch to the PDC and the control center workstation.]

In contrast to previous works, we develop in this paper an analytical approach to studying the delay and packet-loss characteristics of a PMU-PDC network using queuing-theoretic tools [7]–[10]. We consider a centralized PMU-PDC communication architecture in which the control center switch cyclically allocates its processing time among the substation switches in the network. We model the time-sharing behavior of this PMU-PDC network using the cyclic polling approach [11], and then set up a discrete-time Markov chain to derive expressions for the communication delay and the packet-loss probability of the PMU-PDC links. We also present numerical results obtained by simulating the data network in PSCAD to validate the accuracy of the proposed model. The computational complexity of our model is linear in the number of PMUs in the network; it can therefore be particularly useful for studying WAMPaC applications that include a large number of PMUs.

II. SYSTEM MODEL AND ASSUMPTIONS

We focus on the centralized architecture used in industrial Ethernet communication [12]. The system model under consideration is shown in Fig. 1. In this network model, PMUs and other data sources, such as remote terminal units (RTUs) and video units [6], are linked to a switch in each substation. All substation switches are linked to the PDC and the control center workstation via a switch in the control center. The control center switch routes the packets from the PMUs to the PDC, while the packets from the RTUs and video units are routed to the control center workstation.


All communication links are assumed to be standard 10BASE5 Ethernet links, also known as thick Ethernet. The number of PMUs and other data sources connected to each substation switch is not necessarily constant and can vary according to the application requirements; at least one PMU is connected to each substation switch. Let N be the number of substation switches in the network. Since PMUs typically generate data at a constant rate [5], the inter-arrival time of packets from a PMU is assumed to be constant. The inter-arrival times of packets from the other two types of data sources, on the other hand, are assumed to be random. It is also assumed that all substation switches have an identical buffer size of B packets, i.e., a switch can hold a maximum of B data packets, including the one being served.

In the set-up considered here, each substation switch forwards the data packets in its queue (buffer) to the control center switch in first-in-first-out (FIFO) order. In this paper, a cyclic polling system is used to represent the process by which the control center switch accesses, or serves, the packets buffered in the different substation switches. A polling system is defined as a single service resource shared by multiple queues; in a cyclic polling system, the server visits each queue cyclically. Hence, this type of set-up can be represented by a vacation queue model [13], and the PMU-PDC network can be decomposed into single-server queues with visit and vacation periods, where the vacation period of each substation switch is the sum of the service periods of all the other switches.

III. PROBLEM FORMULATION

A. Communication Delay

The total communication delay experienced by a packet transmitted from a data source to its destination is given by [14]

$$ T = T_p + T_q + L/R, \tag{1} $$

where $T_p$ is the propagation delay, $T_q$ is the waiting time spent in the buffers, $L$ is the size of an Ethernet frame, and $R$ is the data rate in bits/s. Except for $T_q$, the parameters in (1) are straightforward to compute. Our goal in this paper is to derive expressions for the mean waiting time in the queue (the queuing delay) $\bar{T}_q = E\{T_q\}$ and the total average delay $\bar{T} = E\{T\}$ for communication between a given PMU and the PDC. Furthermore, the queuing delay in a system such as that of Fig. 1 can be decomposed into two parts: the queuing delay in the substation switch ($T_{q1}$) and the queuing delay in the control center switch ($T_{q2}$). In the following, the switch-over time of the processor in the control center switch is assumed to be negligibly small. It is also assumed that, if a switch buffer is full, any data packets arriving from the connected data sources are discarded, leading to packet losses.
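As a concrete reading of (1), the following minimal Python sketch evaluates the total delay for one packet. The helper name and example values are ours; the 10BASE5 propagation-delay figure and the link parameters are the ones quoted later in Section V.

```python
def total_delay(distance_m, frame_bits, rate_bps, t_q_s):
    """Total delay per Eq. (1): T = Tp + Tq + L/R.

    Assumes the 10BASE5 propagation delay of 4.33 ns/m (IEEE 802.3)
    quoted in Section V.
    """
    t_p = distance_m * 4.33e-9       # propagation delay Tp (s)
    t_tx = frame_bits / rate_bps     # transmission delay L/R (s)
    return t_p + t_q_s + t_tx

# Example with the Section V parameters for a PMU on SW1:
# 100-byte frame, R = 1 Mbps, D = 100 km, and the B = 8 queuing
# delay of Table I (about 11.15 us).
T = total_delay(100e3, 100 * 8, 1e6, 11.15e-6)
print(f"T = {T * 1e3:.3f} ms")       # ~1.2 ms, the order of Table I
```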

B. Discrete Time Markov Chain (DTMC) Model

In this section, we set up the arrival, service, and vacation processes of the DTMC used to represent the system shown in Fig. 1.

[Fig. 2. Queuing delay distribution for the switch SW1: probability vs. queuing delay (time units), with marked points (1, 0.747), (2, 0.1853), (3, 0.05178), (4, 0.01218).]

When a number of data sources are connected to a substation switch, the packets sent by different data sources will, in general, arrive at the switch at different times. However, there is a non-zero probability that packets sent by several data sources arrive at the switch simultaneously. Our goal is therefore to model batch arrivals of packets (with random inter-arrival times) at a substation switch.

1) Arrival Process: Let $D_{i,p}$ be the probability of $p$ packets arriving at the $i$-th switch SWi at time instant $t$, where $p = 0, \ldots, n_i$. Recall that PMUs transmit packets at regular time intervals (typically at a rate of 60 packets/s), while the other data sources may transmit packets randomly (random inter-arrival times at the switch). We therefore assume that the packets from a data source arrive at the switch according to a geometric distribution. The arrivals at each substation switch then form a batch Markovian arrival process (BMAP) [15]. Hence, $D_{i,0}, \ldots, D_{i,n_i}$ can be expressed as

$$ D_{i,0} = \prod_{j=1}^{n_i} (1 - a_{i,j}), \tag{2} $$

$$ D_{i,p} = \sum_{\substack{\mathcal{S}\subseteq\{1,\dots,n_i\}\\ |\mathcal{S}|=p}} \; \prod_{j\in\mathcal{S}} a_{i,j} \prod_{k\notin\mathcal{S}} (1 - a_{i,k}), \tag{3} $$

for $p = 1, \ldots, n_i - 1$, where the sum in (3) runs over all subsets $\mathcal{S}$ of $p$ sources, and

$$ D_{i,n_i} = \prod_{j=1}^{n_i} a_{i,j}, \tag{4} $$

where $a_{i,j}$ is the probability that a data packet arrives from the $j$-th data source at SWi, $i = 1, \ldots, N$ and $j = 1, \ldots, n_i$. Since $\sum_{j=0}^{n_i} D_{i,j} = 1$, this arrival process is stochastic.
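Numerically, (2)-(4) is the distribution of a sum of independent Bernoulli arrivals, so it can be evaluated by convolution rather than by enumerating subsets. A minimal sketch (the function name is ours; the arrival probabilities are the SW1 values used in Section V):

```python
import numpy as np

def batch_arrival_pmf(a):
    """PMF D[p] of p simultaneous arrivals at a switch whose sources have
    independent per-slot arrival probabilities a = [a_1, ..., a_n].
    Implements Eqs. (2)-(4): D[0] = prod(1 - a_j), D[n] = prod(a_j),
    with the intermediate terms obtained by folding in one Bernoulli
    source at a time."""
    D = np.array([1.0])                        # no sources: 0 arrivals w.p. 1
    for a_j in a:
        D = np.convolve(D, [1.0 - a_j, a_j])   # add one more source
    return D                                   # D[p] = D_{i,p}; sums to 1

# SW1 in Section V: a PMU, an RTU, and a video unit.
D1 = batch_arrival_pmf([0.2, 0.1, 0.7])
print(D1)          # [0.216 0.582 0.188 0.014]
```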

2) Service Process: We take the queuing delay $T_{q1}$ (in the buffer) of the substation switch as the service time of each substation switch. We use the procedure explained in [9] to calculate the queuing-delay distribution of each substation switch; the distribution of $T_{q1}$ obtained with the method of [9] for the example of Section III-B1 is shown in Fig. 2. Suppose we discretize the range of $T_{q1}$ into $J$ time steps, so that $T_{q1} \approx cJ$, where $c$ is an integer. Then $T_{q1}$ can be represented by a discrete phase-type (PH) distribution with geometric service completion $(S_i, \beta_i)$ of order $cJ$, where $S_i$ is the $cJ \times cJ$ matrix representing the transitions between the phases of the distribution,

$$
S_i = \begin{bmatrix}
0 & 1-b_{1,i} & 0 & \cdots & 0 \\
0 & 0 & 1-b_{2,i} & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1-b_{(cJ-1),i} \\
0 & 0 & \cdots & 0 & 0
\end{bmatrix}.
$$

The probabilities of absorption from each state are given by the $cJ \times 1$ vector $s_i = [b_{1,i}, \cdots, b_{(cJ-1),i}, 1]^T$, where $b_{j,i}$ is the probability of completing service in $j$ time units at the $i$-th switch, taken from the queuing-delay distribution of that substation switch. The initial distribution is $\beta_i = [1\ 0\ \cdots\ 0]$, of dimension $1 \times cJ$.
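In code, the construction of $(S_i, s_i, \beta_i)$ from a discretized delay distribution might look as follows; a sketch with a hypothetical helper name, using the Fig. 2 probabilities that reappear in Section V:

```python
import numpy as np

def service_ph(b):
    """Discrete PH service representation (S, s, beta) of order cJ from
    the probabilities b[j] of completing service in j+1 time units,
    following the structure described in Section III-B2."""
    cJ = len(b)
    S = np.zeros((cJ, cJ))
    for j in range(cJ - 1):
        S[j, j + 1] = 1.0 - b[j]            # superdiagonal: 1 - b_{j,i}
    s = np.array(list(b[:-1]) + [1.0])      # absorption [b_1,...,b_{cJ-1}, 1]
    beta = np.zeros(cJ); beta[0] = 1.0      # initial distribution [1, 0, ..., 0]
    return S, s, beta

# Service time of SW1 (Fig. 2): order cJ = 4 with c = 1.
S1, s1, beta1 = service_ph([0.747, 0.1853, 0.0518, 0.012])
```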

3) Vacation Process: We model the vacation as an exhaustive single-vacation type; that is, the server goes on vacation as soon as the system is empty. After the server returns from a vacation, it starts serving the packets waiting in the queue or, if the queue is empty, waits for the first packet to arrive [15]. We therefore model the vacation time by a discrete phase-type distribution $(V_i, \alpha_i)$ of order $k_i$ for the $i$-th switch. Once the queuing-delay distribution of each substation switch has been calculated with the method of [9], the vacation time of each substation can be estimated by the Monte-Carlo method [16]. The order $k_i$ varies with the number of data sources in the system, since the vacation of a given substation switch depends on the time spent by the server (i.e., the processor of the control center switch) at the other substation switches. If the $i$-th substation switch SWi has $k_i$ time units of vacation time, its phase-type representation is

$$
V_i = \begin{bmatrix} 0_{k_i-1,1} & I_{k_i-1,k_i-1} \\ 0 & 0_{1,k_i-1} \end{bmatrix}, \qquad
v_i = \begin{bmatrix} 0_{1,k_i-1} & 1 \end{bmatrix}^T, \qquad
\alpha_i = \begin{bmatrix} 1 & 0_{1,k_i-1} \end{bmatrix},
$$

where $I$ denotes an identity matrix and $0$ a null matrix of the indicated dimensions.
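A deterministic $k_i$-slot vacation thus reduces to a chain of transient phases traversed one per time unit; in code (hypothetical helper, same conventions as above):

```python
import numpy as np

def vacation_ph(k):
    """Deterministic k-slot vacation as a discrete PH distribution
    (V, v, alpha): k transient phases traversed one per time unit."""
    V = np.zeros((k, k))
    V[:-1, 1:] = np.eye(k - 1)           # phase j -> j+1 with probability 1
    v = np.zeros(k); v[-1] = 1.0         # vacation ends after the k-th phase
    alpha = np.zeros(k); alpha[0] = 1.0  # always start in the first phase
    return V, v, alpha

# SW1 in Section V: k1 = 8 time units.
V1, v1, alpha1 = vacation_ph(8)
```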

4) DTMC for SWi: With the arrival, service, and vacation processes defined above, we can set up the DTMC as a BMAP/PH/1 queue with PH vacations. Using the results of Sec. 5.2.3.1 in [15], the state transition matrix of this DTMC is given by

$$
P_i =
\begin{bmatrix}
\begin{bmatrix} D_{i,0} & 0 \\ D_{i,0}\otimes v_i & D_{i,0}\otimes V_i \end{bmatrix} &
\begin{bmatrix} D_{i,1}\otimes\beta_i & 0 \\ D_{i,1}\otimes v_i\beta_i & D_{i,1}\otimes V_i \end{bmatrix} &
\cdots &
\begin{bmatrix} D_{i,m}\otimes\beta_i & 0 \\ D_{i,m}\otimes v_i\beta_i & D_{i,m}\otimes V_i \end{bmatrix} &
\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} &
\cdots \\[1ex]
\begin{bmatrix} 0 & D_{i,0}\otimes(s_i\alpha_i) \\ 0 & 0 \end{bmatrix} &
\begin{bmatrix} D_{i,0}\otimes S_i + D_{i,1}\otimes s_i\beta_i & 0 \\ D_{i,0}\otimes v_i\beta_i & D_{i,0}\otimes V_i \end{bmatrix} &
\cdots &
\begin{bmatrix} D_{i,m-1}\otimes S_i + D_{i,m}\otimes s_i\beta_i & 0 \\ D_{i,m-1}\otimes v_i\beta_i & D_{i,m-1}\otimes V_i \end{bmatrix} &
\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} &
\cdots \\[1ex]
\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} &
\begin{bmatrix} 0 & D_{i,0}\otimes(s_i\alpha_i) \\ 0 & 0 \end{bmatrix} &
\cdots &
\begin{bmatrix} D_{i,m-2}\otimes S_i + D_{i,m-1}\otimes s_i\beta_i & 0 \\ D_{i,m-2}\otimes v_i\beta_i & D_{i,m-2}\otimes V_i \end{bmatrix} &
\ddots &
\cdots \\
\vdots & \vdots & & \ddots & \ddots
\end{bmatrix}
\tag{5}
$$

where $m = n_i$ and the block rows and columns correspond to the state pairs $(0,1), (0,2), (1,1), (1,2), \ldots, (m,1), (m,2), \ldots$. As an example, in $P_i$, $(0,1)$ represents the state in which there are no packets in switch SWi and the server is in serving mode, and $(0,2)$ represents the state in which there are no packets in the switch and the server is on vacation. Since the system can have a maximum of $B$ packets in the switch, the last two states in $P_i$ are $(B,1)$ and $(B,2)$.
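Since each $D_{i,p}$ is a scalar, the Kronecker products in (5) reduce to scalar multiples, and the recurring $2\times 2$ blocks can be written compactly. The following sketch builds the two block types of a level-$\ell$ ($\ell \ge 1$) row exactly as printed in (5); the helper is ours, not the authors' code:

```python
import numpy as np

def row_blocks(D0, Dp, Dp1, S, s, beta, V, v, alpha):
    """Recurring blocks of Eq. (5) for a row at level l >= 1.

    A_same: same/higher-level block D_p (x) S + D_{p+1} (x) s*beta over
    the serving phases, with the vacation phases either continuing
    (D_p (x) V) or ending into a fresh service (D_p (x) v*beta).
    A_down: service completion with no arrival, D_0 (x) s*alpha, which
    (5) represents as entering a vacation phase.
    """
    cJ, k = len(s), len(alpha)
    A_same = np.block([
        [Dp * S + Dp1 * np.outer(s, beta), np.zeros((cJ, k))],
        [Dp * np.outer(v, beta),           Dp * V],
    ])
    A_down = np.block([
        [np.zeros((cJ, cJ)), D0 * np.outer(s, alpha)],
        [np.zeros((k, cJ)),  np.zeros((k, k))],
    ])
    return A_same, A_down

# Example with the SW1 objects built earlier and D1 from Eqs. (2)-(4):
A_same, A_down = row_blocks(D1[0], D1[1], D1[2], S1, s1, beta1, V1, v1, alpha1)
```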

IV. SYSTEM BEHAVIOR ANALYSIS

In this section, we develop expressions for the mean waiting time in the queue (the queuing delay) and the packet-loss probability. To do so, we must find the stationary distribution of the above DTMC, $X_i = (X_{i,0}, X_{i,1}, \ldots, X_{i,B})$, where $X_{i,u}$ is the probability that the $i$-th switch holds $u$ packets, from the stationarity condition $X_i = X_i P_i$. A packet is lost when it arrives while the buffer is already full; for SWi,

$$ P_{L_i} = X_{i,B} \sum_{j=1}^{n_i} D_{i,j}, \tag{6} $$

and in order to find the queuing delay, we can apply a suitably modified version of the well-known Little's law,

$$ L = \lambda W, \tag{7} $$

where $L$ is the mean number of packets in the queue, $\lambda$ is the arrival rate of data packets, and $W$ is the mean waiting time in the queue. The mean number of packets waiting in the queue for service can be expressed as

$$ L_{q_i} = \sum_{u=1}^{B} (u-1)\,X_{i,u}. \tag{8} $$

Since all switches have identical buffer sizes, (8) takes the same form for all substation switches. If the system experiences packet losses, then the effective arrival rate,

$$ \lambda' = (1 - P_{L_i})\,\lambda, \tag{9} $$

must be used in place of $\lambda$ in (7). For the $j$-th data source connected to the $i$-th switch, $DS_{i,j}$, the arrival rate is $\lambda_{i,j} = \lambda a_{i,j}$ packets/ms, and the queuing delay $W_{q(i,j)}$ is approximated as

$$ W_{q(i,j)} = \frac{\sum_{u=1}^{B} (u-1)\,X_{i,u}}{(1 - P_{L_i})\,\lambda a_{i,j}}. \tag{10} $$

Likewise, the packet-loss probability (6) and the queuing delay (10) can be calculated for every data source connected to any substation switch, and the total delay then follows from (1). Note that in this model (10) gives the sum of $T_{q1}$ and $T_{q2}$, since $T_{q1}$ is included in the service time. Note also that when the number of data sources $n_i$ connected to a substation switch increases, the number of terms in (2)-(4) increases linearly. Therefore, the complexity of calculating the queuing delay and the packet-loss probability with the proposed model increases only linearly with the number of data sources in the network.
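To make the computation concrete, the sketch below finds the stationary distribution by power iteration and then evaluates (6), (8), and (10). The level-marginal `X_level` (phase probabilities summed within each queue level) and all function names are our own illustrative choices; the chain is assumed aperiodic.

```python
import numpy as np

def stationary(P, iters=20000):
    """Stationary vector x = xP of a finite, aperiodic DTMC, by power iteration."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        x = x @ P
    return x

def loss_and_delay(X_level, D, a_ij, lam):
    """Packet-loss probability (6) and queuing delay (10) for one source.

    X_level[u]: stationary probability of u packets in the switch (u = 0..B);
    D: batch-arrival PMF of Eqs. (2)-(4); a_ij: per-slot arrival probability
    of source DS_{i,j}; lam: packet rate in packets per time unit."""
    B = len(X_level) - 1
    P_L = X_level[B] * D[1:].sum()                            # Eq. (6)
    L_q = sum((u - 1) * X_level[u] for u in range(1, B + 1))  # Eq. (8)
    W_q = L_q / ((1.0 - P_L) * lam * a_ij)                    # Eqs. (7), (9), (10)
    return P_L, W_q
```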

V. NUMERICAL RESULTS

In order to evaluate the DTMC model developed in this paper numerically, we use N = 3 substation switches (SW1, SW2, and SW3), with n1 = 3 (1 PMU, 1 RTU, and 1 video unit), n2 = 2 (1 PMU and 1 RTU), n3 = 4 (2 PMUs, 1 RTU, and 1 video unit), and B = 8, 10, 15 as the system parameters representing the PMU-PDC communication network. Following Fig. 2, we model the service time of SW1 as a discrete phase-type distribution (S1, β1) of order 4, with time step c equal to 1 and b1,1 = 0.747, b2,1 = 0.1853, b3,1 = 0.0518, b4,1 = 0.012.


The same procedure is followed to calculate the service times of SW2 and SW3. Then, using the Monte-Carlo method, we estimated the average vacation times of SW1, SW2, and SW3 to be k1 = 8 ms, k2 = 10 ms, and k3 = 9 ms, respectively, and modeled these vacation times by a separate phase-type distribution (Vi, αi) of order ki for each switch.

In this section, we present numerical results for the packet loss probability, queuing delay T̄q, and total communication delay T̄ of each data source connected to the three substation switches SW1, SW2, and SW3. To find the total delay T̄, we require the propagation delay between each data source and the PDC; the propagation delay of 10BASE5 Ethernet is 4.33 ns/meter (IEEE 802.3). The data rate is assumed to be R = 1 Mbps. The Ethernet frame size L for the PMU, RTU, and video unit is taken as 100 bytes, 700 bytes, and 1100 bytes, respectively [6]. We assume the standard PMU data rate of 60 packets/s [17]; the RTU and video-unit data rates are assumed to be 30 packets/s and 200 packets/s, respectively [6].

Table I presents the queuing delay and the total delay for the three data sources connected to SW1, for three buffer sizes, where D is the distance between the data sources and the control center. It is important to notice that the queuing delay depends on the packet-arrival probabilities a1,j; for the PMU, RTU, and video unit, a1,j is calculated as 60/290, 30/290, and 200/290, respectively, according to the data rate of each source. Notice that a higher a1,j results in a lower queuing delay than a lower a1,j: if the packet-arrival probability is high, the probability of being served by the processor of the control center switch is also high, and the mean waiting time in the queue therefore decreases. Similarly, Tables II and III present the performance measures for the data sources connected to SW2 and SW3, respectively.
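Spelling out that normalization, the per-slot arrival probability of each source is its packet rate divided by the aggregate rate 60 + 30 + 200 = 290 packets/s:

$$
a_{1,1} = \tfrac{60}{290} \approx 0.207, \qquad
a_{1,2} = \tfrac{30}{290} \approx 0.103, \qquad
a_{1,3} = \tfrac{200}{290} \approx 0.690,
$$

which round to the values $a_{1,1} = 0.2$, $a_{1,2} = 0.1$, $a_{1,3} = 0.7$ used in Table I.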

As can be seen in Tables I, II, and III, the queuing delay of each data source increases with the buffer size; the effect of the buffer size on the queuing delay is presented in Fig. 3. A larger buffer gives more room for data packets and, as a result, the mean waiting time eventually increases. It is also clearly noticeable that a small buffer allows a smaller number of packets to be served, so the queuing

TABLE I
NUMERICAL RESULTS BASED ON THE ANALYTICAL MODEL FOR SW1.
a1,1 = 0.2, a1,2 = 0.1, a1,3 = 0.7, D = 100 km.

             Queuing Delay T̄q (µs)        Total Delay T̄ (ms)
             B=8      B=10     B=15       B=8     B=10    B=15
PMU          11.1519  12.5317  16.8433    1.2112  1.2120  1.2168
RTU          27.8797  31.3292  42.1084    6.0279  6.0313  6.0421
Video Unit    4.2892   4.8199   6.4782    9.2043  9.2048  9.2065

TABLE II
NUMERICAL RESULTS BASED ON THE ANALYTICAL MODEL FOR SW2.
a2,1 = 0.67, a2,2 = 0.33, D = 200 km.

             Queuing Delay T̄q (µs)        Total Delay T̄ (ms)
             B=8      B=10     B=15       B=8     B=10    B=15
PMU           3.1417   4.1844   6.9781    1.6031  1.6042  1.6070
RTU           6.3787   8.4955  14.1676    6.4064  6.4085  6.4142

TABLE III
NUMERICAL RESULTS BASED ON THE ANALYTICAL MODEL FOR SW3.
a3,1 = 0.2, a3,2 = 0.2, a3,3 = 0.1, a3,4 = 0.6, D = 325 km.

             Queuing Delay T̄q (µs)        Total Delay T̄ (ms)
             B=8      B=10     B=15       B=8      B=10     B=15
PMU 1        13.1998  15.0301  18.1934    2.1122   2.1160   2.1272
PMU 2        13.1998  15.0301  18.1934    2.1122   2.1160   2.1272
RTU          25.9245  27.0640  36.7860    6.9259   6.9341   6.9578
Video Unit    3.6385   4.7809   8.1103   10.1036  10.1048  10.1081

delay is lower than with larger buffer sizes, but the packet losses will be higher. Furthermore, for any buffer size, a switch with more data sources experiences a longer queuing delay than a switch with a smaller number of data sources. As an example, consider SW2 with 2 data sources and SW3 with 4 data sources: the PMU connected to SW2 has a lower queuing delay than the first PMU connected to SW3 for B = 8, B = 10, and B = 15.

In order to verify the validity of the modeling procedure presented in this paper, we next compare the network performance parameters calculated from our analytical model with those obtained by time-domain simulation of the system in the PSCAD power system simulation software. For this purpose, we implemented a set of communication system components in PSCAD. The comparison of the analytical results predicted by our model with the simulation results is shown in Figs. 3 and 4; it can be seen that the analytical results closely agree with the simulation results. According to Fig. 4, it is clear that if the number of PMUs connected to a substation is large, the packet loss probability increases.

[Fig. 3. Queuing delay (µs) vs. buffer size (in packets) for the 1st PMU in each substation switch (SW1, SW2, SW3), with PSCAD simulation results overlaid.]

[Fig. 4. Packet loss probability vs. number of data sources in the substation switch (SW1, SW2, SW3), with PSCAD simulation results overlaid.]

For example, SW2 has only 2 data sources connected to it and therefore has the lowest packet loss probability, while SW3, with 4 data sources, has the highest packet loss probability; this effect can be clearly seen in Fig. 4. To show the effect of the buffer size on the packet loss probability, three buffer sizes are also considered in Fig. 4. As these results confirm, the proposed analytical model predicts the impact of buffer size on the packet loss probability, i.e., increasing the buffer size leads to a decrease in packet-loss probability.

VI. CONCLUSIONS

We proposed a novel queuing-theoretic model based on cyclic polling for a centralized PMU-PDC communication system. Using simulations, we demonstrated that this model can predict critical performance metrics of a PMU network, such as the packet delay and the packet loss probability. The model can be used to study the impact of the number of data sources in a given network on the system reliability, in terms of loss probability and delay, which can adversely affect the underlying WAMPaCS performance. It can also help the network designer choose the appropriate buffer size for the switches in order to ensure the required level of reliability.

REFERENCES

[1] M. Chenine, Z. Kun, and L. L. Nordstrom, "Survey on priorities and communication requirements for PMU-based applications in the Nordic region," in Proc. IEEE PowerTech, July 2009, pp. 1–8.

[2] A. G. Phadke and J. S. Thorp, "Communication needs for wide area measurement applications," in Proc. 5th Int. Conf. Critical Infrastructure (CRIS), Sept. 2010, pp. 1–7.

[3] M. Chenine and L. L. Nordstrom, "Investigation of communication delays and data incompleteness in multi-PMU wide area monitoring and control systems," in Proc. EPECS '09, Nov. 2009, pp. 1–6.

[4] M. Wei and Z. Chen, "Distribution system protection with communication technologies," in Proc. IEEE 36th Annu. Conf. Industrial Electronics Society, Nov. 2010, pp. 3328–3333.

[5] M. Chenine, E. Karam, and L. Nordstrom, "Modeling and simulation of wide area monitoring and control systems in IP-based networks," in Proc. IEEE PES '09, July 2009, pp. 1–8.

[6] M. Chenine, I. A. Khatib, J. Ivanovski, V. Maden, and L. Nordstrom, "PMU traffic shaping in IP-based wide area communication," in Proc. 5th Int. Conf. Critical Infrastructure (CRIS), Sept. 2010, pp. 1–6.

[7] R. Frigui, I. Stone, and A. S. Alfa, "Message delay for a priority-based automatic meter reading network," Computer Communications, vol. 20, pp. 38–47, 1997.

[8] A. Konheim and B. Meister, "Waiting lines and times in a system with polling," J. ACM, vol. 21, 1974.

[9] B. V. Houdt and C. Blondia, "The waiting time distribution of a type k customer in a discrete time FCFS MMAP[K]/PH[K]/c (c = 1, 2) queue using QBDs," Stochastic Models, vol. 20, no. 1, pp. 55–69, 2004.

[10] ——, "The delay distribution of a type k customer in a FCFS MMAP[K]/PH[K]/1 queue," J. Applied Probability, vol. 39, no. 1, March 2002.

[11] H. Levy and M. Sidi, "Polling systems: Applications, modeling, and optimization," IEEE Trans. Commun., vol. 38, no. 10, pp. 1750–1760, Oct. 1990.

[12] J. D. Decotignie, "Ethernet-based real-time and industrial communications," Proc. IEEE, vol. 93, no. 6, pp. 1102–1117, June 2005.

[13] T. Wang, J. Ke, and F. Chang, "A discrete-time queue with modified vacation policy," in Proc. 4th Int. Joint Conf. Computational Sciences and Optimization (CSO), April 2011, pp. 132–136.

[14] B. Naduvathuparambil, M. C. Valenti, and A. Feliachi, "Communication delays in wide area measurement systems," in Proc. 34th Southeastern Symp. System Theory, 2002, pp. 118–122.

[15] A. S. Alfa, Queueing Theory for Telecommunications: Discrete Time Modelling of a Single Node System. New York: Springer, 2010.

[16] S. Ross, Simulation, 4th ed. Academic Press, 2006.

[17] IEEE Std C37.118.2-2011 (Revision of IEEE Std C37.118-2005), 2011, pp. 1–53.