

Available online at www.sciencedirect.com

Computers & Industrial Engineering 54 (2008) 1078–1086

www.elsevier.com/locate/caie

Note on the article: Maximum entropy analysis of the M[x]/M/1 queueing system with multiple vacations and server breakdowns

Edward Omey, Stefan Van Gulck *

Mathematics and Statistics, EHSAL, Stormstraat 2, 1000 Brussel, Belgium

Received 21 July 2007; accepted 29 October 2007. Available online 6 November 2007.

Abstract

Wang et al. [Wang, K. H., Chan, M. C., & Ke, J. C. (2007). Maximum entropy analysis of the M[x]/M/1 queueing system with multiple vacations and server breakdowns. Computers & Industrial Engineering, 52, 192–202] elaborate on an interesting approach to estimate the equilibrium distribution for the number of customers in the M[x]/M/1 queueing model with multiple vacations and server breakdowns. Their approach consists of maximizing an entropy function subject to constraints, where the constraints are formed by some known exact results. By a comparison between the exact expression for the expected delay time and an approximate expected delay time based on the maximum entropy estimate, they argue that their maximum entropy estimate is sufficiently accurate for practical purposes. In this note, we show that their maximum entropy estimate is easily rejected by simulation. We propose a minor modification of their maximum entropy method that significantly improves the quality of the estimate.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Queueing model; Batch arrival; Multiple vacation; Server breakdowns; Maximum entropy; Simulation

1. Model description and exact results

For clarity, we briefly repeat the description of the queueing model and adopt the notations of Wang, Chan, and Ke (2007).

The (single) server serves customers in the order of their arrivals and in an exponentially distributed time with mean 1/μ. The state of the server is denoted by i. The server is either on vacation (i = 0), busy (i = 1) or broken down (i = 2). If the system is empty at the instant of a service completion, the server takes a vacation during an exponentially distributed time with mean 1/v. If no customers are present when the server returns from vacation, the server takes another vacation during an exponentially distributed time with mean 1/v; otherwise, it starts its service. During a busy period, the server can break down with a constant failure rate that is denoted

0360-8352/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.cie.2007.10.019

* Corresponding author. Fax: +32 2 217 64 64. E-mail addresses: [email protected] (E. Omey), [email protected] (S. Van Gulck).


by α. It takes an exponentially distributed time with mean 1/β to repair the defective server. Once the server is repaired, it resumes the unfinished service.

Customers arrive in groups. Groups of customers arrive according to a (modified) Poisson process whose rates λ0, λ1 and λ2 are allowed to depend on the state of the server. The sizes of the groups are independent and identically distributed, and represented by the positive integer-valued random variable A.

Because all the interevent times are exponentially distributed, this queueing system can be described by a Markov chain with state space {(0,0)} ∪ {(i,n): i = 0, 1, 2; n = 1, 2, ...}. In the following, only the equilibrium properties of this Markov chain will be considered.

Let Pi(n) be the (equilibrium) probability of the event where there are n customers present in the system and the server is in state i. The probability for the server to reside in state i is denoted by ai, where

a0 = Σ_{n=0}^∞ P0(n),  a1 = Σ_{n=1}^∞ P1(n)  and  a2 = Σ_{n=1}^∞ P2(n).

The probability distribution of the number of customers present in the system is {P(n): n ≥ 0}, with P(0) = P0(0) and P(n) = P0(n) + P1(n) + P2(n) if n ≥ 1.

Wang et al. (2007) use the word "when" in their description of Pi(n). This suggests erroneously that Pi(n) is a conditional probability. The probability that there are n customers present in the system, given that the server is in state i, is Pi(n)/ai.

Wang et al. (2007) introduce the probability generating functions

G0(z) = Σ_{n=0}^∞ z^n P0(n),  G1(z) = Σ_{n=1}^∞ z^n P1(n),  G2(z) = Σ_{n=1}^∞ z^n P2(n)

and G(z) = G0(z) + G1(z) + G2(z), and succeed in obtaining an explicit expression for Gi(z) (i = 0, 1, 2); see their Eqs. (8), (12) and (13). By noting that ai = Gi(1), they conclude that

a0 = P(0) · 1/(1 − ρv),  (1)

a1 = P(0) · βρ0/(θ(1 − ρv)),  (2)

a2 = P(0) · αρ0/(θ(1 − ρv)).  (3)

Further, because G(1) = a0 + a1 + a2 = 1, they find that

P(0) = (1 − ρv)θ/(θ + (α + β)ρ0).  (4)

In (1)–(4) the following notations are used:

ρv = λ0/(λ0 + v),  θ = β(1 − ρ1) − αρ2  and  ρi = λi E(A)/μ  (i = 0, 1, 2).

Moreover, Wang et al. (2007) calculate the expected number of customers in the system, which is denoted by Ls. By using the property Ls = G′(1) and the decomposition G(z) = G0(z) + G1(z) + G2(z), this expectation can be written as Ls = Ls^0 + Ls^1 + Ls^2, where the Ls^i = Gi′(1) are given by

Ls^0 = P(0) · ρv E(A)/(1 − ρv)²,  (5)

Ls^1 = P(0) { β[λ0θ + ρ0(λ1β + λ2α)]E(A²)/(2μθ²(1 − ρv)) + βρ0ρv E(A)/(θ(1 − ρv)²) + ρ0(2μαρ2² + β²)/(2θ²(1 − ρv)) },  (6)

Ls^2 = (α/β) Ls^1 + P(0) · λ2αρ0 E(A)/(βθ(1 − ρv)).  (7)


In Wang et al. (2007), Ls^i is erroneously called the expected number of customers in the system when the server is in state i; this conditional expectation is Ls^i/ai.

The expected number of customers in the queue now easily follows from the general single-server formula Lq = Ls − (1 − P(0)); see e.g. Gross and Harris (1998, p. 12). Finally, the expected delay time Wq can be obtained by applying Little's law,

Wq = Lq/(λ E(A)),  where  λ = a0λ0 + a1λ1 + a2λ2  (8)

is the (long-run) average number of arriving groups per unit of time. In Wang et al. (2007), formula (8) appears with λ = λ0 + λ1 + λ2, which is a mistake.

2. Maximum entropy estimate

In principle, the probability distribution of the number of customers present in the queueing system can be obtained by repeatedly differentiating G(z) and then setting z = 0. But because such calculations rapidly become too tedious, this strategy is practically unfeasible.

This is why Wang et al. (2007) estimate the exact probabilities Pi(n) by approximate probabilities P̂i(n). In their method, the approximate probabilities maximize the entropy function

H = − Σ_{n=0}^∞ P0(n) ln P0(n) − Σ_{n=1}^∞ P1(n) ln P1(n) − Σ_{n=1}^∞ P2(n) ln P2(n),  (9)

subject to the constraints

Σ_{n=0}^∞ P0(n) + Σ_{n=1}^∞ P1(n) + Σ_{n=1}^∞ P2(n) = 1,  (10)

Σ_{n=1}^∞ P1(n) = a1,  (11)

Σ_{n=1}^∞ P2(n) = a2,  (12)

Σ_{n=0}^∞ n P0(n) + Σ_{n=1}^∞ n P1(n) + Σ_{n=1}^∞ n P2(n) = Ls.  (13)

The optimal probabilities are obtained by Lagrange's method. Their solution is

P̂0(n) = a0/(a0 + Ls) · (1 − 1/(a0 + Ls))^n  for n ≥ 0,  (14)

P̂1(n) = a1/(a0 + Ls) · (1 − 1/(a0 + Ls))^(n−1)  for n ≥ 1,  (15)

P̂2(n) = a2/(a0 + Ls) · (1 − 1/(a0 + Ls))^(n−1)  for n ≥ 1.  (16)

P(n) is estimated by P̂(n), where P̂(0) = P̂0(0) and P̂(n) = P̂0(n) + P̂1(n) + P̂2(n) if n ≥ 1:

P̂(n) = a0/(a0 + Ls)  if n = 0;
P̂(n) = Ls/(a0 + Ls)² · (1 − 1/(a0 + Ls))^(n−1)  if n ≥ 1.  (17)

Fig. 1 illustrates the maximum entropy estimate (17) in the case where A is uniformly distributed on {1,2,3,4,5} (notation: A ~ Un(1,5)) and with the following choice of parameters:

λ0 = 0.3,  λ1 = 0.5,  λ2 = 0.4,  μ = 2,  α = 0.05,  β = 3  and  v = 2.  (18)

Fig. 1. Maximum entropy estimate (17) in the case where A ~ Un(1,5) and with parameter choice (18).


The parameter choice (18) is identical to the baseline of the numerical examples in Wang et al. (2007). Moreover, they also consider the case A ~ Un(1,5) in their numerical examples. Remark that E(A) = 3 and E(A²) = 11. Hence, from (1)–(8) it can be calculated that

a0 ≈ 0.3441,  a1 ≈ 0.6452,  a2 ≈ 0.0108,  P(0) ≈ 0.2992,  Ls ≈ 6.8422,  Wq ≈ 4.7596.
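These values can be reproduced directly from (1)–(8). The Python sketch below (our illustrative addition) codes those formulas for A ~ Un(1,5) under parameter choice (18).

```python
# Exact equilibrium quantities (1)-(8) for A ~ Un(1,5) under parameters (18).
lam0, lam1, lam2 = 0.3, 0.5, 0.4          # state-dependent group arrival rates
mu, alpha, beta, v = 2.0, 0.05, 3.0, 2.0  # service, failure, repair, vacation rates
EA, EA2 = 3.0, 11.0                       # E(A) and E(A^2) for Un(1,5)

rho_v = lam0 / (lam0 + v)
rho0, rho1, rho2 = (lam * EA / mu for lam in (lam0, lam1, lam2))
theta = beta * (1 - rho1) - alpha * rho2

P0 = (1 - rho_v) * theta / (theta + (alpha + beta) * rho0)   # Eq. (4)
a0 = P0 / (1 - rho_v)                                        # Eq. (1)
a1 = P0 * beta * rho0 / (theta * (1 - rho_v))                # Eq. (2)
a2 = P0 * alpha * rho0 / (theta * (1 - rho_v))               # Eq. (3)

Ls0 = P0 * rho_v * EA / (1 - rho_v) ** 2                     # Eq. (5)
Ls1 = P0 * (beta * (lam0 * theta + rho0 * (lam1 * beta + lam2 * alpha)) * EA2
            / (2 * mu * theta ** 2 * (1 - rho_v))
            + beta * rho0 * rho_v * EA / (theta * (1 - rho_v) ** 2)
            + rho0 * (2 * mu * alpha * rho2 ** 2 + beta ** 2)
            / (2 * theta ** 2 * (1 - rho_v)))                # Eq. (6)
Ls2 = (alpha / beta) * Ls1 + P0 * lam2 * alpha * rho0 * EA / (beta * theta * (1 - rho_v))  # Eq. (7)
Ls = Ls0 + Ls1 + Ls2

Lq = Ls - (1 - P0)                          # general single-server formula
lam_bar = a0 * lam0 + a1 * lam1 + a2 * lam2 # long-run group arrival rate
Wq = Lq / (lam_bar * EA)                    # Little's law, Eq. (8)
```
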

In Fig. 1, the small value of P̂(0) attracts our attention. This value P̂(0) ≈ 0.0479 is a lot smaller than the exact value P(0) ≈ 0.2992. This is the reason why we will propose an improved maximum entropy estimate for {P(n): n ≥ 0}. Moreover, our result for Wq deviates from the value Wq ≈ 4.0560 which appears in Wang et al. (2007) for this example.

3. Improved maximum entropy estimate

Because the exact result for P(0) is known, cf. (4), we see no reason to estimate it. So, in our improved maximum entropy method we replace constraint (10) by

Σ_{n=1}^∞ P0(n) + Σ_{n=1}^∞ P1(n) + Σ_{n=1}^∞ P2(n) = 1 − c,  (19)

where the notation c = P(0) = P0(0) is introduced. By following the same steps as in Wang et al. (2007), we arrive for n ≥ 1 at the improved maximum entropy estimates:

P0(n) = (a0 − c)(1 − c)/(Ls − (1 − c)) · (1 − (1 − c)/Ls)^n,  (20)

P1(n) = a1(1 − c)/(Ls − (1 − c)) · (1 − (1 − c)/Ls)^n,  (21)

P2(n) = a2(1 − c)/(Ls − (1 − c)) · (1 − (1 − c)/Ls)^n.  (22)

By adding (20)–(22), the improved maximum entropy estimate for P(n), n ≥ 1, is obtained:

P(n) = (1 − c)²/(Ls − (1 − c)) · (1 − (1 − c)/Ls)^n.  (23)

Remark that Lq = Ls − (1 − c), showing that the estimate (23) decreases geometrically with decay factor Lq/Ls. In summary, our improved maximum entropy estimate for {P(n): n ≥ 0} is given by

Fig. 2. Improved maximum entropy estimate (24) in the case where A ~ Un(1,5) and with parameter choice (18).


P(n) = c  if n = 0;
P(n) = (1 − c)²/Lq · (Lq/Ls)^n  if n ≥ 1.  (24)
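As a numerical check of (24), the Python sketch below (our illustrative addition) verifies that the improved estimate has total mass one and mean Ls, as the constraints require.

```python
# Improved maximum entropy estimate (24): the exact empty-system probability
# c = P(0) is kept, and the remaining mass follows a geometric tail with
# decay factor Lq/Ls.
def improved_estimate(c, Ls, n):
    """Approximate probability of n customers in the system, per Eq. (24)."""
    Lq = Ls - (1.0 - c)
    if n == 0:
        return c
    return (1.0 - c) ** 2 / Lq * (Lq / Ls) ** n

c, Ls = 0.2992, 6.8422                   # exact P(0) and Ls of the example
total = sum(improved_estimate(c, Ls, n) for n in range(3000))
mean = sum(n * improved_estimate(c, Ls, n) for n in range(3000))
p_tilde1 = improved_estimate(c, Ls, 1)   # ≈ 0.0718, cf. the Appendix
```
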

Fig. 2 illustrates (24) for the same case as in Fig. 1.

4. Simulation

Wang et al. (2007) test the quality of their maximum entropy estimate (17) by comparing the exact value Wq for the expected waiting time in the queue (obtained by applying Little's law) with a corresponding approximate value Wq* that is based on (17). They develop a formula for Wq* by dividing customers into three groups: those who arrive when the server is on vacation (i = 0), those who arrive when the server is busy (i = 1) and those who arrive when the server is broken down (i = 2). We find their expression for Wq* questionable, because the breakdown rate α appears nowhere in it and because the (non-valid) PASTA property is invoked.

In any case, we are convinced that a more direct approach to test the quality of the maximum entropy estimate of {P(n): n ≥ 0} consists of comparing the estimate with an empirical distribution obtained by simulation. The M[x]/M/1 queueing model with multiple vacations and server breakdowns can easily be simulated. We performed this simulation in Microsoft Excel; it is available on demand from the corresponding author.

In the numerical examples of Wang et al. (2007), A is either uniformly distributed on {1,2,3,4,5} (denoted by A ~ Un(1,5)) or geometrically distributed with parameter 0.25 (denoted by A ~ Geo(0.25)). The first case already appeared in Figs. 1 and 2, and will be explored further here. In the latter case, our interpretation of the notation A ~ Geo(0.25) is

P(A = n) = 0.25 × 0.75^(n−1)  (n = 1, 2, ...).  (25)

The reader can verify that, according to (4), P(0) < 0 in the case where (18) and (25) hold, which implies that this case is unstable; the traffic intensity when the server is busy amounts to ρ1 = 1. We have also performed simulations in the case where A is geometrically distributed. We content ourselves with reporting only on the case where A is uniformly distributed, because the geometric case leads to the same conclusion. The following results report on a simulation with (18) as the choice for the parameters and where A ~ Un(1,5).
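To sketch how such a simulation can be set up (we used Microsoft Excel; the Python version below is an illustrative re-implementation under the same model assumptions, not the spreadsheet we distribute), the system can be simulated as a continuous-time Markov chain by drawing, in each state, an exponential time to the next event.

```python
import random

def simulate(T, lam=(0.3, 0.5, 0.4), mu=2.0, alpha=0.05, beta=3.0, v=2.0, seed=1):
    """Simulate the M[x]/M/1 queue with multiple vacations and breakdowns
    (A ~ Un(1,5)) up to time T; return time-average busy and empty fractions."""
    rng = random.Random(seed)
    i, n, t = 0, 0, 0.0               # initial state (0, 0): on vacation, empty
    time_busy = time_empty = 0.0
    while t < T:
        rates = {'arrival': lam[i]}   # group arrivals at the state-dependent rate
        if i == 0:
            rates['vacation_end'] = v
        elif i == 1:
            rates['service'] = mu
            rates['breakdown'] = alpha
        else:                          # i == 2: broken down
            rates['repair'] = beta
        dt = rng.expovariate(sum(rates.values()))   # time to the next event
        time_busy += dt if i == 1 else 0.0
        time_empty += dt if n == 0 else 0.0
        t += dt
        event = rng.choices(list(rates), weights=list(rates.values()))[0]
        if event == 'arrival':
            n += rng.randint(1, 5)     # batch size A ~ Un(1,5)
        elif event == 'vacation_end':
            i = 1 if n > 0 else 0      # empty system: take another vacation
        elif event == 'service':
            n -= 1
            if n == 0:
                i = 0                  # system empty: start a vacation
        elif event == 'breakdown':
            i = 2
        else:                          # repair: resume the unfinished service
            i = 1
    return time_busy / t, time_empty / t

# Over a long horizon the time averages should approach a1 and P(0).
frac_busy, frac_empty = simulate(100000)
```

For the parameter choice (18), the returned fractions should be close to a1 ≈ 0.6452 and P(0) ≈ 0.2992.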

In Fig. 3, a possible realization of (i(t), n(t)) is shown for the first 300 time units (0 ≤ t ≤ 300) and with initial state (i(0), n(0)) = (0,0). Fig. 3 illustrates that the breakdown state i = 2 is seldom visited and that i(t) most often resides in the busy state i = 1. Moreover, Fig. 3 shows that n(t) fluctuates strongly and that large excursions of n(t) are possible. By comparing Fig. 3 and other realizations of (i(t), n(t)) with the exact equilibrium results (a0 ≈ 0.3441, a1 ≈ 0.6452, a2 ≈ 0.0108, P(0) ≈ 0.2992, Ls ≈ 6.8422), we are confident that in the present example the system has sufficiently reached equilibrium at t = 300.

Fig. 3. A possible realization of (i(t), n(t)) for t ≤ 300, obtained by simulation in the case where A ~ Un(1,5) and with parameter choice (18). The upper part of the figure shows an evolution of the number of customers in the system. The lower part shows a corresponding evolution of the server's state.


We observed (i(300), n(300)) 1000 times in our simulation. The exact values of a0, a1, a2, P(0) and Ls lie well within the 95% confidence intervals obtained with our simulation; these intervals are, respectively, [0.3077, 0.3663], [0.6215, 0.6805], [0.0053, 0.0187], [0.2677, 0.3243] and [6.1790, 7.2110]. Hence, we are inclined to strengthen our belief that the equilibrium distribution {P(n): n ≥ 0} is well represented by the distribution of n(300).

Fig. 4. Graphical comparison of the maximum entropy estimate (17) and the improved maximum entropy estimate (24) with an empirical distribution of the number of customers at time t = 300, obtained after 1000 repetitions of our simulation, in the case where A ~ Un(1,5) and with parameter choice (18).

Table 1
Kolmogorov–Smirnov test

                  Maximum entropy    Improved maximum entropy
Test value (KS)   0.2481             0.0230
p-value           6.732 × 10⁻⁵⁴      0.6663

The empirical distribution of n(300) is compared in Fig. 4 with the maximum entropy estimates (17) and (24), which already appeared in Figs. 1 and 2. From Fig. 4, it can be concluded that the maximum entropy estimate (17) is a bad approximation of the true distribution {P(n): n ≥ 0}, while the improved maximum entropy estimate (24) seems to be close to {P(n): n ≥ 0}.

We finally applied the Kolmogorov–Smirnov test in order to compare statistically the maximum entropy estimates (17) and (24) with the true distribution {P(n): n ≥ 0}, which is supposed to be well represented by our simulation. The test value KS in the Kolmogorov–Smirnov test is the maximum absolute difference between the empirical distribution function (obtained in our simulation) and a theoretical distribution function (corresponding either to (17) or (24)). This test value KS is compared with the 5% significance critical value for the corresponding test statistic, which is given by

√(−ln(0.05/2)/(2 × 1000)) ≈ 0.0429.

If the test value KS is larger than this critical value, the theoretical distribution is rejected at the 5% significance level.

It can be observed from Table 1 that the maximum entropy estimate (17) is rejected, while the improved maximum entropy estimate (24) is not rejected. To further highlight the difference in quality between the estimates (17) and (24), the p-values also appear in Table 1; these p-values are calculated with the formula

2 × Σ_{k=1}^∞ (−1)^(k+1) × e^(−2 × 1000 × (k × KS)²).
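Both the critical value and the p-values follow from the asymptotic Kolmogorov–Smirnov distribution; the Python sketch below (our illustrative addition) reproduces them.

```python
import math

def ks_critical(n_obs, alpha_level=0.05):
    """Asymptotic one-sample Kolmogorov-Smirnov critical value."""
    return math.sqrt(-math.log(alpha_level / 2) / (2 * n_obs))

def ks_pvalue(KS, n_obs, terms=100):
    """Asymptotic p-value of the Kolmogorov-Smirnov statistic KS,
    via the alternating series used in the text."""
    return 2 * sum((-1) ** (k + 1) * math.exp(-2 * n_obs * (k * KS) ** 2)
                   for k in range(1, terms + 1))

crit = ks_critical(1000)              # ≈ 0.0429, the 5% critical value
p_improved = ks_pvalue(0.0230, 1000)  # p-value for the improved estimate (24)
p_original = ks_pvalue(0.2481, 1000)  # p-value for the estimate (17)
```
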

As expected, the maximum absolute difference between the empirical distribution function and the theoretical distribution function is attained at n = 0 for the estimate (17). This property has repeatedly been observed and shows that the major drawback of the estimate (17) is due to P̂(0).

5. Conclusions

We have shown that the maximum entropy estimate of Wang et al. (2007) fails to accurately approximate the true distribution of the number of customers present in the system, and that a significant improvement can be obtained by including the exact probability of the empty system in the constraints of their method.

6. Appendix: Further improvements

In our simulation of Section 4, we also calculated the 95% confidence intervals for P(1) and P(2). These intervals are, respectively, [0.0382, 0.0658] and [0.0533, 0.0847]. The improved maximum entropy estimates (24) for P(1) and P(2) are, respectively, 0.0718 and 0.0644. Hence, the estimate for P(1) is not contained in the 95% confidence interval for P(1). Moreover, the Kolmogorov–Smirnov test value KS ≈ 0.0230 for (24) is attained at n = 1. Hence, there seems to be room for further improvements of the maximum entropy estimate (24), in particular by taking the exact value of P(1) into account.

In an extra effort to improve the maximum entropy estimate of Wang et al. (2007), the exact expressions for P(0), P(1), ..., P(k), with k ≥ 1, can be used as additional constraints in Lagrange's method. These probabilities can be found by applying P(n) = G^(n)(0)/n!, where G^(n) denotes the nth derivative of the probability generating function G. For n = 1 and n = 2, we obtain

P(1) = P(0) [ r1ρv + λ0/μ + λ0α/(μ(λ2 + β)) ]  (26)

and

P(2) = P(0) [ r2ρv + r1²ρv² + r1λ0λ2α/(μ(λ2 + β)²) + (λ0/μ²)(1 + α/(λ2 + β))(λ1 + μ + α − r1μ(1 − ρv) − αβ/(λ2 + β)) ].  (27)

In (26) and (27), the notation {rn: n ≥ 1} of Wang et al. (2007) for the probability distribution of A has been used. For the example where A ~ Un(1,5) and with (18) as the choice for the parameters, we find P(1) ≈ 0.0533 and P(2) ≈ 0.0572.

In analogy with Wang et al. (2007), a further improved estimate {P̃(n): n ≥ 0} for {P(n): n ≥ 0} can be found by maximizing the entropy function (9) according to Lagrange's method, where the constraint equations (10)–(13) are supplemented with the constraints

P(0) = c0  and  Pi(n) = cin  (i = 0, 1, 2; n = 1, ..., k).  (28)

In (28), c0 and cin denote the exact values of P(0) and Pi(n) (i = 0, 1, 2; n = 1, ..., k). This procedure would first lead to estimates P̃i(n) for Pi(n) (i = 0, 1, 2; n = 1, 2, ...). The estimates for P(n) are successively obtained by P̃(n) = P̃0(n) + P̃1(n) + P̃2(n) (n = 1, 2, ...).

Although the above procedure is certainly tractable, the derivation of {P̃(n): n ≥ 0} can be shortened by noting that exactly the same estimate is obtained by maximizing the entropy function

H = − Σ_{n=0}^∞ P(n) ln P(n),  (29)

subject to the constraints

Σ_{n=0}^∞ P(n) = 1,  Σ_{n=0}^∞ n P(n) = Ls  and  P(n) = cn  (n = 0, ..., k).  (30)

In (30), cn = c0n + c1n + c2n denotes the exact value of P(n) (n = 1, ..., k). For illustrative purposes, we include here this shortened derivation. The problem formulation (29) and (30) leads to the Lagrangian function

L = − Σ_{n=0}^∞ P(n) ln P(n) − θ1( Σ_{n=0}^∞ P(n) − 1 ) − θ2( Σ_{n=0}^∞ n P(n) − Ls ) − Σ_{n=0}^k τn (P(n) − cn),  (31)

where θ1, θ2, τ0, ..., τk are Lagrangian multipliers. Because the estimates P̃(n) = cn for n ≤ k are supposed to be known, we restrict our attention hereafter to P̃(n) with n > k. The derivative of (31) with respect to P(n) is

∂L/∂P(n) = − ln P(n) − 1 − θ1 − nθ2  if n > k.  (32)

By introducing the notations φ1 = e^(−1−θ1) and φ2 = e^(−θ2), it follows from (32) that the maximum entropy estimate satisfies

P̃(n) = φ1 φ2^n  if n > k.  (33)

The expressions for φ1 and φ2 follow from (33) and the constraints in (30), because

Fig. 5. Maximum entropy estimate (39) for k = 0, k = 1 and k = 2 in the case where A ~ Un(1,5) and with parameter choice (18).


1 − Σ_{n=0}^k cn = Σ_{n=k+1}^∞ P̃(n) = φ1 φ2^(k+1)/(1 − φ2),  (34)

Ls − Σ_{n=0}^k n cn = Σ_{n=k+1}^∞ n P̃(n) = φ1 φ2^(k+1) (k + 1 − kφ2)/(1 − φ2)².  (35)

To obtain an attractive formula for the maximum entropy estimate, we introduce the notation N for the random variable that denotes the equilibrium number of customers present in the system. Remark that the probability that N exceeds k and the (conditional) expectation of N when N exceeds k are, respectively, given by

P(N > k) = 1 − Σ_{n=0}^k cn  and  E(N | N > k) = (Ls − Σ_{n=0}^k n cn)/(1 − Σ_{n=0}^k cn).  (36)

From (34)–(36) it follows that

φ1 = P(N > k)/E(N − k | N > k) · (1 − 1/E(N − k | N > k))^(−k−1),  (37)

φ2 = 1 − 1/E(N − k | N > k).  (38)

In summary, the maximum entropy estimate {P̃(n): n ≥ 0} is given by

P̃(n) = P(N = n)  if n ≤ k;
P̃(n) = P(N > k)/E(N − k | N > k) · (1 − 1/E(N − k | N > k))^(n−k−1)  if n > k.  (39)
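The estimate (39) is straightforward to implement. The Python sketch below (our illustrative addition) takes the exact values c0, ..., ck and Ls as input and checks that the resulting distribution keeps total mass one and mean Ls; for k = 1 it uses the exact P(0) ≈ 0.2992 and P(1) ≈ 0.0533 of the running example.

```python
# Maximum entropy estimate (39): exact probabilities for n <= k, and a
# geometric tail calibrated to P(N > k) and E(N - k | N > k) for n > k.
def general_estimate(c, Ls, n):
    """c = [c0, ..., ck] are the exact values of P(0), ..., P(k)."""
    k = len(c) - 1
    if n <= k:
        return c[n]
    p_tail = 1.0 - sum(c)                                   # P(N > k), Eq. (36)
    e_exc = (Ls - sum(m * cm for m, cm in enumerate(c))) / p_tail - k  # E(N-k | N>k)
    return p_tail / e_exc * (1.0 - 1.0 / e_exc) ** (n - k - 1)

c01, Ls = [0.2992, 0.0533], 6.8422      # exact P(0), P(1) of the example (k = 1)
total = sum(general_estimate(c01, Ls, n) for n in range(4000))
mean = sum(n * general_estimate(c01, Ls, n) for n in range(4000))
# With k = 0, (39) reduces to the improved estimate (24).
p_k0_at_1 = general_estimate([0.2992], Ls, 1)   # ≈ 0.0718
```
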

Remark that (39) coincides with (24) if k = 0. In Fig. 5, the maximum entropy estimates (39) for k = 0 (which already appeared in Fig. 2), k = 1 and k = 2 are compared for the example where A ~ Un(1,5) and with (18) as the choice for the parameters.

References

Gross, D., & Harris, C. M. (1998). Fundamentals of queueing theory (3rd ed.). New York: John Wiley.

Wang, K. H., Chan, M. C., & Ke, J. C. (2007). Maximum entropy analysis of the M[x]/M/1 queueing system with multiple vacations and server breakdowns. Computers & Industrial Engineering, 52, 192–202.