TELETRAFFIC ENGINEERING
and
NETWORK PLANNING
Villy B. Iversen
DTU Course 34340
http://www.fotonik.dtu.dk
Technical University of Denmark
Building 343
DK–2800 Kgs. Lyngby
Revised May 20, 2010
© Villy Bæk Iversen, 2010
PREFACE
This book covers the basic theory of teletraffic engineering. The mathematical background required is elementary probability theory. The purpose of the book is to enable engineers to understand ITU–T recommendations on traffic engineering, evaluate tools and methods, and keep up-to-date with new practices. The book includes the following parts:
• Introduction: Chapter 1,
• Mathematical background: Chapters 2 – 3,
• Telecommunication loss models: Chapters 4 – 8,
• Data communication delay models: Chapters 9 – 12,
• Measurement and simulation: Chapter 13.
The purpose of the book is twofold: to serve both as a handbook and as a textbook. Thus the reader should, for example, be able to study chapters on loss models without studying the chapters on the mathematical background first.
The book is based on many years of experience in teaching the subject at the Technical University of Denmark and from ITU training courses in developing countries.
Supporting material, such as software, exercises, advanced material, and case studies, is available at:
http://oldwww.com.dtu.dk/education/34340
where comments and ideas will also be appreciated.
Villy Bæk Iversen
May 20, 2010
Contents
1 Introduction to Teletraffic Engineering 1
1.1 Modeling of telecommunication systems . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 System structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Operational strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Statistical properties of traffic . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.4 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Conventional telephone systems . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 System structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 User behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Operation strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Wireless communication systems . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1 Cellular systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.2 Wireless Broadband Systems . . . . . . . . . . . . . . . . . . . . . . . 11
Service classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Communication networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.1 Classical telephone network . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.2 Data networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.3 Local Area Networks (LAN) . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 ITU recommendations on traffic engineering . . . . . . . . . . . . . . . . . . . 17
1.5.1 Traffic engineering in the ITU . . . . . . . . . . . . . . . . . . . . . . . 18
1.6 Traffic concepts and grade of service . . . . . . . . . . . . . . . . . . . . . . . 18
1.7 Concept of traffic and traffic unit [erlang] . . . . . . . . . . . . . . . . . . . . 20
1.8 Traffic variations and the concept busy hour . . . . . . . . . . . . . . . . . . . 23
1.9 The blocking concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.10 Traffic generation and subscribers reaction . . . . . . . . . . . . . . . . . . . . 29
1.11 Introduction to Grade-of-Service = GoS . . . . . . . . . . . . . . . . . . . . . 35
1.11.1 Comparison of GoS and QoS . . . . . . . . . . . . . . . . . . . . . . . 36
1.11.2 Special features of QoS . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.11.3 Network performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.11.4 Reference configurations . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2 Time interval modeling 41
2.1 Distribution functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.1.1 Exponential distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.2 Characteristics of distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.1 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.2 Residual life-time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.2.3 Load from holding times of duration less than x . . . . . . . . . . . . . 50
2.2.4 Forward recurrence time . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.2.5 Distribution of the j’th largest of k random variables . . . . . . . . . . 53
2.3 Combination of random variables . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.3.1 Random variables in series . . . . . . . . . . . . . . . . . . . . . . . . 54
Hypo-exponential or steep distributions . . . . . . . . . . . . . . . . . 55
Erlang-k distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.3.2 Random variables in parallel . . . . . . . . . . . . . . . . . . . . . . . 58
Hyper-exponential or flat distributions . . . . . . . . . . . . . . . . . . 59
Hyper-exponential distribution . . . . . . . . . . . . . . . . . . . . . . 60
Pareto distribution and Palm’s normal forms . . . . . . . . . . . . . . 62
2.3.3 Random variables in series and parallel . . . . . . . . . . . . . . . . . 63
Stochastic sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Cox distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Polynomial trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Decomposition principles . . . . . . . . . . . . . . . . . . . . . . . . . 68
Importance of Cox distribution . . . . . . . . . . . . . . . . . . . . . . 70
2.4 Other time distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Gamma distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Weibull distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Heavy-tailed distributions . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.5 Observations of life-time distribution . . . . . . . . . . . . . . . . . . . . . . . 72
3 Arrival Processes 75
3.1 Description of point processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.1.1 Basic properties of number representation . . . . . . . . . . . . . . . . 77
3.1.2 Basic properties of interval representation . . . . . . . . . . . . . . . . 78
3.2 Characteristics of point process . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.2.1 Stationarity (Time homogeneity) . . . . . . . . . . . . . . . . . . . . . 80
3.2.2 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.2.3 Simplicity or ordinarity . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.3 Little’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.4 Characteristics of the Poisson process . . . . . . . . . . . . . . . . . . . . . . 83
3.5 Distributions of the Poisson process . . . . . . . . . . . . . . . . . . . . . . . 84
3.5.1 Exponential distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.5.2 Erlang–k distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.5.3 Poisson distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.5.4 Static derivation of the distributions of the Poisson process . . . . . . 91
3.6 Properties of the Poisson process . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.6.1 Palm’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.6.2 Raikov’s theorem (Decomposition theorem) . . . . . . . . . . . . . . . 95
3.6.3 Uniform distribution – a conditional property . . . . . . . . . . . . . . 95
3.7 Generalization of the stationary Poisson process . . . . . . . . . . . . . . . . . 96
3.7.1 Interrupted Poisson process (IPP) . . . . . . . . . . . . . . . . . . . . 96
3.7.2 Batched Poisson process . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4 Erlang’s loss system and B–formula 101
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.2 Poisson distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.2.1 State transition diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.2.2 Derivation of state probabilities . . . . . . . . . . . . . . . . . . . . . . 105
4.2.3 Traffic characteristics of the Poisson distribution . . . . . . . . . . . . 106
4.3 Truncated Poisson distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.3.1 State probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.3.2 Traffic characteristics of Erlang’s B-formula . . . . . . . . . . . . . . . 108
4.4 General procedure for state transition diagrams . . . . . . . . . . . . . . . . . 114
4.4.1 Recursion formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.5 Evaluation of Erlang’s B-formula . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.6 Properties of Erlang’s B-formula . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.6.1 Non-integral number of channels . . . . . . . . . . . . . . . . . . . . . 119
4.6.2 Insensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.6.3 Derivatives of Erlang-B formula and convexity . . . . . . . . . . . . . 121
4.6.4 Derivative of Erlang-B formula with respect to A . . . . . . . . . . . . 121
4.6.5 Derivative of Erlang-B formula with respect to n . . . . . . . . . . . . 122
4.6.6 Inverse Erlang-B formulæ . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.6.7 Approximations for Erlang-B formula . . . . . . . . . . . . . . . . . . 124
4.7 Fry-Molina’s Blocked Calls Held model . . . . . . . . . . . . . . . . . . . . . . 124
4.8 Principles of dimensioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.8.1 Dimensioning with fixed blocking probability . . . . . . . . . . . . . . 126
4.8.2 Improvement principle (Moe’s principle) . . . . . . . . . . . . . . . . . 127
5 Loss systems with full accessibility 133
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.2 Binomial Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2.1 Equilibrium equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.2.2 Traffic characteristics of Binomial traffic . . . . . . . . . . . . . . . . . 139
5.3 Engset distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.3.1 State probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.3.2 Traffic characteristics of Engset traffic . . . . . . . . . . . . . . . . . . 142
5.4 Relations between E, B, and C . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.5 Evaluation of Engset’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.5.1 Recursion formula on n . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.5.2 Recursion formula on S . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.5.3 Recursion formula on both n and S . . . . . . . . . . . . . . . . . . . . 149
5.6 Pascal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.7 Truncated Pascal distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.8 Batched Poisson arrival process . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.8.1 Infinite capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.8.2 Finite capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.8.3 Performance measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6 Overflow theory 161
6.1 Limited accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.2 Exact calculation by state probabilities . . . . . . . . . . . . . . . . . . . . . . 163
6.2.1 Balance equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.2.2 Erlang’s ideal grading . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
State probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Upper limit of channel utilization . . . . . . . . . . . . . . . . . . . . . 167
6.3 Overflow theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.3.1 State probabilities of overflow systems . . . . . . . . . . . . . . . . . . 169
6.4 Equivalent Random Traffic Method . . . . . . . . . . . . . . . . . . . . . . . . 171
6.4.1 Preliminary analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.4.2 Numerical aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.4.3 Individual stream blocking probabilities . . . . . . . . . . . . . . . . . 175
6.4.4 Individual group blocking probabilities . . . . . . . . . . . . . . . . . . 175
6.5 Fredericks & Hayward’s method . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.5.1 Traffic splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.6 Other methods based on state space . . . . . . . . . . . . . . . . . . . . . . . 179
6.6.1 BPP traffic models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.6.2 Sanders’ method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.6.3 Berkeley’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.6.4 Comparison of state-based methods . . . . . . . . . . . . . . . . . . . 182
6.7 Methods based on arrival processes . . . . . . . . . . . . . . . . . . . . . . . . 182
6.7.1 Interrupted Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . 182
6.7.2 Cox–2 arrival process . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
7 Multi-Dimensional Loss Systems 187
7.1 Multi-dimensional Erlang-B formula . . . . . . . . . . . . . . . . . . . . . . . 187
7.2 Reversible Markov processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
7.3 Multi-Dimensional Loss Systems . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.3.1 Class limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.3.2 Generalized traffic processes . . . . . . . . . . . . . . . . . . . . . . . . 194
7.3.3 Multi-rate traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7.4 Convolution Algorithm for loss systems . . . . . . . . . . . . . . . . . . . . . 200
7.4.1 The convolution algorithm . . . . . . . . . . . . . . . . . . . . . . . . 201
7.5 Fredericks-Haywards’s method . . . . . . . . . . . . . . . . . . . . . . . . . . 208
7.6 State space based algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7.6.1 Fortet & Grandjean (Kaufman & Robert) algorithm . . . . . . . . . . 211
7.6.2 Generalized algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Performance measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.6.3 Batch Poisson arrival process . . . . . . . . . . . . . . . . . . . . . . . 216
7.7 Final remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8 Dimensioning of telecom networks 217
8.1 Traffic matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
8.1.1 Kruithof’s double factor method . . . . . . . . . . . . . . . . . . . . . 218
8.2 Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
8.3 Routing principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
8.4 Approximate end-to-end calculations methods . . . . . . . . . . . . . . . . . . 221
8.4.1 Fix-point method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
8.5 Exact end-to-end calculation methods . . . . . . . . . . . . . . . . . . . . . . 222
8.5.1 Convolution algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.6 Load control and service protection . . . . . . . . . . . . . . . . . . . . . . . . 222
8.6.1 Trunk reservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
8.6.2 Virtual channel protection . . . . . . . . . . . . . . . . . . . . . . . . . 224
8.7 Moe’s principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
8.7.1 Balancing marginal costs . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.7.2 Optimum carried traffic . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9 Markovian queueing systems 229
9.1 Erlang’s delay system M/M/n . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
9.2 Traffic characteristics of delay systems . . . . . . . . . . . . . . . . . . . . . . 232
9.2.1 Erlang’s C-formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
9.2.2 Numerical evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
9.2.3 Mean queue lengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Mean queue length at a random point of time . . . . . . . . . . . . . . 235
Mean queue length, given the queue is greater than zero . . . . . . . . 237
9.2.4 Mean waiting times . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Mean waiting time W for all customers . . . . . . . . . . . . . . . . . 237
Mean waiting time w for delayed customers . . . . . . . . . . . . . . . 238
9.2.5 Improvement functions for M/M/n . . . . . . . . . . . . . . . . . . . . 238
9.3 Moe’s principle for delay systems . . . . . . . . . . . . . . . . . . . . . . . . . 239
9.4 Waiting time distribution for M/M/n, FCFS . . . . . . . . . . . . . . . . . . 240
9.5 Single server queueing system M/M/1 . . . . . . . . . . . . . . . . . . . . . . 243
9.5.1 Sojourn time for a single server . . . . . . . . . . . . . . . . . . . . . . 244
9.6 Palm’s machine repair model . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
9.6.1 Terminal systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9.6.2 State probabilities – single server . . . . . . . . . . . . . . . . . . . . . 247
9.6.3 Terminal states and traffic characteristics . . . . . . . . . . . . . . . . 249
9.6.4 Machine–repair model with n servers . . . . . . . . . . . . . . . . . . . 253
9.7 Optimizing the machine-repair model . . . . . . . . . . . . . . . . . . . . . . . 254
9.8 Waiting time distribution for M/M/n/S/S–FCFS . . . . . . . . . . . . . . . . 256
10 Applied Queueing Theory 261
10.1 Kendall’s classification of queueing models . . . . . . . . . . . . . . . . . . . . 262
10.1.1 Description of traffic and structure . . . . . . . . . . . . . . . . . . . . 262
10.1.2 Queueing strategy: disciplines and organization . . . . . . . . . . . . . 263
10.1.3 Priority of customers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
10.2 General results in the queueing theory . . . . . . . . . . . . . . . . . . . . . . 265
10.2.1 Load function and work conservation . . . . . . . . . . . . . . . . . . . 266
10.3 Pollaczek-Khintchine’s formula for M/G/1 . . . . . . . . . . . . . . . . . . . . 267
10.3.1 Derivation of Pollaczek-Khintchine’s formula . . . . . . . . . . . . . . 267
10.3.2 Busy period for M/G/1 . . . . . . . . . . . . . . . . . . . . . . . . . . 269
10.3.3 Moments of M/G/1 waiting time distribution . . . . . . . . . . . . . . 270
10.3.4 Limited queue length: M/G/1/k . . . . . . . . . . . . . . . . . . . . . 270
10.4 Queueing systems with constant holding times . . . . . . . . . . . . . . . . . 271
10.4.1 Historical remarks on M/D/n . . . . . . . . . . . . . . . . . . . . . . . 271
10.4.2 State probabilities of M/D/1 . . . . . . . . . . . . . . . . . . . . . . . 272
10.4.3 Mean waiting times and busy period of M/D/1 . . . . . . . . . . . . . 273
10.4.4 Waiting time distribution: M/D/1, FCFS . . . . . . . . . . . . . . . . 274
10.4.5 State probabilities: M/D/n . . . . . . . . . . . . . . . . . . . . . . . . 276
10.4.6 Waiting time distribution: M/D/n, FCFS . . . . . . . . . . . . . . . . 276
10.4.7 Erlang-k arrival process: Ek/D/r . . . . . . . . . . . . . . . . . . . . . 277
10.4.8 Finite queue system: M/D/1/k . . . . . . . . . . . . . . . . . . . . . . 278
10.5 Single server queueing system: GI/G/1 . . . . . . . . . . . . . . . . . . . . . . 279
10.5.1 General results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
10.5.2 State probabilities: GI/M/1 . . . . . . . . . . . . . . . . . . . . . . . . 280
10.5.3 Characteristics of GI/M/1 . . . . . . . . . . . . . . . . . . . . . . . . . 282
10.5.4 Waiting time distribution: GI/M/1, FCFS . . . . . . . . . . . . . . . . 283
10.6 Priority queueing systems: M/G/1 . . . . . . . . . . . . . . . . . . . . . . . . 283
10.6.1 Combination of several classes of customers . . . . . . . . . . . . . . . 284
10.6.2 Kleinrock’s conservation law . . . . . . . . . . . . . . . . . . . . . . . 285
10.6.3 Non-preemptive queueing discipline . . . . . . . . . . . . . . . . . . . . 285
10.6.4 SJF-queueing discipline: M/G/1 . . . . . . . . . . . . . . . . . . . . . 288
10.6.5 M/M/n with non-preemptive priority . . . . . . . . . . . . . . . . . . 290
10.6.6 Preemptive-resume queueing discipline . . . . . . . . . . . . . . . . . . 291
10.6.7 M/M/n with preemptive-resume priority . . . . . . . . . . . . . . . . . 293
10.7 Fair Queueing: Round Robin, Processor-Sharing . . . . . . . . . . . . . . . . 293
11 Multi-service queueing systems 295
11.1 Reversible multi-chain single-server systems . . . . . . . . . . . . . . . . . . . 296
11.1.1 Reduction factors for single-server system . . . . . . . . . . . . . . . . 296
11.1.2 Single-server Processor Sharing (PS) system . . . . . . . . . . . . . . . 300
11.1.3 Non-sharing single-server system . . . . . . . . . . . . . . . . . . . . . 301
11.1.4 Single-server LCFS-PR system . . . . . . . . . . . . . . . . . . . . . . 301
11.1.5 Summary for reversible single server systems . . . . . . . . . . . . . . 302
11.1.6 State probabilities for multi-services single-server system . . . . . . . . 302
11.1.7 Generalized algorithm for state probabilities . . . . . . . . . . . . . . . 304
11.1.8 Performance measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
11.2 Reversible multi-chain & server systems . . . . . . . . . . . . . . . . . . . . 305
11.2.1 Reduction factors for multi-server systems . . . . . . . . . . . . . . . . 305
11.2.2 Generalized processor sharing (GPS) system . . . . . . . . . . . . . . . 308
11.2.3 Non-sharing multi-chain & server system . . . . . . . . . . . . . . . 308
11.2.4 Symmetric queueing systems . . . . . . . . . . . . . . . . . . . . . . . 309
11.2.5 State probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
11.2.6 Generalized algorithm for state probabilities . . . . . . . . . . . . . . . 311
11.2.7 Performance measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
11.3 Reversible multi-rate & chain & server systems . . . . . . . . . . . . . . . . 312
11.3.1 Reduction factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
11.3.2 Generalized algorithm for state probabilities . . . . . . . . . . . . . . . 315
11.4 Finite source models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
12 Queueing networks 321
12.1 Introduction to queueing networks . . . . . . . . . . . . . . . . . . . . . . . . 322
12.2 Symmetric (reversible) queueing systems . . . . . . . . . . . . . . . . . . . . . 322
12.3 Open networks: single chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
12.3.1 Kleinrock’s independence assumption . . . . . . . . . . . . . . . . . . . 327
12.4 Open networks: multiple chains . . . . . . . . . . . . . . . . . . . . . . . . . . 328
12.5 Closed networks: single chain . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
12.5.1 Convolution algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
12.5.2 MVA–algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
12.6 BCMP multi-chain queueing networks . . . . . . . . . . . . . . . . . . . . . . 336
12.6.1 Convolution algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
12.7 Other algorithms for queueing networks . . . . . . . . . . . . . . . . . . . . . 341
12.8 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
12.9 Optimal capacity allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
13 Traffic measurements 345
13.1 Measuring principles and methods . . . . . . . . . . . . . . . . . . . . . . . . 346
13.1.1 Continuous measurements . . . . . . . . . . . . . . . . . . . . . . . . . 346
13.1.2 Discrete measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
13.2 Theory of sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
13.3 Continuous measurements in an unlimited period . . . . . . . . . . . . . . . . 350
13.4 Scanning method in an unlimited time period . . . . . . . . . . . . . . . . . . 353
13.5 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Bibliography 361
Author index 370
Subject index 372
Exercises 379
Tables 608
Notations
a    Offered traffic per source
A    Offered traffic = Ao
Aℓ   Lost traffic
B    Call congestion
B    Burstiness
c    Constant
C    Traffic congestion = load congestion
Cn   Catalan's number
d    Slot size in multi-rate traffic
D    Probability of delay, or deterministic arrival or service process
E    Time congestion
E1,n(A) = E1   Erlang's B–formula = Erlang's first formula
E2,n(A) = E2   Erlang's C–formula = Erlang's second formula
F    Improvement function
g    Number of groups
h    Constant time interval or service time
H(k)   Palm–Jacobæus' formula
I    Inverse time congestion, I = 1/E
Jν(z)   Modified Bessel function of order ν
k    Accessibility = hunting capacity, or maximum number of customers in a queueing system
K    Number of links in a telecommunication network, or number of nodes in a queueing network
L    Mean queue length
Lkø  Mean queue length when the queue is greater than zero
L    Random variable for queue length
m    Mean value (average) = m1
mi   i'th (non-central) moment
m′i  i'th central moment
mr   Mean residual life time
M    Poisson arrival process
n    Number of servers (channels)
N    Number of traffic streams or traffic types
p(i)   State probabilities, time averages
p{i, t | j, t0}   Probability of state i at time t, given state j at time t0
P(i)   Cumulated state probabilities, P(i) = Σ_{x≤i} p(x)
q(i)   Relative (non-normalized) state probabilities
Q(i)   Cumulated values of q(i), Q(i) = Σ_{x≤i} q(x)
Q    Normalization constant
r    Reservation parameter (trunk reservation)
R    Mean response time
s    Mean service time
S    Number of traffic sources
t    Time instant
T    Random variable for time instant
U    Load function
V    Virtual waiting time
w    Mean waiting time for delayed customers
W    Mean waiting time for all customers
W    Random variable for waiting time
x    Variable
X    Random variable
y    Utilization = mean carried traffic per channel; yi = traffic carried by channel i
Y    Total carried traffic
Z    Peakedness

α    Carried traffic per source
β    Offered traffic per idle source
γ    Arrival rate for an idle source
ε    Palm's form factor
ϑ    Lagrange multiplier
κi   i'th cumulant
λ    Arrival rate of a Poisson process
Λ    Total arrival rate to a system
µ    Service rate, inverse mean service time
π(i)   State probabilities, arriving customer mean values
ψ(i)   State probabilities, departing customer mean values
ϱ    Service ratio
σ²   Variance; σ = standard deviation
τ    Time-out constant or constant time interval
Chapter 1
Introduction to Teletraffic Engineering
Teletraffic theory is defined as the application of probability theory to the solution of problems concerning planning, performance evaluation, operation, and maintenance of telecommunication systems. More generally, teletraffic theory can be viewed as a discipline of planning where the tools (stochastic processes, queueing theory, and numerical simulation) are taken from the disciplines of operations research.
The term teletraffic covers all kinds of data communication traffic and telecommunication traffic. The theory will primarily be illustrated by examples from telephone and data communication systems. The tools developed are, however, independent of the technology and applicable within other areas such as road traffic, air traffic, manufacturing, distribution, workshop and storage management, and all kinds of service systems.
The objective of teletraffic theory can be formulated as follows:
to make the traffic measurable in well-defined units through mathematical models, and to derive relationships between grade-of-service and system capacity in such a way that the theory becomes a tool by which investments can be planned.
The task of teletraffic theory is to design systems as cost-effectively as possible with a predefined grade of service, when we know the future traffic demand and the capacity of system elements. Furthermore, it is the task of teletraffic engineering to specify methods for verifying that the actual grade of service fulfils the requirements, and to specify emergency actions for when systems are overloaded or technical faults occur. This requires methods for forecasting the demand (for instance based on traffic measurements), methods for calculating the capacity of the systems, and specification of quantitative measures for the grade of service.
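The central relationship of this kind, Erlang's B-formula, is developed in Chapter 4; as a small foretaste, the recursion of Sec. 4.4.1 can be sketched in a few lines of Python (the function names are our own choice):

```python
def erlang_b(n: int, a: float) -> float:
    """Blocking probability E_{1,n}(A) for n channels offered a erlang.

    Numerically stable recursion (Sec. 4.4.1):
    E_0 = 1,  E_i = a*E_{i-1} / (i + a*E_{i-1}).
    """
    e = 1.0
    for i in range(1, n + 1):
        e = a * e / (i + a * e)
    return e


def channels_needed(a: float, target: float) -> int:
    """Smallest number of channels keeping blocking at or below target."""
    n = 0
    while erlang_b(n, a) > target:
        n += 1
    return n


print(round(erlang_b(10, 8.0), 4))   # 8 erlang on 10 channels -> 0.1217
print(channels_needed(8.0, 0.01))    # at most 1 % blocking -> 15
```

The second function answers the typical dimensioning question directly: 8 erlang of offered traffic needs 15 channels to keep blocking below 1 %.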
When applying the theory in practice, a series of decision problems concerning both short-term and long-term arrangements occur. Short-term decisions include, for example, the determination of the number of channels in a
base station of a cellular network, the number of operators in a call center, the number of open lanes in the supermarket, and the allocation of priorities to jobs in a computer system. Long-term decisions include decisions concerning the development and extension of data and telecommunication networks, extension of cables, radio links, establishing a new base station, etc.
The application of the theory to the design of new systems can help in comparing different solutions, and thus eliminate non-optimal solutions at an early stage without having to implement prototypes.
1.1 Modeling of telecommunication systems
For the analysis of a telecommunication system, a model of the system considered must be set up. Especially for applications of teletraffic theory to new systems, this modeling process is of fundamental importance. It requires knowledge of the technical system, the available mathematical tools, and the implementation of the model in a computer. Such a model contains three main elements (Fig. 1.1):
• the system structure,
• the operational strategy, and
• the statistical properties of the traffic.
[Figure 1.1 is a man/machine diagram: the stochastic side (man: user demands, traffic) meets the deterministic side (machine: structure given by hardware and software, and strategy).]
Figure 1.1: Telecommunication systems are complex man/machine systems. The task of teletraffic theory is to configure optimal systems from knowledge of user requirements and behavior.
1.1.1 System structure
This part is technically determined, and it is in principle possible to obtain any level of detail in the description, e.g. at component level. Reliability aspects are random processes, as failures occur more or less at random; they can be dealt with as traffic with the highest priority. The system structure is given by hardware and software, which are described in manuals. In road traffic systems, roads, traffic signals, roundabouts, etc. make up the structure.
1.1.2 Operational strategy
A given physical system can be used in different ways in order to adapt the system to the traffic demand. In road traffic, this is implemented with traffic rules and strategies, which may adapt to traffic variations during the day.
In a computer, this adaptation takes place by means of the operating system and by operator intervention. In a telecommunication system, strategies are applied in order to give priority to call attempts and in order to route the traffic to the destination. In Stored Program Controlled (SPC) telephone exchanges, the tasks assigned to the central processor are divided into classes with different priorities. The highest priority is given to calls already accepted, followed by new call attempts, whereas routine control of equipment has lower priority. The classical telephone systems used wired logic to implement strategies, while in modern systems it is done by software, enabling more flexible and adaptive strategies.
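The priority ordering just described can be mimicked with an ordinary priority queue; the task names and class numbers below are illustrative choices of ours, not values from any real exchange:

```python
import heapq

# Lower number = higher priority, following the ordering in the text:
# calls in progress > new call attempts > routine equipment control.
PRIORITY = {"accepted_call": 0, "new_attempt": 1, "routine_control": 2}

def serve(tasks):
    """Return task names in the order a priority-driven processor picks them."""
    # The running index i preserves FIFO order within each priority class.
    heap = [(PRIORITY[t], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(serve(["routine_control", "new_attempt", "accepted_call", "new_attempt"]))
# -> ['accepted_call', 'new_attempt', 'new_attempt', 'routine_control']
```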
1.1.3 Statistical properties of traffic
User demands are modeled by the statistical properties of the traffic. It is only possible to validate that a mathematical model is in agreement with reality by comparing results obtained from the model with measurements on real systems. This process must necessarily be of an iterative nature (Fig. 1.2). A mathematical model is built up from a thorough knowledge of the traffic. Properties are then derived from the model and compared to measured data. If they are not in satisfactory agreement, a new iteration of the process must take place.
It appears natural to split the description of the traffic properties into random processes for the arrival of call attempts and processes describing service (holding) times. These two processes are usually assumed to be mutually independent, meaning that the duration of a call is independent of the time at which the call arrives. Models also exist for describing the behavior of users (subscribers) experiencing blocking, i.e. they are refused service and may make a new call attempt a little later (repeated call attempts). Fig. 1.3 illustrates the terminology usually applied in teletraffic theory.
Figure 1.2: Teletraffic theory is an inductive discipline. From observations of real systems we establish theoretical models, from which we derive parameters, which can be compared with corresponding observations from the real system. If there is agreement, the model has been validated. If not, we have to elaborate the model further. This scientific way of working is called the research loop.
Figure 1.3: Illustration of the terminology applied for a traffic process. Notice the difference between time intervals and instants of time. We use the terms arrival, call, and connection synonymously. The inter-arrival time and the inter-departure time are the time intervals between successive arrivals and successive departures, respectively.
1.1.4 Models
General requirements for an engineering model are:
1. It must, without major difficulties, be possible to verify the model and to determine the model parameters from observed data.
2. It must be feasible to apply the model for practical dimensioning.
We are looking for a description of, for example, the variations observed in the number of ongoing established calls in a telephone exchange, which changes incessantly as calls are established and terminated. Even though the common habits of subscribers imply that daily variations follow a predictable pattern, it is impossible to predict individual call attempts or the duration of individual calls. In the description, it is therefore necessary to use statistical methods. We say that call attempt events take place according to a random (= stochastic) process, and the inter-arrival times between call attempts are described by probability distributions which characterize the random process.
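Such a random process can be sketched in a few lines of code. The following Python fragment is an illustrative sketch (the rates used are assumed, not taken from the text): it generates call attempts with exponentially distributed inter-arrival times, i.e. a Poisson arrival process, and exponentially distributed holding times.

```python
import random

random.seed(1)

def simulate_arrivals(rate, mean_holding, n_calls):
    """Generate call attempts as a Poisson process.

    Exponential inter-arrival times (rate = calls per time unit) give a
    Poisson arrival process; holding times are exponential with the
    given mean.  Returns a list of (arrival_time, departure_time).
    """
    t = 0.0
    calls = []
    for _ in range(n_calls):
        t += random.expovariate(rate)               # inter-arrival time
        holding = random.expovariate(1.0 / mean_holding)
        calls.append((t, t + holding))
    return calls

# Assumed figures: 10 call attempts per minute, 3-minute mean holding time.
calls = simulate_arrivals(rate=10.0, mean_holding=3.0, n_calls=1000)
```

From such a sample one can estimate, for example, the mean holding time and compare it with the model assumption, exactly the iterative validation described above.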
We may classify models into three classes:
1. Mathematical models, which are general but often approximate. We may optimize the parameters analytically or numerically.
2. Simulation models, where we may use either measured data or artificial data from statistical distributions. It is more resource-demanding to work with simulation models, as they are not very general: every individual case must be simulated.
3. Physical models (prototypes), which are even more time- and resource-consuming than simulation models.
In general, mathematical models are therefore preferred, but it is often necessary to apply simulation to develop the mathematical model. Sometimes prototypes are developed for ultimate testing.
1.2 Conventional telephone systems
This section gives a short description of what happens when a call attempt arrives at a traditional telephone exchange. We divide the description into three parts: structure, strategy, and traffic. It is common practice to distinguish between subscriber exchanges (access switches, local exchanges (LEX)) and transit exchanges (TEX), due to the hierarchical structure according to which most national telephone networks are designed. Subscribers are connected to local exchanges or to access switches (concentrators), which are connected to local exchanges. Finally, transit switches are used to interconnect local exchanges and to increase availability and reliability.
1.2.1 System structure
Let us consider a historical telephone exchange of the crossbar type. Even though this type has been taken out of service, a description of its functionality gives a good illustration of the tasks which need to be solved in a digital exchange. The equipment in a conventional telephone exchange consists of voice paths and control paths (Fig. 1.4).
Figure 1.4: Fundamental structure of a switching system.
The voice paths are occupied during the whole duration of the call (on average 2–3 minutes), while the control paths are only occupied during the phase of call establishment (in the range 0.1 to 1 s). The number of voice paths is therefore considerably larger than the number of control paths. A voice path is a connection from a given inlet (subscriber) to a given outlet. In a space-division system the voice paths consist of passive components (like relays, diodes, or VLSI circuits). In a time-division system the voice paths consist of specific time slots within a frame. The control paths are responsible for establishing the connection. Usually, this happens in a number of steps, where each step is performed by a control device: a microprocessor, or a register (originally a human operator).
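As a small illustration of why so few control paths are needed, we can compute the offered traffic A = λ · s (in erlang), i.e. the call rate multiplied by the mean occupation time, for both path types. The call rate below is an assumed figure; the holding times are taken from the ranges quoted above.

```python
# Offered traffic in erlang: A = call rate * mean occupation time.
calls_per_second = 10.0          # assumed call-attempt rate

voice_holding = 150.0            # s, i.e. 2.5 minutes per call
control_holding = 0.5            # s, within the 0.1-1 s range

voice_traffic = calls_per_second * voice_holding        # erlang
control_traffic = calls_per_second * control_holding    # erlang
ratio = voice_traffic / control_traffic                 # factor of 300
```

The offered traffic, and hence the number of devices required, differs by a factor of several hundred, which is why a small number of registers can serve a large number of voice paths.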
The tasks of the control device are:
• Identification of the originating subscriber (who wants a connection (inlet)).
• Reception of the digit information (address, outlet).
• Search for an idle connection between inlet and outlet.
• Establishment of the connection.
• Release of the connection when the conversation ends (sometimes performed by the voice path itself).
In addition, the charging of calls must be taken care of. In conventional exchanges the control path is built from relays and/or electronic devices, and the logical operations are implemented by wired logic. Changes in functions require hardware modifications, which are complex and expensive.

In digital exchanges the control devices are processors. The logical functions are carried out by software, and changes are much easier to implement. The restrictions are far less constraining, and so is the complexity of the logical operations, compared to wired logic. Software-controlled exchanges are also called SPC systems (Stored Program Controlled systems).
1.2.2 User behaviour
We still consider a conventional telephone system. When an A-subscriber initiates a call, the hook is taken off and the wire pair to the subscriber is short-circuited. This triggers a relay at the exchange. The relay identifies the subscriber, and a microprocessor in the subscriber stage chooses an idle cord. The subscriber line and the cord are connected through a switching stage. This terminology originates from the time when a manual operator was connected to the subscriber by means of the cord. A manual operator corresponds to a register. The cord has three outlets.
A register is connected to the cord through another switching stage. Thereby the subscriber line is connected to a register (via the register selector) via the cord. This phase takes less than one second.
The register sends the dial tone to the A-subscriber, who dials the digits of the telephone number of the B-subscriber; the digits are received and stored by the register. The duration of this phase depends on the subscriber.
A microprocessor analyzes the digit information and, by means of a group selector, establishes a connection to the desired subscriber. It can be a subscriber at the same exchange, at a neighbour exchange, or at a remote exchange. It is common to distinguish between exchanges to which a direct link exists and exchanges for which this is not the case. In the latter case, a connection must go through an exchange at a higher level in the hierarchy. The digit information is delivered by means of a code transmitter to the code receiver of the desired exchange, which then transmits the information to the registers of that exchange.
The register has now fulfilled its obligations and is released, so that it is idle for the service of new call attempts. The microprocessors work very fast (around 1–10 ms) and independently of the subscribers. The cord is occupied during the whole duration of the call and takes control of the call when the register is released. It takes care of different types of signals (busy, reference, etc.), charging information, and release of the connection when the call is put down.
It may happen that a call does not proceed as planned. The subscriber may make an error,
suddenly hang up, etc. Furthermore, the system has a limited capacity. This will be dealt with in Sec. 1.6. Call attempts towards a subscriber take place in approximately the same way. A code receiver at the exchange of the B-subscriber receives the digits, and a connection is set up through the group switching stage and the local switch stage to the B-subscriber, using the registers of the receiving exchange.
1.2.3 Operation strategy
The voice paths normally work as loss systems, while the control paths work as delay systems (Sec. 1.6).
If there is not both an idle cord and an idle register, the subscriber will get no dial tone, no matter how long he/she waits. If there is no idle outlet from the exchange to the desired B-subscriber, a busy tone will be sent to the calling A-subscriber. No connection will be established, regardless of any additional waiting.
If a microprocessor (or all microprocessors of a specific type, when there is more than one) is busy, the call will wait until a microprocessor becomes idle. Due to the very short holding time, the waiting time will often be so short that the subscribers do not notice anything. If several subscribers are waiting for the same microprocessor, they will usually be served in random order, independent of the time of arrival.
The way in which control devices of the same type, and the cords, share the work is often cyclic, such that they get approximately the same number of call attempts. This is an advantage, since it ensures the same amount of wear, and since a subscriber will only rarely get a defective cord or control path again if the call attempt is repeated.

If a control path is occupied longer than a given time, a forced disconnection of the call takes place. This makes it impossible for a single call to block vital parts of the exchange, e.g. a register. It is also only possible to generate the ringing tone towards a B-subscriber for a limited duration of time, and thus to block this telephone for only a limited time at each call attempt. An exchange must be able to operate and function independently of subscriber behaviour.
The cooperation between the different parts takes place according to strict and well-defined rules, called protocols, which in conventional systems are determined by the wired logic and in software-controlled systems by the software logic.
Digital systems (e.g. ISDN = Integrated Services Digital Network, where the whole telephone system is digital from subscriber to subscriber (2 · B + D = 2 × 64 + 16 Kbps per subscriber); ISDN = N-ISDN = Narrow-band ISDN) of course operate in a way different from the conventional systems described above. However, the fundamental teletraffic tools for evaluation are the same in both systems. The same also holds for the broadband systems B-ISDN, which are based on ATM = Asynchronous Transfer Mode and MPLS (Multi-Protocol Label Switching).
1.3 Wireless communication systems
A tremendous expansion is seen these years in mobile communication systems, where the transmission medium is either analogue or digital radio channels (wireless) instead of conventional wired systems. The electromagnetic frequency spectrum is divided into different bands reserved for specific purposes. For mobile communications a subset of these bands is reserved. Each band corresponds to a limited number of radio telephone channels, and it is here that the limited resource of mobile communication systems is located. The optimal utilization of this resource is a main issue in cellular technology. In the following subsection a representative system is described.
1.3.1 Cellular systems
Structure. When a certain geographical area is to be supplied with mobile telephony, a suitable number of base stations must be put into operation in the area. A base station is an antenna with transmitting/receiving equipment or a radio link to a mobile telephone exchange (MTX), which is part of the traditional telephone network. A mobile telephone exchange is common to all the base stations in a given traffic area. Radio waves are attenuated when they propagate in the atmosphere, and a base station is therefore only able to cover a limited geographical area, which is called a cell (not to be confused with ATM cells). By transmitting the radio waves at adequate power, it is possible to adapt the coverage area such that all base stations cover the planned traffic area without too much overlap between neighbour stations. The same radio frequency cannot be used in two neighbouring base stations, but in two base stations without a common border the same frequency can be used, thereby allowing channels to be reused.
In Fig. 1.5 an example is shown. A certain number of channels per cell, corresponding to a given traffic volume, is thereby made available. The size of the cells will depend on the traffic volume: in densely populated areas such as major cities the cells will be small, while in sparsely populated areas the cells will be large.

Frequency allocation is a complex problem. In addition to the restrictions given above, a number of other limitations also exist. For example, there has to be a certain distance (number of channels) between two channels on the same base station (neighbour channel restriction), and other restrictions exist to avoid interference.
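One classical quantity in this planning problem is the co-channel reuse distance. For a regular hexagonal layout with cluster size N (here N = 3 frequency groups), the standard geometric result is D = R · √(3N), where R is the cell radius. This formula is not derived in the text; the sketch below simply evaluates it for illustrative numbers.

```python
import math

def reuse_distance(cell_radius, cluster_size):
    """Co-channel reuse distance D = R * sqrt(3 N) for hexagonal cells."""
    return cell_radius * math.sqrt(3 * cluster_size)

# Cluster size 3 as in Fig. 1.5; a 2 km cell radius is an assumed value.
D = reuse_distance(cell_radius=2.0, cluster_size=3)
```

With N = 3 the same frequency may thus be reused at a distance of three cell radii, which corresponds to the pattern shown in Fig. 1.5.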
Strategy. In mobile telephone systems a database with information about all subscribers has to exist. Any subscriber is either active or passive, corresponding to whether the radio telephone is switched on or off. When the subscriber turns on the phone, it is automatically assigned to a so-called control channel, and an identification of the subscriber takes place. The control channel is a radio channel used by the base station for control. The remaining channels are traffic channels.
Figure 1.5: Cellular mobile communication system. By dividing the frequencies into three groups (A, B, and C) they can be reused as shown.
A call request towards a mobile subscriber (B-subscriber) takes place in the following way. The mobile telephone exchange receives the call from the other subscriber (A-subscriber, fixed or mobile). If the B-subscriber is passive (handset switched off), the A-subscriber is informed that the B-subscriber is not available. If the B-subscriber is active, the number is sent out on all control channels in the traffic area. The B-subscriber recognizes his own number and, via the control channel, informs the system of the identity of the cell (base station) in which he is located. If an idle traffic channel exists, it is allocated and the MTX sets up the call.
A call request from a mobile subscriber (A-subscriber) is initiated by the subscriber shifting from the control channel to a traffic channel, where the call is established. The first phase, recording the digits and testing the accessibility of the B-subscriber, is in some cases performed by the control channel (common channel signalling).
A subscriber is able to move freely within his own traffic area. When the subscriber moves away from the base station, this is detected by the MTX, which constantly monitors the signal-to-noise ratio; when required, the MTX moves the call to another base station and another traffic channel with better quality. This takes place automatically through cooperation between the MTX and the subscriber equipment, usually without being noticed by the subscriber. This operation is called hand-over, and it of course requires the existence of an idle traffic channel in the new cell. Since it is improper to interrupt an existing call, hand-over calls are given higher priority than new calls. This strategy can be implemented by reserving one or two idle channels for hand-over calls.
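The guard-channel strategy mentioned above can be evaluated with a simple one-dimensional birth-death model: with n channels in a cell, of which the last g are reserved for hand-overs, new calls are accepted only while fewer than n − g channels are busy. The following sketch computes the resulting blocking probabilities; all rates are assumed values, for illustration only.

```python
def guard_channel_blocking(n, g, lam_new, lam_ho, mu):
    """Blocking probabilities in the classical guard-channel model.

    n channels in the cell, the last g reserved for hand-over calls.
    New calls arrive at rate lam_new, hand-overs at rate lam_ho; each
    busy channel is released at rate mu.  The number of busy channels
    forms a one-dimensional birth-death chain, solved by the usual
    product-form recursion.
    """
    p = [1.0]                      # unnormalized state probabilities
    for k in range(n):
        rate_in = lam_new + lam_ho if k < n - g else lam_ho
        p.append(p[-1] * rate_in / ((k + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    b_handover = p[n]              # all n channels busy
    b_new = sum(p[n - g:])         # n-g or more channels busy
    return b_new, b_handover

b_new, b_ho = guard_channel_blocking(n=10, g=2, lam_new=5.0, lam_ho=1.0, mu=1.0)
```

As expected, reserving guard channels makes hand-over blocking strictly smaller than new-call blocking; with g = 0 the two probabilities coincide (the Erlang loss case).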
When a subscriber leaves his traffic area, so-called roaming takes place. The MTX
in the new area is able, from the identity of the subscriber, to locate the home MTX of the subscriber. A message with information on the new position is forwarded to the home MTX. Incoming calls to the subscriber will always go to the home MTX, which will then route the call to the new MTX. Outgoing calls are taken care of in the usual way.
A widespread digital wireless system is GSM, which can be used throughout Western Europe. The International Telecommunication Union is working towards a global mobile system, UPC (Universal Personal Communication), where subscribers can be reached worldwide (IMT-2000).
Paging systems are primitive one-way systems. DECT, Digital European Cordless Telephone, is a standard for wireless telephones. They can be applied locally in companies, business centers, etc. In the future, equipment which can be applied both for DECT and GSM will appear. Here DECT corresponds to a system with very small cells, while GSM is a system with larger cells.
Satellite communication systems, in which low-orbit satellites correspond to base stations, are also being deployed. The first such system, Iridium, consisted of 66 satellites, such that more than one satellite was always available at any given location within the geographical range of the system. The satellites have orbits only a few hundred kilometers above the Earth. Iridium was unsuccessful, but newer systems such as the Inmarsat system are now in operation.
1.3.2 Wireless Broadband Systems
In these systems we have an analogue high-capacity channel, for example 10 MHz, which is turned into a digital channel with a capacity of up to 100 Mbps, depending on the coding scheme, which again depends on the quality of the channel. The digital channel (medium) is shared by many users according to a medium access control (MAC) protocol.
If all services had the same constant bandwidth demand, we could split the digital channel up into many constant-bit-rate channels. This was done in classical systems by frequency division multiple access, FDMA.
Most data and multimedia services have a variable bandwidth demand during the occupation time. Therefore, the digital channel is split up in time into time slots, and we apply time division multiple access, TDMA. A certain number of time slots makes up a frame, which is repeated indefinitely in time. One time slot in each frame thus corresponds to the minimum bandwidth allocated. The information transmitted by a user is aggregated and transmitted in one or more slots in every frame. The frame size specifies the maximum delay the information experiences due to the slotted time. Slot size and frame size should be specified according to the quality of service and to restrictions from coding and switching mechanisms. One slot in a frame in TDMA thus corresponds to one channel in FDMA. The
advantage of TDMA is that we can change the allocation of slots from one frame to the next and thus reallocate the bandwidth resources very fast.
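The relations between channel rate, frame duration, and slot count can be illustrated with a small calculation. The numbers below are assumed for illustration; they are not taken from the text.

```python
# Assumed TDMA parameters, for illustration only.
channel_rate = 100e6        # bit/s of the shared digital channel
frame_duration = 10e-3      # s; also the worst-case access delay
slots_per_frame = 1000

# Payload capacity of a single slot:
bits_per_slot = channel_rate * frame_duration / slots_per_frame

# One slot in every frame is the minimum allocatable bandwidth,
# equal to channel_rate / slots_per_frame:
min_bandwidth = bits_per_slot / frame_duration          # bit/s
```

A user holding k slots per frame gets k times this minimum bandwidth, and the allocation may change from frame to frame, which is exactly the flexibility noted above.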
Service classes
In most digital service-integrated systems we specify four service classes: two for real-time services and two for non-real-time services.
• Real-time services
– Constant bit-rate real-time services. These services require a constant bandwidth. Examples are voice services such as ISDN and VoIP (Voice over IP). For this kind of service we have to reserve a fixed number of slots in each frame.
– Variable bit-rate real-time services. These services have a variable bandwidth demand. Examples are most data services, and also voice and video services with codecs (coder/decoder) having a variable bit-rate output. During each frame we allocate a certain capacity to a service. We may have restrictions on the maximum number of slots, the average number of slots, etc.
• Non-real-time services
– Non-real-time polling services. These are services which do not require real-time transmission, but there may be restrictions on the minimum bandwidth allocated. The services ask for a certain number of slots, and the system allocates slots in each frame depending on the number of idle slots.
– Best-effort traffic. This traffic uses the remaining capacity left over from the other services. Also here we may guarantee a certain minimum bandwidth. This could, for example, be ftp traffic.
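A strict-priority scheduler over these four classes might be sketched as follows. This is a hypothetical simplification, not a description of any particular MAC standard: constant bit-rate reservations are served first, then real-time variable bit-rate demand, then non-real-time polling, and best-effort traffic receives whatever is left in the frame.

```python
def allocate_frame(slots_per_frame, cbr_demand, vbr_demand, nrt_demand):
    """Allocate the slots of one frame in strict priority order (sketch)."""
    alloc = {}
    free = slots_per_frame
    for name, demand in (("cbr", cbr_demand),
                         ("vbr_rt", vbr_demand),
                         ("nrt_polling", nrt_demand)):
        alloc[name] = min(demand, free)     # serve as much as slots allow
        free -= alloc[name]
    alloc["best_effort"] = free             # leftover capacity
    return alloc

# Assumed demands for one 100-slot frame:
frame = allocate_frame(100, cbr_demand=40, vbr_demand=35, nrt_demand=30)
```

Here the non-real-time polling class is trimmed to the 25 remaining slots and best-effort traffic gets nothing; in the next frame the allocation may look entirely different.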
By traffic engineering we develop strategies for the acceptance of connections and specify strategies for the allocation of capacity to the classes, so that we can fulfil the service level agreement (SLA) between user and operator. We also specify policing mechanisms to ensure that the user traffic conforms to the agreed parameters. The SLA specifies the quality of service (QoS) guaranteed by the operator. For each service there may be different levels of QoS, for example named Gold, Silver, and Bronze. A subscriber asking for Gold service will require more resources and also pay more for the service. The task of traffic engineering is both to maximize the utilization of the resources and to fulfil the QoS requirements.
1.4 Communication networks
There exist different kinds of communication networks: telephone networks, data networks, the Internet, etc. Today the telephone network is dominant, and physically other networks will
often be integrated into the telephone network. In future digital networks the plan is to integrate a large number of services into the same network (ISDN, B-ISDN).
1.4.1 Classical telephone network
The telephone network has traditionally been built up as a hierarchical system. The individual subscribers are connected to a subscriber switch or sometimes a local exchange (LEX). This part of the network is called the access network. The subscriber switch is connected to a specific main local exchange, which again is connected to a transit exchange (TEX), of which there is usually at least one for each area code. The transit exchanges are normally connected in a mesh structure (Fig. 1.6). The connections between the transit exchanges make up the hierarchical transit network. Furthermore, there exist connections between two local exchanges (or subscriber switches) belonging to different transit exchanges if the traffic demand is sufficient to justify it.
Figure 1.6: There are three basic network structures: mesh, star, and ring. Mesh networks are applicable when there are few large exchanges (upper part of the hierarchy; also named polygon networks), whereas star networks are proper when there are many small exchanges (lower part of the hierarchy). Ring networks are applied, for example, in fibre-optical systems.
A connection between two subscribers in different transit areas will normally pass the following exchanges:
USER → LEX → TEX → TEX → LEX → USER
The individual transit trunk groups are based on either analogue or digital transmission systems, and multiplexing equipment is often used.
Twelve analogue channels of 3 kHz each make up one first-order carrier frequency system (frequency multiplexing), while 32 digital channels of 64 Kbps each make up a first-order PCM system of 2.048 Mbps (pulse code modulation, time multiplexing).
The 64 Kbps is obtained by sampling the analogue signal at a rate of 8 kHz with an amplitude accuracy of 8 bits. Two of the 32 channels in a PCM system are used for signalling and control.
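The arithmetic of the preceding two paragraphs can be verified directly:

```python
# First-order PCM system arithmetic from the text.
sampling_rate = 8000        # samples per second (8 kHz)
bits_per_sample = 8         # amplitude accuracy

channel_rate = sampling_rate * bits_per_sample    # one channel: 64 Kbps
channels = 32               # 30 voice channels + 2 for signalling/control
system_rate = channels * channel_rate             # 2.048 Mbps
```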
Figure 1.7: In a telecommunication network all exchanges are typically arranged in a three-level hierarchy. Local exchanges or subscriber exchanges (L), to which the subscribers are connected, are connected to main exchanges (T), which again are connected to inter-urban exchanges (I). An inter-urban area thus makes up a star network. The inter-urban exchanges are interconnected in a mesh network. In practice the two network structures are mixed, because direct trunk groups are established between any two exchanges when there is sufficient traffic.
For reasons of reliability and security, there will almost always exist at least two disjoint paths between any two exchanges, and the strategy will be to use the cheapest connections first. The hierarchy in the Danish digital network is reduced to only two levels. The upper level with transit exchanges consists of a fully connected mesh network, while the local exchanges and subscriber switches are connected to two or three different transit exchanges for security and reliability reasons.
The telephone network is characterized by the fact that, before any two subscribers can communicate, a full two-way (duplex) connection must be created, and the connection exists during the whole duration of the communication. This property is referred to as the telephone network being connection-oriented, as distinct from, for example, the Internet, which is connection-less. Any network applying, for example, line switching or circuit switching is connection-oriented. A packet-switching network may be either connection-oriented (for example virtual connections in ATM) or connection-less. In the discipline of network planning, the objective is to optimize network structures and traffic routing under consideration of traffic demands, service and reliability requirements, etc.
Example 1.4.1: VSAT networks
VSAT networks (Maral, 1995 [85]) are for instance used by multi-national organizations for transmission of speech and data between different divisions, for news broadcasting, in case of disasters,
etc. They can be both point-to-point connections and point-to-multipoint connections (distribution and broadcast). The acronym VSAT stands for Very Small Aperture Terminal (earth station), which is an antenna with a diameter of 1.6–1.8 meters. The terminal is cheap and mobile. It is thus possible to bypass the public telephone network. The signals are transmitted from a VSAT terminal via a satellite towards another VSAT terminal. The satellite is in a fixed position 35 786 km above the equator, and the signals therefore experience a propagation delay of around 125 ms per hop. The available bandwidth is typically partitioned into channels of 64 Kbps, and the connections can be one-way or two-way.
In the simplest version, all terminals transmit directly to all others, and a full mesh network is the result. The available bandwidth can either be assigned in advance (fixed assignment) or dynamically assigned (demand assignment). Dynamic assignment gives better utilization but requires more control.
Due to the small parabola (antenna) and an attenuation of typically 200 dB in each direction, it is practically impossible to avoid transmission errors, and error-correcting codes and possibly retransmission schemes are used. A more reliable system is obtained by introducing a main terminal (a hub) with an antenna of 4 to 11 meters in diameter. Communication then takes place through the hub. Both hops (VSAT → hub and hub → VSAT) become more reliable, since the hub is able to receive the weak signals and amplify them, such that the receiving VSAT gets a stronger signal. The price to be paid is that the propagation delay is now 500 ms. The hub solution also enables centralized control and monitoring of the system. Since all communication goes through the hub, the network structure constitutes a star topology. 2
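The delay figures quoted in the example follow directly from the geostationary altitude and the speed of light (the actual slant range to a station off the sub-satellite point is somewhat longer, which is why the text quotes roughly 125 ms per hop):

```python
altitude = 35_786_000.0     # m, geostationary orbit above the equator
c = 3.0e8                   # m/s, speed of light (approximate)

per_hop = altitude / c      # earth station <-> satellite, one leg: ~0.12 s
direct = 2 * per_hop        # VSAT -> satellite -> VSAT: ~0.24 s
via_hub = 4 * per_hop       # VSAT -> hub -> VSAT, four legs: ~0.48 s
```

The hub configuration thus roughly doubles the one-way delay relative to the direct VSAT-to-VSAT case, in agreement with the 500 ms quoted above.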
1.4.2 Data networks
Data networks are sometimes engineered according to the same principles as the telephone network, except that the duration of the connection establishment phase is much shorter. Another kind of data network is the packet-switching network, which works according to the store-and-forward principle (see Fig. 1.8). The data to be transmitted are sent from transmitter to receiver in steps from exchange to exchange. This may create delays, since the exchanges, which are computers, work as delay systems (connection-less transmission).
If the packet has a maximum fixed length, the network is denoted packet switching (e.g. the X.25 protocol). In X.25 a message is segmented into a number of packets which do not necessarily follow the same path through the network. The protocol header of the packet contains a sequence number, such that the packets can be arranged in the correct order at the receiver. Furthermore, error-correcting codes are used, and the correctness of each packet is checked at the receiver. If the packet is correct, an acknowledgement is sent back to the preceding node, which can now delete its copy of the packet. If the preceding node does not receive an acknowledgement within a given time interval, a new copy of the packet (or a whole frame of packets) is retransmitted. Finally, there is a control of the whole message from transmitter to receiver. In this way a very reliable transmission is obtained. If the whole message is sent in a single packet, it is denoted message switching.
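The hop-by-hop acknowledgement scheme described above is essentially a stop-and-wait retransmission protocol. The following sketch (loss probability and retry limit are assumed values) counts how many transmissions a packet needs over one unreliable hop:

```python
import random

random.seed(7)

def send_packet(loss_prob, max_tries):
    """Stop-and-wait retransmission over one unreliable hop (sketch).

    Each transmission is lost with probability loss_prob; the sender
    keeps its copy and retransmits until an acknowledgement arrives or
    max_tries is exhausted.  Returns the number of transmissions used,
    or None on failure.
    """
    for attempt in range(1, max_tries + 1):
        if random.random() > loss_prob:     # packet and ack got through
            return attempt
    return None

tries = [send_packet(loss_prob=0.2, max_tries=10) for _ in range(1000)]
```

With a per-transmission loss probability p, the mean number of transmissions is 1/(1 − p), here about 1.25, which is the price paid for the very reliable hop-by-hop transfer.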
Figure 1.8: Datagram network: store-and-forward principle for a packet-switching data network.
Since the exchanges in a data network are computers, it is feasible to apply advanced strategies for traffic routing.
1.4.3 Local Area Networks (LAN)
Local area networks are a very specific, but also very important, type of data network where all users, through a computer, are attached to the same digital transmission system, e.g. a coaxial cable. Normally, only one user at a time can use the transmission medium to get data transmitted to another user. Since the transmission system has a large capacity compared to the demand of the individual users, a user experiences the system as if he were the only user. There exist several types of local area networks. Applying adequate strategies for medium access control (MAC), the assignment of capacity when many users compete for transmission is taken care of. There exist two main types of local area networks: CSMA/CD (Ethernet) and token networks. CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is the most widely used. All terminals listen to the transmission medium all the time and know when it is idle and when it is occupied. At the same time, a terminal can see which packets are addressed to the terminal itself and therefore should be received and stored. A terminal wanting to transmit a packet transmits it if the medium is idle. If the medium is occupied, the terminal waits a random amount of time
before trying again. Due to the finite propagation speed, it is possible that two (or even more) terminals start transmission within such a short time interval that two or more messages collide on the medium. This is denoted a collision. Since all terminals are listening all the time, they can immediately detect that the transmitted information is different from what they receive, and conclude that a collision has taken place (CD = Collision Detection). The terminals involved immediately stop transmission and try again a random amount of time later (back-off).
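The random back-off can, for example, be the truncated binary exponential back-off used in classical Ethernet: after the n-th collision the station waits a uniformly distributed number of slot times in the range 0 to 2^min(n,10) − 1. A sketch (the slot time is the 10 Mbps Ethernet value; treat the numbers as illustrative):

```python
import random

random.seed(3)

def backoff_delay(collisions, slot_time=51.2e-6):
    """Truncated binary exponential back-off (classical Ethernet style).

    After the n-th collision the station waits a random number of slot
    times drawn uniformly from 0 .. 2**min(n, 10) - 1.
    """
    k = min(collisions, 10)
    return random.randrange(2 ** k) * slot_time

# Waiting times after a third collision (range 0..7 slot times):
delays = [backoff_delay(collisions=3) for _ in range(1000)]
```

Doubling the back-off range at each collision spreads the retransmissions out in time, so that the probability of the same stations colliding again decreases rapidly.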
In local area networks of the token type, only the terminal presently possessing the token can transmit information. The token circulates between the terminals according to predefined rules.
Local area networks based on the ATM technique are also in operation. Furthermore, wireless LANs are very common. The propagation delay is negligible in local area networks due to the small geographical distance between the users. In, for example, a satellite data network the propagation delay is large compared to the length of the messages, and in these applications other strategies than those used in local area networks are applied.
1.5 ITU recommendations on traffic engineering
The following section is based on ITU–T draft Recommendation E.490.1: Overview of Recommendations on traffic engineering. See also (Villen, 2002 [116]). The International Telecommunication Union (ITU) is an organization sponsored by the United Nations for promoting international telecommunications. It has three sectors:
• Telecommunication Standardization Sector (ITU–T),
• Radio communication Sector (ITU–R), and
• Telecommunication Development Sector (ITU–D).
The primary function of the ITU–T is to produce international standards for telecommunications. The standards are known as recommendations. Although the original task of ITU–T was restricted to facilitating international inter-working, its scope has been extended to cover national networks, and the ITU–T recommendations are nowadays widely used as de facto national standards and as references.
The aim of most recommendations is to ensure compatible inter-working of telecommunication equipment in a multi-vendor and multi-operator environment. But there are also recommendations that advise on best practices for operating networks. Included in this group are the recommendations on traffic engineering.
The ITU–T is divided into Study Groups. Study Group 2 (SG2) is responsible for Operational Aspects of Service Provision, Networks and Performance. Each Study Group is divided into Working Parties.
1.5.1 Traffic engineering in the ITU
Although Working Party 3/2 has the overall responsibility for traffic engineering, some recommendations on traffic engineering or related to it have been (or are being) produced by other Groups. Study Group 7 deals in the X Series with traffic engineering for data communication networks, Study Group 11 has produced some recommendations (Q Series) on traffic aspects related to system design of digital switches and signalling, and some recommendations of the I Series, prepared by Study Group 13, deal with traffic aspects related to network architecture of N- and B-ISDN and IP-based networks. Within Study Group 2, Working Party 1 is responsible for the recommendations on routing and Working Party 2 for the recommendations on network traffic management.
This section will focus on the recommendations produced by Working Party 3/2. They are in the E Series (numbered between E.490 and E.799) and constitute the main body of ITU–T recommendations on traffic engineering.
The recommendations on traffic engineering can be classified according to the four major traffic engineering tasks:
• Traffic demand characterization;
• Grade of Service (GoS) objectives;
• Traffic controls and dimensioning;
• Performance monitoring.
The interrelation between these four tasks is illustrated in Fig. 1.9. The initial tasks in traffic engineering are to characterize the traffic demand and to specify the GoS (or performance) objectives. The results of these two tasks are input for dimensioning network resources and for establishing appropriate traffic controls. Finally, performance monitoring is required to check whether the GoS objectives have been achieved, and it is used as feedback for the overall process.
1.6 Traffic concepts and grade of service
The costs of a telephone system can be divided into costs which depend upon the number of subscribers and costs which depend upon the amount of traffic in the system.
Figure 1.9: Traffic engineering tasks: traffic demand characterisation (traffic modelling, traffic measurement, traffic forecasting), Grade of Service objectives (QoS requirements, end-to-end GoS objectives, allocation to network components), traffic controls and dimensioning, and performance monitoring.
The goal when planning a telecommunication system is to adjust the amount of equipment so that variations in the subscriber demand for calls can be satisfied without noticeable inconvenience, while the costs of the installations are kept as small as possible. The equipment must be used as efficiently as possible.
Teletraffic engineering deals with optimization of the structure of the network and adjustmentof the amount of equipment that depends upon the amount of traffic.
In the following, some fundamental concepts are introduced, and some examples are given to show how the traffic behaves in real systems. All examples are from the telecommunication area.
1.7 Concept of traffic and traffic unit [erlang]
In teletraffic theory we usually use the word traffic to denote the traffic intensity, i.e. traffic per time unit. The term traffic comes from Italian and means business. According to ITU–T (1993 [39]) we have the following definition:
Definition of Traffic Intensity: The instantaneous traffic intensity in a pool of resources is the number of busy resources at a given instant of time.
Depending on the technology considered, the pool of resources corresponds to a group of servers, lines, circuits, channels, trunks, computers, etc. The statistical moments (mean value, variance) of the traffic intensity may be calculated for a given period of time T. For the average traffic intensity we get:

Y(T) = (1/T) ∫_0^T n(t) dt ,   (1.1)

where n(t) denotes the number of occupied devices at time t.
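Since n(t) is piecewise constant, the integral in (1.1) equals the total holding time accumulated inside the interval. A minimal sketch, assuming occupations are given as (start, stop) pairs in the same time unit as T:

```python
def mean_traffic_intensity(holding_intervals, T):
    """Average traffic intensity Y(T) = (1/T) * integral of n(t) dt.

    n(t) counts busy resources, so the integral over [0, T] equals
    the total holding time falling inside [0, T].
    """
    total = sum(min(stop, T) - max(start, 0.0)
                for start, stop in holding_intervals
                if stop > 0.0 and start < T)
    return total / T

# Two calls of 30 min and one of 60 min during a 60 min period:
Y = mean_traffic_intensity([(0, 30), (15, 45), (0, 60)], 60)
# Y = (30 + 30 + 60) / 60 = 2.0 erlang
```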
Carried traffic Y = A_c: This is the traffic carried by the group of servers during the time interval T (Fig. 1.10). In applications, the term traffic intensity usually means the average traffic intensity.
Figure 1.10: The carried traffic (intensity), i.e. the number n(t) of busy devices, as a function of time. For dimensioning purposes we use the average traffic intensity during a period of time T (mean).
The ITU-T recommendation also specifies that the unit usually used for traffic intensity is the erlang (symbol E). This name was given to the traffic unit in 1946 by CCIF (predecessor to CCITT and to ITU-T), in honor of the Danish mathematician A. K. Erlang (1878–1929), who was the founder of traffic theory in telephony. The unit is dimensionless. The total traffic carried in a time period T is a traffic volume, and it is measured in erlang-hours (Eh) or, if more convenient, for example erlang-seconds. It is equal to the sum of all holding times inside the time period.
The carried traffic can never exceed the number of channels (lines). A channel can at mostcarry one erlang. The revenue is often proportional to the carried traffic.
Offered traffic A: In mathematical models we use the concept of offered traffic. This is the traffic which would be carried if no calls were rejected due to lack of capacity, i.e. if the number of servers were unlimited. The offered traffic is a theoretical quantity which cannot be measured; it can only be estimated from the carried traffic.
Theoretically we operate with two parameters:
1. call intensity λ, which is the mean number of calls offered per time unit, and
2. mean service time s.
The offered traffic is equal to:

A = λ · s .   (1.2)
The parameters should be specified using the same time unit. From this equation it is seen that the unit of traffic has no dimension. In accordance with the definition above, this definition assumes that there is an unlimited number of servers; the offered traffic should be independent of the actual system.
Lost or rejected traffic Aℓ: The difference between offered traffic and carried traffic is equal to the rejected traffic. The lost traffic can be reduced by increasing the capacity of the system.
Example 1.7.1: Definition of traffic
If the call intensity is 5 calls per minute, and the mean service time is 3 minutes, then the offered traffic is equal to 15 erlang. The offered traffic volume during a working day of 8 hours is then 120 erlang-hours. □
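The arithmetic of Example 1.7.1 can be reproduced directly, taking care to keep λ and s in the same time unit:

```python
# Example 1.7.1: A = lambda * s, with lambda and s in the same time unit.
calls_per_minute = 5     # call intensity, lambda
mean_service_min = 3     # mean service time, s

A = calls_per_minute * mean_service_min   # offered traffic: 15 erlang
volume = A * 8                            # erlang-hours over an 8-hour day
assert A == 15 and volume == 120
```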
Example 1.7.2: Traffic units
Earlier, other units of traffic have been used. The most common, which may still be seen, are:

SM = Speech-minutes: 1 SM = 1/60 Eh.

CCS = Hundred call seconds: 1 CCS = 1/36 Eh. This unit is based on a mean holding time of 100 seconds and can still be found, e.g. in the USA.

EBHC = Equated busy hour calls: 1 EBHC = 1/30 Eh. This unit is based on a mean holding time of 120 seconds.
We will soon realize that the erlang is the natural unit for traffic intensity, because this unit is independent of the time unit chosen. □
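The conversions of Example 1.7.2 can be collected in a small table (a sketch; the dictionary and function names are illustrative):

```python
import math

# Conversion factors from Example 1.7.2, relative to one erlang-hour (Eh).
EH_PER_UNIT = {
    "SM":   1 / 60,   # speech-minutes
    "CCS":  1 / 36,   # hundred call seconds (100 s mean holding time)
    "EBHC": 1 / 30,   # equated busy hour calls (120 s mean holding time)
}

def to_erlang_hours(value: float, unit: str) -> float:
    return value * EH_PER_UNIT[unit]

# 36 CCS = 3600 call-seconds = 1 erlang-hour:
assert math.isclose(to_erlang_hours(36, "CCS"), 1.0)
assert math.isclose(to_erlang_hours(60, "SM"), 1.0)
```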
The offered traffic is a theoretical parameter used in mathematical models and simulation models. However, the only measurable parameter in reality is the carried traffic, which depends upon the actual system.
Data transmission and multi-rate traffic: In data transmission systems we do not talk about service times but about transmission demands. A job can for example be a data packet of s units (e.g. bits or bytes). The capacity of the system ϕ, the data signalling speed, is measured in units per second (e.g. bits/second). The service time for such a job, i.e. the transmission time, is s/ϕ time units (e.g. seconds), i.e. dependent upon ϕ. If on average λ jobs are served per time unit, then the utilization ϱ of the system is:

ϱ = λ · s / ϕ .   (1.3)

The observed utilization will always be inside the interval 0 ≤ ϱ ≤ 1, as it is the traffic carried by one channel.
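Equation (1.3) translates directly into code. The numbers below are hypothetical (packets of 12 000 bits on a 2 Mbit/s link):

```python
def utilization(arrival_rate: float, job_size: float, capacity: float) -> float:
    """rho = lambda * s / phi, e.g. (jobs/s) * (bits/job) / (bits/s)."""
    return arrival_rate * job_size / capacity

# 100 packets/s of 12_000 bits each on a 2 Mbit/s link:
rho = utilization(100, 12_000, 2_000_000)
# rho = 0.6, i.e. the link carries 0.6 erlang
```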
Usually we split the total capacity up into units called Basic Bandwidth Units (BBU) or channels. We choose this unit so that all services require an integral number of bandwidth units, for example 64 kbps. If calls of type j simultaneously occupy d_j channels, then the offered traffic expressed in number of channels becomes:

A_ch = Σ_{j=1}^{N} λ_j · s_j · d_j  [erlang-channels] ,   (1.4)

where N is the number of traffic types, and λ_j and s_j denote the arrival rate and the mean holding time, respectively, of traffic type j. The offered traffic in number of connections for one service is A_{j,co} = λ_j · s_j [erlang-connections]. Usually the carried traffic is measured in number of channels, as it often is a mix of different connections with different bandwidths.
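Equation (1.4) can be sketched as a sum over traffic types; the service mix below (a one-channel and a six-channel service, with rates per minute) is hypothetical:

```python
def offered_traffic_channels(traffic_types):
    """A_ch = sum over types j of lambda_j * s_j * d_j  [erlang-channels].

    Each traffic type is a (lambda_j, s_j, d_j) tuple: arrival rate,
    mean holding time, and number of channels occupied per call.
    """
    return sum(lam * s * d for (lam, s, d) in traffic_types)

types = [
    (2.0, 3.0, 1),    # 2 calls/min, 3 min holding, 1 channel  ->  6
    (0.5, 10.0, 6),   # 0.5 calls/min, 10 min holding, 6 channels -> 30
]
A_ch = offered_traffic_channels(types)   # 36 erlang-channels
```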
Potential traffic: In planning and demand models we use the term potential traffic, which is equal to the offered traffic if there are no limitations on the use of the phone due to cost or availability (always a free telephone available).
1.8 Traffic variations and the concept busy hour
Teletraffic has variations according to the activity in the society. The traffic is generated by single sources, subscribers, who normally make telephone calls independently of each other.
An investigation of the traffic variations shows that it is partly of a stochastic nature and partly of a deterministic nature. Fig. 1.11 shows the variation in the number of calls on a Monday morning. By comparing several days we can recognize a deterministic curve with superposed stochastic variations.
Figure 1.11: Number of calls per minute to a switching center on a Monday morning. The regular 24-hour variations are superposed by stochastic variations (Iversen, 1973 [40]).
During a 24-hour period the traffic typically looks as shown in Fig. 1.12. The first peak is caused by business subscribers at the beginning of working hours in the morning, possibly calls postponed from the day before. Around 12 o'clock there is lunch, and in the afternoon there is a certain activity again.
Around 19 o'clock there is a new peak caused by private calls and a possible reduction in rates after 19:30. The mutual size of the peaks depends, among other things, upon whether the exchange is located in a typical residential area or in a business area. It also depends upon which type of traffic we look at. If we consider the traffic between Europe and the USA, most calls take place in the late afternoon because of the time difference.
The variations can further be split up into variation in call intensity and variation in service time. Fig. 1.13 shows variations in the mean service time for occupation times of trunk lines during 24 hours. During business hours it is constant, just below 3 minutes. In the evening it is more than 4 minutes, and during the night very small, about one minute.

Figure 1.12: The mean number of calls per minute to a switching center, taken as an average over periods of 15 minutes during 10 working days (Monday–Friday). At the time of the measurements there were no reduced rates outside working hours (Iversen, 1973 [40]).
Busy hour: The highest traffic does not occur at the same time every day. We define the concept time consistent busy hour (TCBH) as those 60 minutes (fixed with an accuracy of 15 minutes) which during a long period have, on average, the highest traffic.
On some days it may therefore happen that the traffic during the busiest hour is larger than during the time consistent busy hour, but on average over many days, the busy hour traffic will be the largest.
We also distinguish between the busy hour for the total telecommunication system, for an exchange, and for a single group of servers, e.g. a trunk group. Certain trunk groups may have a busy hour outside the busy hour of the exchange (for example trunk groups for calls to the USA).
In practice, for traffic measurements, dimensioning, and other purposes, it is an advantage to have a predetermined, well-defined busy hour.
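The time consistent busy hour can be estimated from quarter-hour traffic measurements by averaging each quarter over many days and then sliding a 60-minute window in 15-minute steps. A sketch, assuming each day is recorded as 96 quarter-hour traffic values:

```python
def time_consistent_busy_hour(daily_quarters):
    """Find the TCBH from per-day lists of 96 quarter-hour traffic values.

    Averages each quarter over all days, then slides a 60-minute window
    (four quarters, in 15-minute steps) to find the highest mean traffic.
    Returns (start_quarter_index, mean_busy_hour_traffic).
    """
    n_days = len(daily_quarters)
    avg = [sum(day[q] for day in daily_quarters) / n_days for q in range(96)]
    best = max(range(96 - 3), key=lambda q: sum(avg[q:q + 4]))
    return best, sum(avg[best:best + 4]) / 4

# Toy data: traffic peaks in quarters 40-43 (10:00-11:00) on both days.
day = [1.0] * 96
for q in range(40, 44):
    day[q] = 5.0
start, traffic = time_consistent_busy_hour([day, day])
assert start == 40 and traffic == 5.0
```

Because the window position is fixed over the whole measurement period, single days may exceed the TCBH value, exactly as noted above.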
The deterministic variations in teletraffic can be divided into:
• 24-hour variations (Fig. 1.12 and 1.13).
• Weekly variations (Fig. 1.14). Normally the highest traffic is on Monday, then Friday, Tuesday, Wednesday and Thursday. Saturday and especially Sunday have a low traffic level. A useful rule of thumb is that the 24-hour traffic is equal to 8 times the busy hour traffic (Fig. 1.14), i.e. on average only one third of the capacity of the telephone system is utilized. This is the reason for reducing rates outside busy hours.
• Variations during the year. There is high traffic at the beginning of a month, after a festival season, and after a quarterly period begins. If Easter is around the 1st of April, we observe very high traffic just after the holidays.
• The traffic increases year by year due to the development of technology and economicsin the society.
Figure 1.13: Mean holding time for trunk lines as a function of the time of day (Iversen, 1973 [40]). The measurements exclude local calls.
Above we have considered traditional voice traffic. Other services and traffic types have other patterns of variation. In Fig. 1.15 we show the variation in the number of calls per 15 minutes to a modem pool for dial-up Internet calls. The mean holding time as a function of the time of day is shown in Fig. 1.16.
Cellular mobile telephony has a different profile, with a maximum late in the afternoon, and the mean holding time is shorter than for wire-line calls. By integrating various forms of traffic in the same network we may therefore obtain a higher utilization of the resources.
Figure 1.14: Number of calls per 24 hours to a switching center (left scale). The number of calls during the busy hour is shown for comparison on the right scale. We notice that the 24-hour traffic is approximately 8 times the busy hour traffic. This factor is called the traffic concentration (Iversen, 1973 [40]).
1.9 The blocking concept
The telephone system is not dimensioned so that all subscribers can be connected at the same time. Several subscribers share the expensive equipment of the exchanges. The concentration takes place from the subscribers toward the exchange. The equipment which is separate for each subscriber should be made as cheap as possible.
In general we expect that about 5–8 % of the subscribers should be able to make calls at the same time during the busy hour (each phone is used 10–16 % of the time). For international calls, less than 1 % of the subscribers make calls simultaneously. Thus we exploit the advantages of statistical multiplexing. Every subscriber should feel that he has unrestricted access to all resources of the telecommunication system, even though he shares them with many others.
The amount of equipment is limited for economic reasons, and it is therefore possible that a subscriber cannot establish a call immediately, but has to wait or is blocked (the subscriber, for example, gets a busy tone and has to repeat the call attempt). Both are inconvenient to the subscriber. Depending on how the system operates, we distinguish between loss systems (e.g. trunk groups) and waiting-time systems (e.g. common control units and computer systems), or a combination of these if the number of waiting positions (buffer) is limited.

Figure 1.15: Number of calls per 15 minutes to a modem pool of Tele Danmark Internet. Tuesday 1999.01.19.
The inconvenience in loss systems due to insufficient equipment can be expressed in three ways (network performance measures):
Call congestion B: The fraction of all call attempts that find all servers busy (the user-perceived quality of service, the nuisance the subscriber experiences).

Time congestion E: The fraction of time when all servers are busy. Time congestion can, for example, be measured at the exchange (virtual congestion).

Traffic congestion C: The fraction of the offered traffic that is not carried, possibly despite several attempts.
These quantitative measures can, for example, be used to establish dimensioning principles for trunk groups.
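As a toy illustration of the difference between time congestion E and call congestion B, consider a fictitious observation of a two-server loss system; all numbers below are invented for illustration only:

```python
# Toy observation of a loss system with n = 2 servers.
# busy[k] = number of busy servers during the k'th (equal-length) interval;
# arrivals[k] = call attempts arriving in that interval.
n = 2
busy     = [0, 1, 2, 2, 1, 2, 0, 1]
arrivals = [1, 2, 1, 3, 0, 2, 1, 2]

# Time congestion E: fraction of time all servers are busy.
E = sum(1 for s in busy if s == n) / len(busy)

# Call congestion B: fraction of attempts that find all servers busy.
blocked = sum(a for s, a in zip(busy, arrivals) if s == n)
B = blocked / sum(arrivals)

print(E, B)
```

Note that E and B differ in this example because arrivals cluster in the congested intervals; computing the traffic congestion C would additionally require knowing the offered traffic.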
Figure 1.16: Mean holding time in seconds as a function of the time of day, for calls arriving inside the period considered. Tele Danmark Internet. Tuesday 1999.01.19.
When congestion is small, it is possible, to a good approximation, to treat congestion in the different parts of the system as mutually independent. The congestion for a certain route is then approximately equal to the sum of the congestion on each link of the route. During the busy hour we normally allow a congestion of a few percent between two subscribers.
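This additive approximation is easy to illustrate numerically: with independent links the exact end-to-end congestion is 1 minus the product of the per-link non-congestion probabilities, which for small values is close to their sum. The link values below are illustrative only:

```python
# Illustrative per-link congestion probabilities for a three-link route.
links = [0.002, 0.005, 0.003]

# Exact end-to-end congestion assuming independent links.
prod = 1.0
for e in links:
    prod *= 1 - e
exact = 1 - prod

# Small-congestion approximation: the sum of the link congestions.
approx = sum(links)

print(round(exact, 5), round(approx, 5))
```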
The systems cannot manage every situation without inconvenience for the subscribers. The purpose of teletraffic theory is to find relations between quality of service and cost of equipment. The existing equipment should be able to work at maximum capacity during abnormal traffic situations (e.g. a burst of phone calls), i.e. the equipment should keep working and make useful connections.
The inconvenience in delay systems (queueing systems) is measured as a waiting time. Not only the mean waiting time is of interest, but also the distribution of the waiting time. A small delay may not cause any inconvenience, so there need not be a linear relation between inconvenience and waiting time.
In telephone systems we often define an upper limit for the acceptable waiting time. If this limit is exceeded, a time-out of the connection takes place (enforced disconnection).
Outcome                            I-country   D-country

A-error:                             15 %        20 %
Blocking and technical errors:        5 %        35 %
B no answer before A hangs up:       10 %         5 %
B-busy:                              10 %        20 %
B-answer = conversation:             60 %        20 %

No conversation:                     40 %        80 %

Table 1.1: Typical outcome of a large number of call attempts during the busy hour for industrialized countries and developing countries, respectively.
1.10 Traffic generation and subscribers reaction
If subscriber A wants to speak to subscriber B, this will result in either a successful call or a failed call attempt. In the latter case A may repeat the call attempt later and thus initiate a series of several call attempts which fail. Call statistics typically look as shown in Table 1.1, where we have grouped the errors into a few typical classes. We notice that the only errors which can be directly influenced by the operator are technical errors and blocking, and this class is usually small, a few percent during the busy hour. Furthermore, we notice that the number of calls which experience B-busy depends on the number of A-errors and technical errors & blocking. Therefore, the statistics in Table 1.1 are misleading. To obtain the relevant probabilities, which are shown in Fig. 1.17, we must consider only the calls arriving at the stage in question when calculating probabilities.

Figure 1.17: When calculating the probabilities of events for a certain number of call attempts we have to consider the conditional probabilities. (The phase diagram branches a call attempt from A into A-error (pe), technical errors and blocking (ps), B no answer (pn), B-busy (pb), and B-answer (pa).)

Applying the notation
I-country                  D-country

pe = 15/100 = 15 %         pe = 20/100 = 20 %
ps =  5/85  =  6 %         ps = 35/80  = 44 %
pn = 10/80  = 13 %         pn =  5/45  = 11 %
pb = 10/80  = 13 %         pb = 20/45  = 44 %
pa = 60/80  = 75 %         pa = 20/45  = 44 %

Table 1.2: The relevant probabilities for the individual outcomes of the call attempts, calculated from Table 1.1.
in Fig. 1.17, we find the following probabilities for a call attempt (assuming independence):
pA-error = pe                                      (1.5)

pCongestion & tech. errors = (1 − pe) · ps         (1.6)

pB-no answer = (1 − pe) · (1 − ps) · pn            (1.7)

pB-busy = (1 − pe) · (1 − ps) · pb                 (1.8)

pB-answer = (1 − pe) · (1 − ps) · pa               (1.9)
Using the numbers from Table 1.1 we find the figures shown in Table 1.2. From this we notice that even if the A-subscriber behaves correctly and the telephone system is perfect, only 75 %, respectively 44 %, of the call attempts result in a conversation.
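The chain of conditional probabilities (1.5)–(1.9) can be inverted to recover Table 1.2 from the unconditional outcome fractions of Table 1.1; a minimal sketch (the function name is ours):

```python
# Derive the conditional probabilities pe, ps, pn, pb, pa of Fig. 1.17
# from the unconditional outcome fractions of Table 1.1.
def conditional_probs(a_error, blocking, no_answer, b_busy, b_answer):
    pe = a_error                        # Eq. (1.5)
    ps = blocking / (1 - pe)            # inverting Eq. (1.6)
    rest = (1 - pe) * (1 - ps)          # attempts reaching the B-subscriber
    pn = no_answer / rest               # inverting Eq. (1.7)
    pb = b_busy / rest                  # inverting Eq. (1.8)
    pa = b_answer / rest                # inverting Eq. (1.9)
    return pe, ps, pn, pb, pa

# I-country column of Table 1.1: 15 %, 5 %, 10 %, 10 %, 60 %.
pe, ps, pn, pb, pa = conditional_probs(0.15, 0.05, 0.10, 0.10, 0.60)
print(round(ps, 2), round(pa, 2))   # matches Table 1.2: 0.06 0.75
```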
We distinguish between the service time, which is the time from the instant a server is occupied until the server becomes idle again (including e.g. call set-up, duration of the conversation, and termination of the call), and the conversation duration, which is the time period where A talks with B. Because of failed call attempts, the mean service time is often less than the mean call duration if we include all call attempts. Fig. 1.18 shows an example with observed holding times.
Example 1.10.1: Mean holding times
We assume that the mean holding time of calls which are interrupted before B-answer (A-error, congestion, technical errors) is 20 seconds, and that the mean holding time of calls arriving at the called party (B-subscriber) (no answer, B-busy, B-answer) is 180 seconds. Using the figures in Table 1.1, the mean holding time at the A-subscriber then becomes:

I-country: ma = (20/100) · 20 + (80/100) · 180 = 148 seconds

D-country: ma = (55/100) · 20 + (45/100) · 180 = 92 seconds
Figure 1.18: Frequency function of holding times of trunks in a local switching center, shown on a logarithmic scale together with fitted exponential and hyper-exponential distributions (135,164 observations; mean µ = 142.86; form factor ε = 3.83).
We thus notice that the mean holding time increases from 148 s, respectively 92 s, at the A-subscriber to 180 s at the B-subscriber. If one call intent implies several repeated call attempts (cf. Example 1.4), then the carried traffic may become larger than the offered traffic. □
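The weighted averages of Example 1.10.1 can be reproduced in a few lines (the function name and default arguments are ours):

```python
# Mean holding time at the A-subscriber as a weighted average of
# interrupted attempts (20 s) and attempts reaching the B-subscriber (180 s).
def mean_holding(p_interrupted, s_interrupted=20, s_b_reached=180):
    return p_interrupted * s_interrupted + (1 - p_interrupted) * s_b_reached

# I-country: 15 % + 5 % = 20 % interrupted; D-country: 20 % + 35 % = 55 %.
print(round(mean_holding(0.20)), round(mean_holding(0.55)))  # 148 92
```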
If we know the mean service time of the individual phases of a call attempt, then we can calculate the proportion of the call attempts which are lost during the individual phases. This can be exploited to analyse electro-mechanical systems by using SPC systems to collect data.
Each call attempt loads the controlling groups in the exchange (e.g. a computer or a control unit) with an almost constant load, whereas the load of the network is proportional to the duration of the call. Because of this, many failed call attempts are able to overload the control devices while free capacity is still available in the network. Repeated call attempts are not necessarily caused by errors in the telephone system. They can also be caused by e.g. a busy B-subscriber. This problem was treated for the first time by Fr. Johannsen in “Busy”,
published in 1908 (Johannsen, 1908 [60]). Fig. 1.19 and Fig. 1.20 show some examples from measurements of subscriber behaviour.
Studies of the subscribers' response to, for example, busy tone are of vital importance for the dimensioning of telephone systems. In fact, human factors (subscriber behaviour) are a part of teletraffic theory which is of great interest.
During the busy hour, α = 10–16 % of the subscribers are busy using their line for incoming or outgoing calls. We might therefore expect that α % of the call attempts experience B-busy. This is, however, wrong, because the subscribers have different traffic levels. Some subscribers receive no incoming call attempts, whereas others receive more than the average. In fact, the most busy subscribers on average receive the most call attempts. A-subscribers have an inclination to choose the most busy B-subscribers, and in practice we observe that the probability of B-busy is about 4 · α if we take no measures. For residential subscribers it is difficult to improve the situation. But for large business subscribers having a PAX (= PABX, Private Automatic eXchange) with a group number, a sufficient number of lines will eliminate B-busy. Therefore, in industrialized countries the total probability of B-busy becomes of the same order of size as α (Table 1.1). For D-countries the traffic is more focused on individual numbers, and often the business subscribers do not benefit from group numbering; therefore we observe a high probability of B-busy (40–50 %).
In the Ordrup measurements, approximately 4 % of the calls were repeated call attempts. If a subscriber experiences blocking or B-busy, there is a 70 % probability that the call is repeated within an hour. See Table 1.3.
                       Number of observations

Attempt no.    Success   Continue   Give up    p_success   Persistence

1               56.935      7.512    10.942       0.76         0.41
2                3.252      2.378     1.882       0.43         0.56
3                  925        951       502       0.39         0.66
4                  293        476       182       0.31         0.72
5                  139        248        89       0.29         0.74
> 5                134          –       114          –            –

Total           61.678               13.711

Table 1.3: An observed sequence of repeated call attempts (national calls, “Ordrup measurements”; 75.389 first attempts in total). The probability of success decreases with the number of call attempts, while the persistence increases. Here a repeated call attempt is a call repeated to the same B-subscriber within one hour.
A classical example of the importance of the subscribers' reaction was seen when the Valby gasworks (in Copenhagen) exploded in the mid-sixties. The subscribers in Copenhagen generated a lot of call attempts and occupied the controlling devices in the exchanges in the Copenhagen area. Subscribers from Esbjerg (in the western part of Denmark) phoning to Copenhagen then had to wait, because the dialled numbers could not be transferred to Copenhagen immediately. The equipment in Esbjerg was therefore kept busy by waiting, and subscribers making local calls in Esbjerg could not complete their call attempts.
This is an example of how an overload situation spreads like a chain reaction throughout the network. The tighter a network is dimensioned, the more likely it is that a chain reaction will occur. An exchange should always be constructed so that it keeps working at full capacity during overload situations.
In a modern exchange we have the possibility of giving priority to a group of subscribers in an emergency situation, e.g. doctors and police (preferential traffic). In computer systems similar conditions influence the performance. For example, if it is difficult to get a free entry to a terminal system, the user will be disposed not to log off but to keep the terminal, i.e. to increase the service time. If a system works as a waiting-time system, then the mean waiting time will increase with the third order of the mean service time (Chap. 10). Under these conditions the system will be saturated very quickly, i.e. be overloaded. In countries with an overloaded telecommunication network (e.g. developing countries), a large percentage of the call attempts are repeated call attempts.
Figure 1.19: Histogram of the time interval from occupation of register (dial tone) to B-answer for completed calls, shown on a logarithmic scale (n = 138,543). The mean value is 13.60 s.
Example 1.10.2: Repeated call attempts
This is an example of a simple model of repeated call attempts. Let us introduce the following notation:

b = persistence                    (1.10)

B = pnon-completion                (1.11)

The persistence b is the probability that an unsuccessful call attempt is repeated, and pcompletion = (1 − B) is the probability that the B-subscriber (called party) answers. For one call intent we then get the following probabilities:
Attempt no.   pB-answer             pContinue       pGive up

0                                       1
1             (1 − B)                 B · b           B · (1 − b)
2             (1 − B) · (B · b)      (B · b)^2        B · (1 − b) · (B · b)
3             (1 − B) · (B · b)^2    (B · b)^3        B · (1 − b) · (B · b)^2
4             (1 − B) · (B · b)^3    (B · b)^4        B · (1 − b) · (B · b)^3
. . .         . . .                  . . .            . . .

Total         (1 − B)/(1 − B · b)    1/(1 − B · b)    B · (1 − b)/(1 − B · b)

Table 1.4: A single call intent results in a series of call attempts. The distribution of the number of attempts is geometric.
pcompletion = (1 − B) / (1 − B · b)                          (1.12)

pnon-completion = B · (1 − b) / (1 − B · b)                  (1.13)

Number of call attempts per call intent = 1 / (1 − B · b)    (1.14)
Let us assume the following mean holding times:

sc = mean holding time of completed calls

sn = 0 = mean holding time of non-completed calls

Then we get the following relations between the carried traffic Y and the offered traffic A:

Y = A · (1 − B) / (1 − B · b)         (1.15)

A = Y · (1 − B · b) / (1 − B)         (1.16)
This is similar to the result given in ITU-T Rec. E.502. □
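The closed-form results (1.12)–(1.14) of the example are easy to evaluate; a small sketch with illustrative values of B and b (not taken from the book):

```python
# Repeated-call-attempt model of Example 1.10.2: a call intent is repeated
# with persistence b after each attempt that fails with probability B,
# giving the geometric series summed in Table 1.4.
def repeated_attempts(B, b):
    p_complete = (1 - B) / (1 - B * b)         # Eq. (1.12)
    p_give_up = B * (1 - b) / (1 - B * b)      # Eq. (1.13)
    mean_attempts = 1 / (1 - B * b)            # Eq. (1.14)
    return p_complete, p_give_up, mean_attempts

# Illustrative values only: B = 0.25, b = 0.4.
pc, pg, n = repeated_attempts(0.25, 0.4)
print(f"{pc:.3f} {pg:.3f} {n:.3f}")   # 0.833 0.167 1.111
```

Note that pc + pg = 1: every call intent eventually either completes or is given up.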
In practice, the persistence b and the probability of completion 1 − B will depend on the number of times the call has been repeated (cf. Table 1.3). If the unsuccessful calls have a positive mean holding time, then the carried traffic may become larger than the offered traffic.
Figure 1.20: Histogram of all call attempts repeated within 5 minutes when the called party is busy (n = 7,653).
1.11 Introduction to Grade-of-Service = GoS
The following section is based on (Veirø, 2001 [115]). A network operator must decide what services the network should deliver to the end user and the level of service quality the user should experience. This is true for any telecommunications network, whether it is circuit- or packet-switched, wired or wireless, optical or copper-based, and it is independent of the transmission technology applied. Further decisions to be made may include the type and layout of the network infrastructure supporting the services, and the choice of techniques to be used for handling the information transport. These further decisions may differ depending on whether the operator is already present in the market or is starting service from a greenfield situation (i.e. a situation where there is no legacy network in place to consider).
As for the Quality of Service (QoS) concept, it is defined in ITU-T Recommendation E.800 as: the collective effect of service performance, which determines the degree of satisfaction of a user of the service. The QoS consists of a set of parameters that pertain to the traffic
performance of the network, but in addition to this, the QoS also includes a number of other concepts. They can be summarized as:
• service support performance
• service operability performance
• serveability performance and
• service security performance
The detailed definitions of these terms are given in the E.800 recommendation. The better the service quality an operator chooses to offer the end user, the better the chance of winning new customers and keeping current customers. But a better service quality also means that the network becomes more expensive to install, and this normally also has a bearing on the price of the service. The choice of a particular service quality therefore depends on political decisions by the operator and will not be treated further here.
Once the quality decision is in place, the planning of the network proper can start. This includes the choice of a transport network technology and its topology, as well as reliability aspects in case one or more network elements fail. It is also at this stage that the routing strategy has to be determined.
This is the point in time where the Grade of Service (GoS) needs to be considered. It is defined in ITU-T Recommendation E.600 as: a number of traffic engineering variables to provide a measure of adequacy of a group of resources under specified conditions. These grade of service variables may be probability of loss, dial tone delay, etc. To this definition the recommendation furthermore supplies the following notes:
• The parameter values assigned for grade of service variables are called grade of service standards.

• The values of grade of service parameters achieved under actual conditions are called grade of service results.
The key problem in the determination of the GoS standards is to apportion individual values to each network element in such a way that the target end-to-end QoS is obtained.
1.11.1 Comparison of GoS and QoS
It is not an easy task to find the GoS standards needed to support a certain QoS. This is due to the fact that the GoS and QoS concepts have different viewpoints: while the QoS views the situation from the customer's point of view, the GoS takes the network point of view. We illustrate this by the following example:
Example 1.11.1:
Say we want to fix the end-to-end call blocking probability at 1 % in a telephone network. A customer will interpret this quantity to mean that he will be able to reach his destination in 99 out of 100 cases on average. Fixing this design target, the operator apportions a certain blocking probability to each of the network elements which a reference call may meet. In order to make sure that the target is met, the network has to be monitored. But this monitoring normally takes place all over the network, and it can only be ensured that the network on average meets the target values. If we consider a particular access line, its GoS target may well be exceeded, even though the average over all access lines does meet the target. □
GoS pertains to parameters that can be verified through network performance (the ability of a network or network portion to provide the functions related to communications between users), and the parameters hold only on average for the network. Even if we restrict ourselves to the part of the QoS that is traffic related, the example illustrates that even if the GoS target is fulfilled, this need not be the case for the QoS.
1.11.2 Special features of QoS
Due to the different views taken by GoS and QoS, a solution to this problem has been proposed: the service level agreement (SLA). This is really a contract between a user and a network operator, in which it is defined what the parameters in question really mean. It is supposed to be done in such a way that it will be understood in the same manner by the customer and the network operator. Furthermore, the SLA defines what is to happen in case the terms of the contract are violated. Some operators have chosen to issue an SLA for all customer relationships they have (at least in principle), while others only do so for big customers who know what the terms in the SLA really mean.
1.11.3 Network performance
As mentioned above, network performance concerns the ability of a network or network portion to provide the functions related to communications between users. In order to establish how a certain network performs, it is necessary to perform measurements, and the measurements have to cover all aspects of the performance parameters (i.e. trafficability, dependability, transmission and charging).
Furthermore, the network performance aspects in the GoS concept pertain only to the factors related to trafficability performance in the QoS terminology. But in the QoS world, network performance also includes the following concepts:
• dependability,
• transmission performance, and
• charging correctness.
It is not enough just to perform the measurements. It is also necessary to have an organization that can do the proper surveillance and take appropriate action when problems arise. As network complexity keeps growing, so does the number of parameters to consider. This means that automated tools will be required to make it easier to get an overview of the most important parameters.
1.11.4 Reference configurations
In order to obtain an overview of the network under consideration, it is often useful to produce a so-called reference configuration. This consists of one or more simplified drawings of the path a call (or connection) can take in the network, including appropriate reference points where the interfaces between entities are defined. In some cases the reference points define an interface between two operators, and it is therefore important to watch carefully what happens at this point. From a GoS perspective, the importance of the reference configuration is the partitioning of the GoS as described below. Consider a telephone network with terminals, subscriber switches and transit switches. In the example we ignore the signalling network. Suppose the call can be routed in one of three ways:
1. terminal → subscriber switch → terminal
This is drawn as a reference configuration shown in Fig. 1.21.
Figure 1.21: Reference configuration for case 1 (terminal – reference point A – subscriber switch S – reference point A – terminal).
2. terminal → subscriber switch → transit switch → subscriber switch → terminal
This is drawn as a reference configuration shown in Fig. 1.22.
3. terminal → subscriber switch → transit switch → transit switch → subscriber switch → terminal
This is drawn as a reference configuration shown in Fig. 1.23.
Figure 1.22: Reference configuration for case 2 (reference point A at each subscriber switch S, reference point B on each side of the transit switch T).
Figure 1.23: Reference configuration for case 3 (reference points A and B at the subscriber switches S, reference point C between the two transit switches T).
Based on a given set of QoS requirements, a set of GoS parameters is selected and defined on an end-to-end basis within the network boundary for each major service category provided by the network. The selected GoS parameters are specified in such a way that the GoS can be derived at well-defined reference points, i.e. traffic significant points. This allows the partitioning of end-to-end GoS objectives into GoS objectives for each network stage or component, on the basis of well-defined reference connections.
As defined in ITU-T Recommendation E.600, for traffic engineering purposes a connection is an association of resources providing means for communication between two or more devices in, or attached to, a telecommunication network. There can be different types of connections, as the number and types of resources in a connection may vary. Therefore, the concept of a reference connection is used to identify representative cases of the different types of connections without involving the specifics of their actual realization by different physical means.
Typically, different network segments are involved in the path of a connection. For example, a connection may be local, national, or international. The purpose of reference connections is to clarify and specify traffic performance issues at the various interfaces between different network domains. Each domain may consist of one or more service provider networks. Recommendation I.380/Y.1540 defines performance parameters for IP packet transfer; its companion Draft Recommendation Y.1541 specifies the corresponding allocations and performance objectives. Recommendation E.651 specifies reference connections for IP access networks. Other reference connections are to be specified.
From the QoS objectives, a set of end-to-end GoS parameters and their objectives for different reference connections are derived. For example, end-to-end connection blocking probability and end-to-end packet transfer delay may be relevant GoS parameters. The GoS objectives should be specified with reference to traffic load conditions, such as normal and high load conditions. The end-to-end GoS objectives are then apportioned to the individual resource components of the reference connections for dimensioning purposes. In an operational network, to ensure that the GoS objectives are met, performance measurements and performance monitoring are required.
In IP-based networks, performance allocation is usually done on a cloud, i.e. the set of routers and links under a single (or collaborative) jurisdictional responsibility, such as an Internet Service Provider (ISP). A cloud is connected to another cloud by a link, i.e. a gateway router in one cloud is connected via a link to a gateway router in another cloud. End-to-end communication between hosts is conducted over a path consisting of a sequence of clouds and interconnecting links. Such a sequence is referred to as a hypothetical reference path for performance allocation purposes.
Chapter 2
Time interval modeling
Time intervals are non-negative, and therefore they can be expressed by non-negative random variables. Time intervals of interest are, for example, service times, durations of congestion (blocking periods, busy periods), waiting times, holding times, CPU-busy times, inter-arrival times, etc. We denote these time durations as life-times and their distribution functions as time distributions. In this chapter we review the basic theory of probability and statistics relevant to teletraffic theory, and illustrate the theory by the (negative) exponential distribution and generalizations of it.
In principle, we may use any distribution function with non-negative values to model a life-time. However, the exponential distribution has some unique characteristics which qualify it for both analytical and practical uses, and it plays a key role among all life-time distributions. The most fundamental characteristic of the exponential distribution is the Markov property, i.e. lack of memory or lack of age: the future is independent of the past.
We can combine life-times in series (Sec. 2.3.1), in parallel (Sec. 2.3.2), or in a combination of the two (Sec. 2.3.3). In this way we get more parameters available for fitting the distribution to real observations.
A hypo-exponential or steep distribution corresponds to a set of stochastically independent exponential distributions in series (Fig. 2.4), and a hyper-exponential or flat distribution corresponds to exponential distributions in parallel (Fig. 2.6). This structure corresponds naturally to the shaping of traffic processes in telecommunication and data networks. By combining steep and flat distributions we get Cox distributions, which can approximate any distribution with any degree of accuracy. By using a graphical approach, phase diagrams, we are able to derive decomposition properties of importance for later applications.
We also mention a few other time distributions which are employed in teletraffic theory (Sec. 2.4), and finally we review some observations of real life-times in Sec. 2.5.
2.1 Distribution functions
A time interval can be described by a random variable T. This is characterized by a cumulative distribution function (cdf) F(t), which is the probability that the duration of a time interval is less than or equal to t:

F(t) = p{T ≤ t} .

In general, we assume that the derivative of F(t), the probability density function (pdf) f(t), exists:

dF(t) = f(t) · dt = p{t < T ≤ t + dt} ,   t ≥ 0 .   (2.1)
As we only consider non-negative time intervals, we have:

F(t) = 0 ,   t < 0 ,

F(t) = ∫_{0−}^{t} dF(u) = ∫_{0−}^{t} f(u) du ,   0 ≤ t < ∞ .   (2.2)

In (2.2) we integrate from 0− to keep record of a possible discontinuity at t = 0. When we consider waiting-time systems, there is often a positive probability of waiting times equal to zero, i.e. F(0) ≠ 0. On the other hand, when we look at inter-arrival times, we usually assume F(0) = 0 (Sec. 3.2.3). The probability density function is also called the frequency function.
Sometimes it is easier to consider the complementary distribution function, also called the survival distribution function:

F^c(t) = 1 − F(t) .
Analytically, many calculations can be carried out for any time distribution.
2.1.1 Exponential distribution
This is the most fundamental distribution in teletraffic theory, where it is called the negative exponential distribution.
This distribution is characterized by a single parameter, the intensity or rate λ:

F(t) = 1 − e^{−λt} ,   λ > 0 ,  t ≥ 0 ,   (2.3)

f(t) = λ · e^{−λt} ,   λ > 0 ,  t ≥ 0 .   (2.4)
The phase diagram of the exponential distribution is shown in Fig. 2.1. The density functionis shown in Fig. 2.5 for k = 1.
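The lack of memory mentioned above, P{T > t + x | T > x} = P{T > t}, can be checked numerically from the survival function 1 − F(t) = e^{−λt}; a minimal sketch with arbitrarily chosen parameter values:

```python
# Memorylessness of the exponential distribution (2.3):
# the conditional survival given T > x equals the unconditional survival.
import math

def survival(t, lam):
    # Complementary distribution function 1 - F(t) = exp(-lam * t).
    return math.exp(-lam * t)

lam, t, x = 0.5, 2.0, 3.0
lhs = survival(t + x, lam) / survival(x, lam)   # P(T > t+x | T > x)
rhs = survival(t, lam)                          # P(T > t)
print(abs(lhs - rhs) < 1e-12)                   # True: no memory of age x
```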
Figure 2.1: The phase diagram of an exponentially distributed time interval is shown as a box with the intensity λ. The box thus means that a customer arriving at the box is delayed an exponentially distributed time interval before leaving the box.
2.2 Characteristics of distributions
Time intervals are always non-negative, and therefore their distribution functions have some useful properties.
2.2.1 Moments
The i'th non-central moment, usually called the i'th moment, is defined by:

E{T^i} = mi = ∫_{0}^{∞} t^i · f(t) dt ,   i = 1, 2, . . .   (2.5)
So far we assume that all moments exist. In general we always assume that at least the mean value exists. A distribution is uniquely defined by its moments. For life-time distributions we have the following relation, called Palm's identity:

mi = ∫_{0}^{∞} t^i · f(t) dt = ∫_{0}^{∞} i t^{i−1} · {1 − F(t)} dt ,   i = 1, 2, . . .   (2.6)
It was first proved by (Palm, 1943 [92]) as follows:

∫_{t=0}^{∞} i t^{i−1} {1 − F(t)} dt
   = ∫_{t=0}^{∞} i t^{i−1} { ∫_{x=t}^{∞} f(x) dx } dt
   = ∫_{t=0}^{∞} ∫_{x=t}^{∞} i t^{i−1} f(x) dx dt
   = ∫_{t=0}^{∞} ∫_{x=t}^{∞} d(t^i) f(x) dx
   = ∫_{x=0}^{∞} ∫_{t=0}^{x} d(t^i) f(x) dx
   = ∫_{x=0}^{∞} x^i f(x) dx
   = mi .
The order of integration can be inverted because the integrand is non-negative. Thus we have proved (2.6). In particular, we find the first two moments:
m1 = ∫_{0}^{∞} t · f(t) dt = ∫_{0}^{∞} {1 − F(t)} dt = E{T} ,   (2.7)

m2 = ∫_{0}^{∞} t² · f(t) dt = ∫_{0}^{∞} 2t · {1 − F(t)} dt .   (2.8)
The i'th central moment is defined as:

E{(T − m1)^i} = ∫_{0}^{∞} (t − m1)^i · f(t) dt .   (2.9)
In advanced teletraffic theory we also use cumulants, binomial moments, and factorial moments. They are uniquely related to the above moments, but have some advantages when dealing with special problems.
For characterizing random variables we use the following parameters related to the first two moments:

• Mean value or expected value E{T}. This is the first moment:

  m1 = E{T} .   (2.10)
• Variance. This is the second central moment:

  σ² = E{(T − m1)²} .

  It is easy to show that:

  σ² = m2 − m1²   or   (2.11)

  m2 = σ² + m1² .
• Standard deviation. This is the square root of the variance and thus equal to σ.
• Coefficient of variation. This is a normalized measure of the irregularity (dispersion) of a distribution, defined as the ratio between the standard deviation and the mean value:

  CV = σ / m1 .   (2.12)

  This quantity is dimensionless, and later we also use it to characterize discrete distributions (state probabilities).
• Palm's form factor ε is another measure of irregularity, defined as follows:

    ε = m_2 / m_1² = 1 + (σ / m_1)² ≥ 1 .   (2.13)

  The form factor ε, as well as CV = σ/m_1, is independent of the choice of time scale, and both will appear in many formulæ in the following. The larger the form factor, the more irregular the time distribution. The form factor has its minimum value, equal to one, for constant time intervals (σ = 0). It is used to characterize continuous distributions, for example time intervals.
• Median. Sometimes we also use the median to characterize a distribution. The median is the value of t for which F(t) = 0.5. Thus half the observations will be smaller than the median and half will be larger. For a symmetric probability density function the mean equals the median. For the exponential distribution the median is 0.6931 times the mean value.
• Percentiles. More generally, we characterize a distribution by percentiles (quantiles or fractiles): if

    P{T ≤ t_p} = p ,

  then t_p is the p · 100 % percentile. The median is the 50% percentile.
When estimating parameters of a distribution from observations, we are usually satisfied by knowing the first two moments (m_1 and σ), as higher-order moments require extremely many observations to obtain reliable estimates.

Time distributions can also be characterized in other ways, for example by properties related to the traffic. We consider some important characteristics in the following sections.
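These parameters are easy to estimate from observations. The sketch below, an illustration only, assumes exponentially distributed samples (so the true values are m_1 = 1/λ, CV = 1, form factor ε = 2, median = m_1 · ln 2) and estimates the characteristics above empirically with the Python standard library:

```python
import math
import random

# Empirical characteristics of a sample, assuming an exponential
# distribution with rate lam (illustrative parameters).
random.seed(1)
lam = 0.5
n = 200_000
samples = [random.expovariate(lam) for _ in range(n)]

m1 = sum(samples) / n                    # mean value (2.10)
m2 = sum(t * t for t in samples) / n     # second non-central moment
var = m2 - m1 ** 2                       # variance (2.11)
cv = math.sqrt(var) / m1                 # coefficient of variation (2.12)
eps = m2 / m1 ** 2                       # Palm's form factor (2.13)

samples.sort()
median = samples[n // 2]                 # 50% percentile
```

With these samples, m1 is close to 2, cv close to 1, eps close to 2, and the median close to 2 · ln 2 ≈ 1.386, in agreement with the formulas above.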
Example 2.2.1: Exponential distribution
The following integral is very useful:

    ∫ t · e^{−λt} dt = − (e^{−λt} / λ²) (λt + 1) .   (2.14)

For the exponential distribution (Sec. 2.1.1) we find:

    m_1 = ∫_0^∞ t · λ e^{−λt} dt = 1/λ ,

    m_2 = ∫_0^∞ t² · λ e^{−λt} dt = ∫_0^∞ 2 t · e^{−λt} dt = 2/λ² ,
where the last equation is obtained using (2.6). The gamma function is defined by:
    Γ(n + 1) = ∫_0^∞ t^n e^{−t} dt = n !   (2.15)
If we replace t by λt, then we get the i'th moment (2.5) of the exponential distribution:

    i'th moment:     m_i = i ! / λ^i ,
    Mean value:      m_1 = 1/λ ,
    Second moment:   m_2 = 2/λ² ,
    Variance:        σ² = 1/λ² ,
    Form factor:     ε = 2 .   (2.16)
2
Example 2.2.2: Constant time interval
For a constant time interval of duration h we have: m_i = h^i . 2
2.2.2 Residual life-time
If an event b has occurred, i.e. p(b) > 0, then the probability that the event a also occurs is given by the conditional probability of a given b, denoted by p(a | b). Denoting the joint probability that both a and b take place by p(a ∩ b), we have:

    p(a ∩ b) = p(b) · p(a | b) = p(a) · p(b | a) .

Under the assumption that p(b) > 0, we thus have:

    p(a | b) = p(a ∩ b) / p(b) .   (2.17)
For time distributions we are interested in F(x + t | x), the distribution of the residual life-time t, given that a certain age x ≥ 0 has already been attained. The random variable of the total life-time is T.
Assuming P{T > x} > 0 and t ≥ 0 we get:

    P{T > x + t | T > x} = P{(T > x + t) ∧ (T > x)} / P{T > x}

                         = P{T > x + t} / P{T > x}

                         = {1 − F(x + t)} / {1 − F(x)} ,
and thus:

    F(x + t | x) = P{T ≤ x + t | T > x}

                 = {F(x + t) − F(x)} / {1 − F(x)} ,   (2.18)

    f(t + x | x) = f(x + t) / {1 − F(x)} ,   t ≥ 0 , x ≥ 0 .   (2.19)
Fig. 2.2 illustrates these calculations graphically.
The mean value m_{1,r}(x) of the residual life-time can be written as (cf. (2.7)):

    m_{1,r}(x) = {1 / (1 − F(x))} · ∫_{t=0}^∞ {1 − F(x + t)} dt ,   x ≥ 0 .   (2.20)
The death rate at time x, i.e. the probability that the considered life-time terminates within the interval (x, x + dx), under the condition that age x has been achieved, is obtained from (2.18) by letting t = dx:
    µ(x) · dx = {F(x + dx) − F(x)} / {1 − F(x)}

              = dF(x) / {1 − F(x)}

              = f(x) dx / {1 − F(x)} .   (2.21)
The conditional density function µ(x) is also called the hazard function. If this function is given, then F(x) may be obtained as the solution to the following differential equation:

    dF(x)/dx + µ(x) · F(x) = µ(x) .   (2.22)
Assuming F(0) = 0 we get the solution:

    F(t) = 1 − exp{ − ∫_0^t µ(u) du } ,   (2.23)

    f(t) = µ(t) · exp{ − ∫_0^t µ(u) du } .   (2.24)
The death rate µ(t) is constant if and only if the life-time is exponentially distributed. This is a fundamental characteristic of the exponential distribution, called the Markovian property (lack of memory or age): the probability of terminating at time t is independent of the actual age t.
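As a numerical illustration of (2.23), the sketch below (with assumed parameters and a simple trapezoidal rule) recovers F(t) from a given hazard function; a constant hazard reproduces the exponential cdf, in line with the Markovian property, while an increasing hazard does not:

```python
import math

# Recover F(t) from a hazard function mu(t) via (2.23),
# F(t) = 1 - exp(-int_0^t mu(u) du), using the trapezoidal rule.
def cdf_from_hazard(mu, t, steps=10_000):
    h = t / steps
    integral = 0.0
    for i in range(steps):
        integral += 0.5 * (mu(i * h) + mu((i + 1) * h)) * h
    return 1.0 - math.exp(-integral)

lam = 0.8                                        # illustrative rate
F_const = cdf_from_hazard(lambda u: lam, 2.0)    # constant death rate
F_exact = 1.0 - math.exp(-lam * 2.0)             # exponential cdf

# An increasing hazard mu(u) = u gives F(t) = 1 - exp(-t^2/2) instead:
F_incr = cdf_from_hazard(lambda u: u, 2.0)
```

Here F_const agrees with the exponential cdf F_exact, whereas F_incr equals 1 − e^{−t²/2}, a non-exponential (steep) distribution.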
[Figure: left panel shows the density f(t); right panel shows the conditional density f(t + 3 | 3).]

Figure 2.2: The density function of the residual life-time conditioned by a given age x (2.19). The example is based on a Weibull distribution We(2,5) (2.98), where x = 3 and F(3) = 0.3023.
One would expect that the mean residual life-time m_{1,r}(x) decreases for increasing x, so that the expected residual life-time decreases when the age x increases. Often it is not so. For an exponential distribution with form factor ε = 2 we have m_{1,r} = m_1. For steep distributions (1 ≤ ε < 2) we have m_{1,r} < m_1 (Sec. 2.3.1), whereas for flat distributions (2 < ε < ∞) we have m_{1,r} > m_1 (Sec. 2.3.2).
Example 2.2.3: Exponential distribution
We assume the duration of telephone calls is exponentially distributed. The distribution of the residual time is then independent of the actual duration of the conversation, and it is equal to the distribution of the total life-time (2.19):
    f(t + x | x) = λ e^{−λ(t+x)} / e^{−λx} = λ e^{−λt} = f(t) .
If we remove the probability mass of the interval (0, x) from the density function and normalize the residual mass in (x, ∞) to unity, then the new density function becomes congruent with the original density function. The only continuous distribution function having this property is the exponential distribution, whereas the geometric distribution is the only discrete distribution having this property. Therefore, the mean value of the residual life-time is m_{1,r} = m_1, and the probability of observing a life-time in the interval (t, t + dt), given that it occurs after t, is given by (2.21):
    P{t < X ≤ t + dt | X > t} = f(t) dt / {1 − F(t)} = λ dt .   (2.25)
Thus it depends only upon λ and dt, and it is independent of the actual age t. An example where this property does not hold is shown in Fig. 2.2 for the Weibull distribution (2.98) with k ≠ 1. For k = 1 the Weibull distribution becomes identical with the exponential distribution. 2
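The lack-of-memory property can also be checked empirically. The sketch below, with illustrative parameters, compares the mean residual life-time beyond a given age with the overall mean: for exponential samples the two agree, whereas for a Weibull distribution with shape k > 1 (increasing death rate) the residual mean is smaller:

```python
import random

# Empirical check of the lack-of-memory property (illustrative only).
random.seed(2)
n = 400_000
exp_samples = [random.expovariate(1.0) for _ in range(n)]

m1 = sum(exp_samples) / n
residuals = [t - 3.0 for t in exp_samples if t > 3.0]
m1_res = sum(residuals) / len(residuals)        # close to m1

# Weibull with scale 1, shape 2: increasing hazard, so m1_r(x) < m1
weib = [random.weibullvariate(1.0, 2.0) for _ in range(n)]
w_m1 = sum(weib) / n
w_residuals = [t - 0.5 for t in weib if t > 0.5]
w_m1_res = sum(w_residuals) / len(w_residuals)  # smaller than w_m1
```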
Example 2.2.4: Waiting-time distribution
Let us consider a queueing system with an infinite queue where no customers are blocked. The waiting-time distribution Ws(t) for a random customer usually has a positive probability mass (atom) at t = 0, because some of the customers are served immediately without any delay. We thus have Ws(0) > 0. The waiting-time distribution W+(t) for customers having positive waiting times then becomes (2.18):
    W(t | t > 0) = W+(t) = {Ws(t) − Ws(0)} / {1 − Ws(0)} ,
or if we denote the probability of a positive waiting time by D = 1−Ws(0) (probability of delay):
    D · {1 − W+(t)} = 1 − Ws(t) .   (2.26)
For the probability density function (pdf) we have (2.19):
D · w+(t) = ws(t) . (2.27)
For mean values we get:
D · w = W , (2.28)
where the mean waiting time for all customers is denoted by W, and the mean waiting time for the delayed customers is denoted by w. These formulæ are valid for any queueing system with an infinite queue. 2
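The relation (2.28) can be illustrated on synthetic waiting times with an atom at t = 0. The sketch below assumes, for illustration only, that a fraction D of the customers is delayed and that a delayed customer's waiting time is exponential:

```python
import random

# Synthetic waiting times with an atom at t = 0, checking D * w = W (2.28).
random.seed(3)
D, mean_pos, n = 0.4, 2.5, 200_000       # assumed delay probability and mean
waits = [random.expovariate(1.0 / mean_pos) if random.random() < D else 0.0
         for _ in range(n)]

W = sum(waits) / n                       # mean waiting time, all customers
delayed = [t for t in waits if t > 0.0]
w = sum(delayed) / len(delayed)          # mean waiting time, delayed customers
D_hat = len(delayed) / n                 # empirical probability of delay
# relation (2.28): D_hat * w equals W
```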
2.2.3 Load from holding times of duration less than x
So far we have attached the same importance to all life-times independently of their duration. The importance of a life-time is often proportional to its duration, for example when we consider the load of a queueing system, charging of CPU times, telephone conversations, etc.

If we allocate to a life-time a weight factor proportional to its duration, then the average weight of all time intervals is equal to the mean value:
    m_1 = ∫_0^∞ t · f(t) dt ,   (2.29)

where f(t) dt is the probability of an observation within the interval (t, t + dt), and t is the weight of this observation.
We are interested in calculating the proportion of the mean value which is due to contributions from life-times of duration less than x:

    ρ_x = { ∫_0^x t · f(t) dt } / m_1 .   (2.30)
Often relatively few service times make up a relatively large proportion of the total load. From Fig. 2.3 we see that if the form factor ε is 5, then 75% of the service times contribute only 30% of the total load (Vilfredo Pareto's rule). This fact can be utilized to give priority to short tasks without delaying long tasks very much (Chap. 10).
Example 2.2.5: Exponential distribution
For exponentially distributed jobs with mean value m_1 = 1/λ we find the relative load from jobs of duration t ≤ x from (2.30), using (2.14):

    ρ_x = λ · ∫_0^x t · f(t) dt

        = ∫_0^x λ t · λ e^{−λt} dt

        = 1 − e^{−λx} (λx + 1) .   (2.31)

This result is used later when we look at the shortest-job-first queueing discipline (Sec. 10.6.4). 2
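A quick Monte Carlo sketch, with illustrative parameters, confirms the closed form (2.31) for the load carried by short exponential jobs:

```python
import math
import random

# Fraction of the total load carried by exponential jobs of duration <= x,
# compared with the closed form (2.31).  lam and x are illustrative.
random.seed(4)
lam, x, n = 1.0, 1.0, 500_000
jobs = [random.expovariate(lam) for _ in range(n)]

rho_sim = sum(t for t in jobs if t <= x) / sum(jobs)
rho_exact = 1.0 - math.exp(-lam * x) * (lam * x + 1.0)   # (2.31)
```

For λx = 1 the exact value is 1 − 2 e^{−1} ≈ 0.264: jobs shorter than the mean carry only about a quarter of the load.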
2.2.4 Forward recurrence time
The residual life-time from a random point of time is called the forward recurrence time. In this section we shall derive some formulæ of importance for applications. To formulate the
[Figure: relative load (y-axis) versus percentile/100 (x-axis), with curves for ε = 2 and ε = 5.]

Figure 2.3: Example of the relative traffic load from holding times shorter than a given value, given by the percentile of the holding-time distribution (2.30). Here ε = 2 corresponds to an exponential distribution and ε = 5 corresponds to a Pareto distribution. We note that the 10% largest holding times contribute 33%, respectively 47%, of the load (cf. customer averages and time averages in Chap. 3).
problem we consider an example. We wish to investigate the life-time distribution of cars and ask car-owners chosen at random about the age of their car. As the point of time is chosen at random, the probability of choosing a certain car is proportional to the total life-time of that car. The distribution of the remaining residual life-time will be identical with the already attained life-time.
By sampling in this way, the probability of choosing a car is proportional to the life-time of that car, i.e. we will preferably choose cars with longer life-times (length-biased sampling). The probability of choosing a car having a total life-time x is given by (cf. the derivation of (2.30)):

    x · f(x) dx / m_1 .
As we consider a random point of time, the distribution of the remaining life-time will be uniformly distributed in (0, x]:

    f(t | x) = 1/x ,   0 < t ≤ x .
The probability density function (pdf) of the remaining life-time at a random point of time becomes:

    v(t) = ∫_t^∞ (1/x) · {x · f(x) / m_1} dx ,

    v(t) = {1 − F(t)} / m_1 ,   (2.32)
where F(t) is the distribution function of the total life-time and m_1 is the mean value. By applying the identity (2.6), we note that the i'th moment of v(t) is given by the (i + 1)'th moment of f(t):
    m_{i,v} = ∫_0^∞ t^i · v(t) dt

            = ∫_0^∞ t^i · {1 − F(t)} / m_1 dt

            = {1/(i + 1)} · (1/m_1) · ∫_0^∞ (i + 1) t^i · {1 − F(t)} dt ,

    m_{i,v} = {1/(i + 1)} · (1/m_1) · m_{i+1,f} .   (2.33)
In particular, we obtain the mean value:

    m_{1,v} = (m_1 / 2) · ε ,   (2.34)

where m_1 is the mean value and ε the form factor of the life-time distribution considered. These formulæ are also valid for discrete time distributions.
Example 2.2.6: Exponential distribution
For the exponential distribution we get from (2.33) and (2.16):

    m_{i,v} = {1/(i + 1)} · (1/m_1) · (i + 1)! / λ^{i+1} = i ! / λ^i = m_i .
In particular, we have m_{1,v} = m_1. The mean remaining life-time from a random point of time is equal to the mean value of the life-time distribution, because the exponential distribution is without memory. Furthermore, the mean value of the attained life-time is also m_1, as we choose a random point of time. Thus the mean value of the total life-time becomes 2 m_1. 2
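The car example above can be simulated directly. The sketch below, with illustrative sample sizes, performs length-biased sampling on exponential life-times: the mean total life-time of a chosen item approaches 2 m_1, and the mean forward recurrence time approaches m_1 · ε/2 = m_1 as in (2.34), since ε = 2:

```python
import random

# Length-biased sampling of exponential life-times (illustrative only).
random.seed(5)
n = 300_000
lifetimes = [random.expovariate(1.0) for _ in range(n)]
m1 = sum(lifetimes) / n

# choose life-times with probability proportional to their duration
picked = random.choices(lifetimes, weights=lifetimes, k=20_000)
m_total_biased = sum(picked) / len(picked)      # close to 2 * m1

# given total life-time x, the forward recurrence time is uniform on (0, x]
m_forward = sum(random.random() * t for t in picked) / len(picked)  # ~ m1
```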
2.2.5 Distribution of the j’th largest of k random variables
Let us assume that k random variables T1, T2, . . . , Tk are independent and identically distributed with distribution function F(t). The distribution of the j'th largest variable will be given by:
    P{j'th largest ≤ t} = Σ_{i=0}^{j−1} (k choose i) · {1 − F(t)}^i · F(t)^{k−i}   (2.35)

                        = 1 − Σ_{i=j}^{k} (k choose i) · {1 − F(t)}^i · F(t)^{k−i} ,
since at most j − 1 variables may be larger than t (but possibly all of them are less than t). The right-hand side is obtained using the binomial theorem:
    (a + b)^n = Σ_{i=0}^{n} (n choose i) · a^i · b^{n−i} .   (2.36)
The smallest one (or k'th largest, j = k) has the distribution function:

    F_min(t) = 1 − {1 − F(t)}^k ,   (2.37)

and the largest one (j = 1) has the distribution function:

    F_max(t) = F(t)^k .   (2.38)
If the random variables have individual distribution functions F_i(t), we get an expression more complex than (2.35). For the smallest and the largest we get:

    F_min(t) = 1 − Π_{i=1}^{k} {1 − F_i(t)} ,   (2.39)

    F_max(t) = Π_{i=1}^{k} F_i(t) .   (2.40)
Example 2.2.7: Minimum of N exponentially distributed random variablesWe assume that two random variables T1 and T2 are mutually independent and exponentially dis-tributed with intensities λ1 and λ2, respectively. A new random variable T is defined as:
T = min T1, T2 .
The distribution function of T is (2.37):
pT ≤ t = 1− e−(λ1+λ2)t . (2.41)
Thus this distribution function is also an exponential distribution with intensity (λ1 + λ2).
Given that the first (smallest) event happens within the time interval (t, t + dt), the probability that the random variable T1 is realized first (i.e. takes place in this interval while the other takes place later) is given by:
    P{T1 < T2 | t} = P{t < T1 ≤ t + dt} · P{T2 > t} / P{t < T ≤ t + dt}

                   = λ1 e^{−λ1 t} dt · e^{−λ2 t} / {(λ1 + λ2) e^{−(λ1+λ2) t} dt}

                   = λ1 / (λ1 + λ2) ,   (2.42)
i.e. independent of t. These results can easily be generalized to N variables and make up the basic principle of the simulation technique called the roulette method, a Monte Carlo simulation methodology. 2
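Both results of Example 2.2.7 are easy to verify by simulation; the sketch below uses illustrative intensities:

```python
import random

# min(T1, T2) of independent exponentials is exponential with intensity
# lam1 + lam2, and T1 wins with probability lam1/(lam1 + lam2) (2.42).
random.seed(6)
lam1, lam2, n = 1.0, 3.0, 300_000
wins1, total_min = 0, 0.0
for _ in range(n):
    t1 = random.expovariate(lam1)
    t2 = random.expovariate(lam2)
    total_min += min(t1, t2)
    if t1 < t2:
        wins1 += 1

mean_min = total_min / n        # approaches 1/(lam1 + lam2) = 0.25
p1 = wins1 / n                  # approaches lam1/(lam1 + lam2) = 0.25
```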
2.3 Combination of random variables
Combining exponentially distributed time intervals in series, we get a class of distributions called Erlang distributions (Sec. 2.3.1). Combining them in parallel, we obtain hyper-exponential distributions (Sec. 2.3.2). Combining exponential distributions both in series and in parallel, possibly with feedback, we obtain phase-type distributions, which form a very general class of distributions. One important sub-class of phase-type distributions is the class of Cox distributions (Sec. 2.3.3). We note that an arbitrary distribution can be expressed by a Cox distribution, which can be used in analytical models in a relatively simple way.
2.3.1 Random variables in series
Linking k independent time intervals in series corresponds to adding k independent random variables, i.e. to convolution of the random variables.

If we denote the mean value and the variance of the i'th time interval by m_{1,i} and σ_i², respectively, then the sum of the random variables has the following mean value and variance:
    m_1 = Σ_{i=1}^{k} m_{1,i} ,   (2.43)

    σ² = Σ_{i=1}^{k} σ_i² .   (2.44)
In general, we should add the so-called cumulants; the first three cumulants are identical with the first three central moments.
The density function f(t) of the sum is obtained by the convolution:

    f(t) = f1(t) ⊗ f2(t) ⊗ · · · ⊗ fk(t) ,

where ⊗ is the convolution operator:

    f12(t) = f1(t) ⊗ f2(t) = ∫_0^t f1(x) · f2(t − x) dx .   (2.45)
Example 2.3.1: Non-homogeneous Erlang-2 distribution
We consider two exponentially distributed independent time intervals T1 and T2 with intensities λ1 and λ2 ≠ λ1, respectively. The sum T12 = T1 + T2 is a random variable whose probability density function is obtained by convolution:

    f12(t) = ∫_0^t f1(x) · f2(t − x) dx

           = ∫_0^t λ1 e^{−λ1 x} · λ2 e^{−λ2 (t−x)} dx

           = λ1 λ2 · e^{−λ2 t} ∫_0^t e^{−(λ1−λ2) x} dx

           = {λ1 λ2 / (λ1 − λ2)} · e^{−λ2 t} ∫_0^t (λ1 − λ2) · e^{−(λ1−λ2) x} dx

           = {λ1 λ2 / (λ1 − λ2)} · e^{−λ2 t} · (1 − e^{−(λ1−λ2) t}) ,

    f12(t) = {λ1 λ2 / (λ1 − λ2)} · e^{−λ2 t} − {λ1 λ2 / (λ1 − λ2)} · e^{−λ1 t} ,   λ1 ≠ λ2 .
For the case λ1 = λ2 we get an Erlang-2 distribution considered in the following. 2
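The closed-form convolution result of Example 2.3.1 can be cross-checked against a simulated sum; the rates, evaluation point and bin width below are illustrative:

```python
import math
import random

# Density of T1 + T2 for independent exponentials with distinct rates,
# compared with a simulated histogram estimate at a single point t0.
def f12(t, lam1, lam2):
    c = lam1 * lam2 / (lam1 - lam2)
    return c * (math.exp(-lam2 * t) - math.exp(-lam1 * t))

random.seed(7)
lam1, lam2, n = 2.0, 0.5, 400_000
sums = [random.expovariate(lam1) + random.expovariate(lam2) for _ in range(n)]

t0, h = 1.0, 0.05
emp = sum(1 for s in sums if t0 - h / 2 < s <= t0 + h / 2) / (n * h)
exact = f12(t0, lam1, lam2)     # emp should be close to exact
```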
Hypo-exponential or steep distributions
Steep distributions are also called hypo-exponential distributions or generalized Erlang distributions. They have a form factor in the interval 1 < ε ≤ 2. Such a distribution is obtained by convolving k exponential distributions (Fig. 2.4).
Figure 2.4: By combining k exponential distributions in series we get a steep distribution with form factor ε ≤ 2. If all k distributions are identical (λ_i = λ), then we get an Erlang-k distribution.
Erlang-k distributions
We consider the case where all k exponential distributions are identical. The distribution obtained, f_k(t), is called the Erlang-k distribution, as it was widely used by A.K. Erlang.

For k = 1 we of course get the exponential distribution. The distribution f_k(t), k > 1, is obtained by convolving f_{k−1}(t) and f_1(t). If we assume that the expression (2.46) is valid for f_{k−1}(t), then we have by convolution:
    f_k(t) = ∫_0^t f_{k−1}(t − x) f_1(x) dx

           = ∫_0^t {(λ(t − x))^{k−2} / (k − 2)!} · λ e^{−λ(t−x)} · λ e^{−λx} dx

           = {λ^k / (k − 2)!} · e^{−λt} ∫_0^t (t − x)^{k−2} dx ,

    f_k(t) = {(λt)^{k−1} / (k − 1)!} · λ · e^{−λt} ,   λ > 0 ,  t > 0 ,  k = 1, 2, . . .   (2.46)
As the expression is valid for k = 1, we have by induction shown that it is valid for any k. The Erlang-k distribution is, from a statistical point of view, a special gamma distribution.
The cdf (cumulative distribution function) is obtained by repeated partial integration, or as shown in a simple way later (3.21):

    F_k(t) = Σ_{j=k}^{∞} {(λt)^j / j!} · e^{−λt} = 1 − Σ_{j=0}^{k−1} {(λt)^j / j!} · e^{−λt} .   (2.47)
The following moments can be found by using (2.43) and (2.44):

    m_1 = k/λ ,   (2.48)

    σ² = k/λ² ,   (2.49)

    ε = 1 + σ²/m_1² = 1 + 1/k .   (2.50)
[Figure: density functions f(t) versus t for k = 1, 2, 5, 50.]

Figure 2.5: Erlang-k distributions with mean value equal to one (density functions). The case k = 1 corresponds to an exponential distribution.
The i'th non-central moment is:

    m_i = {(i + k − 1)! / (k − 1)!} · (1/λ)^i .   (2.51)

In particular, we have:

    m_2 = k (k + 1) / λ² .   (2.52)
The mean residual life-time m_{1,r}(x) for x ≥ 0 will not exceed the mean value:

    m_{1,r}(x) ≤ m_1 ,   x ≥ 0 .
Using this distribution we have two parameters (λ, k) available to be estimated from observations. The mean value is often kept fixed. To study the influence of the parameter k, we normalize all Erlang-k distributions to the same mean value as the Erlang-1 distribution, i.e. the exponential distribution with mean value m_1 = 1/λ, by replacing t by k t or λ by k λ:
    f_k(t) dt = {(kλt)^{k−1} / (k − 1)!} · e^{−kλt} kλ dt ,   (2.53)

    m_1 = 1/λ ,   (2.54)

    σ² = 1/(k λ²) ,   (2.55)

    ε = 1 + 1/k .   (2.56)
Notice that the form factor is independent of the time scale. The density function (2.53) is illustrated in Fig. 2.5 for different values of k with λ = 1. The case k = 1 corresponds to the exponential distribution. When k → ∞ we get a constant time interval (ε = 1). By solving f′(t) = 0 we find the maximum value at:

    λt = (k − 1)/k .   (2.57)
Steep distributions are so named because their distribution functions increase more quickly from 0 to 1 than that of the exponential distribution.
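The Erlang-k construction as a sum of k identical exponential phases can be simulated directly; the sketch below, with illustrative k and λ, checks the moments (2.48)-(2.50):

```python
import random

# Erlang-k as a sum of k identical exponential phases, checking
# mean k/lam, variance k/lam^2 and form factor 1 + 1/k.
random.seed(8)
k, lam, n = 4, 2.0, 200_000
erlang = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

m1 = sum(erlang) / n                # k/lam = 2.0
m2 = sum(t * t for t in erlang) / n
var = m2 - m1 ** 2                  # k/lam^2 = 1.0
eps = m2 / m1 ** 2                  # 1 + 1/k = 1.25
```

Note that ε = 1.25 lies in the steep interval 1 < ε ≤ 2, as expected for a hypo-exponential distribution.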
2.3.2 Random variables in parallel
We combine ℓ independent time intervals (random variables) by choosing the i'th time interval with probability (weight factor) p_i,

    Σ_{i=1}^{ℓ} p_i = 1 .
The random variable of the weighted sum is said to have a compound distribution. The j'th (non-central) moment is obtained by weighting the (non-central) moments of the random variables:

    m_j = Σ_{i=1}^{ℓ} p_i · m_{j,i} ,   (2.58)
where m_{j,i} is the j'th (non-central) moment of the distribution of the i'th interval. The mean value becomes:

    m_1 = Σ_{i=1}^{ℓ} p_i · m_{1,i} .   (2.59)
The second moment is:

    m_2 = Σ_{i=1}^{ℓ} p_i · m_{2,i} ,
[Figure: ℓ exponential distributions with intensities λ1, λ2, . . . , λk combined in parallel; branch i is chosen with probability p_i.]
..................................................................... ..........................
..................................................................... ..........................
Figure 2.6: By combining k exponential distributions in parallel and choosing branch number i with probability p_i, we get a hyper-exponential distribution, which is a flat distribution (ε ≥ 2).
and from this we get the variance:
σ^2 = m_2 − m_1^2 = ∑_{i=1}^{ℓ} p_i · (σ_i^2 + m_{1,i}^2) − m_1^2 ,    (2.60)
where σ_i^2 is the variance of the i'th distribution.
The distribution function is as follows:
F(t) = ∑_{i=1}^{ℓ} p_i · F_i(t) .    (2.61)
A similar formula is valid for the density function:
f(t) = ∑_{i=1}^{ℓ} p_i · f_i(t) .
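The moment relations (2.58)–(2.61) translate directly into code. The sketch below (our own helper, assuming the per-branch moments are known) computes the compound mean and variance and is checked on a two-branch exponential mixture:

```python
def compound_moments(p, m1, m2):
    """Moments of a compound (parallel) distribution.
    p: branch probabilities; m1, m2: per-branch first and second moments.
    Returns (mean, second moment, variance) via (2.59) and (2.60)."""
    assert abs(sum(p) - 1.0) < 1e-12
    mean = sum(pi * m for pi, m in zip(p, m1))       # (2.59)
    second = sum(pi * m for pi, m in zip(p, m2))     # weighted second moments
    return mean, second, second - mean ** 2          # variance, cf. (2.60)

# Two exponential branches with rates 1 and 3 (m1 = 1/lam, m2 = 2/lam**2):
mean, second, var = compound_moments([0.5, 0.5], [1.0, 1/3], [2.0, 2/9])
```

For this mixture the form factor is ε = m_2/m_1^2 = (10/9)/(4/9) = 2.5, consistent with the flat-distribution property ε ≥ 2 derived below.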
Hyper-exponential or flat distributions
The general distribution function is in this case a weighted sum of exponential distributions (compound distribution) with a form factor ε ≥ 2:
F(t) = ∫_0^∞ (1 − e^{−λt}) dW(λ) ,   λ > 0 , t ≥ 0 ,    (2.62)
where the weight function may be discrete or continuous (Stieltjes integral). This distribution class corresponds to a parallel combination of the exponential distributions (Fig. 2.6). The density function is called completely monotone due to the alternating signs (Palm, 1957 [95]):
(−1)^ν · f^{(ν)}(t) ≥ 0 .    (2.63)
The mean residual life-time m_{1,r}(x) for all x ≥ 0 is larger than the mean value:

m_{1,r}(x) ≥ m_1 ,   x ≥ 0 .    (2.64)
Hyper-exponential distribution
In this case W(λ) is discrete. Suppose we have exponential distributions with the following intensities:
λ1, λ2, . . . , λk ,
and that W (λ) has the positive increments:
p1, p2, . . . , pk ,
where

∑_{i=1}^{k} p_i = 1 .    (2.65)
For all other values W (λ) is constant. In this case (2.62) becomes:
F(t) = 1 − ∑_{i=1}^{k} p_i · e^{−λ_i t} ,   t ≥ 0 .    (2.66)
The mean value and form factor are obtained from (2.59) and (2.60) (σ_i = m_{1,i} = 1/λ_i):
m_1 = ∑_{i=1}^{k} p_i/λ_i ,    (2.67)

ε = 2 ∑_{i=1}^{k} (p_i/λ_i^2) / ( ∑_{i=1}^{k} p_i/λ_i )^2 ≥ 2 .    (2.68)
If k = 1 or all λi are equal, then we get an exponential distribution.
The distribution is called flat because its distribution function increases more slowly from 0 to 1 than that of the exponential distribution.
It is difficult to estimate more than one or two parameters (typically mean and variance) from real observations. The most common case in practice is n = 2 (p_1 = p, p_2 = 1 − p):
F(t) = 1 − p · e^{−λ_1 t} − (1 − p) · e^{−λ_2 t} .    (2.69)
Statistical problems arise even when we have to estimate three parameters. So for practical applications we usually choose λ_i = 2λp_i and thus reduce the number of parameters to only two:
F(t) = 1 − p · e^{−2λpt} − (1 − p) · e^{−2λ(1−p)t} .    (2.70)
The mean value and form factor (assuming 0 < p < 1) become:

m_1 = 1/λ ,

ε = 1/(2p(1 − p)) ≥ 2 .    (2.71)
For this choice of parameters the two branches contribute equally to the mean value. Fig. 2.7 illustrates an example.
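The one-parameter balanced hyper-exponential (2.70)–(2.71) can be sketched as follows (function names are ours); the mean is verified numerically by integrating the complementary distribution function:

```python
import math

def balanced_h2_sf(t, lam, p):
    """Complementary distribution 1 - F(t) of (2.70): branch rates 2*lam*p and
    2*lam*(1-p), so each branch contributes 1/(2*lam) to the mean."""
    return p * math.exp(-2 * lam * p * t) \
        + (1 - p) * math.exp(-2 * lam * (1 - p) * t)

def balanced_h2_stats(lam, p):
    """Mean and form factor from (2.71)."""
    return 1.0 / lam, 1.0 / (2.0 * p * (1.0 - p))
```

The mean ∫_0^∞ (1 − F(t)) dt equals 1/λ for every p, while the form factor grows without bound as p approaches 0 or 1.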
Figure 2.7: Probability density function of holding times observed on lines in a local exchange during busy hours (57 055 observations, mean value m = 171.85 s, form factor 3.30; number of observations on a logarithmic scale versus time).
Pareto distribution and Palm’s normal forms
W(λ) can also be a continuous distribution; this case was considered by Conny Palm (1943 [92]). In the most important case W(λ) is chosen to be gamma distributed, with mean value and form factor as follows:
m_1 = 1/λ ,

ε = 1 + η_0/λ .
This corresponds to λ = 1/η_0 and k = λ/η_0 in (2.53). We then get:
dW(x) = (1/η_0) · (x/η_0)^{λ/η_0 − 1} / Γ(λ/η_0) · e^{−x/η_0} · dx ,    (2.72)
and from (2.62) it can be shown that:
F(t) = 1 − (1 + η_0 t)^{−(1 + λ/η_0)} .    (2.73)
This distribution is called the Pareto distribution. With the above choice of parameters the mean value and form factor of F(t) become:
m_1 = 1/λ ,

ε = 2λ/(λ − η_0) ,   0 < η_0 < λ .    (2.74)
Note that the variance does not exist for λ ≤ η_0, and the distribution is called heavy-tailed (Sec. 2.4). This model is called Palm's first normal form, which has only two parameters (λ, η_0). As a special case, letting η_0 → 0, the gamma distribution (2.72) becomes a constant and (2.73) becomes an exponential distribution.
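The Pareto form (2.73)–(2.74) is easy to verify numerically. The sketch below (function names are ours) also illustrates the limit η_0 → 0 recovering the exponential distribution:

```python
import math

def pareto_sf(t, lam, eta0):
    """Complementary distribution 1 - F(t) from (2.73)."""
    return (1.0 + eta0 * t) ** (-(1.0 + lam / eta0))

def pareto_stats(lam, eta0):
    """Mean and form factor from (2.74); requires 0 < eta0 < lam."""
    assert 0.0 < eta0 < lam
    return 1.0 / lam, 2.0 * lam / (lam - eta0)
```

For λ = 1, η_0 = 0.5 the form factor is 4, i.e. twice that of an exponential distribution with the same mean.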
By weighting once again with a gamma distribution (parameter κ), the result is a time distribution with three parameters, which is called Palm's second normal form:
F(t) = 1 − 1/(1 + η_0 t) · [ 1 + κ (λ/η_0) ln(1 + η_0 t) ]^{−(1 + 1/κ)} ,   η_0 > 0 , κ > 0 , t ≥ 0 .    (2.75)
The Pareto distribution (2.73) is obtained from (2.75) by letting κ → 0 or η_0 → 0. If both κ and η_0 tend to zero, we get the exponential distribution. We return to the normal forms in Sec. 3.6.
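The limit κ → 0 can be checked numerically against the Pareto form (2.73). The sketch below (function names are ours) evaluates both complementary distribution functions as given in (2.73) and (2.75):

```python
import math

def palm2_sf(t, lam, eta0, kappa):
    """Complementary distribution of Palm's second normal form (2.75)."""
    base = 1.0 + kappa * (lam / eta0) * math.log(1.0 + eta0 * t)
    return base ** (-(1.0 + 1.0 / kappa)) / (1.0 + eta0 * t)

def palm1_sf(t, lam, eta0):
    """Complementary distribution of Palm's first normal form (2.73)."""
    return (1.0 + eta0 * t) ** (-(1.0 + lam / eta0))
```

For small κ the two survival functions agree to high accuracy, as the limit argument predicts.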
2.3.3 Random variables in series and parallel
By combining exponential random variables in both series and parallel we get an almost general class of distributions. By weak convergence it can be shown that in this way we can approximate any distribution function with any degree of accuracy. For the derivations it is useful first to consider the concept of a stochastic sum (random sum).
Stochastic sum
By a stochastic sum we understand the sum of a stochastic number of random variables (Feller, 1950 [32]). Let us consider a trunk group without congestion, where the arrival process and the holding times are stochastically independent. If we consider a fixed time interval t, then the number of arrivals is a random variable N. In the following N is characterized by:
N:  density function p(i) = P{N = i} ,  i = 0, 1, 2, . . . ,
    mean value m_{1,n} ,    (2.76)
    variance σ_n^2 .
Arriving call number i has the holding time T_i. All T_i have the same distribution, and each arrival (request) will contribute a certain number of time units (the holding time), which is a random variable characterized by:
T:  density function f(t) dt = P{t < T ≤ t + dt} ,  t ≥ 0 ,
    mean value m_{1,t} ,    (2.77)
    variance σ_t^2 .
The total traffic volume generated by all arrivals (requests) arriving within the considered time interval is then itself a random variable:
S_T = T_1 + T_2 + · · · + T_N .    (2.78)
In the following we assume that T_i and N are stochastically independent. This is fulfilled when the congestion is zero.
The following derivations are valid for both discrete and continuous random variables (summation is replaced by integration or vice versa). The stochastic sum becomes a combination of random variables in series and parallel, as shown in Fig. 2.8 and dealt with in Sec. 2.3. For
Figure 2.8: A stochastic sum may be interpreted as a series/parallel combination of random variables.
a given branch i we find (Fig. 2.8):
m_{1,i} = i · m_{1,t} ,    (2.79)

σ_i^2 = i · σ_t^2 ,    (2.80)

m_{2,i} = i · σ_t^2 + (i · m_{1,t})^2 .    (2.81)
By summation over all possible values (branches) i we get:
m_{1,s} = ∑_{i=1}^{∞} p(i) · m_{1,i}
        = ∑_{i=1}^{∞} p(i) · i · m_{1,t} ,

m_{1,s} = m_{1,t} · m_{1,n} ,    (2.82)
m_{2,s} = ∑_{i=1}^{∞} p(i) · m_{2,i}
        = ∑_{i=1}^{∞} p(i) · { i · σ_t^2 + (i · m_{1,t})^2 } ,

m_{2,s} = m_{1,n} · σ_t^2 + m_{1,t}^2 · m_{2,n} ,    (2.83)

σ_s^2 = m_{1,n} · σ_t^2 + m_{1,t}^2 · m_{2,n} − (m_{1,t} · m_{1,n})^2 ,

σ_s^2 = m_{1,n} · σ_t^2 + m_{1,t}^2 · σ_n^2 .    (2.84)
We notice there are two contributions to the total variance: one term because the number of calls is a random variable (σ_n^2), and one term because the duration of the calls is a random variable (σ_t^2).
Example 2.3.2: Special case 1: N = n = constant (m_{1,n} = n, σ_n^2 = 0)

m_{1,s} = n · m_{1,t} ,

σ_s^2 = n · σ_t^2 .    (2.85)

This corresponds to counting the number of calls at the same time as we measure the traffic volume, so that we can estimate the mean holding time. □
Example 2.3.3: Special case 2: T = t = constant (m_{1,t} = t, σ_t^2 = 0)

m_{1,s} = m_{1,n} · t ,

σ_s^2 = t^2 · σ_n^2 .    (2.86)

If we change the scale from 1 to m_{1,t}, then the mean value has to be multiplied by m_{1,t} and the variance by m_{1,t}^2. The mean value m_{1,t} = 1 corresponds to counting the number of calls. Thus the variance/mean ratio becomes m_{1,t} times bigger. □
Example 2.3.4: Stochastic sum
As a non-teletraffic example, N may denote the number of rain showers during one month and T_i the precipitation due to the i'th shower; S_T is then a random variable describing the total precipitation during the month. Similarly, N may for a given time interval denote the number of accidents registered by an insurance company and T_i the compensation for the i'th accident; S_T is then the total amount paid by the company for the considered period. □
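The decomposition (2.82)/(2.84) can be checked by simulation. The sketch below (our own code, not from the text) draws N from a Poisson distribution, so m_{1,n} = σ_n^2, with exponential holding times:

```python
import random, statistics

def simulate_stochastic_sum(n_samples=100_000, n_mean=5.0, rate=2.0, seed=1):
    """Monte-Carlo check of (2.82) and (2.84) with N ~ Poisson(n_mean)
    and T_i exponential with the given rate (m_1t = 1/rate, var_t = 1/rate**2)."""
    rng = random.Random(seed)

    def poisson(mu):
        # count unit-rate exponential inter-arrival times falling inside [0, mu]
        n, acc = 0, rng.expovariate(1.0)
        while acc <= mu:
            n += 1
            acc += rng.expovariate(1.0)
        return n

    samples = [sum(rng.expovariate(rate) for _ in range(poisson(n_mean)))
               for _ in range(n_samples)]
    m1t, v_t = 1.0 / rate, 1.0 / rate ** 2
    m1n = v_n = n_mean                       # Poisson: mean equals variance
    theory_mean = m1n * m1t                  # (2.82)
    theory_var = m1n * v_t + m1t ** 2 * v_n  # (2.84)
    return (statistics.fmean(samples), statistics.pvariance(samples),
            theory_mean, theory_var)
```

With n_mean = 5 and rate = 2, both variance terms of (2.84) contribute: 5·(1/4) from the random holding times and (1/4)·5 from the random number of calls.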
The exponential distribution is the most important time distribution within teletraffic theory. This time distribution is dealt with in Sec. 2.1.1.
Cox distributions
Figure 2.9: A Cox distribution is a generalized Erlang distribution having exponential distributions in both parallel and series. The phase diagram is equivalent to Fig. 2.10.
Figure 2.10: The phase diagram of a Cox distribution, cf. Fig. 2.9.
By combining the steep and flat distributions we obtain a general class of distributions (phase-type distributions) which can be described with exponential phases in both series and parallel (e.g. a k × ℓ matrix). To analyse a model with this kind of distributions, we can apply the theory of Markov processes, for which we have powerful tools such as the phase method. In the more general case we can allow for feedback between the phases.
We shall only consider Cox distributions, as shown in Fig. 2.9 (Cox, 1955 [18]). These also appear under the name of Branching Erlang distributions. The mean value and variance of this Cox distribution (Fig. 2.10) are found from the formulæ in Sec. 2.3 for random variables in series and parallel, as shown in Fig. 2.9:
m_1 = ∑_{i=1}^{k} q_i (1 − p_i) · ( ∑_{j=1}^{i} 1/λ_j ) ,    (2.87)
2.3. COMBINATION OF RANDOM VARIABLES 67
where

q_i = p_0 · p_1 · p_2 · · · p_{i−1} .    (2.88)
The term q_i(1 − p_i) is the probability of jumping out after the i'th phase. It can be shown that the mean value can be expressed in the simple form:
m_1 = ∑_{i=1}^{k} q_i/λ_i = ∑_{i=1}^{k} m_{1,i} ,    (2.89)
where m_{1,i} = q_i/λ_i is the mean value contribution of the i'th phase. The second moment becomes:
m_2 = ∑_{i=1}^{k} q_i (1 − p_i) · m_{2,i}
    = ∑_{i=1}^{k} q_i (1 − p_i) · { ∑_{j=1}^{i} 1/λ_j^2 + ( ∑_{j=1}^{i} 1/λ_j )^2 } ,    (2.90)
where m_{2,i} is obtained from (2.11): m_{2,i} = σ_i^2 + m_{1,i}^2. It can be shown that this can be written as:
m_2 = 2 · ∑_{i=1}^{k} ( ∑_{j=1}^{i} 1/λ_j ) · q_i/λ_i .    (2.91)
From this we get the variance (2.11):

σ^2 = m_2 − m_1^2 .
The addition of two Cox-distributed random variables yields another Cox-distributed variable, i.e. this class is closed under the operation of addition.
The distribution function of a Cox distribution can be written as a sum of exponential functions:

1 − F(t) = ∑_{i=1}^{k} c_i · e^{−λ_i t} ,  where 0 ≤ ∑_{i=1}^{k} c_i ≤ 1 ,  −∞ < c_i < +∞ .    (2.92)
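The moment formulae (2.88), (2.89) and (2.91) are straightforward to implement. The sketch below (our own helper) is checked against two cases with known moments, a single exponential and an Erlang-2 distribution:

```python
def cox_moments(rates, p):
    """First and second moment of a Cox distribution (Fig. 2.10).
    rates: lambda_1 .. lambda_k; p: branching probabilities p_0 .. p_{k-1}.
    Builds q_i = p_0 * ... * p_{i-1} per (2.88), then applies (2.89), (2.91)."""
    k = len(rates)
    q, prod = [], 1.0
    for i in range(k):
        prod *= p[i]
        q.append(prod)
    m1 = sum(q[i] / rates[i] for i in range(k))                       # (2.89)
    m2 = 2.0 * sum(sum(1.0 / rates[j] for j in range(i + 1))
                   * q[i] / rates[i] for i in range(k))               # (2.91)
    return m1, m2
```

For the Erlang-2 case (all p_i = 1, λ_i = λ = 1) this reproduces m_1 = 2 and m_2 = k(k+1)/λ^2 = 6, in agreement with (2.52).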
Polynomial trial
The following properties are of importance for later applications. If we consider a point of time chosen at random within a Cox-distributed time interval, then this point falls within phase i with probability:

ϱ_i = m_{1,i}/m_1 ,   i = 1, 2, . . . , k .    (2.93)
If we repeat this experiment y times (independently), then the probability that phase i is observed y_i times is given by the multinomial distribution (= polynomial distribution):

p{y_1, y_2, . . . , y_k | y} = (y; y_1, y_2, . . . , y_k) · ϱ_1^{y_1} · ϱ_2^{y_2} · · · ϱ_k^{y_k} ,    (2.94)
where ∑_{i=1}^{k} y_i = y, and

(y; y_1, y_2, . . . , y_k) = y! / (y_1! · y_2! · · · y_k!) .    (2.95)
Expression (2.95) is called the multinomial coefficient. By the lack of memory of the exponential distributions (phases) we have full information about the residual life-time when we know the number of the actual phase.
By the multinomial theorem we have by summation over all possible states:
(ϱ_1 + ϱ_2 + . . . + ϱ_k)^y = 1 = ∑_{∑ y_i = y} (y; y_1, y_2, . . . , y_k) · ϱ_1^{y_1} · ϱ_2^{y_2} · · · ϱ_k^{y_k} .    (2.96)
The multinomial theorem is also valid for ∑_i ϱ_i ≠ 1. It is a generalization of the binomial theorem (2.36).
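The multinomial probability (2.94) and the normalization (2.96) can be verified directly. This sketch (helper names are ours) enumerates all outcomes for k = 3 phases and y = 3 looks:

```python
from math import factorial
from itertools import product

def multinomial_coef(ys):
    """Multinomial coefficient (2.95)."""
    c = factorial(sum(ys))
    for y in ys:
        c //= factorial(y)
    return c

def phase_count_prob(ys, rho):
    """Probability (2.94) of seeing phase i exactly ys[i] times in sum(ys) looks."""
    prob = float(multinomial_coef(ys))
    for y, r in zip(ys, rho):
        prob *= r ** y
    return prob

rho = [0.5, 0.3, 0.2]   # example phase probabilities, summing to 1
y = 3
total = sum(phase_count_prob(ys, rho)
            for ys in product(range(y + 1), repeat=3) if sum(ys) == y)
```

Here `total` equals 1, as stated by the multinomial theorem (2.96).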
Decomposition principles
Phase diagrams are a useful tool for analyzing Cox distributions. The following is a fundamental characteristic of the exponential distribution (Iversen & Nielsen, 1985 [46]):
Theorem 2.1 An exponential distribution with intensity λ can be decomposed into a two-phase Cox distribution, where the first phase has an intensity µ > λ and the second phase intensity λ (Fig. 2.11).
According to Theorem 2.1, a hyper-exponential distribution with ℓ phases is equivalent to a Cox distribution with the same number of phases. The case ℓ = 2 is shown in Fig. 2.13.
We have another property of Cox distributions (Iversen & Nielsen, 1985 [46]):
Theorem 2.2 The phases in any Cox distribution can be ordered such that λ_i ≥ λ_{i+1}.
Theorem 2.1 shows that an exponential distribution is equivalent to a homogeneous Cox distribution (homogeneous: same intensity µ in all phases) with an infinite number of phases (Fig. 2.11). We notice that the branching probabilities are constant. Fig. 2.12 corresponds to a weighted sum of Erlang-k distributions where the weighting factors are geometrically distributed.
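Theorem 2.1 can be illustrated by simulation: sampling the two-phase decomposition (first phase with rate µ, then with probability 1 − λ/µ an additional exponential phase with rate λ) reproduces the moments of a plain exponential distribution with rate λ. The sketch below uses our own function names and parameter choices:

```python
import random

def sample_decomposed_exp(lam, mu, rng):
    """One sample from the Cox-2 decomposition of Theorem 2.1 (mu > lam)."""
    t = rng.expovariate(mu)
    if rng.random() < 1.0 - lam / mu:   # continue to the second phase
        t += rng.expovariate(lam)
    return t

def empirical_moments(lam=1.0, mu=2.5, n=200_000, seed=7):
    """Sample mean and second moment; expect 1/lam and 2/lam**2."""
    rng = random.Random(seed)
    xs = [sample_decomposed_exp(lam, mu, rng) for _ in range(n)]
    return sum(xs) / n, sum(x * x for x in xs) / n
```

The empirical first and second moments converge to 1/λ and 2/λ^2, the moments of the exponential distribution being decomposed.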
Figure 2.11: An exponential distribution with rate λ is equivalent to the shown Cox-2 distribution (Theorem 2.1).
Figure 2.12: An exponential distribution with rate λ is by successive decomposition transformed into a compound distribution of homogeneous Erlang-k distributions with rates µ > λ, where the weighting factors follow a geometric distribution (quotient p = λ/µ).
Figure 2.13: A hyper-exponential distribution (Fig. 2.6) with two phases (λ_1 > λ_2, p_2 = 1 − p_1) can be transformed into a Cox-2 distribution with phase rates λ_1, λ_2 and branching probabilities p = p_1 + (1 − p_1) λ_2/λ_1 and q = (1 − p_1)(1 − λ_2/λ_1).
By using phase diagrams it is easy to see that any exponential time interval (λ) can be decomposed into phase-type distributions (λ_i), where λ_i ≥ λ. Referring to Fig. 2.14 we notice that the rate out of the macro-state (dashed box) is λ, independent of the micro-state. When the number of phases k is finite and there is no feedback, the final phase must have rate λ.
Figure 2.14: This phase-type distribution is equivalent to a single exponential distribution when p_i · λ_i = λ. Thus λ_i ≥ λ, as 0 < p_i ≤ 1.
Importance of Cox distribution
Cox distributions have attracted a lot of attention during recent years. They are of great importance due to the following properties:
a. Cox distributions can be analyzed using the method of phases.
b. One can approximate an arbitrary distribution arbitrarily well with a Cox distribution. If a property is valid for a Cox distribution, then it is valid for any distribution of practical interest.
By using Cox distributions we can, with elementary methods, obtain results which previously required very advanced mathematics.
In connection with practical applications of the theory, we use these methods to estimate the parameters of a Cox distribution. Estimating all 2k parameters is in general an unsolved statistical problem. Normally, we choose a special Cox distribution (e.g. Erlang-k or hyper-exponential) and approximate the first moments.
By numerical simulation on computers using the roulette method, we automatically obtain observations of the time intervals as Cox distributions with the same intensities in all phases.
2.4 Other time distributions
In principle, every distribution with non-negative values may be used as a time distribution to describe the time intervals. For distributions which are widely applied in queueing theory we have the following abbreviated notations (cf. Sec. 10.1):
M   ∼ Exponential distribution (Markov),
E_k ∼ Erlang-k distribution,
H_n ∼ Hyper-exponential distribution of order n,
D   ∼ Constant (Deterministic),
Cox ∼ Cox distribution,
G   ∼ General = arbitrary distribution.
Gamma distribution
If we suppose the parameter k in the Erlang-k distribution (2.46) takes positive real values, then we obtain the gamma distribution:

f(t) = (1/Γ(k)) · (λt)^{k−1} · e^{−λt} · λ ,   λ > 0 , t ≥ 0 .    (2.97)
The mean value and variance are given in (2.48) and (2.49).
Weibull distribution
A distribution also known in teletraffic theory is the Weibull distribution We(k, λ):

F(t) = 1 − e^{−(λt)^k} ,   t ≥ 0 , k > 0 , λ > 0 .    (2.98)
This distribution has a time-dependent death intensity (2.21):

µ(t) = f(t)/(1 − F(t)) = λk (λt)^{k−1} · e^{−(λt)^k} / e^{−(λt)^k} = λk (λt)^{k−1} .    (2.99)
The distribution has its origin in reliability theory. For k = 1 we get the exponential distribution.
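The death intensity (2.99) can be cross-checked against its definition µ(t) = −d ln(1 − F(t))/dt; a small sketch (function names are ours):

```python
import math

def weibull_sf(t, lam, k):
    """Survival function 1 - F(t) from (2.98)."""
    return math.exp(-(lam * t) ** k)

def weibull_hazard(t, lam, k):
    """Death intensity (2.99): mu(t) = lam * k * (lam*t)**(k-1)."""
    return lam * k * (lam * t) ** (k - 1)
```

For k = 1 the hazard is the constant λ (the exponential case); for k > 1 it increases with t, and for k < 1 it decreases.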
Heavy-tailed distributions
To describe data with large variations we often use heavy-tailed distributions. A distribution is heavy-tailed in the strict sense if the tail of the distribution function behaves as a power law, i.e. as

1 − F(t) ≈ t^{−α} ,   0 < α ≤ 2 .
The Pareto distribution (2.73) is heavy-tailed in strict sense.
Sometimes distributions with a heavier tail than the exponential distribution are also classified as heavy-tailed; examples are the hyper-exponential, Weibull, and log-normal distributions. Another class is the sub-exponential distributions. These subjects are dealt with in the literature.
Later, we will deal with a set of discrete distributions which also describe the life-time, such as the geometric, Pascal, binomial, and Westerberg distributions. In practice, the parameters of distributions are not always stationary.
The service (holding) times can be physically correlated with the state of the system. In man-machine systems the service time changes because of busyness (decrease) or tiredness (increase). In the same way, electro-mechanical systems work more slowly during periods of high load because the voltage decreases.
2.5 Observations of life-time distribution
Fig. 2.7 shows an example of observed holding times from a local telephone exchange. The holding time consists of both signalling time and, if the call is answered, conversation time. Fig. 3.4 shows observations of inter-arrival times of incoming calls to a transit telephone exchange during one hour. A particular outcome of a random variable is called a random variate; thus observations of holding times are variates of a random variable we want to model.
From its very beginning, teletraffic theory has been characterized by a strong interaction between theory and practice, and there have been excellent opportunities to carry out measurements.
Erlang (1920, [12]) reports a measurement where 2461 conversation times were recorded in a telephone exchange in Copenhagen in 1916. Palm (1943, [92]) analyzed the field of traffic measurements, both theoretically and practically, and implemented extensive measurements in Sweden.
By the use of computer technology a large amount of data can be collected. The first stored-program-controlled measurement using a mini-computer is described in (Iversen, 1973 [40]). The importance of using discrete values of time when observing is dealt with in Chapter 13. Bolotin (1994, [7]) has measured and modelled telecommunication holding times.
Numerous measurements on computer systems have been carried out. Whereas in telephone systems we seldom have a form factor greater than 6, we observe form factors greater than 100 in data traffic. This is the case, for example, for data transmission, where we send either a few characters or a large quantity of data.
More recent extensive measurements have been performed and modelled using self-similar traffic models (Jerkins & al., 1999 [59]). These subjects are dealt with in more advanced chapters. For more advanced modelling, Laplace transforms and Z-transforms are widely used.
Updated: 2010-02-17
Chapter 3
Arrival Processes
Arrival processes, such as telephone calls arriving to a switching system or messages arriving to a server, are mathematically described as stochastic point processes. For a point process, we have to be able to distinguish two arrivals from each other. Information concerning the single arrival (e.g. service time, number of customers) is ignored. Such information can only be used to determine whether an arrival belongs to the process or not.
The mathematical theory of point processes was founded and developed by the Swede Conny Palm during the 1940s. It was mathematically refined by Khintchine ([71], 1968), and has been widely applied in many fields.
The Poisson process is the most important point process. Later we will realize that its role among point processes is as fundamental as the role of the Normal distribution among statistical distributions. By the central limit theorem we obtain the Normal distribution when adding random variables. In a similar way we obtain the exponential distribution when superposing stochastic point processes.
Most other applied point processes are generalizations or modifications of the Poisson process. This process gives a surprisingly good description of many real-life processes, because it is the most random process. The more complex a process is, the better it will in general be modelled by a Poisson process.
Due to its great importance in practice, we shall study the Poisson process in detail in this chapter. First (Sec. 3.5) we base our study on a physical model with main emphasis upon the distributions associated with the process, and then we consider some important properties of the Poisson process (Sec. 3.6). Finally, in Sec. 3.7 we consider the interrupted Poisson process and the batched Poisson process as examples of generalizations.
Figure 3.1: The call arrival process at the incoming lines of a transit exchange (accumulated number of calls versus time in seconds).
3.1 Description of point processes
In the following we only consider simple point processes, i.e. we exclude multiple arrivals such as, for example, twin arrivals. For telephone calls this may be realized by choosing a sufficiently detailed time scale.
Consider arrival times where the i’th call arrives at time Ti:
0 = T0 < T1 < T2 < . . . < Ti < Ti+1 < . . . . (3.1)
The first observation takes place at time T0 = 0.
The number of calls in the half-open interval [0, t[ is denoted by Nt. Here Nt is a random variable with continuous time parameter and discrete state space. When t increases, Nt never decreases.
The time distance between two successive arrivals is:
Xi = Ti − Ti−1, i = 1, 2, . . . . (3.2)
This is called the inter-arrival time, and the distribution of this interval is called the inter-arrival time distribution.

Corresponding to the two random variables Nt and Xi, a point process can be characterized in two ways:
1. Number representation Nt: the time interval t is kept constant, and we observe the random variable Nt for the number of calls in t.
2. Interval representation Tn: the number of arriving calls n is kept constant, and we observe the random variable Tn for the time interval until there have been n arrivals (especially T1 = X1).
The fundamental relationship between the two representations is given by the following simple relation:

Nt < n if and only if Tn = X1 + X2 + · · · + Xn ≥ t , n = 1, 2, . . . (3.3)

This is expressed by Feller–Jensen's identity:

p{Nt < n} = p{Tn ≥ t} , n = 1, 2, . . . (3.4)
Analysis of point processes can be based on both of these representations; in principle they are equivalent. The interval representation corresponds to the usual time series analysis. If we for example let i = 1, we obtain call averages, i.e. statistics on a per-call basis. The number representation has no parallel in time series analysis. The statistics we obtain are averaged over time, so we get time averages, i.e. statistics on a per time unit basis (cf. the difference between call congestion and time congestion). The statistics of interest when studying point processes can be classified according to the two representations.
3.1.1 Basic properties of number representation
There are three properties which are of interest:
1. The total number of arrivals in the interval [t1, t2[ is equal to Nt2 − Nt1. The average number of calls in the same interval is called the renewal function H:

H(t1, t2) = E{Nt2 − Nt1} . (3.5)
2. The density of arriving calls at time t (time average) is:
λt = lim_{Δt→0} (Nt+Δt − Nt)/Δt = N′t . (3.6)
We assume that λt exists and is finite. We may interpret λt as the intensity by which arrivals occur at time t (cf. Sec. 2.2.2). For simple or ordinary point processes we have:
p{Nt+Δt − Nt ≥ 2} = o(Δt) , (3.7)

p{Nt+Δt − Nt = 1} = λt Δt + o(Δt) , (3.8)

p{Nt+Δt − Nt = 0} = 1 − λt Δt + o(Δt) , (3.9)
where by definition:
lim_{Δt→0} o(Δt)/Δt = 0 . (3.10)
3. Index of Dispersion for Counts, IDC. To describe second-order properties of the number representation we use the index of dispersion for counts, IDC. It describes the variation of the arrival process during a time interval t and is defined as:

IDC = Var{Nt} / E{Nt} . (3.11)
By dividing the time interval t into x intervals of duration t/x and observing the number of events during these intervals, we obtain an estimate of IDC(t). For the Poisson process the IDC equals one. IDC is equal to the peakedness, which we later introduce to characterize the number of busy channels in a traffic process (4.7).
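The estimation procedure just described can be sketched as follows (illustrative code only; the test process is an assumed Poisson process and all parameter values are arbitrary):

```python
import random

def estimate_idc(arrival_times, t_end, n_bins):
    """Estimate IDC (3.11): split [0, t_end[ into n_bins equal intervals,
    count events per interval, and return Var{N} / E{N}."""
    width = t_end / n_bins
    counts = [0] * n_bins
    for a in arrival_times:
        if a < t_end:
            counts[int(a // width)] += 1
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / n_bins
    return var / mean

# Generate a Poisson process via exponential gaps; its IDC should be near 1.
random.seed(1)
lam, t_end = 5.0, 2000.0
arrivals, t = [], 0.0
while t < t_end:
    t += random.expovariate(lam)
    arrivals.append(t)

idc = estimate_idc(arrivals, t_end, 1000)
assert 0.8 < idc < 1.2
```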
3.1.2 Basic properties of interval representation
Also here we have three properties of interest.
4. The probability density function f(t) of the time intervals Xi (3.2) (and, by convolving the distribution with itself i − 1 times, the distribution of the time until the i'th arrival):

Fi(t) = p{Xi ≤ t} , (3.12)

E{Xi} = m1,i . (3.13)

The mean value is a call average. A renewal process is a point process where consecutive inter-arrival times are stochastically independent of each other and have the same distribution, i.e. m1,i = m1 (IID = Identically and Independently Distributed).
5. The distribution function V(t) of the time interval from a random point of time (epoch) until the first arrival occurs. The mean value of V(t) is a time average, which is calculated per time unit.
6. Index of Dispersion for Intervals, IDI. To describe second-order properties of the interval representation we use the Index of Dispersion for Intervals, IDI. It is defined as:

IDI = Var{Xi} / E{Xi}² = ε − 1 , (3.14)

where Xi is the inter-arrival time. For the Poisson process, which has exponentially distributed inter-arrival times, the IDI equals one. IDI is equal to Palm's form factor minus one (2.13). In general, IDI is more difficult to obtain from observations than IDC, and it is more sensitive to the accuracy of measurements and to smoothing of the traffic process. Digital technology is more suitable for observation of IDC, whereas it complicates the observation of IDI (Chap. 13).
Which of the two representations to use in practice depends on the actual case. This can be illustrated by the following examples.
Example 3.1.1: Measuring principles
Measurements of teletraffic performance are carried out according to one of the following two basic principles:
1. Passive measurements. Measuring equipment records at regular time intervals the number of arrivals since the last recording. This corresponds to the scanning method, which is suitable for computers, and to the number representation, where the time interval is fixed.
2. Active measurements. Measuring equipment records an event at the instant it takes place. We keep the number of events fixed and observe the measuring interval. Recording instruments are examples. This corresponds to the interval representation, where we obtain statistics for each single call.
2
Example 3.1.2: Test calls
Investigation of the traffic quality is in practice done in two ways:

1. The traffic quality is estimated by collecting statistics on the outcome of test calls made to specific (dummy) subscribers. The calls are generated during the busy hour independently of the actual traffic. The test equipment records the number of blocked calls etc. The obtained statistics correspond to time averages of the performance measure. Unfortunately, this method increases the offered load on the system. Theoretically, the obtained performance measures will differ from the correct values.

2. The test equipment collects data from call numbers N, 2N, 3N, . . ., where for example N = 1000. The traffic process is unchanged, and the performance statistics is a call average.
2
Example 3.1.3: Call statistics
A subscriber evaluates the quality by the fraction of calls which are blocked, i.e. a call average. The operator evaluates the quality by the proportion of time when all trunks are busy, i.e. a time average. The two types of averages (time/call) are often mixed up, resulting in apparently conflicting statements. 2
Example 3.1.4: Called party busy (B-busy)
At a telephone exchange typically 10% of the subscribers are busy, but 20% of the call attempts are blocked due to B-busy (called party busy). This phenomenon can be explained by the fact that half of the subscribers are passive (i.e. make no call attempts and receive no calls), whereas 20% of the remaining subscribers are busy. G. Lind (1976 [82]) analyzed the problem under the assumption that each subscriber on the average has the same number of incoming and outgoing calls. If the mean value and form factor of the distribution of traffic per subscriber are b and ε, respectively, then the probability that a call attempt gets B-busy is b · ε. 2
3.2 Characteristics of point process
Above we have discussed a very general structure of point processes. For specific applications we have to introduce further properties. Below we only consider the number representation, but we could do the same based on the interval representation.
3.2.1 Stationarity (Time homogeneity)
Regardless of the position on the time axis, the probability distributions describing the point process are independent of the instant of time. The following definition is useful in practice:
Definition: For an arbitrary t2 > 0 and every k ≥ 0, the probability that there are k arrivals in [t1, t1 + t2[ is independent of t1, i.e. for all t, k we have:

p{Nt1+t2 − Nt1 = k} = p{Nt1+t2+t − Nt1+t = k} . (3.15)
There are many other definitions of stationarity, some stronger, some weaker.
Stationarity can also be defined by the interval representation by requiring all Xi to be independent and identically distributed (IID). A weaker definition is that all first and second order moments (e.g. the mean value and variance) of a point process must be invariant with respect to time shifts. Erlang introduced the concept of statistical equilibrium, which requires that the derivatives of the process with respect to time are zero.
3.2.2 Independence
This property can be expressed as the requirement that the future evolution of the process only depends upon the actual state.
Definition: The probability that k events (k integer, k ≥ 0) take place in [t1, t1 + t2[ is independent of events before time t1:

p{Nt2 − Nt1 = k | Nt1 − Nt0 = n} = p{Nt2 − Nt1 = k} . (3.16)
If this holds for all t, then the process is a Markov process: the future evolution only depends on the present state, and is independent of how this state was reached. This is the lack of memory property. If this property only holds at certain time points (e.g. arrival times), these points are called equilibrium points or regeneration points. The process then has a limited memory, and we only need to keep record of the past back to the latest regeneration point.
Example 3.2.1: Equilibrium points = regeneration points
Examples of point processes with equilibrium points:
a) The Poisson process is (as we will see later in this chapter) memoryless, and all points of the time axis are equilibrium points.
b) A scanning process, where scans occur in a regular cycle, has limited memory. The latest scanning instant has full information about the scanning process, and therefore all scanning points are equilibrium points.
c) If we superpose the above-mentioned Poisson process and scanning process (for instance when investigating the arrival processes in a computer system), the only equilibrium points in the compound process are the scanning instants.
d) Consider a queueing system with a Poisson arrival process, constant service time and a single server. The number of queueing positions can be finite or infinite. Let a point process be defined by the time instants when service starts. All time instants when the system is idle are equilibrium points. During periods where the system is busy, the time points for acceptance of new calls for service depend on the instant when the first call of the busy period started service.
2
3.2.3 Simplicity or ordinarity
We have already mentioned (3.7) that we exclude processes with multiple arrivals.
Definition: A point process is called simple or ordinary if the probability that there is more than one event at a given point is zero:

p{Nt+Δt − Nt ≥ 2} = o(Δt) . (3.17)
With the interval representation, the inter-arrival time distribution must not have a probability mass (atom) at zero, i.e. the distribution must be continuous at zero (2.2):

F(0+) = 0 . (3.18)
Example 3.2.2: Multiple events
Time points of traffic accidents will form a simple process. The number of damaged cars or dead people will form a non-simple point process with multiple events. 2
3.3 Little’s theorem
This is the only general result that is valid for all queueing systems. It was first published by Little (1961 [84]). The simple proof given below, based on the theory of stochastic processes, is due to (Eilon, 1969 [25]).
We consider a queueing system where customers arrive according to a stochastic process. Customers enter the system at a random time, wait to get service, are served, and then leave the system. In Fig. 3.2, both the arrival and the departure processes are considered as stochastic processes with the cumulated number of customers as ordinate.
We consider a time period T and assume that the system is in statistical equilibrium at the initial time t = 0. We use the following notation (Fig. 3.2):
N(T) = number of arrivals in the period T.

A(T) = the total service time of all customers in the period T
     = the shaded area between the curves
     = the carried traffic volume.

λ(T) = N(T)/T = the average call intensity in the period T.

W(T) = A(T)/N(T) = the mean holding time in the system per call in the period T.

L(T) = A(T)/T = the average number of calls in the system in the period T.

We have the following important relation among these variables:

L(T) = A(T)/T = W(T) · N(T)/T = λ(T) · W(T) . (3.19)
If the limits

λ = lim_{T→∞} λ(T) and W = lim_{T→∞} W(T)

exist, then the limiting value of L(T) also exists and becomes:

L = λ · W (Little's theorem). (3.20)
This simple formula is valid for all general queueing systems. The proof has been refined over the years. We shall use this formula in Chaps. 9–12.
Example 3.3.1: Little's formula
If we only consider the waiting positions, the formula shows:
The mean queue length is equal to call intensity multiplied by the mean waiting time.
If we only consider the servers, the formula shows:
The carried traffic is equal to the arrival intensity multiplied by the mean service time (A = y · s = λ/μ).
This corresponds to the definition of offered traffic in Sec. 1.7. 2
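Little's identity L(T) = λ(T) · W(T) holds for any finite observation period by construction, as (3.19) shows. The following simulation sketch (a single-server FIFO queue with assumed Poisson arrivals and exponential service; all parameter values are illustrative) verifies the identity and, for this M/M/1 case, also the known limit W = 1/(μ − λ):

```python
import random

random.seed(3)
lam, mu, n = 2.0, 3.0, 100000    # arrival rate, service rate (rho = 2/3)

# Arrival times: cumulative sums of exponential inter-arrival times.
t, arrivals = 0.0, []
for _ in range(n):
    t += random.expovariate(lam)
    arrivals.append(t)

# Single-server FIFO queue: departure d[i] = max(a[i], d[i-1]) + service.
departures, last_dep = [], 0.0
for a in arrivals:
    last_dep = max(a, last_dep) + random.expovariate(mu)
    departures.append(last_dep)

T = departures[-1]                                      # observation period
total_time_in_system = sum(d - a for a, d in zip(arrivals, departures))
L = total_time_in_system / T                            # L(T) = A(T)/T
lam_hat = n / T                                         # lambda(T) = N(T)/T
W = total_time_in_system / n                            # W(T) = A(T)/N(T)

assert abs(L - lam_hat * W) < 1e-9   # Little's identity, exact by construction
assert abs(W - 1.0) < 0.1            # M/M/1: W = 1/(mu - lam) = 1
```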
Figure 3.2: A queueing system with arrival and departure of customers. The vertical distance between the two curves is equal to the actual number of customers being served. Customers do not in general depart in the same order as they arrive, so the horizontal distance between the curves does not describe the actual time in the system of a customer.
3.4 Characteristics of the Poisson process
The fundamental properties of the Poisson process are defined in Sec. 3.2:
a. Stationary,
b. Independent at all time instants (epochs), and
c. Simple.
(b) and (c) are fundamental properties, whereas (a) can be relaxed: we may allow a Poisson process to have a time-dependent intensity. From the above properties we may derive other properties that are sufficient for defining the Poisson process. The two most important ones are:
• Number representation: The number of events within a time interval of fixed length is Poisson distributed. Therefore, the process is named the Poisson process.
• Interval representation: The time distance Xi (3.2) between consecutive events is exponentially distributed.
In this case, using (2.46) and (2.47), Feller–Jensen's identity (3.4) shows the fundamental relationship between the cumulated Poisson distribution and the Erlang distribution (Sec. 3.5.2):

∑_{j=0}^{n−1} (λt)^j / j! · e^{−λt} = ∫_t^∞ (λx)^{n−1} / (n−1)! · λ e^{−λx} dx = 1 − F(t) . (3.21)
This formula can also be obtained by repeated partial integration.
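The identity (3.21) can also be verified numerically. The sketch below (arbitrary parameter values) compares the cumulated Poisson sum on the left-hand side with a numerical integration of the Erlang-n density tail on the right-hand side:

```python
import math

lam, t, n = 1.5, 2.0, 4   # arbitrary rate, time point, and number of arrivals

# Left side of (3.21): P{fewer than n arrivals in (0, t)}, cumulated Poisson.
poisson_sum = sum((lam * t) ** j / math.factorial(j) for j in range(n)) \
              * math.exp(-lam * t)

# Right side: Erlang-n survival 1 - F(t), by midpoint integration of (2.46).
def erlang_pdf(x):
    return (lam * x) ** (n - 1) / math.factorial(n - 1) * lam * math.exp(-lam * x)

steps, upper = 100000, 60.0          # truncation at 60 leaves a negligible tail
dx = (upper - t) / steps
integral = sum(erlang_pdf(t + (i + 0.5) * dx) for i in range(steps)) * dx

assert abs(poisson_sum - integral) < 1e-4
```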
3.5 Distributions of the Poisson process
In this section we consider the Poisson process in a dynamical and physical way (Fry, 1928 [35]) & (Jensen, 1954 [12]). The derivations are based on a simple physical model and focus upon the probability distributions associated with the Poisson process. The physical model is as follows: events (arrivals) are placed at random on the real axis in such a way that every event is placed independently of all other events. So we place the events uniformly and independently on the real axis.
The average density is chosen as λ events (arrivals) per time unit. If we consider the axis as a time axis, then on the average we shall have λ arrivals per time unit. The probability that a given arrival pattern occurs within a time interval is independent of the location of the interval on the time axis.
Figure 3.3: When deriving the Poisson process, we consider arrivals within two non-overlapping time intervals of duration t1 and t2, respectively.
Let p(ν, t) denote the probability that ν events occur within a time interval of duration t. The mathematical formulation of the above model is as follows:
1. Independence: Let t1 and t2 be two non-overlapping intervals (Fig. 3.3); then, because of the independence assumption, we have:
p (0, t1) · p (0, t2) = p (0, t1 + t2) . (3.22)
2. We notice that (3.22) implies that the event "no arrivals within the interval of length 0" has the probability one:
p(0, 0) = 1 . (3.23)
3. The mean value of the time interval between two successive arrivals is 1/λ (2.7):
∫ ∞
0
p(0, t) dt =1
λ, 0 <
1
λ<∞ . (3.24)
Here p(0, t) is the probability that there are no arrivals within the time interval (0, t), which is identical to the probability that the time until the first event is larger than t (the complementary distribution function). The mean value (3.24) is obtained directly from (2.7). Formula (3.24) can also be interpreted as the area under the curve p(0, t), which is a non-increasing function decreasing from 1 to 0.
4. We also notice that (3.24) implies that the probability of "no arrivals within a time interval of length ∞" is zero, as this event never takes place:
p(0,∞) = 0 . (3.25)
3.5.1 Exponential distribution
The fundamental step in the following derivation of the Poisson distribution is to derive p(0, t), the probability of no arrivals within a time interval of length t, i.e. the probability that the first arrival appears later than t. We will show that 1 − p(0, t) = F(t) is an exponential distribution (cf. Sec. 2.1.1).
From (3.22) we have:
ln p (0, t1) + ln p (0, t2) = ln p (0, t1 + t2) . (3.26)
Letting ln p(0, t) = f(t), (3.26) can be written as:
f (t1) + f (t2) = f (t1 + t2) . (3.27)
By differentiation with respect to, e.g., t2 we have:

f′(t2) = f′(t1 + t2) .
From this we notice that f ′(t) must be a constant and therefore:
f(t) = a+ b t . (3.28)
By inserting (3.28) into (3.27), we obtain a = 0. Therefore p(0, t) has the form:
p(0, t) = ebt .
From (3.24) we obtain b:

1/λ = ∫_0^∞ p(0, t) dt = ∫_0^∞ e^{bt} dt = −1/b ,

or:

b = −λ .
Thus on the basis of items (1) and (3) above we have shown that:
p(0, t) = e−λt . (3.29)
If we consider p(0, t) as the probability that the next event arrives later than t, then the time until the next arrival is exponentially distributed (Sec. 2.1.1):
1− p(0, t) = F (t) = 1− e−λt, λ > 0 , t ≥ 0 , (3.30)
F ′(t) = f(t) = λ · e−λt , λ > 0 , t ≥ 0 . (3.31)
We have the following mean value and variance:

m1 = 1/λ , σ² = 1/λ² . (3.32)
The probability that the next arrival appears within the interval (t, t + dt) may be written as:

f(t) dt = λ e^{−λt} dt = p(0, t) λ dt , (3.33)

i.e. the probability that an arrival appears within the interval (t, t + dt) is equal to λ dt, independent of t and proportional to dt (2.24).
Because λ is independent of the actual age t, the exponential distribution has no memory(cf. Secs. 2.1.1 & 2.2.2). The process has no age.
The parameter λ is called the intensity or rate, both of the exponential distribution and of the related Poisson process, and it corresponds to the intensity in (3.6). The exponential distribution is in general a very good model of call inter-arrival times when the traffic is generated by human beings (Fig. 3.4).
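The memoryless property noted above can be checked directly: for an exponential random variable X, P{X > s + t | X > s} = P{X > t} for all s, t ≥ 0. A small sketch (the rate value is arbitrary):

```python
import math

lam = 0.7    # arbitrary intensity

def survival(t):
    """P{X > t} for an exponential distribution with rate lam."""
    return math.exp(-lam * t)

# Memorylessness: P{X > s + t | X > s} = P{X > s + t} / P{X > s} = P{X > t}.
for s in (0.5, 2.0, 10.0):
    for t in (0.1, 1.0, 5.0):
        conditional = survival(s + t) / survival(s)
        assert abs(conditional - survival(t)) < 1e-12
```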
Figure 3.4: Inter-arrival time distribution of calls at a transit exchange (5916 observations; number of observations versus inter-arrival time in scans of 0.2 s). The theoretical values are based on the assumption of exponentially distributed inter-arrival times. Due to the measuring principle (scanning method) the continuous exponential distribution is transformed into a discrete Westerberg distribution (13.14) (χ²-test = 18.86 with 19 degrees of freedom, percentile = 53).
3.5.2 Erlang–k distribution
From the above we notice that the time until exactly k arrivals have appeared is a sum of k IID (independently and identically distributed) exponentially distributed random variables. The distribution of this sum is an Erlang-k distribution (Sec. 2.3.1), and the density is given by (2.46):

f_k(t) dt = λ (λt)^{k−1} / (k−1)! · e^{−λt} dt , λ > 0 , t ≥ 0 , k = 1, 2, . . . . (3.34)
The mean value and the variance are obtained from (2.48) – (2.52), or directly from (3.32):

m1 = k/λ , σ² = k/λ² , ε = 1 + 1/k . (3.35)
Example 3.5.1: Call statistics from an SPC system (cf. Example 3.1.2)
Let calls arrive to a stored-program-controlled telephone exchange (SPC system) according to a Poisson process. The exchange automatically collects full information about every 1000'th call. The inter-arrival times between two registrations will then be Erlang-1000 distributed and have the form factor ε = 1.001, i.e. the registrations will take place very regularly. 2
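The Erlang-k moments (3.35) can be checked by simulating sums of k exponential variables (a sketch with arbitrary parameter values; ε is estimated as Palm's form factor E{X²}/E{X}²):

```python
import random

random.seed(4)
lam, k, n = 2.0, 5, 100000    # arbitrary rate, Erlang order, sample size

# Each Erlang-k variate is a sum of k IID exponential variates.
samples = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

m1 = sum(samples) / n
m2 = sum(x * x for x in samples) / n
var = m2 - m1 ** 2
eps = m2 / m1 ** 2            # Palm's form factor, epsilon = E{X^2}/E{X}^2

assert abs(m1 - k / lam) < 0.03           # m1 = k/lam = 2.5
assert abs(var - k / lam ** 2) < 0.05     # sigma^2 = k/lam^2 = 1.25
assert abs(eps - (1 + 1 / k)) < 0.02      # epsilon = 1 + 1/k = 1.2
```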
Figure 3.5: Number of Internet dial-up calls per second (900 observations, λ = 6.39 calls/s; number of observations versus number of calls per second). The theoretical values are based on the assumption of a Poisson distribution. A statistical test accepts the hypothesis of a Poisson distribution.
3.5.3 Poisson distribution
We shall now show that the number of arrivals in an interval of fixed length t is Poisson distributed with mean value λt. When we know the above-mentioned exponential distribution and the Erlang distribution, the derivation of the Poisson distribution is only a matter of applying simple combinatorics. The proof can be carried through by induction.
We want to derive p(i, t), the probability of i arrivals within a time interval of length t. Let us assume that:

p(i−1, t) = (λt)^{i−1} / (i−1)! · e^{−λt} , λ > 0 , i = 1, 2, . . .
This is correct for i = 1 (3.29). The interval (0, t) is divided into three non-overlapping intervals (0, t1), (t1, t1 + dt1) and (t1 + dt1, t). From the earlier independence assumption we know that events within an interval are independent of events in the other intervals, because the intervals are non-overlapping. By choosing t1 so that the last arrival within (0, t) appears in (t1, t1 + dt1), the probability p(i, t) is obtained by integrating, over all possible values of t1, the product of the following three independent probabilities:
a) The probability that (i − 1) arrivals occur within the time interval (0, t1):

p(i−1, t1) = (λt1)^{i−1} / (i−1)! · e^{−λt1} , 0 ≤ t1 ≤ t .
b) The probability that there is just one arrival within the time interval from t1 to t1 + dt1:
λ dt1 .
c) The probability that no arrivals occur from t1 + dt1 to t:
e−λ(t−t1) .
The product of the first two probabilities is the probability that the i'th arrival appears in (t1, t1 + dt1), i.e. the Erlang distribution from the previous section.
By integration we have:

p(i, t) = ∫_0^t (λt1)^{i−1} / (i−1)! · e^{−λt1} · λ dt1 · e^{−λ(t−t1)}

        = λ^i / (i−1)! · e^{−λt} ∫_0^t t1^{i−1} dt1 ,

p(i, t) = (λt)^i / i! · e^{−λt} , i = 0, 1, . . . , λ > 0 . (3.36)
This is the Poisson distribution, which we thus have obtained from (3.29) by induction. The mean value and variance are:
m1 = λ · t , (3.37)
σ2 = λ · t . (3.38)
The Poisson distribution is in general a very good model for the number of calls in a telecommunication system (Fig. 3.5) or jobs in a computer system.
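The result can be illustrated by simulation: generating arrivals with exponential gaps and counting them in a fixed window should give counts with mean and variance both close to λt, as (3.37)–(3.38) state (a sketch; all parameter values are arbitrary):

```python
import random

random.seed(5)
lam, t, runs = 4.0, 3.0, 50000    # intensity, window length, replications

# For each replication, count arrivals (exponential gaps) falling in [0, t).
counts = []
for _ in range(runs):
    clock, n = 0.0, 0
    while True:
        clock += random.expovariate(lam)
        if clock >= t:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / runs
var = sum((c - mean) ** 2 for c in counts) / runs

# Poisson with parameter lam * t = 12: mean and variance both close to 12.
assert abs(mean - lam * t) < 0.1
assert abs(var - lam * t) < 0.3
```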
0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.000.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
[Figure: carried traffic as a function of offered traffic for three cases: Ideal channel, Slotted Aloha, and Simple Aloha.]
Figure 3.6: The carried traffic in a slotted Aloha system has a maximum throughput twice the maximum throughput of the simple Aloha system (Example 3.5.2). The simple Aloha protocol is dealt with in Example 4.2.1.
Example 3.5.2: Slotted Aloha Satellite System
Let us consider a digital satellite communication system with constant packet length h. The satellite is in a geostationary position about 36,000 km above the equator, so the round-trip delay is about 280 ms. The time axis is divided into slots of fixed duration corresponding to the packet length h. The individual terminal (earth station) transmits packets so that they are synchronized with the time slots. All packets generated during a time slot are transmitted in the next time slot. The transmission of a packet is only correct if it is the only packet transmitted in that time slot. If more packets are transmitted simultaneously, we have a collision and all packets are lost and must be retransmitted. All earth stations receive all packets and can thus decide whether a packet is transmitted correctly. Due to the time delay, the earth stations transmit packets independently of each other. If the total arrival process is a Poisson process (rate λ), then we get a Poisson distributed number of packets in each time slot:
    p(i) = ((λh)^i / i!) · e^(−λh) . (3.39)
The probability of a correct transmission is:
    p(1) = λh · e^(−λh) . (3.40)
This corresponds to the proportion of the time axis which is utilized effectively. This function, which is shown in Fig. 3.6, has an optimum when the derivative with respect to λh is zero:
    p′(1) = e^(−λh) · (1 − λh) = 0 , (3.41)

giving λh = 1.
Inserting this value in (3.40) we get:
    max p(1) = e^(−1) = 0.3679 . (3.42)
We thus have a maximum utilization of the channel equal to 0.3679, when on the average we transmit one packet per time slot. A similar result holds when there is a limited number of terminals and the number of packets per time slot is Binomially distributed. □
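The throughput expression (3.40) and its optimum are easy to check numerically. The sketch below (plain Python; the slot count and seed are illustrative choices, not part of the original example) compares the analytical throughput λh·e^(−λh) with a direct simulation of Poisson arrivals per slot:

```python
import math
import random

def throughput(G):
    """Analytical slotted-Aloha throughput (3.40), G = offered load = lambda*h."""
    return G * math.exp(-G)

def simulate(G, n_slots, rng):
    """Fraction of slots containing exactly one arrival, when the number
    of arrivals per slot is Poisson distributed with mean G."""
    ok = 0
    for _ in range(n_slots):
        k, t = 0, rng.expovariate(1.0)   # count unit-rate events in [0, G]
        while t < G:
            k += 1
            t += rng.expovariate(1.0)
        ok += (k == 1)
    return ok / n_slots

# maximum utilization e^-1 = 0.3679 is attained at G = 1
print(round(throughput(1.0), 4))
print(round(simulate(1.0, 100_000, random.Random(1)), 3))
```

The simulated fraction of successful slots agrees with (3.40) to within sampling noise.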
3.5.4 Static derivation of the distributions of the Poisson process
As is known from statistics, these distributions can also be derived from the Binomial process by letting the number of trials n (e.g. throws of a die) increase to infinity and at the same time letting the probability of success in a single trial p converge to zero in such a way that the average number n·p is kept constant.
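This limit is easy to verify numerically. A small sketch (plain Python; the fixed mean a = n·p and the values of n are arbitrary illustrations):

```python
import math

def binom_pmf(x, n, p):
    """Binomial point probability C(n, x) p^x (1-p)^(n-x)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, a):
    """Poisson point probability a^x / x! * e^-a."""
    return a**x / math.factorial(x) * math.exp(-a)

a = 4.0                       # keep the mean n*p fixed
for n in (10, 100, 10_000):
    p = a / n
    dev = max(abs(binom_pmf(x, n, p) - poisson_pmf(x, a)) for x in range(20))
    print(n, round(dev, 5))   # the deviation shrinks as n grows
```

The largest pointwise deviation between the two distributions decreases roughly as 1/n.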
This approach is static and does not stress the fundamental properties of the Poisson process, which has a dynamic independent existence. But it shows the relationship between the two processes as illustrated in Table 3.1.
The exponential distribution is the only continuous distribution with lack of memory, and the geometric distribution is the only discrete distribution with lack of memory. For example, the next outcome of a throw of a die is independent of the previous outcome. The distributions of the two processes are shown in Table 3.1.
3.6 Properties of the Poisson process
In this section we shall show some fundamental properties of the Poisson process. From the physical model in Sec. 3.5 we have seen that the Poisson process is the most random
92 CHAPTER 3. ARRIVAL PROCESSES
BINOMIAL PROCESS — discrete time; probability of success: p , 0 < p < 1.
POISSON PROCESS — continuous time; intensity of success: λ , λ > 0.

Number of attempts since previous success, or since a random attempt, to get a success:
    GEOMETRIC DISTRIBUTION: p(n) = p · (1−p)^(n−1) , n = 1, 2, . . . ;  m1 = 1/p , σ² = (1−p)/p²
Interval between two successes, or from a random point until next success:
    EXPONENTIAL DISTRIBUTION: f(t) = λ · e^(−λt) , t ≥ 0 ;  m1 = 1/λ , σ² = 1/λ²

Number of attempts to get k successes:
    PASCAL = NEGATIVE BINOMIAL DISTRIBUTION: p(n | k) = C(n−1, k−1) · p^k · (1−p)^(n−k) , n ≥ k ;  m1 = k/p , σ² = k(1−p)/p²
Time interval until the k'th success:
    ERLANG–K DISTRIBUTION: f_k(t) = ((λt)^(k−1)/(k−1)!) · λ · e^(−λt) , t ≥ 0 ;  m1 = k/λ , σ² = k/λ²

Number of successes in n attempts:
    BINOMIAL DISTRIBUTION: p(x | n) = C(n, x) · p^x · (1−p)^(n−x) , x = 0, 1, . . . , n ;  m1 = p·n , σ² = p·n·(1−p)
Number of successes in a time interval t:
    POISSON DISTRIBUTION: f(x, t) = ((λt)^x/x!) · e^(−λt) , x = 0, 1, . . . ;  m1 = λt , σ² = λt

Table 3.1: Correspondence between the distributions of the Binomial process and the Poisson process. A success corresponds to an event or an arrival in a point process. Mean value = m1, variance = σ². For the geometric distribution we may start with a zero class. The mean value is then reduced by one, whereas the variance is the same.
point process that may be found (maximum disorder process). It yields a good description of physical processes when many different factors are behind the total process. In a Poisson process events occur at random during time, and therefore call averages and time averages are identical. This is the so-called PASTA property: Poisson Arrivals See Time Averages.
3.6.1 Palm’s theorem (Superposition theorem)
The fundamental properties of the Poisson process among all other point processes were first discussed by the Swede Conny Palm. He showed that the exponential distribution plays the same role for stochastic point processes (e.g. inter–arrival time distributions), where point processes are superposed, as the Normal distribution does when stochastic variables are added (the central limit theorem).
[Figure: processes 1, 2, . . . , N, each with events (×) marked along a time axis, superposed into a total process; a random point of time is indicated on the total process.]
Figure 3.7: By superposition of N independent point processes we obtain, under certain assumptions, a process which locally is a Poisson process.
Theorem 3.1 Palm's theorem: by superposition of many independent point processes the resulting total process will locally be a Poisson process.
The term "locally" means that we consider time intervals which are so short that each process contributes at most one event during this interval. This is a natural requirement, since no process may dominate the total process (similar conditions are assumed for the central limit theorem). The theorem is valid only for simple point processes. If we consider a random point of time in a certain process, then the time until the next arrival is given by (2.32).
We superpose N processes into one total process. By appropriate choice of the time unit the mean distance between arrivals in the total process is kept constant, independent of N. The time from a random point of time to the next event in the total process is then given by (2.32):

    P{T ≤ t} = 1 − ∏_{i=1}^{N} {1 − V_i(t/N)} . (3.43)
If all sub-processes are identical, we get:
    P{T ≤ t} = 1 − {1 − V(t/N)}^N . (3.44)
From (2.32) and (3.18) we find (letting m1 = 1):

    lim_{Δt→0} v(Δt) = 1 ,

and thus:

    V(Δt) = ∫_0^{Δt} 1 dt = Δt . (3.45)
Therefore, we get from (3.44) by letting the number of sub-processes increase to infinity:

    P{T ≤ t} = lim_{N→∞} {1 − (1 − t/N)^N} = 1 − e^(−t) , (3.46)
which is the exponential distribution. We have thus shown that by superposition of N identical processes we locally get a Poisson process. In a similar way we may superpose non-identical processes and locally obtain a Poisson process.
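Palm's limit can be illustrated by simulation. The sketch below superposes N renewal processes with uniformly distributed (hence clearly non-exponential) inter-arrival times, keeping the total rate fixed, and estimates the squared coefficient of variation of the pooled intervals; for a Poisson process this is 1 (form factor 2). All numerical choices (horizon, seed) are illustrative:

```python
import random
import statistics

def pooled_cv2(N, horizon=20_000.0, rng=None):
    """Superpose N renewal processes with uniform(0, 2N) inter-arrival
    times (mean N each, so total rate ~ 1) and return the squared
    coefficient of variation of the intervals of the pooled process."""
    rng = rng or random.Random(7)
    events = []
    for _ in range(N):
        t = rng.uniform(0.0, 2.0 * N)
        while t < horizon:
            events.append(t)
            t += rng.uniform(0.0, 2.0 * N)
    events.sort()
    gaps = [b - a for a, b in zip(events, events[1:])]
    return statistics.pvariance(gaps) / statistics.mean(gaps) ** 2

print(round(pooled_cv2(1), 2))    # a single uniform renewal process: CV^2 = 1/3
print(round(pooled_cv2(100), 2))  # close to 1, i.e. locally Poisson
```

With N = 1 the intervals are uniform (CV² = 1/3); with N = 100 the pooled process is already close to exponential intervals, as the theorem predicts.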
Example 3.6.1: Life-time of a route in an ad-hoc network
A route in a network consists of a number of links connecting the end-points of the route (Chap. 8). In an ad-hoc network links exist for a limited time period. The life-time of a route is therefore the time until the first link is disconnected. From Palm's theorem we see that the life-time of the route tends to be exponentially distributed. □
Corollary to Palm's theorem (Poisson superposition theorem): By superposition of N independent Poisson processes we obtain a Poisson process.
This is the only case in which we obtain an exact Poisson process. It can be proven (1) by remembering that the smallest of N exponential distributions is itself an exponential distribution (Example 2.2.7) (interval representation), or (2) by observing that the sum of N Poisson distributions is a Poisson distribution (number representation).
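The interval-representation argument can be checked directly: the time until the first event of the superposed process is the minimum of N exponentially distributed variables, which is again exponential with the sum of the rates. A quick sketch (the rates are chosen arbitrarily):

```python
import random
import statistics

rng = random.Random(3)
rates = [0.5, 1.5, 3.0]          # three independent Poisson streams
total = sum(rates)               # superposed stream rate = 5.0

samples = [min(rng.expovariate(r) for r in rates) for _ in range(100_000)]
mean = statistics.mean(samples)
cv2 = statistics.pvariance(samples) / mean**2

print(round(mean, 3))  # ~ 1/total = 0.2
print(round(cv2, 2))   # ~ 1, as for an exponential distribution
```

The sample mean matches 1/(λ1+λ2+λ3), and the squared coefficient of variation is 1, confirming the exponential form.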
3.6.2 Raikov’s theorem (Decomposition theorem)
A similar theorem, the decomposition theorem, is valid when we split a point process into sub-processes in a random way. If there are N times fewer events in a sub-process, then it is natural to reduce the time axis by a factor N.
Theorem 3.2 Raikov's theorem: by a random decomposition of a point process into sub-processes, the individual sub-process converges to a Poisson process when the probability that an event belongs to the sub-process tends to zero.
This is also indicated by the following general result. If we generate a sub-process by random splitting of a point process, choosing an event with probability p_i, i = 1, 2, . . . , N, then the sub-process has the form factor ε_i:

    ε_i = 2 + p_i · (ε − 2) , (3.47)
where ε is the form factor of the original process. When p_i approaches zero, the form factor becomes 2, as for the exponential distribution. The result is only exact when the original process is a Poisson process:
Corollary to Raikov's theorem (Poisson splitting theorem): By splitting a Poisson process into N sub-processes, each sub-process becomes an independent Poisson process.
This can be shown both by interval representation and number representation.
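Formula (3.47) can also be verified by simulation. The sketch below thins a renewal process with Erlang-2 intervals (form factor ε = 1.5), keeping each event with probability p, and estimates the form factor m2/m1² of the intervals between kept events; the event count and seed are illustrative:

```python
import random
import statistics

def thinned_form_factor(p_keep, n_events=300_000, seed=5):
    """Thin a renewal process with Erlang-2 inter-arrival times
    (mean 1, form factor 1.5); return the form factor m2/m1^2 of
    the intervals between the kept events."""
    rng = random.Random(seed)
    t, last, gaps = 0.0, None, []
    for _ in range(n_events):
        t += rng.expovariate(2.0) + rng.expovariate(2.0)   # Erlang-2 interval
        if rng.random() < p_keep:
            if last is not None:
                gaps.append(t - last)
            last = t
    m1 = statistics.mean(gaps)
    m2 = statistics.mean(g * g for g in gaps)
    return m2 / m1**2

for p in (1.0, 0.5, 0.1):
    print(p, round(thinned_form_factor(p), 2), 2 + p * (1.5 - 2))  # sim vs (3.47)
```

As p decreases, the estimated form factor moves from 1.5 towards 2, matching ε_i = 2 + p·(ε − 2).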
In addition to superposition and decomposition (merge and split, or join and fork), we can perform another operation on a point process, namely translation (displacement) of the individual events. When this translation for every event is a random variable, independent of all other events, an arbitrary point process will converge to a Poisson process.
As concerns point processes occurring in real life, we may, according to the above, expect that they are Poisson processes when a sufficiently large number of independent conditions for having an event are fulfilled. This is why the Poisson process is, for example, a good description of the arrival process to a local exchange, which usually is generated by many independent local subscribers.
3.6.3 Uniform distribution – a conditional property
In Sec. 3.5 we have seen that a uniform distribution in a very large interval corresponds to a Poisson process. The inverse property is also valid (proof left out):
Theorem 3.3 If for a Poisson process we have n arrivals within an interval of duration t, then these arrivals are uniformly distributed within this interval.
The length of this interval can itself be a random variable if it is independent of the Poisson process. This is for example the case in traffic measurements with variable measuring intervals (Chap. 13). The theorem can be shown both from the Poisson distribution (number representation) and from the exponential distribution (interval representation).
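Theorem 3.3 is easy to illustrate numerically: generate Poisson arrival epochs in a fixed interval and check that, pooled over many realizations, they have the moments of a uniform distribution. (Interval length, rate, and seed below are arbitrary.)

```python
import random
import statistics

rng = random.Random(11)
t_end, lam = 10.0, 3.0

epochs = []
for _ in range(2_000):                 # many independent realizations
    t = rng.expovariate(lam)
    while t < t_end:
        epochs.append(t)
        t += rng.expovariate(lam)

# uniform on (0, 10): mean 5, variance 100/12 = 8.33
print(round(statistics.mean(epochs), 2))
print(round(statistics.pvariance(epochs), 1))
```

The pooled arrival epochs have mean t/2 and variance t²/12, the moments of the uniform distribution on (0, t).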
3.7 Generalization of the stationary Poisson process
The Poisson process has been generalized in many ways. In this section we only consider the interrupted Poisson process; further generalizations are MMPP (Markov Modulated Poisson Processes) and MAP (Markov Arrival Processes).
3.7.1 Interrupted Poisson process (IPP)
Due to its lack of memory the Poisson process is very easy to apply. In some cases, however, the Poisson process is not flexible enough to describe a real arrival process, as it has only one parameter. Kuczura (1973 [78]) proposed a generalization which has been widely used.
The idea of the generalization comes from the overflow problem (Fig. 3.8 & Sec. 6.4). Customers arriving at the system will first try to be served by a primary system with limited capacity (n servers). Arriving customers are routed to the overflow system only when the primary system is busy. During the busy periods customers arrive at the overflow system according to a Poisson process with intensity λ. During the non-busy periods no calls arrive at the overflow system, i.e. the arrival intensity is zero. Thus we can consider the arrival process to the overflow system as a Poisson process which is switched either on or off (Fig. 3.9). As a simplified model of these on (off) intervals, Kuczura used exponentially distributed time intervals with intensity γ (ω). He showed that this corresponds to hyper–exponentially distributed inter–arrival times to the overflow link, which are illustrated by a phase diagram in Fig. 3.10. It can be shown that the parameters are related as follows:
    λ = p·λ1 + (1−p)·λ2 ,

    λ·ω = λ1·λ2 , (3.48)

    λ + γ + ω = λ1 + λ2 .
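Given (λ, γ, ω), the equivalent hyper-exponential parameters follow from (3.48): λ1 and λ2 are the roots of x² − (λ+γ+ω)·x + λ·ω = 0, after which p = (λ − λ2)/(λ1 − λ2). A small sketch (the numeric values are arbitrary illustrations):

```python
import math

def ipp_to_h2(lam, gamma, omega):
    """Solve (3.48) for the hyper-exponential parameters (p, lam1, lam2)."""
    s = lam + gamma + omega             # lam1 + lam2
    prod = lam * omega                  # lam1 * lam2
    d = math.sqrt(s * s - 4.0 * prod)   # discriminant, always >= 0 here
    lam1, lam2 = (s + d) / 2.0, (s - d) / 2.0
    p = (lam - lam2) / (lam1 - lam2)
    return p, lam1, lam2

p, l1, l2 = ipp_to_h2(lam=1.0, gamma=0.5, omega=0.25)
print(round(p * l1 + (1 - p) * l2, 10))  # = lam
print(round(l1 * l2, 10))                # = lam * omega
print(round(l1 + l2, 10))                # = lam + gamma + omega
```

All three relations in (3.48) are satisfied by construction, which makes the mapping convenient when fitting an IPP to measured overflow traffic.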
Because a hyper–exponential distribution with two phases can be transformed into a Cox–2 distribution (Sec. 2.3.3), the IPP arrival process is a Cox–2 arrival process as shown in
[Figure: total traffic (erlang) offered to a primary link with n channels; traffic overflowing to the overflow link; ON/OFF periods of the overflow stream indicated.]
Figure 3.8: Overflow system with Poisson arrival process (intensity λ). Normally, calls arrive to the primary group. During periods when all n trunks in the primary group are busy, all calls are offered to the overflow group.
[Figure: a Poisson process (rate λ) feeding a switch controlled by a two-state (on/off) Markov process with rates γ and ω; while the switch is off, arrivals are ignored; the output is the IPP arrival process.]
Figure 3.9: Illustration of the interrupted Poisson process (IPP) (cf. Fig. 3.8). The position of the switch is controlled by a two-state Markov process.
[Figure: phase diagram of a hyper–exponential distribution: with probability p an exponential phase with intensity λ1, with probability 1−p an exponential phase with intensity λ2.]
Figure 3.10: The interrupted Poisson process is equivalent to a hyper–exponential arrival process (3.48).
Fig. 2.13. We have three parameters available, whereas the Poisson process has only one parameter. This makes it more flexible for modelling empirical data.
3.7.2 Batched Poisson process
We consider an arrival process where events occur according to a Poisson process with rate λ. At each event a batch of calls (packets, jobs) arrives simultaneously. The distribution of the batch size is in the general case a discrete distribution p(i), (i = 1, 2, . . .). The batch size is at least one. In the Poisson arrival process the batch size is always one. We choose the simplest case, where the distribution is a geometric distribution (Tab. 3.1):
    p(i) = p·(1−p)^(i−1) , i = 1, 2, . . . , (3.49)

    m1 = 1/p , (3.50)

    σ² = (1−p)/p² . (3.51)
The number of events during a time interval t then becomes a stochastic sum (Sec. 2.3.3), where N (2.76) is Poisson distributed with mean value and variance λt, and T (2.77) is the
geometric distribution given above. The mean value of the number of events during a time interval t is (2.82):
    m_{1,s} = λt · (1/p) , (3.52)

and the variance is (2.84):

    σ²_s = λt · (1−p)/p² + (1/p)² · λt = λt · (2−p)/p² . (3.53)
The index of dispersion of counts (3.11) becomes:
    IDC = σ²_s / m_{1,s} = (2−p)/p . (3.54)
For p = 1 the geometric distribution always takes the value one, and we get a Poisson process. For p < 1 the process is more bursty than the Poisson process.
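The IDC in (3.54) can be checked by simulating counts of a batched Poisson process with geometric batch sizes (all numeric values below are illustrative):

```python
import random
import statistics

def batched_counts(lam, p, t, n_intervals, rng):
    """Counts per interval of length t: batch epochs form a Poisson(lam)
    process, and each batch size is geometric(p) on 1, 2, ..."""
    counts = []
    for _ in range(n_intervals):
        total, s = 0, rng.expovariate(lam)
        while s < t:
            k = 1
            while rng.random() >= p:   # geometric batch size, mean 1/p
                k += 1
            total += k
            s += rng.expovariate(lam)
        counts.append(total)
    return counts

lam, p, t = 2.0, 0.4, 5.0
c = batched_counts(lam, p, t, 50_000, random.Random(2))
idc = statistics.pvariance(c) / statistics.mean(c)
print(round(idc, 2), (2 - p) / p)   # simulated IDC vs (3.54)
```

For these parameters (3.54) gives IDC = 4, i.e. a process four times as bursty (in this sense) as a Poisson process.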
Chapter 4
Erlang’s loss system and B–formula
In this and the following chapters we consider the classical teletraffic theory developed by Erlang (Denmark), Engset (Norway) and Fry & Molina (USA). It has successfully been applied for more than 80 years. In this chapter we consider the fundamental Erlang-B formula. In Sec. 4.1 we specify the assumptions for the model. Sec. 4.2 deals with infinite capacity, which results in a Poisson distributed number of busy channels. In Sec. 4.3 we consider a limited number of channels and obtain the truncated Poisson distribution and Erlang's B-formula. Sec. 4.4 describes a standard procedure for dealing with state transition diagrams (STD), which are the key to classical teletraffic theory. We also derive an accurate recursive formula for numerical evaluation of Erlang's B-formula (Sec. 4.5). In Sec. 4.6 properties of Erlang's B-formula are studied. Thus we consider a non-integral number of channels, insensitivity, derivatives, inverse formulæ, and approximations. Sec. 4.7 considers the Blocked Calls Held model, which is useful for many applications. Finally, in Sec. 4.8 we study the basic principles of dimensioning, where we balance Grade–of–Service (GoS) against costs of the system.
4.1 Introduction
Erlang's B-formula is based on the following model, described by the three elements structure, strategy, and traffic (Fig. 1.1):
a. Structure: We consider a system of n identical channels (servers, trunks, slots) working in parallel. This is called a homogeneous group.
b. Strategy: A call arriving at the system is accepted for service if at least one channel is idle. Every call needs one and only one channel. We say the group has full accessibility. Often the term full availability is used, but this terminology will only be used in connection with reliability and dependability. If all channels are busy the system
is congested and call attempts are blocked. A blocked (= rejected, lost) call attempt disappears without any after-effect, as it may be accepted by an alternative route. This strategy is the most important one and has been applied with success for many years. This is called Erlang's loss model or the Blocked Calls Cleared = BCC model. Usually, we assume that the service time is independent of both the arrival process and other service times.
Within a full accessible group we may look for an idle channel in different ways:
– Random hunting: we choose a random channel among the idle channels. On average every channel will carry the same traffic.
– Ordered hunting: the channels are numbered 1, 2, . . . , n, and we search for an idle channel in this order, always starting with channel one (ordered hunting with homing). This is also called sequential hunting. A channel will on the average carry more traffic than the following channels.
– Cyclic hunting: this is similar to ordered hunting, but without homing. We continue hunting for an idle channel starting from the position where we ended last time. Also in this case every channel will on the average carry the same traffic.
The hunting takes place momentarily. If all channels are busy, a call attempt is blocked. The blocking probability is independent of the hunting mode.
c. Traffic: In the following we assume that:
– The arrival process is a Poisson process with rate λ, and
– The service times are exponentially distributed with intensity µ (corresponding to a mean value 1/µ).
This type of traffic is called Pure Chance Traffic type One, PCT-I. The traffic process then becomes a pure birth and death process, a simple Markov process which is easy to deal with mathematically.
Definition of offered traffic: We define the offered traffic as the traffic carried when the number of channels is infinite (1.2). In Erlang's loss model with Poisson arrival process this definition of offered traffic is equivalent to the average number of call attempts per mean holding time:
    A = λ · (1/µ) = λ/µ . (4.1)
Scenarios: We consider two cases:
1. n = ∞: Poisson distribution (Sec. 4.2),

2. n < ∞: Truncated Poisson distribution (Sec. 4.3).
Insensitivity: We shall later see that this model is insensitive to the holding time distribution, i.e. only the mean holding time is of importance for the state probabilities; the type of distribution has no influence on them.
Performance measures: The most important grade-of-service measures for loss systems are time congestion E, call congestion B, and traffic (load) congestion C, as described in Sec. 1.9. They are identical for Erlang's loss model because of the Poisson arrival process (PASTA property: Poisson Arrivals See Time Averages).
4.2 Poisson distribution
We assume the arrival process is a Poisson process and that the holding times are exponentially distributed, i.e. we consider PCT-I traffic. The number of channels is assumed to be infinite, so we never observe congestion (blocking).
4.2.1 State transition diagram
[Figure: one-dimensional state transition diagram with states 0, 1, . . . , i−1, i, . . . ; every upward transition has rate λ; the downward transition from state i has rate i·µ.]
Figure 4.1: The Poisson distribution. State transition diagram for a system with infinitely many channels, Poisson arrival process (λ), and exponentially distributed holding times (µ).
We define the state of the system, [ i ], as the number of busy channels i (i = 0, 1, 2, . . .). In Fig. 4.1 all states of the system are shown as circles, and the rates by which the traffic process changes from one state to another are shown on the arrows between the states. As the process is simple (Sec. 3.2.3), we only have transitions to neighboring states. If we assume the system is in statistical equilibrium, then the system will be in state [ i ] the proportion of time p(i), where p(i) is the probability of observing the system in state [ i ] at a random point of time, i.e. a time average. When the process is in state [ i ] it will jump to state [ i+1 ] λ times per time unit and to state [ i−1 ] i·µ times per time unit. Of course, the process will leave state [ i ] the moment there is a state transition. When i channels are busy, each channel will terminate calls with rate µ, so that the total service rate is i·µ (Palm's theorem 3.1). The future development of the traffic process depends only upon the present state, not upon how the process came to this state (the Markov property).
The equations describing the states of the system under the assumption of statistical equilibrium can be set up in two ways, both of which are based on the principle of global balance:
104 CHAPTER 4. ERLANG'S LOSS SYSTEM AND B–FORMULA

a. Node equations: In statistical equilibrium the number of transitions per time unit into state [ i ] equals the number of transitions out of state [ i ]. The equilibrium state probability p(i) denotes the proportion of time (total time per time unit) the process spends in state [ i ]. The average number of jumps from state [ 0 ] to state [ 1 ] is λ · p(0) per time unit, and the average number of jumps from state [ 1 ] to state [ 0 ] is µ · p(1) per time unit. Thus we have for state i = 0:
λ · p(0) = µ · p(1) , i = 0 . (4.2)
For state i > 0 we get the following equilibrium or balance equation:
λ · p(i−1) + (i+ 1)µ · p(i+1) = (λ+ i µ) · p(i) , i > 0 . (4.3)
Node equations are always applicable, also for state transition diagrams in more dimensions, which we will consider in later chapters.
b. Cut equations: In many cases we may exploit a simple structure of the state transition diagram. If for example we put a fictitious cut between the states [ i−1 ] and [ i ] (corresponding to a global cut around the states [ 0 ], [ 1 ], . . . , [ i−1 ]), then in statistical equilibrium the traffic process changes from state [ i−1 ] to [ i ] the same number of times as it changes from state [ i ] to [ i−1 ]. In statistical equilibrium we thus have per time unit:
λ · p(i−1) = i µ · p(i) , i = 1, 2, . . . . (4.4)
Cut equations are easy to apply for one-dimensional state transition diagrams, whereas node equations are applicable for any diagram.
As the system will always be in some state, we have the normalization restriction:
∑_{i=0}^{∞} p(i) = 1 , p(i) ≥ 0 . (4.5)
We notice that the node equations (4.3) involve three state probabilities, whereas the cut equations (4.4) only involve two. Therefore, it is easier to solve the cut equations. Loss systems will always be able to enter statistical equilibrium because they have a limited number of states. We do not specify the mathematical conditions for statistical equilibrium in this chapter.
4.2. POISSON DISTRIBUTION 105
4.2.2 Derivation of state probabilities
For one-dimensional state transition diagrams the application of cut equations is the most appropriate approach. From Fig. 4.1 we get the following balance equations:
λ · p(0) = µ · p(1) ,
λ · p(1) = 2µ · p(2) ,
. . . . . .
λ · p(i−2) = (i− 1)µ · p(i−1) ,
λ · p(i−1) = i µ · p(i) ,
λ · p(i) = (i+ 1)µ · p(i+1) ,
. . . . . . .
Expressing all state probabilities by p(0) and introducing the offered traffic A = λ/µ we get:
p(0) = p(0) ,

p(1) = A · p(0) ,

p(2) = (A/2) · p(1) = (A^2/2!) · p(0) ,

. . .

p(i−1) = (A/(i−1)) · p(i−2) = (A^{i−1}/(i−1)!) · p(0) ,

p(i) = (A/i) · p(i−1) = (A^i/i!) · p(0) ,

p(i+1) = (A/(i+1)) · p(i) = (A^{i+1}/(i+1)!) · p(0) ,

. . .
The normalization constraint (4.5) implies:
1 = ∑_{j=0}^{∞} p(j) = p(0) · ( 1 + A + A^2/2! + · · · + A^i/i! + · · · ) = p(0) · e^A ,

p(0) = e^{−A} .
Thus the state probabilities become Poisson distributed:
p(i) = (A^i/i!) · e^{−A} , i = 0, 1, 2, . . . . (4.6)
The number of busy channels at a random point of time is thus Poisson distributed with both mean value (3.37) and variance (3.38) equal to the offered traffic A. We have earlier shown that the number of calls in a fixed time interval also is Poisson distributed (3.36). Thus the Poisson distribution is valid both in time and in space.
We would, of course, obtain the same solution by using node equations.
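A small numerical sketch of this result: we can build the state probabilities directly from the cut equations (4.4), starting from p(0) = e^{−A}, and check that they sum to one and that both mean and variance equal A. The traffic value A = 5 erlang and the truncation point are arbitrary choices for the check.

```python
import math

def poisson_state_probs(A, i_max=100):
    """Build p(i) from the cut equations p(i) = (A/i) * p(i-1), with p(0) = exp(-A)."""
    p = [math.exp(-A)]             # p(0) = e^(-A)
    for i in range(1, i_max + 1):
        p.append(p[-1] * A / i)    # cut equation (4.4): lambda*p(i-1) = i*mu*p(i)
    return p

A = 5.0                            # offered traffic in erlang (arbitrary test value)
p = poisson_state_probs(A)

mean = sum(i * pi for i, pi in enumerate(p))
var = sum(i * i * pi for i, pi in enumerate(p)) - mean ** 2
# mean and var should both be close to A, confirming peakedness Z = 1 (4.7).
```

The truncation at i_max = 100 is numerically harmless here because the Poisson tail beyond that point is negligible for moderate A.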
4.2.3 Traffic characteristics of the Poisson distribution
From a dimensioning point of view, the system with unlimited capacity is of little interest in practice. The traffic characteristics of this system become:
Time congestion: E = 0 ,

Call congestion: B = 0 ,

Carried traffic: Y = ∑_{i=1}^{∞} i · p(i) = A ,

Lost traffic: A_ℓ = A − Y = 0 ,

Traffic congestion: C = 0 .
Only ordered hunting makes sense in this case, and the traffic carried by the i'th channel is later given in (4.14).
Peakedness Z is defined as the ratio between the variance and the mean value of the distribution of state probabilities (cf. IDC, Index of Dispersion of Counts (3.11)). For the Poisson distribution we find from (3.37) & (3.38):
Z = σ²/m₁ = A/A = 1 . (4.7)
The peakedness has dimension [number of channels] and is different from the coefficient of variation, which has no dimension (2.12).
Duration of state [ i ]:
In state [ i ] the process has the total intensity (λ + i µ) away from the state. Therefore, the time until the first transition (state transition to either [ i+1 ] or [ i−1 ]) is exponentially distributed (Sec. 2.2.7):
fi(t) = (λ+ i µ)e−(λ+ i µ) t , t ≥ 0 .
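The exponential sojourn time can also be seen as the minimum of the competing timers in state [ i ]: one Exp(λ) arrival timer and i running Exp(µ) service timers. A small simulation sketch (the rates λ = 1.5, µ = 1 and the state i = 3 are arbitrary test values) checks that the mean sojourn time is close to 1/(λ + iµ):

```python
import random

random.seed(1)                      # reproducible sketch
lam, mu, i = 1.5, 1.0, 3            # hypothetical arrival rate, service rate, state

# In state [i] an Exp(lam) arrival timer competes with i Exp(mu) service timers;
# the sojourn time is the minimum of all of them, which is Exp(lam + i*mu).
n_samples = 100_000
total = 0.0
for _ in range(n_samples):
    timers = [random.expovariate(lam)] + [random.expovariate(mu) for _ in range(i)]
    total += min(timers)
mean_sojourn = total / n_samples    # should be close to 1 / (lam + i*mu)
```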
Example 4.2.1: Simple Aloha protocol
In example 3.5.2 we considered the slotted Aloha protocol, where the time axis was divided into time slots. We now consider the same protocol in continuous time. We assume that packets arrive according to a Poisson process and that they are of constant length h. The system corresponds to the traffic case resulting in a Poisson distribution, which can be shown to be valid also for constant holding times. The state probabilities are Poisson distributed (4.6) with A = λh. A packet is only transmitted correctly if:
a: the system is in state [ 0 ] at the arrival time, and
b: no other packets arrive during the service time h.
We find:

p_correct = p(0) · e^{−λh} = e^{−2A} .
The traffic transmitted correctly thus becomes:
A_correct = A · p_correct = A · e^{−2A} .
This is the proportion of the time axis which is utilized efficiently. It has an optimum for λh = A = 1/2, where the derivative with respect to A equals zero:
∂A_correct/∂A = e^{−2A} · (1 − 2A) ,

max A_correct = 1/(2e) = 0.1839 . (4.8)
We thus obtain a maximum utilization equal to 0.1839 when we offer 0.5 erlang. This is half the value we obtained for a slotted system by synchronizing the satellite transmitters. The models are compared in Fig. 3.6. □
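A quick numerical check of (4.8): scanning the throughput A·e^{−2A} over a grid of offered traffic values locates the optimum at A = 0.5 with maximum utilization 1/(2e) ≈ 0.1839.

```python
import math

def aloha_throughput(A):
    """Traffic transmitted correctly on the unslotted Aloha channel: A * exp(-2A)."""
    return A * math.exp(-2.0 * A)

# Scan the offered traffic numerically to locate the optimum.
grid = [a / 1000.0 for a in range(1, 2001)]   # A from 0.001 to 2.000 erlang
best_A = max(grid, key=aloha_throughput)
best_util = aloha_throughput(best_A)          # should equal 1/(2e) at A = 0.5
```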
4.3 Truncated Poisson distribution
We still assume Pure Chance Traffic Type I (PCT-I) as in Sec. 4.2. The number of channels is now limited so that n is finite. The number of states becomes n+1, and the state transition diagram is shown in Fig. 4.2.
Figure 4.2: The truncated Poisson distribution. State transition diagram for a system with a limited number of channels (n), Poisson arrival process (λ), and exponential service times (µ).
4.3.1 State probabilities
We get similar cut equations as for the Poisson case, but the state space is limited to {0, 1, . . . , n}, and the normalization condition (4.5) now becomes:
p(0) = [ ∑_{j=0}^{n} A^j/j! ]^{−1} .
We get the so-called truncated Poisson distribution:
p(i) = (A^i/i!) / ( ∑_{j=0}^{n} A^j/j! ) , 0 ≤ i ≤ n . (4.9)
The name truncated means cut-off and is due to the fact that the solution may be interpreted as a conditional Poisson distribution p(i | i ≤ n). This is seen by multiplying both numerator and denominator by e^{−A}. It is not a trivial fact that we are allowed to truncate the Poisson distribution so that the relative ratios between the state probabilities are unchanged.
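This interpretation is easy to verify numerically: renormalizing the first n+1 Poisson terms gives exactly the conditional distribution p(i | i ≤ n), because the factor e^{−A} cancels. A sketch with arbitrary test values A = 8 erlang and n = 10 channels:

```python
import math

def truncated_poisson(A, n):
    """State probabilities (4.9): Poisson terms A**i / i! renormalized over 0..n."""
    terms = [A ** i / math.factorial(i) for i in range(n + 1)]
    s = sum(terms)
    return [t / s for t in terms]

A, n = 8.0, 10
p = truncated_poisson(A, n)

# Conditional Poisson p(i | i <= n): the same numbers, since exp(-A) cancels.
full = [A ** i / math.factorial(i) * math.exp(-A) for i in range(n + 1)]
cond = [f / sum(full) for f in full]
```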
4.3.2 Traffic characteristics of Erlang’s B-formula
Knowing the state probabilities, we are able to find all performance measures defined by state probabilities.
Time congestion:
The probability that all n channels are busy at a random point of time is equal to the proportion of time all channels are busy (time average). This is obtained from (4.9) for i = n:
E_n(A) = p(n) = (A^n/n!) / ( 1 + A + A^2/2! + · · · + A^n/n! ) . (4.10)
This is Erlang's famous B-formula (1917, [12]). It is denoted by E_n(A) = E_{1,n}(A), where index "one" refers to the alternative name Erlang's first formula.
Call congestion:
The probability that a random call attempt will be lost is equal to the proportion of call attempts blocked. If we consider one time unit, we find by summation over all possible states:
B_n(A) = λ · p(n) / ( ∑_{ν=0}^{n} λ · p(ν) ) = p(n) = E_n(A) . (4.11)
The denominator is the average number of call attempts per time unit, and the numerator isthe average number of blocked calls per time unit.
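Direct evaluation of (4.10) involves large powers and factorials and overflows for large n. In practice E_n(A) is therefore computed with the well-known numerically stable recursion E_0(A) = 1, E_i(A) = A·E_{i−1}(A)/(i + A·E_{i−1}(A)), which is algebraically equivalent to (4.10) but is not derived in this section. A sketch comparing the two forms:

```python
import math

def erlang_b(A, n):
    """Erlang's B-formula E_n(A) via the standard stable recursion
    E_0 = 1,  E_i = A*E_{i-1} / (i + A*E_{i-1})."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def erlang_b_direct(A, n):
    """Direct evaluation of (4.10); only usable for moderate n."""
    return (A ** n / math.factorial(n)) / sum(A ** j / math.factorial(j) for j in range(n + 1))

A, n = 8.0, 10          # arbitrary test values
E = erlang_b(A, n)      # time congestion = call congestion (PASTA)
Y = A * (1 - E)         # carried traffic (4.12)
```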
Carried traffic:
If we use the cut equation between states [ i−1 ] and [ i ] we get:
Y_n(A) = ∑_{i=1}^{n} i · p(i) = ∑_{i=1}^{n} (λ/µ) · p(i−1) = A · (1 − p(n)) ,

Y_n(A) = A · (1 − E_n(A)) , (4.12)
where A is the offered traffic. The carried traffic will be less than both A and n.
Lost traffic:

A_ℓ = A − Y_n(A) = A · E_n(A) , 0 ≤ A < ∞ .
Traffic congestion:
C_n(A) = (A − Y)/A = E_n(A) , 0 ≤ Y < n .
We thus have E = B = C because the arrival intensity λ is independent of the state. This is called the PASTA property, Poisson Arrivals See Time Averages, which is valid for all systems with Poisson arrival processes. In all other cases at least two of the three congestion measures will be different. Erlang's B-formula is shown graphically in Fig. 4.3 for some selected values of the parameters.
Traffic carried by the i’th channel (utilization yi of channel i) :
1. Random hunting and cyclic hunting: In this case all channels on the average carry the same traffic. The total carried traffic is independent of the hunting strategy, and we find the utilization:
y_i = y = Y/n = A · (1 − E_n(A)) / n . (4.13)
This function is shown in Fig. 4.4. We observe that for a given congestion E we obtain the highest utilization for large channel groups (economy of scale).
2. Ordered hunting = sequential hunting: The traffic carried by channel i is the difference between the traffic lost from i−1 channels and the traffic lost from i channels:
y_i = A · (E_{i−1}(A) − E_i(A)) . (4.14)
It should be noticed that the traffic carried by channel i is independent of the number of channels after i in the hunting order. Thus channels after channel i have no influence upon the traffic carried by channel i. There is no feedback. As the total carried traffic is independent of the hunting mode, we have:
Y = ∑_{i=1}^{n} y_i .
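The telescoping structure of (4.14) makes this consistency easy to check numerically: the per-channel traffics y_i sum to the total carried traffic Y_n(A) = A·(1 − E_n(A)). A sketch using the standard Erlang B recursion (A = 8 erlang and n = 10 channels are arbitrary test values):

```python
def erlang_b(A, n):
    """Erlang's B-formula via the standard stable recursion (E_0 = 1)."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

A, n = 8.0, 10
# Traffic carried by channel i under ordered hunting (4.14).
y = [A * (erlang_b(A, i - 1) - erlang_b(A, i)) for i in range(1, n + 1)]
Y = A * (1 - erlang_b(A, n))    # total carried traffic (4.12)
```

The sum telescopes to A·(E_0 − E_n) = A·(1 − E_n), and the per-channel traffics decrease with the position in the hunting order.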
Improvement function:
This denotes the increase in carried traffic when the number of channels is increased by one, from n to n+1:
F_n(A) = Y_{n+1}(A) − Y_n(A)

= A · (1 − E_{n+1}(A)) − A · (1 − E_n(A)) , (4.15)

F_n(A) = A · (E_n(A) − E_{n+1}(A)) . (4.16)
We have 0 ≤ F_n(A) < 1, as one channel can at most carry one erlang. The improvement function F_n(A) is tabulated in Moe's Principle (Arne Jensen, 1950 [58]) and shown in Fig. 4.5. In Sec. 4.8.2 we consider the application of this principle for optimal economic dimensioning.
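A sketch of the improvement function using (4.16) and the standard Erlang B recursion; it confirms 0 ≤ F_n(A) < 1 and the equivalence F_n(A) = Y_{n+1}(A) − Y_n(A) from (4.15). The traffic value A = 8 erlang is an arbitrary test value.

```python
def erlang_b(A, n):
    """Erlang's B-formula via the standard stable recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def improvement(A, n):
    """F_n(A) = A * (E_n(A) - E_{n+1}(A)): extra traffic carried by channel n+1 (4.16)."""
    return A * (erlang_b(A, n) - erlang_b(A, n + 1))

A = 8.0
F = [improvement(A, n) for n in range(20)]   # F_0, F_1, ..., F_19
```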
Peakedness:
This is defined as the ratio between the variance and the mean value of the distribution of the number of busy channels, cf. IDC (3.11). For the truncated Poisson distribution it can be shown that:
Z = Z_n(A) = σ²/m = 1 − A · (E_{n−1}(A) − E_n(A)) = 1 − y_n , (4.17)
where we have used (4.14). The dimension is [channels]. In a group with ordered hunting we may thus estimate the peakedness from observation of the traffic carried by the last channel.
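Formula (4.17) can be cross-checked against a direct computation of variance/mean from the truncated Poisson state probabilities (4.9). A sketch with arbitrary test values A = 8 erlang and n = 10 channels:

```python
import math

def erlang_b(A, n):
    """Erlang's B-formula via the standard stable recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def peakedness_direct(A, n):
    """Variance/mean computed directly from the truncated Poisson distribution (4.9)."""
    terms = [A ** i / math.factorial(i) for i in range(n + 1)]
    s = sum(terms)
    p = [t / s for t in terms]
    m = sum(i * pi for i, pi in enumerate(p))
    v = sum(i * i * pi for i, pi in enumerate(p)) - m * m
    return v / m

A, n = 8.0, 10
Z = 1 - A * (erlang_b(A, n - 1) - erlang_b(A, n))   # closed form (4.17)
```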
Duration of state [ i ]:
The total intensity for leaving state [ i ] is equal to (λ + i µ), and therefore the duration of the time spent in state [ i ] (sojourn time) is exponentially distributed with probability density function (pdf):
fi(t) = (λ+ i µ) · e−(λ+ i µ) t , 0 ≤ i < n ,
fn(t) = (nµ) · e−(nµ) t , i = n . (4.18)
The fundamental assumption for the validity of Erlang's B-formula is the Poisson arrival process. According to Palm's theorem this is fulfilled in ordinary telephone systems with many independent subscribers. As the state probabilities are independent of the holding time distribution, the model is very robust. The combined arrival process and service time process are described by a single parameter A. This explains the wide application of the B-formula both in the past and today.
Figure 4.3: Blocking probability E_n(A) as a function of the offered traffic A for various values of the number of channels n (4.9). (Curves shown for n = 1, 2, 5, 10, 20.)
Figure 4.4: The average utilization per channel y (4.13) as a function of the number of channels n for given values of the congestion E. (Curves shown for E = 0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.001, 0.0001.)
4.3. TRUNCATED POISSON DISTRIBUTION 113
[Figure 4.5 (plot omitted in this text version): curves of the improvement function F_n(A) = y_{n+1} versus the number of channels n (0 to 28) for offered traffic A = 1, 2, 5, 10, 20.]

Figure 4.5: Improvement function F_n(A) (4.16) of Erlang's B-formula. By sequential hunting F_n(A) equals the traffic y_{n+1} carried on channel number (n+1).
114 CHAPTER 4. ERLANG’S LOSS SYSTEM AND B–FORMULA
4.4 General procedure for state transition diagrams
The most important tool in teletraffic theory is formulation and solution of models by means of state transition diagrams. From the previous sections we identify the following standard procedure for dealing with state transition diagrams. It consists of a number of steps and is formulated in general terms. The procedure is also applicable for multi-dimensional state transition diagrams, which we consider later. We always go through the following steps:
a. Construction of the state transition diagram.
– Define the states of the system in a unique way,

– Draw the states as circles,

– Consider the states one at a time and draw all possible arrows for transitions away from the state due to:

(a) the arrival process (new arrival or phase shift in the arrival process),

(b) the departure (service) process (service time termination or phase shift).
In this way we obtain the complete state transition diagram.
b. Set up the equations describing the system in equilibrium.
– If the conditions for statistical equilibrium are fulfilled, the steady state equations can be obtained from:
∗ node equations (general),
∗ cut equations.
c. Solve the balance equations assuming statistical equilibrium.
– Express all state probabilities by for example the probability of state [ 0 ], p(0).
– Find p(0) by normalization.
d. Calculate the performance measures expressed by the state probabilities.
For small values of n we let the non-normalized value of the state probability q(0) equal one, and then calculate the relative values q(i), (i = 1, 2, . . .). By normalizing we then find:

\[ p(i) = \frac{q(i)}{Q_n}\,, \qquad i = 0, 1, \ldots, n\,, \tag{4.19} \]

where

\[ Q_n = \sum_{\nu=0}^{n} q(\nu)\,. \tag{4.20} \]

The time congestion becomes:

\[ p(n) = \frac{q(n)}{Q_n} = 1 - \frac{Q_{n-1}}{Q_n}\,. \tag{4.21} \]
For large values of n we should use the procedure described below.
4.4.1 Recursion formula
If q(i) becomes very large (e.g. 10^10), then we may as an intermediate normalization multiply all q(i) by the same constant (e.g. 10^{-10}), as we know that all probabilities are within the interval [0, 1]. In this way we avoid numerical problems. If q(i) becomes very small, then we may truncate the state space, as the density function p(i) often will be bell-shaped (unimodal) and therefore has a maximum. In many cases we are theoretically able to control the error introduced by truncating the state space (Stepanov, 1989 [109]).
We may normalize the state probabilities after each step, which implies more calculations, but ensures a higher accuracy. Let the normalized state probabilities for a system with x−1 channels be given by:

\[ P_{x-1} = \{\, p_{x-1}(0),\; p_{x-1}(1),\; \ldots,\; p_{x-1}(x-2),\; p_{x-1}(x-1) \,\}\,, \qquad x = 1, 2, \ldots, \tag{4.22} \]

where index (x−1) indicates that we consider state probabilities for a system with (x−1) channels. Let us assume we have the following recursion formula for obtaining q_x(x) from r previous state probabilities (often r = 1):

\[ q_x(x) = f\bigl( p_{x-1}(x-1),\, p_{x-1}(x-2),\, \ldots,\, p_{x-1}(x-r) \bigr)\,, \qquad x = 1, 2, \ldots, \tag{4.23} \]
where q_x(x) will be a relative (non-normalized) state probability. We know the normalized state probabilities for (x−1) channels (4.22), and we want to find the normalized state probabilities for a system with x channels. The relative values of the state probabilities do not change when we increase the number of channels by one, so we get:

\[ q_x(i) = \begin{cases} p_{x-1}(i)\,, & i = 0, 1, 2, \ldots, x-1\,, \\ q_x(x)\,, & i = x\,. \end{cases} \tag{4.24} \]

The new normalization constant becomes:

\[ Q_x = \sum_{i=0}^{x} q_x(i) = 1 + q_x(x)\,, \]

because in the previous step we normalized the state probabilities ranging from 0 to x−1 so they add to one. We thus get:

\[ p_x(i) = \begin{cases} \frac{p_{x-1}(i)}{1 + q_x(x)}\,, & i = 0, 1, 2, \ldots, x-1\,, \\[1mm] \frac{q_x(x)}{1 + q_x(x)}\,, & i = x\,. \end{cases} \tag{4.25} \]
The initial value for the recursion is given by p_0(0) = 1. The recursion algorithm thus starts with this value, and we find the state probabilities of a system with one channel more by (4.24) and (4.25). The recursion is numerically very stable, because in (4.25) we divide by a number greater than one.
Example 4.4.1: Calculating probabilities of the Poisson distribution
We may calculate the Poisson distribution (4.6) by the above approach by starting with class zero and stopping at a state i where for example q(i) < 10^{-10}. If we want to calculate the Poisson distribution for very large mean values m_1 = A = λ/μ, then we may start with class m by letting q(m) = 1, where m is equal to the integral part of (m_1 + 1). The relative values of q(i) for both decreasing values (i = m−1, m−2, . . . , 0) and for increasing values (i = m+1, m+2, . . .) will then be decreasing, and we may stop when for example q(i) < 10^{-10} for increasing, respectively decreasing, values (or when i = 0). We normalize the state probabilities in each step. In this way we avoid calculating many classes with state probability less than 10^{-10}, and we also avoid problems with underflow and overflow. 2
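A minimal sketch of this mode-centered computation (the function name, tolerance, and starting class int(A) are our own choices; the text starts at the integral part of m_1 + 1, but any class near the mode works since the relative values decrease away from it). For brevity the sketch normalizes once at the end rather than in each step:

```python
def poisson_from_mode(A, eps=1e-10):
    """Poisson probabilities for mean A, computed outward from the mode.

    Start near the mode with relative value 1, recurse downwards and
    upwards until the relative values drop below eps, then normalize.
    This avoids under-/overflow for large A and skips negligible classes.
    Returns {i: p(i)} for the classes kept.
    """
    m = int(A)                        # q(i) = A^i/i! decreases away from i = m
    q = {m: 1.0}
    i = m                             # downwards: q(i-1) = q(i) * i / A
    while i > 0 and q[i] >= eps:
        q[i - 1] = q[i] * i / A
        i -= 1
    i = m                             # upwards: q(i+1) = q(i) * A / (i+1)
    while q[i] >= eps:
        q[i + 1] = q[i] * A / (i + 1)
        i += 1
    total = sum(q.values())
    return {i: v / total for i, v in q.items()}
```

For A = 1000 the returned dictionary contains only the few hundred classes around i = 1000; the far tails, whose probabilities lie below eps, are never touched.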
Above we calculate all state probabilities. To calculate the time congestion for a loss system we need only store the latest state probability. Let us consider a system with a simple birth and death traffic process with arrival rate λ_i and departure rate i·μ in state i. Then q_x(x) only depends on the previous state probability. By using the cut equation we get the following recursion formula:

\[ q_x(x) = \frac{\lambda_{x-1}}{x\mu}\cdot p_{x-1}(x-1) = \frac{\lambda_{x-1}}{x\mu}\cdot E_{x-1}\,. \tag{4.26} \]
The time congestion for x channels is E_x = p_x(x). Inserting (4.26) into (4.25) we get a simple recursive formula for the time congestion:

\[ E_x = \frac{q_x(x)}{1 + q_x(x)} = \frac{\frac{\lambda_{x-1}}{x\mu}\, E_{x-1}}{1 + \frac{\lambda_{x-1}}{x\mu}\, E_{x-1}} = \frac{\frac{\lambda_{x-1}}{\mu}\, E_{x-1}}{x + \frac{\lambda_{x-1}}{\mu}\, E_{x-1}}\,, \qquad E_0 = 1\,. \tag{4.27} \]
Introducing the inverse time congestion probability I_x = E_x^{-1} we get:

\[ I_x = 1 + \frac{x\mu}{\lambda_{x-1}}\cdot I_{x-1}\,, \qquad I_0 = 1\,. \tag{4.28} \]

This is a general recursion formula for calculating the time congestion for all systems with state-dependent arrival rates λ_i and homogeneous servers.
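As a sketch, (4.28) fits in a few lines of Python (the function name is ours): the caller supplies the state-dependent arrival rates as a function lam(i), and only the running inverse congestion is stored.

```python
def time_congestion(n, lam, mu=1.0):
    """Time congestion E_n of a loss system with arrival rate lam(i)
    in state i and departure rate i*mu, via the inverse recursion
    (4.28): I_x = 1 + (x*mu / lam(x-1)) * I_{x-1}, with I_0 = 1."""
    I = 1.0
    for x in range(1, n + 1):
        I = 1.0 + (x * mu / lam(x - 1)) * I
    return 1.0 / I

# Poisson arrivals, lam(i) = A*mu, reproduce Erlang's B-formula:
# time_congestion(6, lambda i: 2.0) gives E_6(2) = 4/331
```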
4.5 Evaluation of Erlang’s B-formula
For numerical calculations the formula (4.10) is not very appropriate, since both n! and A^n increase quickly, so that overflow will occur in the computer. If we apply (4.27), then we get the recursion formula:

\[ E_x(A) = \frac{A\cdot E_{x-1}(A)}{x + A\cdot E_{x-1}(A)}\,, \qquad E_0(A) = 1\,. \tag{4.29} \]
From a manual calculation point of view, the inverse linear form (4.28) may be simpler:

\[ I_x(A) = 1 + \frac{x}{A}\cdot I_{x-1}(A)\,, \qquad I_0(A) = 1\,, \tag{4.30} \]

where I_n(A) = 1/E_n(A). This recursion formula is exact, and even for large values of (n, A) there are no round-off errors. It is the basic formula for numerous tables of the Erlang B-formula, i.a. the classical table (Palm, 1947 [94]). For very large values of n there are more efficient algorithms. Notice that a recursive formula which is accurate for increasing index usually is inaccurate for decreasing index, and vice versa.
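Both recursions translate directly into a few lines of Python (a sketch; the function names are ours). Each needs only the previous value, so memory is constant and the loops are stable for increasing x:

```python
def erlang_b(n, A):
    """Erlang's B-formula E_n(A) by the stable recursion (4.29)."""
    E = 1.0                           # E_0(A) = 1
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def erlang_b_inverse(n, A):
    """The same value via the inverse linear form (4.30)."""
    I = 1.0                           # I_0(A) = 1
    for x in range(1, n + 1):
        I = 1.0 + (x / A) * I
    return 1.0 / I
```

For example, erlang_b(6, 2.0) returns 4/331 ≈ 0.0121; the two forms agree to machine precision even for hundreds of channels.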
Example 4.5.1: Erlang's loss system
We consider an Erlang-B loss system with n = 6 channels, arrival rate λ = 2 calls per time unit, and departure rate μ = 1 departure per time unit, so that the offered traffic is A = 2 erlang. If we denote the non-normalized relative state probabilities by q(i), we get by setting up the state transition diagram the values shown in the following table:

  i    λ(i)   μ(i)    q(i)     p(i)    i·p(i)   λ(i)·p(i)

  0     2      0     1.0000   0.1360   0.0000    0.2719
  1     2      1     2.0000   0.2719   0.2719    0.5438
  2     2      2     2.0000   0.2719   0.5438    0.5438
  3     2      3     1.3333   0.1813   0.5438    0.3625
  4     2      4     0.6667   0.0906   0.3625    0.1813
  5     2      5     0.2667   0.0363   0.1813    0.0725
  6     2      6     0.0889   0.0121   0.0725    0.0242

Total                7.3556   1.0000   1.9758    2.0000

We obtain the following blocking probabilities:

Time congestion:     E_6(2) = p(6) = 0.0121 .

Traffic congestion:  C_6(2) = (A − Y)/A = (2 − 1.9758)/2 = 0.0121 .

Call congestion:     B_6(2) = λ(6)·p(6) / Σ_{i=0}^{6} λ(i)·p(i) = 0.0242/2.0000 = 0.0121 .
We notice that E = B = C due to the PASTA–property.
By applying the recursion formula (4.29) we of course obtain the same results:

\[
\begin{aligned}
E_0(2) &= 1\,,\\[1mm]
E_1(2) &= \frac{2\cdot 1}{1 + 2\cdot 1} = \frac{2}{3}\,,\\[1mm]
E_2(2) &= \frac{2\cdot \frac{2}{3}}{2 + 2\cdot \frac{2}{3}} = \frac{2}{5}\,,\\[1mm]
E_3(2) &= \frac{2\cdot \frac{2}{5}}{3 + 2\cdot \frac{2}{5}} = \frac{4}{19}\,,\\[1mm]
E_4(2) &= \frac{2\cdot \frac{4}{19}}{4 + 2\cdot \frac{4}{19}} = \frac{2}{21}\,,\\[1mm]
E_5(2) &= \frac{2\cdot \frac{2}{21}}{5 + 2\cdot \frac{2}{21}} = \frac{4}{109}\,,\\[1mm]
E_6(2) &= \frac{2\cdot \frac{4}{109}}{6 + 2\cdot \frac{4}{109}} = \frac{4}{331} = 0.0121\,.
\end{aligned}
\]
2
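The table and the three congestion measures of this example can be reproduced numerically (a Python sketch; the variable names mirror the table columns):

```python
import math

A, n = 2.0, 6
q = [A**i / math.factorial(i) for i in range(n + 1)]  # relative values, q(0) = 1
Q = sum(q)                                            # normalization, 7.3556
p = [v / Q for v in q]                                # state probabilities
Y = sum(i * p[i] for i in range(n + 1))               # carried traffic, 1.9758

E = p[n]                                              # time congestion
C = (A - Y) / A                                       # traffic congestion
B = A * p[n] / sum(A * p[i] for i in range(n + 1))    # call congestion
# E = B = C = 4/331 = 0.0121, as the PASTA property predicts
```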
Example 4.5.2: Recursion formula for Erlang-B
The recursion formulæ (4.29) and (4.30) are numerically very stable. For larger values of the number of channels n, the initial value E_0(A) in (4.30) has only minor influence. For example, for A = 20 erlang and n = 10 channels we find with 6 decimals accuracy the same blocking probability independent of whether we start the iteration with the correct value E_0(A) = 1 or the erroneous value E_0(A) = 0. If we choose n = 20 channels, then the first eight decimals are the same. Errors are eliminated when we iterate with increasing n. On the other hand, the recursion formula becomes inaccurate if we iterate with decreasing n, because errors then accumulate. In general, if a recursion formula is accurate in one direction, then it will be inaccurate in the opposite direction. 2
Example 4.5.3: Calculation of E_x(A) for large x
By recursive application of (4.30) we find the inverse blocking probability of the B-formula:

\[ I_x(A) = 1 + \frac{x}{A} + \frac{x(x-1)}{A^2} + \ldots + \frac{x(x-1)\cdots(x-j+1)}{A^j} + \ldots + \frac{x!}{A^x} = \sum_{j=0}^{x} \binom{x}{j}\,\frac{j!}{A^j}\,. \]
For small values of the number of channels n we include all terms and get the exact value. For large values of n and A this formula can be applied for fast calculation of the B-formula, because we
may truncate the sum when the terms of the summation become very small. This corresponds to using the general recursion formulæ (4.24) and (4.25) for calculating state probabilities (or inverse state probabilities) for decreasing x, starting with state n. We get the next state q(x−1) by multiplying the previous state q(x) by (x−j)/A, and then normalizing q(n) and q(x−1) by (1 + q(x−1)). At some stage (x−j) < A and the terms start decreasing. We may truncate the summation after k+1 terms when for example q(x−1) < 10^{-10}. This can be done not only for I_x(A), but also for E_x(A). The truncation level depends on the required accuracy. In this way we avoid calculating many lower states and can control the accuracy. In Example 4.5.2 we were unable to control the accuracy. 2
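A sketch of this truncated downward summation (Python; the stopping rule, tolerance, and names are our choices; it assumes A > 1 so that the lowest terms decay geometrically):

```python
def erlang_b_large(x, A, eps=1e-10):
    """E_x(A) via the inverse sum of Example 4.5.3, summed downward
    from state x and truncated once the terms have started to
    decrease and are negligible relative to the running sum."""
    term, total = 1.0, 1.0
    for j in range(1, x + 1):
        term *= (x - j + 1) / A       # next term of 1 + x/A + x(x-1)/A^2 + ...
        total += term
        if (x - j) < A and term < eps * total:
            break                     # remaining terms are negligible
    return 1.0 / total                # E_x(A) = 1 / I_x(A)
```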
4.6 Properties of Erlang’s B-formula
4.6.1 Non-integral number of channels
For practical applications of Erlang's B-formula (e.g. Sec. 6.4) we need to generalize Erlang's B-formula to non-integral values of the number of channels x. We define Erlang's extended B-formula by:

\[ E_x(A) = \frac{A^x\, e^{-A}}{\int_A^{\infty} t^x\, e^{-t}\, dt} \tag{4.31} \]

\[ \phantom{E_x(A)} = \frac{A^x\, e^{-A}}{\Gamma(x+1,\,A)}\,, \tag{4.32} \]

where x and A are real numbers and A > 0. The incomplete gamma function is defined as:

\[ \Gamma(x, A) = \int_A^{\infty} t^{x-1}\, e^{-t}\, dt\,, \tag{4.33} \]

where A is a non-negative real number and x is a real number, including negative values. The number of channels may be any positive or negative number, and the recursion formula (4.29) will still be valid. In Chap. 6 we shall see how we need to work with a negative and non-integral number of channels when evaluating overflow systems.
For integral values of x, which we denote by n, this can be rewritten as:

\[
\begin{aligned}
\int_A^{\infty} t^n\, e^{-t}\, dt
&= \int_0^{\infty} (t+A)^n\, e^{-(t+A)}\, dt\\[1mm]
&= e^{-A} \sum_{j=0}^{n} \binom{n}{j} A^j \int_0^{\infty} t^{n-j}\, e^{-t}\, dt\\[1mm]
&= e^{-A} \sum_{j=0}^{n} \frac{n!}{j!\,(n-j)!}\cdot A^j\cdot (n-j)!\\[1mm]
&= n!\cdot e^{-A}\cdot \sum_{j=0}^{n} \frac{A^j}{j!}\,,
\end{aligned}
\]

which inserted in (4.31) yields:

\[ E_n(A) = \frac{A^n\, e^{-A}}{n!\, e^{-A}\, \sum_{j=0}^{n} \frac{A^j}{j!}} = \frac{\frac{A^n}{n!}}{\sum_{j=0}^{n} \frac{A^j}{j!}}\,, \qquad \text{q.e.d.} \]
The recursion formula (4.29) will still be valid, as we have:

\[ \Gamma(x+1, A) = A^x\, e^{-A} + x\cdot\Gamma(x, A)\,. \tag{4.34} \]
Example 4.6.1: Erlang-B for non-integral number of channels
The recursion formula for Erlang-B (4.29) is valid for a non-integral number of channels. To calculate E_x(A) for any real value of x, we need an initial value of E_{x_0}(A) for the fractional part x_0 of x, 0 < x_0 < 1. If we want to calculate E_x(A) for a large non-integral number of channels, then we will get the correct blocking probability by using the initial value E_{x_0}(A) = 1. For smaller values of x we may use an approximation given in Sec. 4.6.7. To get the exact blocking probability, we have to evaluate the incomplete gamma function in (4.32). 2
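A sketch of the exact route in Python (the name, integration bound, and step count are our choices): the initial value for the fractional part d is obtained from (4.31), rewritten by the substitution t = A + s as 1/E_d(A) = A^{-d} ∫_0^∞ (A+s)^d e^{-s} ds and evaluated by Simpson's rule; the recursion (4.29) then raises it to x in unit steps.

```python
import math

def erlang_b_real(x, A):
    """Extended Erlang-B E_x(A) for real x >= 0 (sketch; assumes A is
    not extremely small, so the integrand stays smooth)."""
    d = x - math.floor(x)             # fractional part of x
    if d == 0.0:
        E = 1.0                       # E_0(A) = 1
    else:
        # Simpson's rule for integral_0^40 of (A+s)^d e^{-s} ds;
        # the neglected tail is of order e^{-40}.
        N, b = 4000, 40.0
        h = b / N
        f = lambda s: (A + s) ** d * math.exp(-s)
        S = f(0.0) + f(b)
        S += 4.0 * sum(f((2 * k - 1) * h) for k in range(1, N // 2 + 1))
        S += 2.0 * sum(f(2 * k * h) for k in range(1, N // 2))
        E = A ** d / (S * h / 3.0)    # E_d(A) for the fractional part
    xi = d
    while xi < x - 0.5:               # recursion (4.29) in unit steps
        xi += 1.0
        E = A * E / (xi + A * E)
    return E
```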
4.6.2 Insensitivity
We have the following definition of insensitivity:
Insensitivity: A system is insensitive to the holding time distribution if the state probabilities of the system only depend on the mean value of the holding time.
It can be shown that Erlang's B-formula, which above is derived under the assumption of exponentially distributed holding times, is valid for arbitrary holding time distributions (holding time = service time). The state probabilities for both the Poisson distribution (4.6) and the truncated Poisson distribution (4.9) only depend on the holding time distribution through the mean value, which is included in the offered traffic A. It can be shown that all classical loss systems with full accessibility are insensitive to the holding time distribution.
4.6.3 Derivatives of Erlang-B formula and convexity
The Erlang-B formula is a function of the offered traffic A and the number of channels n, which in general may be a real non-negative number. In some cases, when we want to optimise systems, we need the partial derivatives of the Erlang-B formula.
4.6.4 Derivative of Erlang-B formula with respect to A
Erlang’s B-formula is given by 4.10:
En(A) =
An
n!
1 + A+A2
2!+ · · ·+ An
n!
=
An
n!Qn
,
where n,A > 0 are non-negative real numbers, and Qn denotes the denominator (normal-izing constant). We find the derivative with respect to A:
∂En(A)
∂A=Qn · A
n−1
(n−1)!− An
n!· ∂Qn∂A
Q2n
(4.35)
where
∂Qn
∂A= 1 +
A
1+ . . .+
An−1
(n− 1)!= Qn−1 .
Thus Qn−1 is the normalizing constant of a system with n− 1 channels.
From the recursion formula for Erlang-B (4.29) we have:

\[ E_{n-1}(A) = \frac{n}{A}\cdot \frac{E_n(A)}{1 - E_n(A)}\,, \]

and since Q_n = Q_{n-1} + A^n/n! we also have Q_{n-1}/Q_n = 1 − E_n(A).
From (4.35) we then get:

\[ \frac{\partial E_n(A)}{\partial A} = \frac{n}{A}\cdot \frac{\frac{A^n}{n!}}{Q_n} - E_n(A)\cdot \frac{Q_{n-1}}{Q_n} = \frac{n}{A}\cdot E_n(A) - E_n(A)\,\bigl\{1 - E_n(A)\bigr\}\,, \]

\[ \frac{\partial E_n(A)}{\partial A} = \Bigl(\frac{n}{A} - 1\Bigr)\cdot E_n(A) + E_n(A)^2\,. \tag{4.36} \]
In a similar way we may obtain higher order derivatives.
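Formula (4.36) gives the derivative at the cost of a single evaluation of E_n(A). A sketch with a finite-difference cross-check (Python; the names are ours, and erlang_b is repeated to keep the sketch self-contained):

```python
def erlang_b(n, A):
    """Erlang's B-formula by the stable recursion (4.29)."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def erlang_b_dA(n, A):
    """dE_n(A)/dA via (4.36): (n/A - 1)*E_n(A) + E_n(A)^2."""
    E = erlang_b(n, A)
    return (n / A - 1.0) * E + E * E

# sanity check against a central difference:
# (erlang_b(6, 2.001) - erlang_b(6, 1.999)) / 0.002 is close to erlang_b_dA(6, 2.0)
```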
4.6.5 Derivative of Erlang-B formula with respect to n
It can be shown that:

\[ \frac{\partial E_n(A)}{\partial n} = -E_n(A)^2\cdot A\cdot \int_0^{\infty} e^{-Ax}\,(1+x)^n\,\ln(1+x)\, dx\,. \tag{4.37} \]
Esteves, Craveirinha & Cardoso (1995 [30]) present a numerical algorithm for the evaluation of (4.37). In a way similar to (4.29) for the Erlang-B formula, there is a recursive formula to calculate the derivative of order k of the Erlang-B formula for x channels from the value at x−1 channels. Let the inverse value of the derivative of order k be denoted by I_k(A, x); we then have:

\[ I_k(A, x) = \frac{k}{A}\cdot I_{k-1}(A, x-1) + \frac{x}{A}\cdot I_k(A, x-1)\,, \qquad k = 1, 2, 3, \ldots, \tag{4.38} \]

where I_0(A, x) = I_x(A) is given by (4.30). It can be shown that the Erlang-B formula is convex for n > 1, as this is equivalent to the following requirement:

\[ E_{n-1}(A) - E_n(A) > E_n(A) - E_{n+1}(A)\,. \tag{4.39} \]
If we multiply both sides by A, we observe that this corresponds to y_n > y_{n+1} (4.14) & (4.16), which intuitively is obvious. The first explicit proof of this was given by Messerli (1972 [88]) for integral values of n. Jagers & van Doorn (1986 [55]) show that the Erlang B-formula is convex for all real positive values of the number of trunks. This property is e.g. exploited in Moe's principle (Sec. 4.8.2).
Example 4.6.2: Call admission control with moving window
Erlang's B-formula is valid for arbitrary service times. We may therefore assume that the holding time is equal to a constant h and consider a system with n channels. At an arbitrary instant t all calls accepted during the interval (t − h, t) are still being served. We can at most have n calls being served simultaneously, therefore we may at most accept n calls during (t − h, t). This is valid for any instant. Thus the system at most accepts n calls in any moving window of length h. This mechanism can be applied for control of cell arrival processes in ATM systems, i.e. for CAC (connection acceptance control). The mechanism works for any arrival process. For a Poisson arrival process we can calculate the cell loss probability by Erlang's B-formula. 2
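The moving-window rule itself is easy to sketch (Python; the names are ours): keep the acceptance epochs of the last h time units and admit a new arrival only if fewer than n of them remain in the window.

```python
from collections import deque

def make_cac(n, h):
    """Moving-window CAC (sketch): accept an arrival at time t iff
    fewer than n arrivals were accepted in (t - h, t]. With constant
    holding time h, at most n calls are then served simultaneously."""
    window = deque()                  # acceptance epochs still inside the window

    def accept(t):
        while window and window[0] <= t - h:
            window.popleft()          # forget acceptances older than h
        if len(window) < n:
            window.append(t)
            return True
        return False

    return accept

# accept = make_cac(n=2, h=1.0)
# accept(0.0), accept(0.1) -> True; accept(0.2) -> False; accept(1.05) -> True
```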
4.6.6 Inverse Erlang-B formulæ
The inverse formulæ, i.e. n as a function of (A, E) and A as a function of (n, E), may be obtained by means of Newton-Raphson iteration (Szybicki, 1967 [112]).

From a given initial guess x_0 we calculate a sequence which converges to a fixed value satisfying f(x) = 0:

\[ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}\,. \tag{4.40} \]

The following functions should be used:

A(E, n):  f(A) = E_n(A) − E , with initial value A_0 = n/(1 − E) ;

x(E, A):  f(x) = E_x(A) − E .

The initial value of the number of channels x is chosen so that n_0 − 1 < x ≤ n_0, where E_{n_0}(A) ≤ E < E_{n_0−1}(A).
Figs. 4.3 and 4.4 show E_n(A) for various values of the parameters. The derivatives of Erlang's B-formula are given in Sec. 4.6.4.
The numerical problems are also dealt with by Farmer & Kaufman (1978 [31]) and Jagerman (1984 [54]).
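For A as a function of (n, E) the iteration can be sketched as follows (Python; the names, iteration cap, and tolerance are ours; f′ comes from the derivative (4.36), and erlang_b is repeated to keep the sketch self-contained):

```python
def erlang_b(n, A):
    """Erlang's B-formula by the stable recursion (4.29)."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def offered_traffic(n, E, tol=1e-10):
    """Newton-Raphson for the offered traffic A with E_n(A) = E,
    starting from A_0 = n/(1 - E) as suggested in the text."""
    A = n / (1.0 - E)
    for _ in range(100):
        En = erlang_b(n, A)
        dE = (n / A - 1.0) * En + En * En   # f'(A), eq. (4.36)
        step = (En - E) / dE
        A -= step
        if abs(step) < tol:
            break
    return A

# offered_traffic(20, 0.01) gives about 12.0306 erlang (cf. Example 4.6.3)
```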
Example 4.6.3: Traffic carried by the last channel
In electro-mechanical telephone systems with rotating selectors, sequential hunting with homing was often applied, and the quality of service could be monitored by measuring the carried traffic on the last channel (switch) (Brockmeyer, 1957 [10]). As mentioned above, the improvement function F_n(A) is equal to the additional traffic y_{n+1} carried when adding an extra channel (n+1) for fixed offered traffic A. We also define the marginal channel capacity a_δ as the additional traffic carried (in the total system) by adding one channel and keeping the blocking probability E fixed. For n = 20 channels and E = 1% we find A = 12.0306 erlang. The above parameters then become: y_n = 0.0817 erlang, y_{n+1} = 0.0685 erlang, a_δ = 0.8072 erlang, and y = 0.5955 erlang. 2
4.6.7 Approximations for Erlang-B formula
In the literature various approximations for E_x(A), 0 ≤ x < 1, have been published. Yngve Rapp (1964 [100]) applies a parabola:

\[ E_x(A) = C_0 - C_1\cdot x + C_2\cdot x^2\,, \tag{4.41} \]

where

\[ C_0 = 1\,, \qquad C_1 = \frac{A+2}{(1+A)^2 + A}\,, \qquad C_2 = \frac{1}{(1+A)\,\bigl((1+A)^2 + A\bigr)}\,. \]
Another approximation is published by Szybicki (1967 [112]). Approximations which are not based on the recursion formula, but directly calculate E_x(A), are developed by Störmer (1963 [110]) and Mejlbro (1994 [87]).
The most accurate values are obtained by using a continued-fraction method for the incomplete gamma function (Levy-Soussan, 1968 [81]) or by calculating the incomplete gamma function by numerical integration. The extended B-formula can also be defined for negative values of the number of trunks.
4.7 Fry-Molina’s Blocked Calls Held model
In Fry-Molina’s BCH (Blocked Calls Held) model (Fry, 1928 [35]), (Molina, 1922 [89], 1927 [90])a call attempt, which finds all channels busy, will continue to demand service during a timeinterval, which is equal to the service time it would have obtained, if it was accepted. If achannel becomes idle during this time interval, the call attempt will occupy the channel andkeep it busy during the remaining time interval. This model has been applied in North Amer-ica until the sixties, because is was observed to agree better with the real traffic observationsthan Erlang’s Blocked Calls Cleared model. The explanation to this is maybe that USA formany years was dominated by step-by-step systems, where a blocked call attempt often willbe repeated (Lost Call Held).
When applying alternative routing, a call attempt which is blocked on the direct route will in general be carried on an alternative route, and therefore there will be no repeated call attempt on the direct route (Lost Call Cleared).
The model was developed already by Engset in 1915, in a report that remained unknown for many years (Engset, 1915 [27]). By the introduction of intelligent digital systems with re-arrangement
..............................................................................................................................................................................
..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
..................................
.........................................
.....................................................
............................................................................
.................................................................................................................................
.............................................................................................................................................................
n=20
n=10
C
M
B
C
M
B
Offered traffic
Carried traffic
Figure 4.6: The carried traffic as a function of the offered traffic for Erlang's LCC model (curve B), Fry-Molina's BCH model (curve M), and Erlang's waiting-time system (curve C, Chap. 9). With Fry-Molina's model, which corresponds to rearrangement (call packing), the utilisation can be increased as compared with Erlang's B-formula.
the model has again become of current interest for modelling e.g. mobile communication systems and service-integrated broadband systems.
Fry-Molina's BCH model is based upon the non-truncated state-dependent Poisson arrival processes, e.g. BPP-traffic (Binomial distribution (5.4), Poisson distribution (4.6), and Pascal distribution (5.65)). If we denote the relative state probabilities by q(i) (i = 0, 1, 2, . . .), then we find the absolute state probabilities by normalization:
p(i) = q(i) / Q(∞) ,    Q(∞) = ∑_{i=0}^{∞} q(i) .        (4.42)
For Fry-Molina’s BCH model we get the following state probabilities pm(i):
p_m(i) = p(i) ,                    0 ≤ i < n ,
p_m(i) = ∑_{j=n}^{∞} p(j) ,        i = n .        (4.43)
The time congestion E is by definition the proportion of time all channels are busy:
E = p_m(n) = 1 − Q(n−1) .        (4.44)
126 CHAPTER 4. ERLANG’S LOSS SYSTEM AND B–FORMULA
The traffic congestion C is, from a numerical point of view, best obtained in the following way. As the offered traffic A by definition is equal to the traffic carried by an infinite trunk group, we have:
A = ∑_{i=0}^{∞} i · p(i) .        (4.45)
The lost traffic is:
A_ℓ = ∑_{i=n+1}^{∞} (i − n) · p(i) .        (4.46)
The traffic congestion therefore becomes:
C = A_ℓ / A .
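These formulæ translate directly into a small numerical routine. The following Python sketch (the function name is ours, not from the text) evaluates both the time congestion E (4.44) and the traffic congestion C for the BCH model with Poisson (PCT-I) traffic, truncating the infinite series after a fixed number of terms:

```python
import math

def bch_congestion(A, n, tail=200):
    """Time congestion E (4.44) and traffic congestion C = A_l/A for
    Fry-Molina's BCH model with Poisson traffic of A erlang on n channels.
    The infinite sums (4.45)-(4.46) are truncated after n + tail terms."""
    # Non-truncated Poisson state probabilities via p(i) = p(i-1) * A / i
    p = [math.exp(-A)]
    for i in range(1, n + tail):
        p.append(p[-1] * A / i)
    E = 1.0 - sum(p[:n])                                        # (4.44)
    lost = sum((i - n) * p[i] for i in range(n + 1, n + tail))  # (4.46)
    return E, lost / A                                          # C = A_l / A

E, C = bch_congestion(A=2.0, n=5)
```

For A = 2 erlang on n = 5 channels this gives E ≈ 0.053 and C ≈ 0.011, illustrating that for this model the traffic congestion is much smaller than the time congestion.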
4.8 Principles of dimensioning
When dimensioning service systems we have to balance grade-of-service requirements against economic restrictions. In this chapter we shall see how this can be done on a rational basis. In telecommunication systems there are several measures to characterize the service provided. The most extensive measure is Quality-of-Service (QoS), comprising all aspects of a connection such as voice quality, delay, loss, reliability, etc. We consider a subset of these, Grade-of-Service (GoS) or network performance, which only includes aspects related to the capacity of the network.
With the publication of Erlang's formulæ there was, already before 1920, a functional relationship between number of channels, offered traffic, and grade-of-service (blocking probability), and thus a measure for the quality of the traffic. At that time there were direct connections between all exchanges in the Copenhagen area, which resulted in many small and big channel groups. If Erlang's B-formula were applied with a fixed blocking probability for dimensioning these groups, then the utilization in small groups would become low.
Kai Moe (1893–1949), chief engineer in the Copenhagen Telephone Company, made some quantitative economic evaluations and published several papers in which he introduced marginal considerations, as they are known today in mathematical economics. Similar considerations were later made by P.A. Samuelson in his famous book, first published in 1947. On the basis of Moe's works, the fundamental principles of dimensioning for telecommunication systems are formulated in Moe's Principle (Jensen, 1950 [58]).
4.8.1 Dimensioning with fixed blocking probability
For proper operation, a loss system should be dimensioned for a low blocking probability. Inpractice the number of channels n should be chosen so that E1,n(A) is about 1% to avoid
overload due to many non-completed and repeated call attempts which both load the systemand are a nuisance to subscribers (Cf. B–busy [60]).
n                 1       2       5      10      20      50      100
A (E = 1%)    0.010   0.153   1.361   4.461  12.031  37.901   84.064
y             0.010   0.076   0.269   0.442   0.596   0.750    0.832
F1,n(A)       0.000   0.001   0.011   0.027   0.052   0.099    0.147
A1 = 1.2·A    0.012   0.183   1.633   5.353  14.437  45.482  100.877
E [%]         1.198   1.396   1.903   2.575   3.640   5.848    8.077
y             0.012   0.090   0.320   0.522   0.696   0.856    0.927
F1,n(A1)      0.000   0.002   0.023   0.072   0.173   0.405    0.617
Table 4.1: Upper part: For a fixed value of the blocking probability E = 1%, n trunks can be offered the traffic A. The average utilization of the trunks is y, and the improvement function is F1,n(A) (4.16). Lower part: The values of E, y and F1,n(A) obtained for an overload of 20%.
Tab. 4.1 shows the offered traffic for a fixed blocking probability E = 1% for some values of n. The table also gives the average utilization of the channels, which is highest for large groups. If we increase the offered traffic by 20% to A1 = 1.2 · A, we notice that the blocking probability increases for all n, but most for large values of n.
From Tab. 4.1 two features are observed:
a. The utilisation a per channel is, for a given blocking probability, highest in large groups (Fig. 4.4). At a blocking probability E = 1% a single channel can on average be used at most 36 seconds per hour!
b. Large channel groups are more sensitive to a given percentage of overload than small channel groups. This is explained by the low utilization of small groups, which therefore have a higher spare capacity (elasticity).
Thus two conflicting factors are of importance when dimensioning a channel group: we must choose between high sensitivity to overload and low utilization of the channels.
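Erlang's B-formula satisfies the well-known recursion E_{1,x}(A) = A·E_{1,x−1}(A) / (x + A·E_{1,x−1}(A)) with E_{1,0}(A) = 1. As a sketch of how tables like Tab. 4.1 are produced (the function names are ours), the fragment below reproduces the n = 10 column:

```python
def erlang_b(A, n):
    """Erlang's B-formula E_{1,n}(A), evaluated by the recursion
    E_x = A * E_{x-1} / (x + A * E_{x-1}),  E_0 = 1."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

A, n = 4.461, 10                 # column n = 10 of Tab. 4.1
E = erlang_b(A, n)               # blocking probability, about 1 %
y = A * (1 - E) / n              # average channel utilization, about 0.442
E_over = erlang_b(1.2 * A, n)    # 20 % overload raises E to about 2.6 %
```

Running the same computation for the other columns reproduces both halves of the table, including the disproportionate growth of E under overload for large n.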
4.8.2 Improvement principle (Moe’s principle)
As mentioned in Sec. 4.8.1, a fixed blocking probability results in a low utilization (bad economy) of small channel groups. If we replace the requirement of a fixed blocking probability
with an economic requirement, then the improvement function F1,n(A) (4.16) should take a fixed value, so that the extension of a group with one additional channel increases the carried traffic by the same amount for all groups.
In Tab. 4.2 we show the congestion for some values of n and an improvement value F = 0.05. We notice from the table that the utilization of small groups becomes better, corresponding to a large increase of the blocking probability. On the other hand, the congestion in large groups decreases to a smaller value. See also Fig. 4.8. If we therefore have a telephone system with trunk group sizes and traffic values as given in the table, then we cannot increase the carried traffic by rearranging the channels among the groups.
n                 1       2       5      10      20      50     100
A (FB = 0.05)   0.271   0.607   2.009   4.991  11.98   35.80   78.73
y               0.213   0.272   0.387   0.490   0.593   0.713   0.785
E1,n(A) [%]     21.29   10.28    3.72    1.82    0.97    0.47    0.29
A1 = 1.2·A      0.325   0.728   2.411   5.989  14.38   42.96   94.476
E [%]           24.51   13.30    6.32    4.28    3.55    3.73    4.62
y               0.245   0.316   0.452   0.573   0.693   0.827   0.901
F1,n(A1)        0.067   0.074   0.093   0.120   0.169   0.294   0.452
Table 4.2: For a fixed value of the improvement function we have calculated the same values as in Tab. 4.1.
In comparison with the fixed blocking criterion of Sec. 4.8.1, this service criterion will therefore allocate more channels to large groups and fewer channels to small groups, which is the trend we were looking for.
The improvement function is equal to the difference quotient of the carried traffic with respect to the number of channels n. When dimensioning according to the improvement principle we thus choose an operating point on the curve of carried traffic as a function of the number of channels where the slope is the same for all groups (∆A/∆n = constant). A marginal increase of the number of channels increases the carried traffic by the same amount for all groups.
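Numerically, the improvement function can be obtained directly from the B-formula as F_{1,n}(A) = A·(E_{1,n}(A) − E_{1,n+1}(A)). The sketch below (helper names are ours) checks one entry of Tab. 4.2: for n = 10 the traffic A = 4.991 was chosen so that F_{1,10}(A) = 0.05:

```python
def erlang_b(A, n):
    """Erlang's B-formula, computed by the standard recursion."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def improvement(A, n):
    """F_{1,n}(A) = A * (E_{1,n}(A) - E_{1,n+1}(A)): the extra traffic
    carried when channel number n+1 is added to the group."""
    return A * (erlang_b(A, n) - erlang_b(A, n + 1))

F = improvement(4.991, 10)   # about 0.05, as dimensioned in Tab. 4.2
```

Since the carried traffic is a concave function of n, the improvement function decreases with n for a fixed offered traffic, which is what makes the criterion (∆A/∆n = constant) well defined.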
It is easy to set up a simple economic model for the determination of F1,n(A). Let us consider a certain time interval (e.g. a time unit). Denote the income per carried erlang per time unit by g. The cost of a cable with n channels is assumed to be a linear function:
c_n = c_0 + c · n .        (4.47)
The total costs for a given number of channels are then the sum of (a) the cost of the cable and (b) the cost due to lost traffic (missing income):
C_n = g · A · E_{1,n}(A) + c_0 + c · n ,        (4.48)
where A is the offered traffic, i.e. the potential traffic demand on the group considered. The costs due to lost traffic decrease with increasing n, whereas the expenses for the cable increase with n. The total costs may have a minimum for a certain value of n. In practice n is an integer, and we look for a value of n for which we have (cf. Fig. 4.7):
[Figure 4.7 plot: costs versus number of trunks n (0–60); curves: total costs, blocked traffic, cable.]
Figure 4.7: The total costs are composed of the cost of the cable and the lost income due to blocked traffic (4.48). The minimum of the total costs is obtained when (4.49) is fulfilled, i.e. when the two cost functions have the same slope with opposite signs (difference quotient). (FB = 0.35, A = 25 erlang.) The minimum is obtained for n = 30 trunks.
C_{n−1} > C_n    and    C_n ≤ C_{n+1} .
As E1,n(A) = En(A) we get:
A · {E_{n−1}(A) − E_n(A)} > c/g ≥ A · {E_n(A) − E_{n+1}(A)} ,        (4.49)
or:
F_{1,n−1}(A) > F_B ≥ F_{1,n}(A) ,        (4.50)
where:
F_B = c/g = (cost per extra channel) / (income per extra channel) .        (4.51)
FB is called the improvement value. We notice that c_0 does not appear in the condition for the minimum. It determines whether it is profitable to carry traffic at all. We must require that for some positive value of n we have:
g · A · {1 − E_n(A)} > c_0 + c · n .        (4.52)
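Conditions (4.49)–(4.50) translate directly into a small dimensioning routine. The sketch below (the function names are ours) returns the smallest n with F_{1,n}(A) ≤ FB; with the values of Fig. 4.7 (FB = 0.35, A = 25 erlang) it reproduces the optimum n = 30:

```python
def erlang_b(A, n):
    """Erlang's B-formula, computed by the standard recursion."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def moe_dimension(A, FB):
    """Smallest n satisfying (4.50): F_{1,n-1}(A) > FB >= F_{1,n}(A).
    Since F_{1,n}(A) decreases in n, scan upwards until it drops to FB."""
    n = 1
    while A * (erlang_b(A, n) - erlang_b(A, n + 1)) > FB:
        n += 1
    return n

n_opt = moe_dimension(A=25.0, FB=0.35)   # -> 30, cf. Fig. 4.7
```

A smaller improvement value (a cheaper channel relative to the income per erlang) yields more trunks, consistent with the Danish values quoted below where groups with no alternative route (FB = 0.05) are dimensioned generously.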
Fig. 4.8 shows blocking probabilities for some values of FB. We notice that the economic
[Figure 4.8 plot: blocking probability E [%] versus offered traffic A (0–100) for improvement values FB = 0.05, 0.10, 0.20, 0.35.]
Figure 4.8: When dimensioning with a fixed value of the improvement value FB, the blocking probabilities for small values of the offered traffic become large (cf. Tab. 4.2).
demand for profit results in a certain improvement value. In practice we choose FB partly independent of the cost function.
In Denmark the following values have been used:
FB = 0.35 for primary trunk groups.
FB = 0.20 for service protecting primary groups. (4.53)
FB = 0.05 for groups with no alternative route.
Chapter 5
Loss systems with full accessibility
In this chapter we generalize Erlang's classical loss system to state-dependent Poisson arrival processes, which include the so-called BPP traffic models:
• Binomial case: Engset’s model,
• Poisson case: Erlang’s model, and
• Pascal (Negative Binomial) case: Palm–Wallstrom's model.
Erlang's model describes random traffic. Engset's model describes traffic which is smoother than random traffic. The Negative Binomial model describes traffic which is more bursty than random traffic and includes models with Pareto-distributed inter-arrival times (heavy-tailed traffic) and traffic with batch arrivals. These models are all insensitive to the service time distribution. The Engset and Pascal models are even insensitive to the distribution of the idle time of sources. It is important always to use traffic congestion as the most important performance metric.
After the introduction in Sec. 5.1 we go through the basic classical theory. In Sec. 5.2 we consider the Binomial case, where the number of sources S (subscribers, customers, jobs) is limited and the number of channels n is always sufficient (S ≤ n). This system is dealt with by balance equations in the same way as the Poisson case (Sec. 4.2). We consider the strategy Blocked-Calls-Cleared (BCC). In Sec. 5.3 we restrict the number of channels so that it becomes less than the number of sources (n < S). We may then experience blocking, and we obtain the truncated Binomial distribution, which is also called the Engset distribution. The probability of time congestion E is given by Engset's formula. With a limited number of sources, time congestion, call congestion, and traffic congestion differ, and the PASTA property is replaced by the general arrival theorem, which tells us that the state probabilities of the system observed by a customer (call average) are equal to the state probabilities of the system without this customer (time average). Engset's formula is computed numerically by a formula recursive in the number of channels n, derived in the same way as for Erlang's
B-formula. Also formulæ recursive in the number of sources S, and simultaneously in both n and S, are derived.
In Sec. 5.6 we consider the Negative Binomial case, also called the Pascal case, where the arrival intensity increases linearly with the state of the system. If the number of channels is limited, then we get the truncated Negative Binomial distribution (Sec. 5.7). Finally, in Sec. 5.8 we consider a Batch Poisson arrival process and show that it is similar to the Pascal case.
5.1 Introduction
We consider a system with the same structure (full accessibility group) and strategy (Lost-Calls-Cleared) as in Chap. 4, but with more general traffic processes. In the following we assume the service times are exponentially distributed with intensity µ (mean value 1/µ); the traffic process then becomes a birth & death process, a special Markov process, which is easy to deal with mathematically. Usually we define the state of the system as the number of busy channels. All processes considered in Chapters 4 and 5 are insensitive to the service time distribution, i.e. only the mean service time is of importance to the state probabilities. The service time distribution itself has no influence.
Definition of offered traffic: In Sec. 1.7 we define the offered traffic A as the traffic carried when the number of servers is unlimited, and this definition is used for both the Engset case and the Pascal case. The offered traffic is thus independent of the number of servers. Only for stationary renewal processes, such as the Poisson arrival process, is this definition equivalent to the average number of call attempts per mean service time. In the Engset and Pascal cases the arrival processes are not renewal processes, as the mean inter-arrival time depends on the actual state.
Carried traffic is by definition the mean value of the state probabilities (average number of busy channels).
Peakedness is defined as the ratio between variance and mean value of the state probabilities.For offered traffic the peakedness is considered for an infinite number of channels.
We consider the following arrival processes, where the first case has already been dealt with in Chap. 4:
1. Erlang-case (P – Poisson-case):
The arrival process is a Poisson process with intensity λ. This type of traffic is called random traffic or Pure Chance Traffic type One, PCT–I. We consider two cases:
a. n = ∞: Poisson distribution (Sec. 4.2).
The peakedness is in this case equal to one: Z = 1.
b. n < ∞: Truncated Poisson distribution (Sec. 4.3).
2. Engset-case (B – Binomial-case):
There is a limited number of sources S. Each source has a constant call (arrival) intensity γ when it is idle. When it is busy, the call intensity is zero. The arrival process is thus state-dependent. If i sources are busy, then the arrival intensity is equal to (S − i) γ.
This type of traffic is called Pure Chance Traffic type Two, PCT–II. We consider the following two cases:
a. n ≥ S: Binomial distribution (Sec. 5.2).
In this case the peakedness is less than one: Z < 1.
b. n < S: Truncated Binomial distribution (Sec. 5.3).
3. Palm-Wallstrom–case (P – Pascal-case):
There is a limited number of sources S. If at a given instant we have i busy sources, then the arrival intensity equals (S + i) γ. Again we have two cases:
a. n = ∞: Pascal distribution = Negative Binomial distribution (Sec. 5.6).
In this case the peakedness is greater than one: Z > 1.
b. n < ∞: Truncated Pascal distribution (truncated Negative Binomial distribution) (Sec. 5.7).
As the Poisson process may be obtained from an infinite number of sources with a limited total arrival intensity λ, the Erlang-case may be considered a special case of the two other cases:
lim_{S→∞, γ→0} (S ± i) γ = lim_{S→∞, γ→0} S γ = λ .
For any state 0 ≤ i ≤ n (n finite) we then have a constant arrival intensity λ. This is also seen from Palm's theorem (Sec. 3.6.1).
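This limit is easy to verify numerically. In the sketch below (helper names are ours; we take µ = 1) each of S on/off sources has idle-state arrival intensity γ = λ/S, so a source is busy with probability β/(1+β), β = γ/µ, and the number of busy sources is Binomially distributed; as S grows, this distribution approaches the Poisson distribution with mean A = λ/µ:

```python
import math

def poisson_pmf(A, i):
    """Poisson state probability p(i) = A^i / i! * exp(-A)."""
    return math.exp(-A) * A**i / math.factorial(i)

def binomial_state(S, lam, i, mu=1.0):
    """State probability for S independent sources with per-idle-source
    intensity gamma = lam/S; each source is busy w.p. beta/(1 + beta)."""
    beta = (lam / S) / mu
    a = beta / (1 + beta)
    return math.comb(S, i) * a**i * (1 - a)**(S - i)

# With S*gamma = lam fixed, the Binomial probabilities converge to Poisson:
max_diff = max(abs(binomial_state(10_000, 2.0, i) - poisson_pmf(2.0, i))
               for i in range(20))
```

For S = 10 000 sources and λ/µ = 2 erlang, the largest pointwise difference between the two distributions is already well below 10⁻³.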
The three traffic types are referred to as BPP traffic according to the abbreviations given above (Binomial & Poisson & Pascal). As these models include all values of peakedness Z > 0, they can be used for modelling traffic with two parameters: mean value A and peakedness Z. For arbitrary values of Z the number of sources S in general becomes non-integral.
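As a sketch of this two-parameter fitting (our own helper; parameter conventions vary between texts), matching the mean A and peakedness Z of the non-truncated state distribution gives S = A/(1−Z) for the Binomial case and S = A/(Z−1) for the Pascal case:

```python
def bpp_fit(A, Z, eps=1e-9):
    """Fit the BPP family to mean A and peakedness Z (Z > 0) by
    matching the first two moments of the non-truncated state
    distribution.  Returns (model, S, p), where p is the success
    probability of the Binomial / Negative Binomial distribution;
    S is in general non-integral."""
    if abs(Z - 1.0) < eps:
        return ("Poisson", None, None)        # mean A, variance A
    if Z < 1.0:                               # smooth traffic: Engset
        p = 1.0 - Z                           # mean = S*p, var = S*p*(1-p)
        return ("Binomial", A / p, p)         # S = A/(1-Z)
    p = (Z - 1.0) / Z                         # bursty traffic: Pascal
    return ("Pascal", A * (1 - p) / p, p)     # mean = S*p/(1-p), so S = A/(Z-1)

model, S, p = bpp_fit(A=10.0, Z=0.8)          # Binomial with S = 50, p = 0.2
```

For example, A = 10 erlang with Z = 0.8 yields a Binomial model with S = 50 sources, while Z = 1.25 yields a Pascal model with S = 40; in both cases the fitted distribution reproduces the requested mean and peakedness exactly.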
Performance measures: The performance parameters for loss systems are time congestion E, call congestion B, traffic congestion C, and the utilization of the channels. Among these, traffic congestion C is the most important characteristic. These measures are derived for each of the above-mentioned models.
5.2 Binomial Distribution
We consider a system with a limited number of sources S. Sources is a generic term for subscribers, users, terminals, etc. The individual source alternates between the states idle
and busy. A source is idle during a time interval which is exponentially distributed with intensity γ, and the source is busy during an exponentially distributed time interval (service time, holding time) with intensity µ (Fig. 5.2). Sources of this kind are called sporadic sources or on/off sources. This type of traffic is called Pure Chance Traffic type Two (PCT–II), or pseudo-random traffic.
[Figure 5.1 diagram: S sources (left) connected to n channels (right).]
Figure 5.1: A full accessible loss system with S sources, which generate traffic to n channels. The system is shown by a so-called chicko-gram. The beak of a source symbolizes a selector which points at the channels (servers) among which the source may choose.
In this section the number of channels (trunks, servers) n is assumed to be greater than or equal to the number of sources (n ≥ S), so that no calls are lost. Both n and S are assumed to be integers, but it is possible to deal with non-integral values (Iversen & Sanders, 2001 [48]).
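Because each source behaves independently and is busy with probability a = β/(1+β), where β = γ/µ, the number of busy channels in this blocking-free case is Binomially distributed. A minimal sketch (the function name is ours):

```python
import math

def binomial_states(S, beta):
    """State probabilities p(i), i = 0..S, for S independent on/off
    sources with n >= S channels (no blocking).  beta = gamma/mu;
    a single source is busy with probability a = beta/(1 + beta)."""
    a = beta / (1 + beta)
    return [math.comb(S, i) * a**i * (1 - a)**(S - i) for i in range(S + 1)]

p = binomial_states(S=4, beta=0.25)
carried = sum(i * q for i, q in enumerate(p))   # mean = S*beta/(1 + beta)
```

For S = 4 sources and β = 0.25 the carried traffic is 4 · 0.25/1.25 = 0.8 erlang, and the peakedness of the distribution is below one, confirming that PCT–II traffic is smoother than random traffic.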
[Figure 5.2 diagram: the state (idle/busy) of a single source over time; idle intervals terminate with intensity γ (arrival), busy intervals with intensity µ (departure).]
Figure 5.2: Every individual source is either idle or busy, and behaves independently of all other sources.
[Birth–death chain: states 0, 1, …, S−1, S (with S ≤ n); arrival rates S γ, (S−1) γ, …, 2 γ, γ; departure rates µ, 2 µ, …, (S−1) µ, S µ.]
Figure 5.3: State transition diagram for the Binomial case (Sec. 5.2). The number of sources S is less than or equal to the number of channels n (S ≤ n).
5.2.1 Equilibrium equations
We are interested in the steady state probabilities p(i), which are the proportion of time the process spends in state [ i ]. Our calculations are based on the state transition diagram shown in Fig. 5.3. We consider cuts between neighboring states and find:
S γ · p(0) = µ · p(1) ,

(S−1) γ · p(1) = 2 µ · p(2) ,

. . .

(S−i+1) γ · p(i−1) = i µ · p(i) ,

(S−i) γ · p(i) = (i+1) µ · p(i+1) ,

. . .

1 · γ · p(S−1) = S µ · p(S) .     (5.1)
All state probabilities are expressed by p(0), where C(S, i) denotes the binomial coefficient:

p(1) = (S γ/µ) · p(0) = p(0) · C(S, 1) · (γ/µ)^1 ,

p(2) = ((S−1) γ/(2 µ)) · p(1) = p(0) · C(S, 2) · (γ/µ)^2 ,

. . .

p(i) = ((S−i+1) γ/(i µ)) · p(i−1) = p(0) · C(S, i) · (γ/µ)^i ,

p(i+1) = ((S−i) γ/((i+1) µ)) · p(i) = p(0) · C(S, i+1) · (γ/µ)^(i+1) ,

. . .

p(S) = (γ/(S µ)) · p(S−1) = p(0) · C(S, S) · (γ/µ)^S .
The total sum of all probabilities must be equal to one:

1 = p(0) · [ 1 + C(S, 1) · (γ/µ)^1 + C(S, 2) · (γ/µ)^2 + · · · + C(S, S) · (γ/µ)^S ]

  = p(0) · (1 + γ/µ)^S ,

where we have used Newton's binomial expansion. By letting β = γ/µ we get:

p(0) = 1/(1 + β)^S .     (5.2)
The parameter β is the offered traffic per idle source (the number of call attempts per time unit for an idle source; the offered traffic from a busy source is zero) and we find:

p(i) = C(S, i) · β^i · 1/(1+β)^S = C(S, i) · (β/(1+β))^i · (1/(1+β))^(S−i) ,   i = 0, 1, . . . , S ,   0 ≤ S ≤ n ,

which is the Binomial distribution (Tab. 3.1). Finally, by introducing the offered traffic per source a, defined as the traffic carried per source when there is no blocking:

a = β/(1+β) = γ/(µ+γ) = (1/µ)/((1/γ) + (1/µ)) ,     (5.3)

we get:

p(i) = C(S, i) · a^i · (1−a)^(S−i) ,   i = 0, 1, . . . , S ,   0 ≤ S ≤ n .     (5.4)
In this case a call attempt from an idle source is never blocked, so the carried traffic α per source is equal to the offered traffic per source a, which is the probability that a source is busy at a random instant (the proportion of time the source is busy). This is also observed from Fig. 5.2, as all arrival and departure points on the time axis are regeneration points (equilibrium points). A cycle from the start of a busy state (arrival) until the start of the next busy state is representative of the whole time axis, and time averages are obtained by averaging over one cycle.
The Binomial distribution obtained in (5.4) is in teletraffic theory sometimes called the Bernoulli distribution, but this should be avoided, as in statistics that name is used for a two-point distribution.
Example 5.2.1: Binomial distribution and convolution
Formula (5.4) can be derived by elementary considerations. All subscribers can be split into two classes: idle subscribers and busy subscribers. The probability that an arbitrary subscriber is busy is a, which is independent of the state of all other subscribers, as the system has no blocking and call attempts are always accepted. The state of a single source is then given by the Bernoulli distribution:

p1(i) = 1−a ,  i = 0 ,
p1(i) = a ,    i = 1 ,     (5.5)

which has mean value a. If we in total have S subscribers (sources), then the probability pS(i) that i sources are busy at an arbitrary instant is given by the Binomial distribution ((5.4) & Tab. 3.1):

pS(i) = C(S, i) · a^i · (1−a)^(S−i) ,   Σ_{i=0}^{S} pS(i) = 1 ,     (5.6)

which has mean value S · a. If we add one more source to the system, then the distribution of the total number of busy sources is obtained by convolving the Binomial distribution (5.6) with the Bernoulli distribution (5.5):

pS+1(i) = pS(i) · p1(0) + pS(i−1) · p1(1)

       = C(S, i) · a^i (1−a)^(S−i) · (1−a) + C(S, i−1) · a^(i−1) (1−a)^(S−i+1) · a

       = [ C(S, i) + C(S, i−1) ] · a^i (1−a)^(S−i+1)

       = C(S+1, i) · a^i (1−a)^(S+1−i) ,   q.e.d.

□
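The convolution argument above is easy to check numerically. The following Python sketch (not part of the book; the function names are mine) builds the Binomial state probabilities (5.4) and convolves them with one additional Bernoulli source (5.5):

```python
from math import comb

def binom_pmf(S, a):
    """State probabilities (5.4): probability that i of S sources are busy."""
    return [comb(S, i) * a**i * (1 - a)**(S - i) for i in range(S + 1)]

def convolve_one_source(pS, a):
    """Convolve a state distribution on 0..S with the Bernoulli
    distribution (5.5) of one extra source, giving states 0..S+1."""
    S = len(pS) - 1
    return [(pS[i] if i <= S else 0.0) * (1 - a) +
            (pS[i - 1] if i >= 1 else 0.0) * a
            for i in range(S + 2)]
```

Convolving `binom_pmf(S, a)` with one more source reproduces `binom_pmf(S + 1, a)`, which is exactly the induction step of the example.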
5.2.2 Traffic characteristics of Binomial traffic
We summarize definitions of parameters given above:
γ = call intensity per idle source, (5.7)
1/µ = mean service (holding) time, (5.8)
β = γ/µ = offered traffic per idle source. (5.9)
By definition, the offered traffic of a source is equal to the carried traffic in a system with no congestion, where the source freely alternates between the states idle and busy. Therefore, we have the following definitions:

a = β/(1+β) = offered traffic per source,     (5.10)

A = S · a = S · β/(1+β) = total offered traffic,     (5.11)

α = carried traffic per source,     (5.12)

Y = S · α = total carried traffic,     (5.13)

y = Y/n = carried traffic per channel with random hunting.     (5.14)
Offered traffic per source is a difficult concept to deal with, because the proportion of time a source is idle depends on the congestion. The number of calls offered by a source depends on the number of channels (feedback): a high congestion results in more idle time for a source and thus in more call attempts.
Time congestion:

E = 0 ,            S < n ,
E = p(n) = a^n ,   S = n .

Carried traffic:

Y = S · α = Σ_{i=0}^{S} i · p(i) = S · a = A ,     (5.15)

which is the mean value of the Binomial distribution (5.4). In this case with no blocking we of course have a = α and:

Traffic congestion:

C = (A − Y)/A = 0 .     (5.16)
Number of call attempts per time unit:

Λ = Σ_{i=0}^{S} p(i) · (S−i) γ

  = γ S − γ · Σ_{i=0}^{S} i · p(i) = γ S − γ S a

  = S (1 − α) · γ ,

where S (1 − α) is the average number of idle sources.

As all call attempts are accepted we get:

Call congestion:

B = 0 .     (5.17)
Traffic carried by channel i:

Random hunting:   y = Y/n = S · α/n .     (5.18)

Sequential hunting: a complex expression derived by L.A. Joys (1971 [64]).

Improvement function:

Fn(A) = Yn+1 − Yn = 0 .     (5.19)
The peakedness of the Binomial distribution is (Tab. 3.1):

Z = σ²/m1 = S · a · (1−a) / (S · a) ,

Z = 1 − a = 1 − A/S = 1/(1 + β) < 1 .     (5.20)

We observe that the peakedness Z is independent of the number of sources and always less than one. Therefore it corresponds to smooth traffic.
Duration of state i: This is exponentially distributed with rate:
γ(i) = (S − i) · γ + i · µ , 0 ≤ i ≤ S ≤ n . (5.21)
Finite source traffic is characterized by the number of sources S and the offered traffic per idle source β. Alternatively, we often use the offered traffic A and the peakedness Z. From (5.11) and (5.20) we get the following relations between the two sets of parameters (S, β) and (A, Z):

A = S · β/(1 + β) ,     (5.22)

Z = 1/(1 + β) ,     (5.23)

and by solving these equations with respect to β and S we get:

β = (1 − Z)/Z ,     (5.24)

S = A/(1 − Z) .     (5.25)
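The two parameter sets can be converted back and forth; a minimal sketch (the function names are mine, not the book's):

```python
def to_AZ(S, beta):
    """(S, beta) -> (A, Z) by (5.22) and (5.23)."""
    return S * beta / (1 + beta), 1 / (1 + beta)

def to_S_beta(A, Z):
    """(A, Z) -> (S, beta) by (5.25) and (5.24); requires Z < 1 (smooth traffic)."""
    return A / (1 - Z), (1 - Z) / Z
```

For example, `to_AZ(4, 1/3)` gives A = 1 erlang and Z = 3/4, and `to_S_beta(1, 3/4)` maps back to S = 4, β = 1/3.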
5.3 Engset distribution
The only difference in comparison with Sec. 5.2 is that the number of sources S is now greater than or equal to the number of trunks (channels), S ≥ n. Therefore, call attempts may experience congestion.
5.3.1 State probabilities
The cut equations are identical to (5.1), but they exist only for 0 ≤ i ≤ n (Fig. 5.4). The normalization equation becomes:
1 = p(0) · [ 1 + C(S, 1) · (γ/µ) + · · · + C(S, n) · (γ/µ)^n ] .
[Birth–death chain: states 0, 1, …, i, …, n−1, n; arrival rates S γ, (S−1) γ, …, (S−n+1) γ; departure rates µ, 2 µ, …, i µ, …, n µ.]
Figure 5.4: State transition diagram for the Engset case with S > n, where S is the number of sources and n is the number of channels.
From this we obtain p(0), and by letting β = γ/µ the state probabilities become:

p(i) = C(S, i) · β^i / Σ_{j=0}^{n} C(S, j) · β^j ,   0 ≤ i ≤ n .     (5.26)

In the same way as above we may, by using (5.10), rewrite this expression in a form analogous to (5.4):

p(i) = C(S, i) · a^i · (1−a)^(S−i) / Σ_{j=0}^{n} C(S, j) · a^j · (1−a)^(S−j) ,   0 ≤ i ≤ n ,     (5.27)

from which we directly observe why it is called a truncated Binomial distribution (cf. the truncated Poisson distribution (4.10)). The distribution (5.26) & (5.27) is called the Engset distribution after the Norwegian T. Engset (1865–1943), who first published the model with a finite number of sources (1918 [28]).
5.3.2 Traffic characteristics of Engset traffic
The Engset distribution results in more complicated calculations than the Erlang loss system. The essential issue is to understand how to find the performance measures directly from the state probabilities using the definitions. The Engset system is characterized by the parameters β = γ/µ = offered traffic per idle source, S = number of sources, and n = number of channels.
Time congestion E: this is by definition equal to the proportion of time the system is blocking new call attempts, i.e. p(n) (5.26):

En,S(β) = p(n) = C(S, n) · β^n / Σ_{j=0}^{n} C(S, j) · β^j ,   S ≥ n .     (5.28)
Call congestion B: this is by definition equal to the proportion of call attempts which are lost. Only call attempts arriving at the system in state n are blocked. During one unit of time we get the following ratio between the number of blocked call attempts and the total number of call attempts:

Bn,S(β) = p(n) · (S−n) γ / Σ_{j=0}^{n} p(j) · (S−j) γ

        = C(S, n) · β^n · (S−n) γ / Σ_{j=0}^{n} C(S, j) · β^j · (S−j) γ .

Using

C(S, i) · (S−i)/S = C(S−1, i) ,

we get:

Bn,S(β) = C(S−1, n) · β^n / Σ_{j=0}^{n} C(S−1, j) · β^j ,

Bn,S(β) = En,S−1(β) ,   S ≥ n .     (5.29)
This result may be interpreted as follows. The probability that a call attempt from a random idle source (subscriber) is blocked is equal to the probability that the remaining (S−1) sources occupy all n channels. This is called the arrival theorem, and it can be shown to be valid for both loss and delay systems with a limited number of sources. The result is based on the product form among sources and the convolution of sources. As E increases when S increases, we have Bn,S(β) = En,S−1(β) < En,S(β).
Theorem 5.1 Arrival theorem: For fully accessible systems with a limited number of sources, a random source will upon arrival observe the state of the system as if the source itself did not belong to the system.
The PASTA property is included as a special case, because an infinite number of sources minus one source is still an infinite number.
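The arrival theorem can be verified numerically from the definitions. The sketch below (my function names, not the book's) computes the Engset state probabilities (5.26), the time congestion (5.28), and the call congestion directly from the per-state attempt rates (S − i) γ; the factor γ cancels in the ratio:

```python
from math import comb

def engset_p(n, S, beta):
    """Engset state probabilities (5.26): truncated Binomial distribution."""
    w = [comb(S, i) * beta**i for i in range(n + 1)]
    total = sum(w)
    return [x / total for x in w]

def engset_E(n, S, beta):
    """Time congestion (5.28): probability that all n channels are busy."""
    return engset_p(n, S, beta)[n]

def engset_B(n, S, beta):
    """Call congestion: blocked attempts over all attempts; the state-i
    attempt rate is (S - i)*gamma, and gamma cancels in the ratio."""
    p = engset_p(n, S, beta)
    rates = [(S - i) * p[i] for i in range(n + 1)]
    return rates[n] / sum(rates)
```

For n = 3, S = 4, β = 1/3 this gives engset_B(3, 4, 1/3) = engset_E(3, 3, 1/3) = 1/64 to machine precision, as the arrival theorem (5.29) predicts.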
Carried traffic: By applying the cut equation between state [ i−1 ] and state [ i ] we get:

Y = Σ_{i=1}^{n} i · p(i)     (5.30)

  = Σ_{i=1}^{n} (γ/µ) · (S−i+1) · p(i−1)

  = Σ_{i=0}^{n−1} β · (S−i) · p(i)     (5.31)

  = Σ_{i=0}^{n} β · (S−i) · p(i) − β · (S−n) · p(n) ,

Y = β · (S − Y) − β · (S−n) · E ,     (5.32)

as E = En,S(β) = p(n). Solving with respect to Y:

Y = [ β/(1+β) ] · { S − (S−n) · E } .     (5.33)
Traffic congestion C = Cn,S(A). This is the most important congestion measure. The offered traffic is given by (5.22) and we get:

C = (A − Y)/A

  = [ S β/(1+β) − (β/(1+β)) · { S − (S−n) · E } ] / [ S β/(1+β) ] ,

C = [ (S−n)/S ] · E .     (5.34)
We may also find the carried traffic if we know the call congestion B. The number of accepted call attempts from a source, which on average is idle 1/γ time units before it generates a call attempt, is 1 · (1 − B), and each accepted call has an average duration of 1/µ. Thus the carried traffic per source, i.e. the proportion of time the source is busy, becomes:

α = [ (1−B)/µ ] / [ 1/γ + (1−B)/µ ] .

The total carried traffic becomes:

Y = S · α = S · β (1−B) / [ 1 + β (1−B) ] .     (5.35)
Equating the two expressions for the carried traffic (5.33) & (5.35), we get the following relation between E and B:

E = [ S/(S−n) ] · B / [ 1 + β (1−B) ] .     (5.36)
Number of call attempts per time unit:

Λ = Σ_{i=0}^{n} p(i) · (S−i) γ ,

Λ = (S − Y) · γ ,     (5.37)
where Y is the carried traffic (5.30). Thus (S − Y) is the average number of idle sources, which is evident.
Historically, the total offered traffic was defined as Λ/µ. This is, however, misleading, because we cannot assign every repeated call attempt a mean holding time 1/µ. It has also caused a lot of confusion, because the offered traffic by this definition depends upon the system (number of channels). With few channels available many call attempts are blocked, so the sources are idle a higher proportion of the time and thus generate more call attempts per time unit.
Lost traffic:

Aℓ = A · C = S · (β/(1+β)) · ((S−n)/S) · E = [ (S−n) β/(1+β) ] · E .     (5.38)
Duration of state i: This is exponentially distributed with intensity:

γ(i) = (S−i) · γ + i · µ ,   0 ≤ i < n ,
γ(n) = n µ ,   i = n .     (5.39)
Improvement function:

Fn,S(A) = Yn+1 − Yn .     (5.40)
Example 5.3.1: Call average and time average
Above we have, under the assumption of statistical equilibrium, defined the state probabilities p(i) as the proportion of time the system spends in state i, i.e. as a time average. We may also study how the state of the system looks when it is observed by an arriving or departing source (user), i.e. a call average. If we consider one time unit, then on average (S−i) γ · p(i) sources will observe the system in state [ i ] just before the arrival epoch, and if they are accepted they will bring the system into state [ i+1 ]. Sources observing the system in state n are blocked and remain idle. Therefore, arriving sources observe the system in state [ i ] with probability:

πn,S,β(i) = (S−i) γ · p(i) / Σ_{j=0}^{n} (S−j) γ · p(j) ,   i = 0, 1, . . . , n .     (5.41)

In a way analogous to the derivation of (5.29), we may show that, in agreement with the arrival theorem (Theorem 5.1), we have:

πn,S,β(i) = pn,S−1,β(i) ,   i = 0, 1, . . . , n .     (5.42)

When a source leaves the system and looks back, it observes the system in state [ i−1 ] with probability:

ψn,S,β(i−1) = i µ · p(i) / Σ_{j=1}^{n} j µ · p(j) ,   i = 1, 2, . . . , n .     (5.43)

By applying cut equations we immediately get that this is identical to (5.41), if we include the blocked customers. On average, sources thus depart from the system in the same state as they arrive to the system. The process is reversible and insensitive to the service time distribution. If we make a film of the system, then we are unable to determine whether time runs forward or backward. □
5.4 Relations between E, B, and C
From (5.36) we get the following relation between E = En,S(β) and B = Bn,S(β) = En,S−1(β):
E = [ S/(S−n) ] · B / [ 1 + β (1−B) ]    or    1/E = [ (S−n)/S ] · [ (1+β) · (1/B) − β ] ,     (5.44)

B = (S−n) · E · (1+β) / [ S + (S−n) · E · β ]    or    1/B = [ 1/(1+β) ] · [ (S/(S−n)) · (1/E) + β ] .     (5.45)
The expressions on the right-hand side are linear in the reciprocal blocking probabilities. In (5.34) we obtained the following simple relation between C and E:

C = [ (S−n)/S ] · E ,     (5.46)

E = [ S/(S−n) ] · C .     (5.47)
If we in (5.46) express E by B (5.44), then we get C expressed by B:

C = B / [ 1 + β · (1−B) ] ,     (5.48)

B = (1+β) · C / ( 1 + β C ) .     (5.49)
This relation between B and C is general for any system and may be derived from the carried traffic as follows. The carried traffic Y corresponds to (Y · µ) accepted call attempts per time unit. The average number of idle sources is (S − Y), so the average number of call attempts per time unit is (S − Y) · γ (5.37). The call congestion is the ratio between the number of rejected call attempts and the total number of call attempts, both per time unit:

B = [ (S−Y) γ − Y · µ ] / [ (S−Y) γ ] = [ (S−Y) β − Y ] / [ (S−Y) β ] .

By definition, Y = A (1 − C), and from (5.22) we have S = A (1+β)/β. Inserting this we get:

B = [ A (1+β) − A (1−C) β − A (1−C) ] / [ A (1+β) − A (1−C) β ] ,

B = (1+β) · C / ( 1 + β C ) ,   q.e.d.
From the last equation we see that for small values of the traffic congestion C (so that 1 + β C ≈ 1), the traffic congestion is approximately Z (the peakedness) times the call congestion:

C ≈ B/(1+β) = Z · B .     (5.50)
From (5.48) and (5.29) we get for Engset traffic:
Cn,S(β) < Bn,S(β) < En,S(β) . (5.51)
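These relations are easy to check numerically; a small sketch (the function names are mine) implementing (5.45), (5.46), and (5.48):

```python
def call_from_time(n, S, beta, E):
    """(5.45): call congestion B from time congestion E."""
    return (S - n) * E * (1 + beta) / (S + (S - n) * E * beta)

def traffic_from_time(n, S, E):
    """(5.46): traffic congestion C from time congestion E."""
    return (S - n) * E / S

def traffic_from_call(beta, B):
    """(5.48): C from B; this relation holds for any system."""
    return B / (1 + beta * (1 - B))
```

With the Engset values n = 3, S = 4, β = 1/3 and E = 4/85, these reproduce B = 1/64 and C = 1/85, respecting the ordering C < B < E of (5.51).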
5.5 Evaluation of Engset’s formula
If we try to calculate numerical values of Engset's formula directly from (5.28) (time congestion E), then we will experience numerical problems for large values of S and n. In the following we derive various numerically stable recursive formulæ for E and its reciprocal I = 1/E. When the time congestion E is known, it is easy to obtain the call congestion B and the traffic congestion C by using the formulæ (5.45) and (5.46). Numerically it is also simple to find any of the four parameters S, β, n, E when we know the other three. Mathematically we may assume that n, and possibly S, are non-integral.
5.5.1 Recursion formula on n
From the general formula (4.27), recursive in n, and using λx = (S−x) γ and β = γ/µ, we get:

Ex,S(β) = [ ((S−x+1) γ/(x µ)) · Ex−1,S(β) ] / [ 1 + ((S−x+1) γ/(x µ)) · Ex−1,S(β) ] ,

Ex,S(β) = (S−x+1) β · Ex−1,S(β) / [ x + (S−x+1) β · Ex−1,S(β) ] ,   E0,S(β) = 1 .     (5.52)

Introducing the reciprocal time congestion In,S(β) = 1/En,S(β), we find the recursion formula:

Ix,S(β) = 1 + [ x / ((S−x+1) β) ] · Ix−1,S(β) ,   I0,S(β) = 1 .     (5.53)
The number of iterations is n. Both (5.52) and (5.53) are analytically exact, numerically stable, and accurate recursions for increasing values of x. However, for decreasing values of x the numerical errors accumulate, and the recursions are not reliable.
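A direct implementation of the recursion (5.52) can be sketched as follows (the function name is mine):

```python
def engset_E_rec_n(n, S, beta):
    """Time congestion E_{n,S}(beta) by the recursion (5.52) on n."""
    E = 1.0                              # initial value E_{0,S}(beta) = 1
    for x in range(1, n + 1):
        t = (S - x + 1) * beta * E       # (S-x+1)*beta*E_{x-1,S}
        E = t / (x + t)
    return E
```

For n = 3, S = 4, β = 1/3 the iterates are 4/7, 2/9, and finally 4/85 ≈ 0.0471.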
5.5.2 Recursion formula on S
Let us denote the normalized state probabilities of a system with n channels and S−1 sources by pn,S−1(i). We get the state probabilities of a system with n channels and S sources by convolving these state probabilities with the state probabilities of a single source, which are given by p1,1(0) = 1−a, p1,1(1) = a. We then get states from zero to n+1, truncate the state space at n, and normalize the state probabilities (cf. Example 5.2.1), assuming p(x) = 0 when x < 0:

qn,S(i) = (1−a) · pn,S−1(i) + a · pn,S−1(i−1) ,   i = 0, 1, . . . , n .     (5.54)

The obtained state probabilities qn,S(i) are not normalized, because we truncate at state [ n ] and exclude the last term for state [ n+1 ]: qn,S(n+1) = a · pn,S−1(n). The normalized state probabilities pn,S(i) for a system with S sources and n channels are thus obtained from the normalized state probabilities pn,S−1(i) for a system with S−1 sources by:

pn,S(i) = qn,S(i) / [ 1 − a · pn,S−1(n) ] ,   i = 0, 1, . . . , n .     (5.55)
The time congestion En,S(β) for a system with S sources can be expressed by the time congestion En,S−1(β) for a system with S−1 sources by inserting (5.54) into (5.55):

En,S(β) = pn,S(n)

        = [ (1−a) · pn,S−1(n) + a · pn,S−1(n−1) ] / [ 1 − a · pn,S−1(n) ]

        = [ (1−a) · En,S−1(β) + a · (n µ/((S−n) γ)) · En,S−1(β) ] / [ 1 − a · En,S−1(β) ] ,

where we have used the balance equation between state [ n−1, S−1 ] and state [ n, S−1 ]. Replacing a by using (5.10), we get:

En,S(β) = [ En,S−1(β) + (n/(S−n)) · En,S−1(β) ] / [ 1 + β − β En,S−1(β) ] .

Thus we obtain the following recursive formula:

En,S(β) = [ S/(S−n) ] · En,S−1(β) / [ 1 + β ( 1 − En,S−1(β) ) ] ,   S > n ,   En,n(β) = a^n .     (5.56)

The initial value is obtained from (5.15). Using the reciprocal blocking probability I = 1/E, we get:

In,S(β) = [ (S−n) / (S (1−a)) ] · [ In,S−1(β) − a ] ,   S > n ,   In,n(β) = a^(−n) .     (5.57)
For increasing S the number of iterations is S−n. However, numerical errors accumulate due to the multiplication by S/(S−n), which is greater than one, so the applicability is limited. Therefore, it is recommended to use the recursion (5.59) given in the next section for increasing S. For decreasing S the above formula is analytically exact, numerically stable, and accurate. However, the initial value must be known beforehand.
5.5.3 Recursion formula on both n and S
If we insert (5.52) into (5.56), and (5.53) into (5.57) respectively, we find:

En,S(β) = S a · En−1,S−1(β) / [ n + (S−n) a · En−1,S−1(β) ] ,   E0,S−n(β) = 1 ,     (5.58)

In,S(β) = [ n/(S a) ] · In−1,S−1(β) + (S−n)/S ,   I0,S−n(β) = 1 ,     (5.59)

which are recursive in both the number of servers and the number of sources. Both of these recursions are numerically accurate for increasing indices, and the number of iterations is n (Joys, 1967 [62]).
From the above we have the following conclusions regarding recursion formulæ for the Engset formula. For increasing values of the parameter, the recursion formulæ (5.52) & (5.53) are very accurate, and the formulæ (5.58) & (5.59) are almost as good. The recursion formulæ (5.56) & (5.57) are numerically unstable for increasing values, but unlike the others they are stable for decreasing values. In general, a recursion which is stable in one direction will be unstable in the opposite direction.
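For completeness, a sketch of the recursion (5.58) on both n and S (the function name is mine). It iterates E for x = 1, …, n while the number of sources grows together with the number of channels, starting from the initial value E0,S−n(β) = 1:

```python
def engset_E_rec_nS(n, S, beta):
    """Time congestion E_{n,S}(beta) by the recursion (5.58) on both n and S."""
    a = beta / (1 + beta)    # offered traffic per source (5.10)
    E = 1.0                  # initial value E_{0,S-n}(beta) = 1
    for x in range(1, n + 1):
        s = S - n + x        # current number of sources; s - x = S - n stays fixed
        E = s * a * E / (x + (s - x) * a * E)
    return E
```

`engset_E_rec_nS(3, 4, 1/3)` again returns 4/85; for S = n it reduces to the initial value a^n of (5.56).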
Example 5.5.1: Engset's loss system
We consider an Engset loss system having n = 3 channels and S = 4 sources. The call rate per idle source is γ = 1/3 calls per time unit, and the mean service time (1/µ) is 1 time unit. We find the following parameters:

β = γ/µ = 1/3 erlang (offered traffic per idle source),

a = β/(1+β) = 1/4 erlang (offered traffic per source),

A = S · a = 1 erlang (offered traffic),

Z = 1 − A/S = 3/4 (peakedness).

From the state transition diagram we obtain the following table:

i       γ(i)    µ(i)    q(i)      p(i)      i · p(i)    γ(i) · p(i)
0       4/3     0       1.0000    0.3176    0.0000      0.4235
1       3/3     1       1.3333    0.4235    0.4235      0.4235
2       2/3     2       0.6667    0.2118    0.4235      0.1412
3       1/3     3       0.1481    0.0471    0.1412      0.0157
Total                   3.1481    1.0000    0.9882      1.0039

We find the following blocking probabilities:

Time congestion:    E3,4(1/3) = p(3) = 0.0471 ,

Traffic congestion: C3,4(1/3) = (A − Y)/A = (1 − 0.9882)/1 = 0.0118 ,

Call congestion:    B3,4(1/3) = γ(3) · p(3) / Σ_{i=0}^{3} γ(i) · p(i) = 0.0157/1.0039 = 0.0156 .
We notice that E > B > C, which is a general result for the Engset case (cf. (5.51) and Fig. 5.7). By applying the recursion formula (5.52) we, of course, get the same results:
E0,4(1/3) = 1 ,

E1,4(1/3) = (4−1+1) · (1/3) · 1 / [ 1 + (4−1+1) · (1/3) · 1 ] = 4/7 ,

E2,4(1/3) = (4−2+1) · (1/3) · (4/7) / [ 2 + (4−2+1) · (1/3) · (4/7) ] = 2/9 ,

E3,4(1/3) = (4−3+1) · (1/3) · (2/9) / [ 3 + (4−3+1) · (1/3) · (2/9) ] = 4/85 = 0.0471 ,   q.e.d.

□
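The whole example can be reproduced in exact rational arithmetic. The following sketch (mine, not the book's) recomputes the table and the three congestion values, which come out as 4/85 ≈ 0.0471, 1/64 ≈ 0.0156, and 1/85 ≈ 0.0118:

```python
from fractions import Fraction
from math import comb

n, S, beta = 3, 4, Fraction(1, 3)

q = [comb(S, i) * beta**i for i in range(n + 1)]   # relative state probabilities q(i)
total = sum(q)                                     # 3.1481 in the table
p = [x / total for x in q]                         # 0.3176, 0.4235, 0.2118, 0.0471

E = p[n]                                           # time congestion (5.28)
A = S * beta / (1 + beta)                          # offered traffic (5.22): 1 erlang
Y = sum(i * p[i] for i in range(n + 1))            # carried traffic: 0.9882
C = (A - Y) / A                                    # traffic congestion

rates = [(S - i) * p[i] for i in range(n + 1)]     # proportional to gamma(i)*p(i)
B = rates[n] / sum(rates)                          # call congestion
```

Using `fractions.Fraction` avoids the rounding visible in the table (0.0157/1.0039 = 0.0156).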
Example 5.5.2: Limited number of sources
The influence of the limited number of sources can be estimated by considering either the time congestion, the call congestion, or the traffic congestion. The congestion values are shown in Fig. 5.7 for a fixed number of channels n, a fixed offered traffic A, and an increasing value of the peakedness Z, corresponding to a number of sources S given by S = A/(1−Z) (5.25). The offered traffic is defined as the traffic carried in a system without blocking (n = ∞). Here Z = 1 corresponds to a Poisson arrival process (Erlang's B-formula, E = B = C). For Z < 1 we get the Engset case, and in this case the time congestion E is larger than the call congestion B, which is larger than the traffic congestion C. For Z > 1 we get the Pascal case (Secs. 5.6 & 5.7 and Example 5.7.2). □
5.6 Pascal distribution
In the Binomial case the arrival intensity decreases linearly with an increasing number of busy sources. Palm & Wallstrom introduced a model where the arrival intensity increases linearly with the number of busy sources (Wallstrom, 1964 [117]). The arrival intensity in state i is given by:
λi = γ · (S + i), 0 ≤ i ≤ n , (5.60)
where γ and S are positive constants. The holding times are still assumed to be exponentially distributed with intensity µ. In this section we assume the number of channels is infinite. We
[Birth–death chain: states 0, 1, …, i−1, i, …; arrival rates S γ, (S+1) γ, …, (S+i−1) γ, (S+i) γ, …; departure rates µ, 2 µ, …, i µ, (i+1) µ, …]

Figure 5.5: State transition diagram for the Negative Binomial case with infinite capacity.
set up a state transition diagram (Fig. 5.6 with n infinite) and get the following cut equations:
S γ · p(0) = µ · p(1) ,

(S+1) γ · p(1) = 2 µ · p(2) ,

. . .

(S+i−1) γ · p(i−1) = i µ · p(i) ,

(S+i) γ · p(i) = (i+1) µ · p(i+1) ,

. . .     (5.61)
To obtain statistical equilibrium it is obvious that for infinite number of channels we mustrequire that γ < µ so that the arrival rate becomes smaller than the service rate from somestate. All state probabilities can be expressed by p(0). Assuming
β = γ/µ < 1
and using:
\binom{-S}{i} = (−1)^i · \binom{S+i-1}{i} = (−S)(−S−1) · · · (−S−i+1) / i!                (5.62)
we get:
p(1) = (S γ / µ) · p(0) = p(0) · \binom{-S}{1} (−β)^1 ,
p(2) = ((S+1) γ / 2µ) · p(1) = p(0) · \binom{-S}{2} (−β)^2 ,
. . .
p(i) = ((S+i−1) γ / iµ) · p(i−1) = p(0) · \binom{-S}{i} (−β)^i ,
p(i+1) = ((S+i) γ / (i+1)µ) · p(i) = p(0) · \binom{-S}{i+1} (−β)^{i+1} ,
. . .
5.7. TRUNCATED PASCAL DISTRIBUTION 153
The total sum of all probabilities must be equal to one:
1 = p(0) · [ \binom{-S}{0} (−β)^0 + \binom{-S}{1} (−β)^1 + \binom{-S}{2} (−β)^2 + . . . ]

  = p(0) · (1 − β)^{−S} ,                                        (5.63)
where we have used Newton's generalized binomial expansion:
(x + y)^r = \sum_{i=0}^{\infty} \binom{r}{i} x^i y^{r-i} ,                                (5.64)
which by the definition (5.62) is valid also for complex numbers, in particular real numbers (r need not be positive or an integer). Thus we find the steady-state probabilities:
p(i) = \binom{-S}{i} (−β)^i (1 − β)^S ,    0 ≤ i < ∞ ,  β < 1 .                        (5.65)
By using (5.62) we get:
p(i) = \binom{S+i-1}{i} β^i (1 − β)^S ,    0 ≤ i < ∞ ,  β < 1 ,                        (5.66)
which is the Pascal distribution (Tab. 3.1). The carried traffic is equal to the offered traffic as the capacity is unlimited, and it may be shown that it has the following mean value and peakedness:
A = S · β / (1 − β) ,

Z = 1 / (1 − β) .
These formulæ are similar to (5.22) and (5.23). The traffic characteristics of this model may be obtained by an appropriate substitution of the parameters of the Binomial distribution, as explained in the following section.
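The distribution (5.66) and the moment formulæ can be checked numerically. The following sketch (our own helper, assuming an integer number of sources S so that `math.comb` applies; for non-integer S one would use gamma functions instead) evaluates the Pascal state probabilities and verifies the mean value and peakedness:

```python
from math import comb

def pascal_pmf(S, beta, i_max):
    # p(i) = binom(S+i-1, i) * beta**i * (1-beta)**S, cf. (5.66); requires beta < 1
    return [comb(S + i - 1, i) * beta**i * (1 - beta)**S for i in range(i_max + 1)]

S, beta = 2, 1.0 / 3.0
p = pascal_pmf(S, beta, 200)              # tail beyond i = 200 is negligible here

mean = sum(i * pi for i, pi in enumerate(p))
var = sum(i * i * pi for i, pi in enumerate(p)) - mean**2

print(sum(p))        # ~ 1: normalization, cf. (5.63)
print(mean)          # ~ S*beta/(1-beta) = 1.0
print(var / mean)    # peakedness ~ 1/(1-beta) = 1.5
```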
5.7 Truncated Pascal distribution
We consider the same traffic process as in Sec. 5.6, but now we restrict the number of servers to a limited number n. The restriction γ < µ is no longer necessary, as we always obtain statistical equilibrium with a finite number of states. The state transition diagram is shown in Fig. 5.6, and the state probabilities are obtained by truncation of (5.65):
p(i) = \binom{-S}{i} (−β)^i / \sum_{j=0}^{n} \binom{-S}{j} (−β)^j ,    0 ≤ i ≤ n .        (5.67)
154 CHAPTER 5. LOSS SYSTEMS WITH FULL ACCESSIBILITY
[Diagram: birth–death process with states 0, 1, . . . , n−1, n; arrival rate (S+i) γ out of state i to state i+1, and departure rate i µ out of state i to state i−1.]
Figure 5.6: State transition diagram for the Pascal (truncated Negative Binomial) case.
This is the truncated Pascal distribution. Formally it can be obtained from the Engset case by the following substitutions:
S is replaced by − S , (5.68)
γ is replaced by − γ . (5.69)
By these substitutions all formulæ of the Bernoulli/Engset cases become valid for the truncated Pascal distribution, and the same computer programs can be used for numerical evaluation.
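As a numerical sketch (our own code; assumes an integer S), the truncated distribution (5.67) can be evaluated directly by using the identity (5.62) to rewrite \binom{-S}{i} (−β)^i as \binom{S+i-1}{i} β^i:

```python
from math import comb

def truncated_pascal(S, beta, n):
    # relative weights binom(-S, i)*(-beta)**i = binom(S+i-1, i)*beta**i, cf. (5.62)
    q = [comb(S + i - 1, i) * beta**i for i in range(n + 1)]
    norm = sum(q)
    return [x / norm for x in q]

p = truncated_pascal(S=2, beta=1.0 / 3.0, n=4)
print([round(x, 4) for x in p])    # cf. the p(i) column in Example 5.7.1
```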
It can be shown that the state probabilities (5.67) are valid for arbitrary holding time distributions (Iversen, 1980 [43]), like the state probabilities of the Erlang and Engset loss systems (insensitivity). Assuming exponentially distributed holding times, this model has the same state probabilities as Palm's first normal form, i.e. a system with a Poisson arrival process having a random intensity distributed as a gamma distribution. Inter-arrival times are Pareto distributed, which is a heavy-tailed distribution. The model is used for modeling overflow traffic, which has a peakedness greater than one. For the Pascal case we get (cf. (5.51)):
Cn,S(β) > Bn,S(β) > En,S(β) . (5.70)
Example 5.7.1: Pascal loss system
We consider a Pascal loss system with n = 4 channels and S = 2 sources. The arrival rate is γ = 1/3 calls/time unit per idle source, and the mean holding time (1/µ) is 1 time unit. We find the following parameters when we for the Engset case let S = −2 (5.68) and γ = −1/3 (5.69):
β = γ/µ = −1/3 ,

a = β/(1 + β) = −1/2 ,

A = S · a = (−2) · (−1/2) = 1 erlang ,

Z = 1/(1 + β) = 1/(1 − 1/3) = 3/2 .
From a state transition diagram we get the following parameters:
  i     γ(i)     µ(i)    q(i)     p(i)     i · p(i)   γ(i) · p(i)
  0     0.6667   0       1.0000   0.4525   0.0000     0.3017
  1     1.0000   1       0.6667   0.3017   0.3017     0.3017
  2     1.3333   2       0.3333   0.1508   0.3017     0.2011
  3     1.6667   3       0.1481   0.0670   0.2011     0.1117
  4     2.0000   4       0.0617   0.0279   0.1117     0.0559

  Total                  2.2099   1.0000   0.9162     0.9721
We find the following blocking probabilities:
Time congestion:    E_{4,−2}(−1/3) = p(4) = 0.0279 .

Traffic congestion: C_{4,−2}(−1/3) = (A − Y)/A = (1 − 0.9162)/1 = 0.0838 .

Call congestion:    B_{4,−2}(−1/3) = γ(4) · p(4) / \sum_{i=0}^{4} γ(i) · p(i) = 0.0559/0.9721 = 0.0575 .
We notice that E < B < C, which is a general result for the Pascal case. By using the same recursion formula as for the Engset case (5.52), we of course get the same results:
E_{0,−2}(−1/3) = 1.0000 ,

E_{1,−2}(−1/3) = (2/3 · 1) / (1 + 2/3 · 1) = 2/5 ,

E_{2,−2}(−1/3) = (3/3 · 2/5) / (2 + 3/3 · 2/5) = 1/6 ,

E_{3,−2}(−1/3) = (4/3 · 1/6) / (3 + 4/3 · 1/6) = 2/29 ,

E_{4,−2}(−1/3) = (5/3 · 2/29) / (4 + 5/3 · 2/29) = 5/179 = 0.0279    q.e.d.    □
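The numbers in this example are easy to reproduce. The sketch below (our own code; variable names are ours) builds the relative state probabilities from the cut equations with the substituted Engset parameters S = −2, γ = −1/3, and recomputes E, C and B:

```python
S, gamma, mu, n = -2, -1.0 / 3.0, 1.0, 4   # substituted Engset parameters, cf. (5.68)-(5.69)

# Engset arrival rate in state i is (S - i)*gamma; relative probabilities from cut equations
q = [1.0]
for i in range(1, n + 1):
    q.append(q[-1] * (S - (i - 1)) * gamma / (i * mu))
p = [x / sum(q) for x in q]

A = 1.0                                     # offered traffic from the example
Y = sum(i * pi for i, pi in enumerate(p))   # carried traffic
E = p[n]                                    # time congestion
C = (A - Y) / A                             # traffic congestion
rate = [(S - i) * gamma for i in range(n + 1)]
B = rate[n] * p[n] / sum(r * pi for r, pi in zip(rate, p))   # call congestion
print(round(E, 4), round(C, 4), round(B, 4))   # 0.0279 0.0838 0.0575
```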
Example 5.7.2: Peakedness: numerical example
In Fig. 5.7 we keep the number of channels n and the offered traffic A fixed, and calculate the blocking probabilities for increasing peakedness Z. For Z > 1 we get the Pascal case. For this case the time congestion E is less than the call congestion B, which is less than the traffic congestion C. We observe that both the time congestion and the call congestion have a maximum value. Only the traffic congestion gives a reasonable description of the performance of the system.    □
[Plot: congestion probability [%] (0–16) versus peakedness Z (0.0–2.5) for the three measures E, B and C.]
Figure 5.7: Time congestion E, call congestion B and traffic congestion C as a function of peakedness Z for BPP traffic in a system with n = 20 trunks and an offered traffic A = 15 erlang. More comments are given in Example 5.5.2 and Example 5.7.2. For applications the traffic congestion C is the most important, as it is almost a linear function of the peakedness.
5.8. BATCHED POISSON ARRIVAL PROCESS 157
5.8 Batched Poisson arrival process
We consider an arrival process where events occur according to a Poisson process with rate λ. At each event a batch of calls (packets, jobs) arrives simultaneously. The distribution of the batch size is a discrete distribution b(i), (i = 1, 2, . . .). The batch size is at least one. In the classical Erlang loss system the batch size is always one. We choose the simplest case, where the batch size distribution is a geometric distribution (Tab. 3.1, p. 92):
b(i) = p (1− p)i−1 , i = 1, 2, . . . . (5.71)
m₁ = 1/p ,                                                        (5.72)

σ² = (1 − p)/p² ,                                                (5.73)

Z_geo = (1 − p)/p .                                                (5.74)
The complementary distribution function is given by:
b(≥ i) = \sum_{j=i}^{\infty} b(j) = p (1 − p)^{i−1} / (1 − (1 − p)) = (1 − p)^{i−1} .        (5.75)
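The moments (5.72)–(5.74) and the tail formula (5.75) are easily verified numerically. A small sketch (our own, with an arbitrary p = 0.4 and the sum truncated where the tail is numerically negligible):

```python
p = 0.4
b = [p * (1 - p)**(i - 1) for i in range(1, 2001)]   # geometric batch-size pmf (5.71)

m1 = sum(i * bi for i, bi in enumerate(b, start=1))
var = sum(i * i * bi for i, bi in enumerate(b, start=1)) - m1**2

print(m1)           # ~ 1/p = 2.5               (5.72)
print(var)          # ~ (1-p)/p**2 = 3.75       (5.73)
print(var / m1)     # ~ (1-p)/p = 1.5           (5.74)
print(sum(b[4:]))   # b(>= 5) ~ (1-p)**4        (5.75)
```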
By the splitting theorem for the Poisson process, the arrival process for batches of size i is a Poisson process with rate λ · b(i). If we assume service times are exponentially distributed with rate µ, and that each member of the batch is served independently, then the distribution of the number of busy channels in a system with infinite capacity has mean value and peakedness (Panken & van Doorn, 1993 [96]) as follows:
A = (λ/µ) · (1/p) ,                                                (5.76)

Z = 1/p .                                                        (5.77)
The offered traffic A is defined as the average number of batches per mean service time multiplied by the average batch size. The peakedness is greater than one, and we may model bursty traffic by this batched Poisson model.
5.8.1 Infinite capacity
If we assume balance in a cut between state [x−1] and state [x] (x = 1, 2, . . .) we find:
x µ · p(x) = \sum_{i=1}^{x} p(x−i) · λ · b(≥ i)                                        (5.78)

          = p(0) λ b(≥ x) + p(1) λ b(≥ x−1) + . . . + p(x−2) λ b(≥ 2) + p(x−1) λ b(≥ 1) .
For a cut between states [x−2] and [x−1] we have in a similar way:
(x−1) µ · p(x−1) = \sum_{i=1}^{x−1} p(x−1−i) · λ b(≥ i)                                (5.79)

                 = p(0) λ b(≥ x−1) + p(1) λ b(≥ x−2) + . . . + p(x−2) λ b(≥ 1) .
As we have b(≥ i+1) = (1−p) · b(≥ i) from (3.49), multiplying (5.79) by (1−p) makes the right-hand side identical to the right-hand side of (5.78), except for the last term in (5.78). Observing that b(≥ 1) = 1, we get:
x µ · p(x) − p(x−1) λ = (1−p) · (x−1) µ · p(x−1) ,                                (5.80)

p(x) = [ (1−p)(x−1)/x + λ/(x µ) ] · p(x−1) .                                        (5.81)
We thus only need the previous state probability to calculate the next state probability. We may start by letting p(0) = 1, calculate the states recursively, and normalize the state probabilities in each step of the recursion.
5.8.2 Finite capacity
If we have a finite number of channels and the batch size is bigger than the idle capacity, then we may either accept as many calls as possible and block the remaining calls (partial blocking), or we may block the total batch (batch blocking).
For partial blocking we get the same relative state probabilities as above. This is similar to the classical loss systems, where we may truncate the state space and re-normalize the state probabilities.
For batch-blocking the balance equations become:
x µ · p(x) = \sum_{i=0}^{x−1} p(i) · λ b(u | u ≤ n−i) ,    0 < x ≤ n ,                (5.82)
5.8. BATCHED POISSON ARRIVAL PROCESS 159
where the batch size b(u) to be accepted in state i now has an upper limit n−i. By using (5.75) we get the balance equation:
x µ · p(x) = \sum_{i=0}^{x−1} p(i) · λ [ 1 − (1 − p)^{n−i} ] .                        (5.83)
5.8.3 Performance measures
From the state probabilities we find the time, call, and traffic congestion in the usual way. The batched Poisson arrival process has the PASTA property, and therefore the time, call, and traffic congestion are equal. The traffic congestion is obtained from the state probabilities:
Y = \sum_{i=0}^{n} i · p(i) ,                                                        (5.84)

C = (A − Y)/A ,                                                                (5.85)
where the offered traffic is given by (5.76).
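For the partial-blocking case, the recursion (5.81) and the congestion (5.85) translate directly into code. A sketch (our own, with arbitrary example parameters; truncation at n channels with renormalization, as described in Sec. 5.8.2):

```python
lam, mu, p, n = 2.0, 1.0, 0.5, 10    # batch rate, service rate, geometric parameter, channels

# recursion (5.81), truncated at n and normalized (partial blocking)
q = [1.0]
for x in range(1, n + 1):
    q.append(((1 - p) * (x - 1) / x + lam / (x * mu)) * q[-1])
probs = [v / sum(q) for v in q]

A = lam / (mu * p)                    # offered traffic (5.76)
Y = sum(i * pi for i, pi in enumerate(probs))
C = (A - Y) / A                       # traffic congestion (5.85)
print(round(C, 4))
```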
If we rewrite (5.80) we get:
x µ · p(x) = [ (1−p)(x−1) µ + λ ] · p(x−1) .
For a Pascal traffic process we have:
xµ · p(x) = (S+x−1) γ · p(x−1) .
Equating the right-hand sides, we get for the factors of (x−1):
(1 − p) µ = γ ,

γ/µ = β = (1 − p) = (Z − 1)/Z ,                                                (5.86)
where we have used (5.77). For the constant factors we get, exploiting (5.76) and (5.77):
λ = S γ ,

S = λ/γ = (A/β) · p = A/(Z − 1) ,                                                (5.87)
which is in agreement with the Pascal case (Z > 1). So if we have a batched geometric arrival process with mean value A = λ/(µ · p) (5.76) and peakedness Z = 1/p (5.77), then we get an equivalent Pascal stream by choosing β as in (5.86) and S as in (5.87). Thus the batched Poisson process is identical to a Pascal traffic stream.
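The claimed equivalence can be checked numerically: mapping a batched Poisson stream with mean A and peakedness Z to a Pascal stream via (5.86) and (5.87) gives identical state probabilities. A sketch (our own code, arbitrary parameters):

```python
lam, mu, p, n = 2.0, 1.0, 0.5, 10
A, Z = lam / (mu * p), 1 / p          # (5.76), (5.77)
beta = (Z - 1) / Z                    # (5.86): beta = 1 - p
S = A / (Z - 1)                       # (5.87)
gamma = beta * mu

qb, qp = [1.0], [1.0]                 # batched Poisson (5.81) vs Pascal cut equations
for x in range(1, n + 1):
    qb.append(((1 - p) * (x - 1) / x + lam / (x * mu)) * qb[-1])
    qp.append((S + x - 1) * gamma / (x * mu) * qp[-1])

pb = [v / sum(qb) for v in qb]
pp = [v / sum(qp) for v in qp]
print(max(abs(a - b) for a, b in zip(pb, pp)))   # ~ 0: the two models coincide
```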
(The following will be elaborated further.)
For partial blocking (pb) in the batched Poisson process we have Epb = Bpb = Cpb, whereas the equivalent Pascal model gets the same traffic congestion Cpas = Cpb, but smaller values of call congestion Bpas and time congestion Epas.
For batch blocking (bb) and a single traffic stream, the traffic congestion expressed by the time congestion E_bb becomes:

C_bb = (E_bb/p) / (1 + E_bb · (1/p − 1)) = E_bb · Z / (1 + E_bb · (Z − 1)) ≈ E_bb · Z .        (5.88)
This is close to the traffic congestion for the Pascal model, as the traffic congestion is approximately proportional to the time congestion times the peakedness.
Updated: 2010.03.04
Chapter 6
Overflow theory
In this chapter we consider systems with limited accessibility, where traffic blocked from a primary group of channels overflows to secondary groups. Both carried traffic and overflow traffic have properties different from pure chance traffic (PCT), and therefore we cannot use the classical traffic models for these streams. In Sec. 6.1 we describe a typical problem from telecommunication networks, where we use limited accessibility both for service protection and for saving equipment. The exact solution by state probabilities is dealt with in Sec. 6.2. This approach is only possible for very small systems because of the state space explosion. Only for Erlang's ideal grading are we able to obtain a solution for any values of the parameters.
For real systems we have to use approximate solutions or computer simulations. Approximations are based either on state space (Sec. 6.3 – Sec. 6.6) or on time space (Sec. 6.7). In Sec. 6.3 we describe the carried traffic and the lost traffic by mean value and variance (or peakedness). We then assume that two traffic streams which have the same mean and variance are equivalent, thereby ignoring moments of order higher than two. For a given mean and variance of overflow traffic we are able to find an Erlang loss system (defined by offered traffic and number of channels) which has the same mean and variance. This is exploited in the ERT method (Sec. 6.4), which is the method most used in practice. Fredericks & Hayward's method (Sec. 6.5) is applicable to both smooth and bursty traffic, and easy to apply. It uses a simple transformation of the parameters of Erlang's loss model, and is based on an optimal splitting of the traffic process. Other state-based methods are described in Sec. 6.6. In particular, the method based on the BPP modeling paradigm, using traffic congestion, is of interest.
Methods based on time space are in general more complex. State-space methods based on Erlang's loss model only allow for two parameters (mean and variance). Methods based on time space allow for any number of parameters. In Sec. 6.7 we describe the application of interrupted Poisson processes and Cox-2 distributions. They both have three parameters. Using general Cox distributions or Markov modulated Poisson processes (MMPP), more parameters are available.
162 CHAPTER 6. OVERFLOW THEORY
6.1 Limited accessibility
In this section we consider systems with restricted (limited) accessibility, i.e. systems where a subscriber or a traffic flow only has access to k specific channels from a total of n (k ≤ n). If all k channels are busy, then a call attempt is blocked even if there are idle channels among the remaining (n−k) channels. An example is shown in Fig. 6.1, where we consider a hierarchical network with traffic from A to B, and from A to C. From A to B there is a direct (primary) route with n1 channels. If these channels are all busy, then the call is directed to the alternative (secondary) route via T to B. In a similar way, the traffic from A to C has a first-choice route AC and an alternative route ATC. If we assume the routes TB and TC are without blocking, then we get the accessibility scheme shown to the right in Fig. 6.1. From this we notice that the total number of channels is (n1 + n2 + n12) and that the traffic AB only has access to (n1 + n12) of these. In this case sequential hunting among the routes should be applied, so that a call is only routed via the group n12 when all n1 primary channels are busy.
[Diagram: network with exchanges A, B, C and transit exchange T; direct routes with n1 (A–B) and n2 (A–C) channels, and a common group of n12 channels via T. To the right, the corresponding accessibility scheme.]
Figure 6.1: Telecommunication network with alternate routing and the corresponding accessibility scheme, which is called an O'Dell grading. We assume the links between the transit exchange T and the exchanges B and C are without blocking. The n12 channels are common to both traffic streams.
It is typical for a hierarchical network that it possesses a certain service protection. Independent of how high the traffic from A to C is, it will never get access to the n1 channels. On the other hand, we may block calls even if there are idle channels, and therefore the utilization will always be lower than for systems with full accessibility. However, the utilization will be bigger than for two separate systems with the same total number of channels. The common channels allow for a certain traffic balancing between the two groups.
Historically, it was necessary to consider restricted accessibility because the electromechanical systems had very limited intelligence and limited selector capacity (accessibility). In digital systems we do not have these restrictions, but still the theory of restricted accessibility
6.2. EXACT CALCULATION BY STATE PROBABILITIES 163
is important, both in network planning and in guaranteeing a certain grade-of-service.
[Diagram: eight states labelled by the set of busy channels — ∅, {1}, {2}, {3}, {1,3}, {1,2}, {2,3}, {1,2,3} — with arrival rates λ/2 per stream and departure rate 1 per busy channel.]
Figure 6.2: State transition diagram for a small O'Dell grading (Fig. 6.1) with n = 3 channels, n1 = n2 = n12 = 1 (accessibility k = 2), ordered hunting, and offered traffic A = λ, equally distributed between the two groups (mean service time = one time unit). The detailed state transition diagram has 8 states. We specify the state of each channel. The state probabilities can only be obtained by setting up all 8 balance equations (7 node equations and a normalization condition) and solving these linear equations.
6.2 Exact calculation by state probabilities
The problem of evaluating systems with limited accessibility is due to the state space explosion. The number of states is in general so large that problems become intractable.
6.2.1 Balance equations
To have full information about the state of the system it is not sufficient to know how many channels are busy; we should also know which group a busy channel belongs to. Thus for the system in Fig. 6.1 the number of states will be (n1 + 1)(n2 + 1)(n12 + 1). In the worst case we have to specify the state of each channel and thus get 2^n states. For a very small O'Dell grading with n1 = n2 = n12 = 1 we get 8 states, as shown in Fig. 6.2.
For real systems the number of states becomes very large, and it is not convenient to find state probabilities from balance equations, or to find performance measures from state probabilities. Only Erlang's ideal grading has a simple and general solution.
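For the tiny O'Dell grading of Fig. 6.2 the 8 balance equations can be solved directly. The sketch below is our own model (channel 1 primary for stream AB, channel 2 primary for stream AC, channel 3 the common channel, ordered hunting, each stream offered λ/2); it builds the generator matrix and replaces one node equation by the normalization condition:

```python
import itertools
import numpy as np

lam = 1.0                    # total offered traffic A = lam (mean holding time = 1)
half = lam / 2               # each of the two streams offers lam/2

states = list(itertools.product((0, 1), repeat=3))    # busy flags for (ch1, ch2, ch12)
idx = {s: k for k, s in enumerate(states)}

Q = np.zeros((8, 8))
for s in states:
    c1, c2, c12 = s
    if not c1:                                   # stream AB hunts ch1 first ...
        Q[idx[s], idx[(1, c2, c12)]] += half
    elif not c12:                                # ... then the common channel ch12
        Q[idx[s], idx[(c1, c2, 1)]] += half
    if not c2:                                   # stream AC hunts ch2 first ...
        Q[idx[s], idx[(c1, 1, c12)]] += half
    elif not c12:
        Q[idx[s], idx[(c1, c2, 1)]] += half
    if c1:  Q[idx[s], idx[(0, c2, c12)]] += 1.0  # departures: rate 1 per busy channel
    if c2:  Q[idx[s], idx[(c1, 0, c12)]] += 1.0
    if c12: Q[idx[s], idx[(c1, c2, 0)]] += 1.0
np.fill_diagonal(Q, -Q.sum(axis=1))

# 7 node equations (pi Q = 0) plus the normalization condition
M = np.vstack([Q.T[:-1], np.ones(8)])
pi = np.linalg.solve(M, np.r_[np.zeros(7), 1.0])

E_AB = sum(pi[idx[s]] for s in states if s[0] == 1 and s[2] == 1)
print(round(E_AB, 4))        # time congestion for stream AB (ch1 and ch12 both busy)
```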
6.2.2 Erlang’s ideal grading
Erlang's ideal grading (EIG) is the only system with limited accessibility where the exact blocking probability can be calculated for any values of the number of channels n, the accessibility k, and the offered traffic A. It is also named Erlang's interconnection formula (EIF).
The grading is optimal in the sense that it can carry more traffic than any other grading with random hunting and the same parameters. A small grading with sequential or intelligent hunting can sometimes carry a little more traffic. In this case EIG is very close to the optimal value, and the great importance of Erlang's ideal grading is that it can be used as an optimal reference value for the utilization which can be obtained in practice for any grading. Historically, there have been many misunderstandings about EIG, and numerically it has been difficult to evaluate the formula without computers. However, it is a model of basic theoretical interest. It can be shown that Erlang's ideal grading is insensitive to the holding time distribution.
In our terminology we consider PCT–I traffic offered to n identical channels. Each time a call attempt arrives, it chooses k channels at random among the n channels, and seizes an idle channel among these k channels, if there is any. If all k channels chosen are busy, the call attempt is lost.
In order to implement this grading in an electromechanical system we divide the traffic into g inlet groups. By random hunting the number of inlet groups is:
g_r = \binom{n}{k} \qquad (6.1)
(or a whole multiple of this). This is the number of ways we can choose k channels among n channels. Each channel will appear in all possible combinations with the other channels. With ordered hunting the number of inlet groups becomes:
g_o = \binom{n}{k} \cdot k! \qquad (6.2)
(or a whole multiple of this). With ordered hunting the hunting position of a channel is important, and we therefore ensure that all k! possible permutations of the k hunting positions occur once. In a digital stored-program-controlled (SPC) system we do not construct these groups; instead, using random numbers, we may choose k channels at random and thus construct a random group when needed. In Fig. 6.3 a realization of Erlang's ideal grading is shown.
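The SPC selection rule just described can be sketched in a few lines. This is a minimal illustration, not taken from the book, and the function name `try_seize` is ours:

```python
import random

def try_seize(busy, n, k, rng=random):
    """One call attempt in an Erlang ideal grading: choose k of the
    n channels at random and seize an idle one among them, if any.
    `busy` is a set of busy channel indices; returns the seized
    channel, or None if all k chosen channels are busy (call lost)."""
    chosen = rng.sample(range(n), k)
    idle = [c for c in chosen if c not in busy]
    if not idle:
        return None            # all k chosen channels busy: call attempt lost
    c = rng.choice(idle)       # random hunting among the idle chosen channels
    busy.add(c)
    return c
```

With n = 4 and k = 2 this mimics the grading of Fig. 6.3; driving it with exponential arrivals and holding times would give a simulation of EIG.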
State probabilities
Under the above-mentioned assumptions we get a system where all channels are offered the same traffic load and therefore all have the same probability of being occupied at an arbitrary
6.2. EXACT CALCULATION BY STATE PROBABILITIES 165
[Figure 6.3 diagram: 12 inlet groups, each connected to two of the four channels:

Group:        1    2    3    4    5    6    7    8    9    10   11   12
Combination: 1–2  1–3  1–4  2–3  2–4  3–4  2–1  3–1  4–1  3–2  4–2  4–3 ]
Figure 6.3: An example of Erlang's ideal grading (Erlang's interconnection formula, EIG) with n = 4 channels and k = 2. With random hunting the physical grading has g_r = 6 groups (upper part); with sequential hunting the grading has g_o = 12 inlet groups. The offered traffic is distributed among the groups, so that all groups receive PCT–I traffic with the same intensity.
point of time. By exploiting this symmetry it is possible to set up the state equations under the assumption of statistical equilibrium in such a way that we obtain the same advantages as in a full accessible group, where a state is uniquely determined by the total number of busy channels.
Fig. 6.4 shows the state transition diagram of an EIG with the same parameters as the O'Dell grading in Fig. 6.2. The state transition diagram is reversible and has local balance (Sec. 7.2). These are properties we consider further in connection with multi-dimensional loss systems and networks (Chap. 7).
For a call that arrives when i channels are busy, the blocking probability is equal to the probability that all k channels chosen at random are among the i busy channels. For i < k
[Figure 6.4 diagram: detailed state transition diagram with the 8 states ∅, {1}, {2}, {3}, {1 2}, {1 3}, {2 3}, {1 2 3}; below it, the aggregated one-dimensional state transition diagram with states 0, 1, 2, 3.]
Figure 6.4: State transition diagram for Erlang's ideal grading with n = 3 channels, accessibility k = 2, and offered traffic A = λ (mean service time = one time unit). The detailed state transition diagram has 8 states, and there is local balance; a state is the list of individual busy channels. Due to symmetry, the detailed state transition diagram can be aggregated into a one-dimensional state transition diagram (shown in the lower part of the figure) with the same number of states as a full accessible group.
no calls are lost. For k ≤ i ≤ n the blocking probability for a call attempt becomes:
b_i = \frac{\binom{i}{k}}{\binom{n}{k}} \, , \qquad k \le i \le n . \qquad (6.3)
For i < k this is also valid, as by definition \binom{i}{k} = 0 for i < k. The denominator is the number of different ways we can choose k channels among n channels. The numerator is the number of ways in which all k channels chosen are busy.
We look for the steady-state probabilities p(i) of the system. The cut (flow balance) equation between state i − 1 and state i is:
\lambda \, (1 - b_{i-1}) \cdot p(i-1) = i \mu \cdot p(i) .

Thus we get:

p(i) = \frac{\lambda(1-b_{i-1})}{i\,\mu} \cdot \frac{\lambda(1-b_{i-2})}{(i-1)\,\mu} \cdot \ldots \cdot \frac{\lambda(1-b_0)}{\mu} \cdot p(0)
     = Q_i \cdot \frac{A^i}{i!} \cdot p(0) ,
where
Q_i = \prod_{j=0}^{i-1} (1 - b_j) \, , \quad i = 1, 2, \ldots, n \, , \qquad Q_0 = 1 . \qquad (6.4)
The steady-state probabilities become (Brockmeyer, 1948 [12], pp. 113–119):

p(i) = \frac{Q_i \, A^i / i!}{\sum_{j=0}^{n} Q_j \, A^j / j!} \, , \quad i = 0, 1, \ldots, n . \qquad (6.5)
A call is blocked in state i with probability b_i, and the total blocking probability of Erlang's ideal grading becomes:
E = \sum_{i=0}^{n} b_i \cdot p(i) \, , \qquad (6.6)

E = \frac{\sum_{i=0}^{n} b_i \, Q_i \, A^i / i!}{\sum_{i=0}^{n} Q_i \, A^i / i!} . \qquad (6.7)
Due to the Poisson arrival process (PASTA property) we have E = B = C. For k = n we obtain Erlang's B-formula (4.10), since b_i = 0 for i < n and b_n = 1.
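As a check on the derivation, (6.3)–(6.7) can be evaluated directly; a minimal sketch (the function name is ours):

```python
from math import comb, factorial

def eig_blocking(n, k, A):
    """Total blocking probability E of Erlang's ideal grading,
    evaluated directly from (6.3)-(6.7)."""
    b = [comb(i, k) / comb(n, k) for i in range(n + 1)]    # (6.3)
    Q = [1.0] * (n + 1)
    for i in range(1, n + 1):                              # (6.4)
        Q[i] = Q[i - 1] * (1 - b[i - 1])
    terms = [Q[i] * A**i / factorial(i) for i in range(n + 1)]
    # (6.7): weight each (un-normalized) state probability by b_i
    return sum(b[i] * t for i, t in enumerate(terms)) / sum(terms)
```

For the system of Fig. 6.4 (n = 3, k = 2, A = 1) this gives E = 5/47 ≈ 0.1064, and for k = n it reduces to Erlang's B-formula as stated above.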
Upper limit of channel utilization
In a trunk (= channel) group there is correlation between the traffic carried by two different channels. On average each channel carries the traffic y, so the probability that a single channel is busy equals y. However, the probability that two channels chosen at random are busy at the same time is not y^2, due to this correlation. Only when the channel group is very large does the correlation between the carried traffic on two channels become small. If n becomes very large while k is limited (k ≪ n), then the congestion becomes:
E \approx y^k = \left( \frac{A(1-E)}{n} \right)^{\!k} \, , \qquad k \ll n , \qquad (6.8)
as y = A(1− E)/n is the carried traffic per trunk (channel).
It can be shown that (6.8) is the theoretical lower bound for the blocking in a grading with random hunting and hunting capacity k. The utilization per trunk therefore has the upper bound:
\lim_{n \to \infty} y = \lim_{n \to \infty} \frac{A(1-E)}{n} = E^{1/k} < 1 . \qquad (6.9)
Notice that this bound is less than one and independent of n (Fig. 6.5). The formula gives a linear relation between the carried traffic A(1 − E) and the number of channels n, i.e. a fixed carried traffic per channel.
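The bound (6.9) is trivial to evaluate numerically; a short sketch (function name ours) reproducing the asymptotes of Fig. 6.5 for E = 0.01:

```python
def utilization_bound(E, k):
    """Upper limit (6.9) on the carried traffic per channel for
    Erlang's ideal grading with blocking E and accessibility k."""
    return E ** (1.0 / k)

# Limiting utilizations for E = 0.01, as marked at the right of Fig. 6.5:
for k in (2, 5, 10, 20):
    print(k, round(utilization_bound(0.01, k), 3))   # 0.1, 0.398, 0.631, 0.794
```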
[Figure 6.5 plot: traffic per channel y (0.0–1.0) versus number of channels n (0–90) at fixed blocking E = 0.01, with one curve for each accessibility k = 2, 5, 10, 20 and for k = n; the limits for n → ∞ are marked at the right.]
Figure 6.5: Carried traffic y per channel as a function of the number of channels n for Erlang's ideal grading with fixed blocking E = 0.01. The case k = n corresponds to a full accessible group. For a fixed value of k, the carried traffic per channel y has an upper limit, which is obtained for n → ∞ (6.9). This upper limit is indicated to the right.
6.3 Overflow theory
Classical traffic models assume that the traffic offered to a system is pure chance traffic of type one or two, PCT–I or PCT–II. In communication networks with alternate traffic routing, the traffic blocked from the primary group is offered to an overflow group, and this overflow traffic has properties different from PCT traffic, as discussed in Sec. 3.7. Therefore, we cannot use the classical models for evaluating blocking probabilities of overflow traffic.
Example 6.3.1: Group divided into two
Let us consider a group of 16 channels which is offered 10 erlang of PCT–I traffic. By using Erlang's B-formula we find the lost traffic:
A` = A · E16(10) = 10 · 0.02230 = 0.2230 [erlang] .
We now assume sequential hunting and split the 16 channels into a primary group and an overflow group, each of 8 channels. By using Erlang's B-formula we find the overflow traffic from the primary group equal to:
Ao = A · E8(A) = 10 · 0.33832 = 3.3832 [erlang] .
This traffic is offered to the overflow group.
Applying Erlang’s B–formula for the overflow group we find the lost traffic from this group:
A` = Ao · E8(Ao) = 3.3832 · 0.01456 = 0.04927 [erlang] .
The total blocking probability in this way becomes 0.4927 %, which is much less than the correct result 2.23 %. We have made an error by applying the B-formula to the overflow traffic, which is not PCT–I traffic, but more bursty. □
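The numbers in Example 6.3.1 are easy to reproduce with the standard recursion for Erlang's B-formula; a minimal sketch (function name ours):

```python
def erlang_b(n, A):
    """Erlang's B-formula E_n(A), computed by the numerically
    stable recursion E_i = A*E_{i-1} / (i + A*E_{i-1}), E_0 = 1."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

A = 10.0
lost_full = A * erlang_b(16, A)     # one group of 16: 0.2230 erlang lost
Ao = A * erlang_b(8, A)             # 3.3832 erlang overflow from the primary group
lost_split = Ao * erlang_b(8, Ao)   # 0.0493 erlang: the (incorrect) B-formula result
```

The gap between `lost_full` and `lost_split` is exactly the error made by treating the bursty overflow traffic as PCT–I.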
In the following we describe two classes of models for overflow traffic. We can in principle study the traffic process either vertically or horizontally. By state space (vertical) studies we consider the state probabilities (Sec. 6.3.1–6.6.3). By time space (horizontal) studies we analyze the intervals between call arrivals, i.e. the inter-arrival time distribution (Sec. 6.7).
[Figure 6.6 diagram: traffic A offered to n primary channels overflowing to an overflow group — infinite in Kosten's system, ℓ channels in Brockmeyer's system, and ℓ channels with a further overflow group in Schehrer's system.]
Figure 6.6: Different overflow systems described in the literature.
6.3.1 State probabilities of overflow systems
Let us consider a full accessible group with ordered (sequential) hunting. The group is split into a primary group with n channels and an overflow group with infinite capacity. The offered traffic A is assumed to be PCT–I. This is called Kosten's system (Fig. 6.6). The state of the system is described by a two-dimensional vector:
p(i, j), 0 ≤ i ≤ n, 0 ≤ j ≤ ∞ , (6.10)
which is the probability that i channels are occupied in the primary group and j channels in the overflow group at a random point of time. The state transition diagram is shown in Fig. 6.7. Kosten (1937 [76]) analyzed this model and derived the marginal state probabilities:
Figure 6.7: State transition diagram for Kosten's system, which has a primary group with n channels and an unlimited overflow group. The states are denoted by [i, j], where i is the number of busy channels in the primary group, and j is the number of busy channels in the overflow group. The mean holding time is chosen as time unit.
p(i, ·) = Σ_{j=0}^{∞} p(i, j) ,   0 ≤ i ≤ n ,   (6.11)

p(·, j) = Σ_{i=0}^{n} p(i, j) ,   0 ≤ j < ∞ .   (6.12)
Riordan (1956 [102]) derived the moments of the marginal state probability distributions of the two groups. Mean value (carried traffic) and peakedness (= variance/mean ratio) become:
Primary group:
m1,p = A · (1 − En(A)) ,   (6.13)

Zp = vp / m1,p = 1 − A · (En−1(A) − En(A)) = 1 − Fn−1(A) ≤ 1 ,   (6.14)
where Fn−1(A) is the improvement function of Erlang’s B-formula.
Secondary group = Overflow group:
m1 = A · En(A) ,   (6.15)

Z = v / m1 = 1 − m1 + A / (n + 1 − A + m1) ≥ 1 .   (6.16)
For a fixed offered traffic, Fig. 6.8 shows that the peakedness of overflow traffic has a maximum as a function of the number of channels. Peakedness has the dimension [channels]. In practice we estimate the offered traffic by measuring the carried traffic. The peakedness is not measured, but is used when dimensioning networks by the above theory.
For PCT–I traffic the peakedness is equal to one, and the blocking probability is calculated by using the Erlang-B formula. If the peakedness is less than one (6.14), the traffic is called smooth and it will experience less congestion than PCT–I traffic. If the peakedness is larger than one, then the traffic is called bursty and it experiences larger congestion than PCT–I traffic. Overflow traffic is usually bursty (6.16).
Brockmeyer (1954 [11]) derived the state probabilities and moments of a system with a limited overflow group, which is called Brockmeyer's system (Fig. 6.6). Bech (1954 [6]) did the same by using matrix equations, and obtained more complicated and more general expressions. Brockmeyer's system is further generalized by Schehrer, who also derived higher order moments for successive finite overflow groups (Fig. 6.6).
Wallström (1966 [118]) derived state probabilities and moments for overflow traffic of a generalized Kosten system, where the arrival intensity depends either upon the total number of calls in the system (Engset model), or upon the number of calls in the primary group only.
6.4 Equivalent Random Traffic Method
This equivalence method is called the Equivalent Random Traffic method (ERT-method = ERM), Wilkinson's method, or Wilkinson-Bretschneider's method. It was published the same year in the USA by Wilkinson (1956 [119]) and in Germany by Bretschneider (1956 [8]). It is a moment-matching method, approximating the first two moments of the state probabilities of an unknown traffic process with the first two moments of the overflow traffic from Erlang's loss system. It plays a key role when dimensioning telecommunication networks. (EART is an erroneous name for ERT in Cisco literature.)
6.4.1 Preliminary analysis
Let us consider a group with ℓ channels which is offered g traffic streams (Fig. 6.9). The traffic streams may be traffic which is offered from other exchanges to a transit exchange,
Figure 6.8: Peakedness Z of overflow traffic as a function of the number of channels n for a fixed value of offered Poisson (PCT–I) traffic. Notice that Z has a maximum. When n = 0 all the offered traffic overflows and Z = 1. When n becomes very large, call attempts are seldom blocked, and the blocked attempts will be mutually independent. Therefore, the process of overflowing calls converges to a Poisson process (Chap. 3).
and therefore they cannot be described by classical traffic models. Thus we do not know the distributions (state probabilities) of the traffic streams, but we are satisfied (as is often the case in applications of statistics) by characterizing the i'th traffic stream by its mean value m1,i and variance vi. With this simplification we will consider two traffic streams as being equivalent if their state probability distributions have the same mean value and variance.
The total traffic offered to the group with ℓ channels has the mean value (2.43):
m1 = Σ_{i=1}^{g} m1,i .   (6.17)
We assume that the traffic streams are independent (non-correlated), and thus the variance of the total traffic stream becomes (2.44):
v = Σ_{i=1}^{g} vi .   (6.18)
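As a small illustration (assuming each stream is given by its mean and variance), the aggregation in (6.17) and (6.18) can be sketched as:

```python
def combine_streams(streams):
    """Aggregate independent traffic streams given as (mean, variance)
    pairs: means add (6.17) and variances add (6.18)."""
    m1 = sum(m for m, _ in streams)
    v = sum(v for _, v in streams)
    return m1, v, v / m1   # total mean, total variance, peakedness Z
```

The returned peakedness Z = v/m1 characterizes the aggregated stream for the moment matching that follows.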
Figure 6.9: Application of the ERT-method to a system having g independent input traffic streams offered to a common group of ℓ channels. The aggregated process of the g traffic streams is said to be equivalent to the traffic overflowing from an Erlang loss system, when the overflow traffic from the two systems has the same mean value and variance, (6.17) & (6.18).
The total traffic is characterized by m1 and v. So far we assume that m1 < v. We now consider this traffic to be equivalent to a traffic flow which is lost from a full accessible group and has the same mean value m1 and variance v. In Fig. 6.9 the upper system is replaced by the equivalent random system in the lower part, which is a full accessible Erlang loss system with (nx + ℓ) channels and offered traffic Ax. For given values of m1 and v we therefore solve equations (6.15) and (6.16) with respect to n and A. It can be shown that there exists a unique solution, which we denote by (nx, Ax).
The traffic lost from the total system is obtained by Erlang’s B-formula:
Aℓ = Ax · Enx+ℓ(Ax) .   (6.19)
As the offered traffic is m1, the traffic congestion of the system becomes:
C = Aℓ / m1 .   (6.20)
Important note: the blocking probability is not Enx+ℓ(Ax). We should remember the last step (6.20), where we relate the lost traffic to the originally offered traffic, which in this case is given by m1 (6.17). Thus it is the traffic congestion C we find.
We notice that if the overflow traffic is from a single primary group with PCT–I traffic, then the method is exact. In the general case with more traffic streams the method is approximate, and it does not yield the exact blocking probability.
Example 6.4.1: Paradox
In Sec. 3.6 we derived Palm's theorem, which states that by superposition of many independent
arrival processes, we locally get a Poisson process. This is not contradictory with (6.17) and (6.18), because these formulæ are valid globally. □
6.4.2 Numerical aspects
When applying the ERT-method we need to calculate (m1, v) for given values of (A, n) and vice versa. It is easy to obtain (m1, v) for given (A, n) by using (6.15) & (6.16). To obtain (A, n) for given (m1, v), we have to solve two equations with two unknowns. This requires an iterative procedure, since En(A) cannot be solved explicitly with respect to either n or A (Sec. 4.5). However, we can solve (6.16) with respect to n:
n = A · (m1 + v/m1) / (m1 + v/m1 − 1) − m1 − 1 ,   (6.21)
so that we can find n when A is known. Thus A is the only independent variable. We can use Newton-Raphson's iteration method to find the unknown A by introducing the function:
f(A) = m1 − A · En(A) = 0 .
For a proper starting value A0 we iteratively improve this value until the resulting values of m1 and v/m1 become close enough to the known values.
Yngve Rapp (1965 [101]) has proposed a simple approximate solution for A, which can be used as initial value A0 in the iteration:
A ≈ v + 3 · (v/m1) · (v/m1 − 1) .   (6.22)
From the A obtained by iteration we then get n, using (6.21). Rapp's approximation itself is sufficiently accurate for practical applications, except when Ax is very small. The peakedness Z = v/m1 has a maximum value, obtained when n is slightly larger than A (Fig. 6.8). For some combinations of m1 and v/m1 the convergence is critical, but when using computers we can always find the correct solution.
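This initial step can be sketched as follows (the function name is ours): Rapp's approximation (6.22) gives A, and (6.21), which follows from (6.16), then gives n:

```python
def rapp_start(m1, v):
    """Initial estimate (A0, n0) of the equivalent random system for
    overflow traffic with mean m1 and variance v (requires v > m1)."""
    z = v / m1
    a = v + 3.0 * z * (z - 1.0)                     # Rapp's approximation (6.22)
    n = a * (m1 + z) / (m1 + z - 1.0) - m1 - 1.0    # solve (6.16) for n, cf. (6.21)
    return a, n
```

For m1 = 6.6095 and v = 9.2786 (the macro-cell traffic of Example 6.4.3) this gives A ≈ 10.98 and n ≈ 4.94, close to the exact iterated solution (10.78 erlang offered to 4.72 channels) quoted in that example.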
Using computers we operate with a non-integral number of channels, and only at the end of the calculations do we choose an integral number of channels greater than or equal to the obtained result (typically a multiple of a certain number of channels: 8 in GSM, 30 in PCM, etc.). When using tables of Erlang's B-formula, we should in every step choose the number of channels in a conservative way, so that the blocking probability aimed at becomes a minimum value (worst case).
The above-mentioned method assumes that v/m1 is larger than one, and so it is only valid for bursty traffic. Individual traffic streams in Fig. 6.9 are allowed to have vi/m1,i < 1, provided the total aggregated traffic stream is bursty. Bretschneider ([9], 1973) extended the method to include a negative number of channels during the calculations. In this way it is possible to deal with smooth traffic (EERT-method = Extended ERT method).
6.4.3 Individual stream blocking probabilities
The individual traffic streams (parcels) in Fig. 6.9 do not have the same mean value and variance, and therefore they will not experience the same blocking probabilities in the common overflow group with ℓ channels. From the above we calculate the mean blocking probability (6.20) for all traffic streams aggregated. Experiments show that the blocking probability of a stream is approximately proportional to its peakedness Z = v/m1. We can split the total lost traffic into individual lost traffic parcels by assuming that the traffic lost by stream i is proportional to both the mean value m1,i and the peakedness Zi = vi/m1,i. Introducing a constant of proportionality c we get:
Aℓ,i = Aℓ · m1,i · Zi · c = Aℓ · vi · c .
We find the constant c from the total lost traffic:
Aℓ = Σ_{i=1}^{g} Aℓ,i = Σ_{i=1}^{g} Aℓ · vi · c = Aℓ · v · c .
Thus we find c = 1/v. Inserting this into the expression above, the lost traffic of stream i becomes:
Aℓ,i = Aℓ · vi / v .   (6.23)
The total lost traffic is thus distributed among the individual streams according to the ratio of the individual variance of a stream to the total variance of all streams. The traffic congestion Ci for traffic stream i, which is called the parcel blocking probability for stream i, becomes:
Ci = Aℓ,i / m1,i = Aℓ · Zi / v .   (6.24)
6.4.4 Individual group blocking probabilities
Furthermore, we can divide the blocking probability among the individual groups (primary, secondary, etc.). Consider the equivalent group at the bottom of Fig. 6.9 with nx primary channels and ℓ secondary (overflow) channels. We may calculate both the blocking probability due to the nx primary channels, and also the blocking probability due to the ℓ secondary channels. The probability that the traffic is lost by the ℓ channels is equal to the probability
that the traffic is lost by the nx + ℓ channels, under the condition that the traffic is offered to the ℓ channels:
H(ℓ) = (A · Enx+ℓ(A)) / (A · Enx(A)) = Enx+ℓ(A) / Enx(A) .   (6.25)
The total loss probability can therefore be related to the two groups:
Enx+ℓ(A) = Enx(A) · Enx+ℓ(A) / Enx(A) .   (6.26)
By using this expression, we can find the blocking for each channel group and then, for example, obtain information about which group should be increased by adding more channels. Formula (6.25) is called the Palm-Jacobæus formula.
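Formula (6.25) can be sketched as follows (function names are ours; the Erlang-B value is again obtained by the well-known recursion):

```python
def erlang_b(n, a):
    """Erlang's B-formula E_n(A), computed by the standard recursion."""
    e = 1.0
    for k in range(1, n + 1):
        e = a * e / (k + a * e)
    return e

def overflow_group_blocking(nx, l, a):
    """H(l) from (6.25): blocking due to the l overflow channels of a
    system with nx primary channels offered PCT-I traffic A."""
    return erlang_b(nx + l, a) / erlang_b(nx, a)
```

With nx = ℓ = 8 and A = 10 erlang this reproduces H(8) ≈ 0.0659 of Example 6.4.2, and multiplying by the primary-group blocking recovers (6.26).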
Example 6.4.2: Example 6.3.1 continued
In example 6.3.1 the blocking probability of the primary group of 8 channels is E8(10) = 0.3383. The blocking of the overflow group is
H(8) = E16(10) / E8(10) = 0.02230 / 0.33832 = 0.06591 .
The total blocking of the system is:
E16(10) = E8(10) ·H(8) = 0.33832 · 0.06591 = 0.02230 .
□
Example 6.4.3: Hierarchical cellular system (HCS)
We consider a cellular HCS system covering three areas. The traffic offered in the areas is 12, 8 and 4 erlang, respectively. In the first two cells we introduce micro-cells with 16 and 8 channels, respectively. We also introduce a common macro-cell covering all three areas with 8 channels. We allow overflow from micro-cells to macro-cells, but do not rearrange (take back) the calls from macro-cells to micro-cells when a channel becomes idle. Furthermore, we disregard hand-over traffic. Using (6.15) & (6.16) we find the mean value and the variance of the traffic offered to the macro-cell:
Cell i   Offered traffic Ai   Channels ni   Overflow mean m1,i   Overflow variance vi   Peakedness Zi
  1            12                 16             0.7250               1.7190                2.3711
  2             8                  8             1.8846               3.5596                1.8888
  3             4                  0             4.0000               4.0000                1.0000
Total          24                                6.6095               9.2786                1.4038
The total traffic offered to the macro-cell has mean value 6.61 erlang and variance 9.28. The overflow traffic from an equivalent system with 10.78 erlang offered to 4.72 channels has the same mean and
variance. Thus we end up with a system where 10.78 erlang is offered to 12.72 channels. Using the Erlang-B formula, we find the total lost traffic to be 1.3049 erlang. Originally we offered 24 erlang, so the real traffic blocking probability becomes B = 5.437%.

The three areas have individual blocking probabilities. Using (6.23) we estimate the traffic lost from the three traffic areas to be 0.2418 erlang, 0.5006 erlang, and 0.5625 erlang, respectively. Thus the traffic blocking probabilities become 2.02%, 6.26% and 14.06%, respectively.

A computer simulation with 100 million calls yields the individual blocking probabilities 1.77%, 5.72%, and 15.05%, respectively. The total lost traffic is 1.273 erlang, which corresponds to a blocking probability of 5.30%. The accuracy of the method is thus sufficient for real applications. (The confidence intervals of the simulations are very small.) □
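The variance-proportional split (6.23) used in this example is simple to program; a minimal sketch (function name ours):

```python
def split_lost_traffic(a_lost, variances):
    """Distribute the total lost traffic among the streams in proportion
    to their variances, following (6.23): A_l,i = A_l * v_i / v."""
    v_total = sum(variances)
    return [a_lost * v_i / v_total for v_i in variances]
```

With the total lost traffic 1.3049 erlang and the variances 1.7190, 3.5596 and 4.0000 from the table above, this returns approximately 0.2418, 0.5006 and 0.5625 erlang, as quoted in the example.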
6.5 Fredericks & Hayward’s method
Fredericks (1980 [34]) has proposed an equivalence method which is simpler to use than Wilkinson-Bretschneider's ERT-method. The motivation for the method was first put forward by W.S. Hayward. Fredericks & Hayward's equivalence method also characterizes the traffic by mean value A and peakedness Z (0 < Z < ∞; Z = 0 is a trivial case with constant traffic). The peakedness (4.7) is the ratio between the variance v and the mean value m1 of the state probabilities, and its dimension is [channels]. For random traffic (PCT–I) we have Z = 1 and we can apply the Erlang-B formula.
For peakedness Z ≠ 1, Fredericks & Hayward's method proposes that the system has the same blocking probability as a system with n/Z channels which is offered the traffic A/Z. By this transformation the peakedness becomes equal to one. When Z = 1 the traffic is equivalent to PCT–I and we apply Erlang's B-formula for calculating the congestion:
E(n, A, Z) ≃ E(n/Z, A/Z, 1) = E_{n/Z}(A/Z) .   (6.27)
When using this method we obtain the traffic congestion (Sec. 6.5.1). For a fixed value of the blocking probability of the Erlang-B formula we know (Fig. 4.4) that the utilization increases when the number of channels increases: the larger the system, the higher the utilization. Fredericks & Hayward's method thus expresses that if the traffic has a peakedness Z larger than that of PCT–I traffic, then we get a lower utilization than the one obtained by using Erlang's B-formula. If the peakedness Z < 1, then we get a higher utilization. The method can easily be applied for both peaked (bursty) and smooth traffic. By this method we avoid solving the equations (6.15) and (6.16) with respect to (A, n) for given values of (m1, v). We only need to evaluate the Erlang-B formula. In general we get a non-integral number of channels and thus need to evaluate the Erlang-B formula for a continuous number of channels.
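A continuous-n Erlang-B value can be obtained from the standard integral representation 1/En(A) = A ∫₀^∞ e^{−At} (1 + t)^n dt, which is valid for non-integral n as well. The following sketch (our own quadrature-based implementation, not a prescribed method of this book) applies it to Fredericks & Hayward's transformation (6.27):

```python
import math

def erlang_b_cont(n, a, t_max=50.0, steps=200_000):
    """Erlang's B-formula for a continuous number of channels n, using
    1/E_n(A) = A * integral_0^inf exp(-A*t) * (1 + t)^n dt,
    approximated by the trapezoidal rule on [0, t_max]."""
    h = t_max / steps
    total = 0.5 * (1.0 + math.exp(-a * t_max) * (1.0 + t_max) ** n)
    for k in range(1, steps):
        t = k * h
        total += math.exp(-a * t) * (1.0 + t) ** n
    return 1.0 / (a * h * total)

def hayward_blocking(n, a, z):
    """Fredericks & Hayward's approximation (6.27): E_{n/Z}(A/Z)."""
    return erlang_b_cont(n / z, a / z)
```

For the macro-cell of Example 6.5.1 (n = 8, A = 6.6095, Z = 1.4038) this yields a blocking probability of about 0.1947, as quoted there.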
Example 6.5.1: Fredericks & Hayward's method
If we apply Fredericks & Hayward's method to Example 6.4.3, then the macro-cell has (8/1.4038)
178 CHAPTER 6. OVERFLOW THEORY
channels and is offered (6.6095/1.4038) erlang. The blocking probability is obtained from Erlang's B-formula and becomes 0.19470. The lost traffic is calculated from the original offered traffic (6.6095 erlang) and becomes 1.2871 erlang. The blocking probability of the system thus becomes E = 1.2871/24 = 5.36%. This is very close to the result (5.44%) obtained by the ERT method and the result (5.30%) obtained by simulation. 2
6.5.1 Traffic splitting
In the following we shall give a natural interpretation of Fredericks & Hayward's method and at the same time discuss the splitting of traffic streams. We consider a traffic stream with mean value A, variance v, and peakedness Z = v/A. We split this traffic stream into g identical sub-streams. A single sub-stream then has mean value A/g, variance v/g², and thus peakedness Z/g, because the mean value is reduced by a factor g and the variance by a factor g² (Example 2.3.3). If we choose the number g of identical sub-streams equal to Z, then we get peakedness Z = 1 for each sub-stream.
Let us assume the original traffic stream is offered to n channels. If we also split the n channels into g identical sub-groups, then each sub-group has n/g channels. Each sub-group will then have the same blocking probability as the original total system. By choosing g = Z we get peakedness Z = 1 in each sub-stream, and we may (approximately) use Erlang's B-formula for calculating the blocking probability.
The above splitting of the traffic into g identical traffic streams shows that the blocking probability obtained by Fredericks & Hayward's method is the traffic congestion. The equal splitting of the traffic at any point of time implies that all g traffic streams are identical and thus have mutual correlation one. In reality, we cannot split circuit-switched traffic into identical sub-streams. If we have g = 2 streams and three channels are busy at a given point of time, then we will, for example, use two channels in one sub-stream and one in the other, but we nevertheless obtain the same optimal utilization as in the total system, because we always have access to an idle channel in any sub-group (full accessibility). The correlation between the sub-streams then becomes smaller than one. The above is an example of using a more intelligent strategy so that we maintain the optimal full accessibility.
In Sec. 3.6.2 we studied the splitting of the arrival process when the splitting is done in a random way (Raikov's theorem 3.2). By this splitting we did not reduce the variation of the process when the process is a Poisson process or more regular; the resulting sub-stream point processes converge to Poisson processes. In this section we have considered the splitting of the traffic load, which includes both the arrival process and the holding times. The splitting process depends upon the state. In a sub-process, a long holding time of a single call will result in fewer new calls in this sub-process during the following time interval, and the arrival process will no longer be a renewal process. In a sub-process, inter-arrival times and holding times become correlated.
Most attempts at improving Fredericks & Hayward's equivalence method are based on reducing the correlation between the sub-streams, because the arrival process of a single sub-stream is considered a renewal process and the holding times are assumed to be exponentially distributed. From the above we see that these approaches are doomed to be unsuccessful, because they will not result in an optimal traffic splitting. In the following example we shall see that the optimal splitting can be implemented for packet-switched traffic with constant packet size.
If we split a traffic stream into sub-streams so that a busy channel belongs to a given sub-stream with probability p, then it can be shown that this sub-stream has peakedness Zp given by:
Zp = 1 + p · (Z − 1) , (6.28)
where Z is the peakedness of the original stream. From this random splitting of the traffic process we see that the peakedness converges to one when p becomes small. This corresponds to a Poisson process, and the result is valid for any traffic process. It is similar to Raikov's theorem (3.47).
Example 6.5.2: Inverse multiplexing
If we need more capacity in a network than what corresponds to a single channel, then we may combine several channels in parallel. At the originating source we then distribute the traffic (packets or cells in ATM) in a cyclic way over the individual channels, and at the destination we reconstruct the original information. In this way we get access to higher bandwidth without leasing fixed broadband channels, which are very expensive. If the traffic parcels are of constant size, then the traffic process is split into a number of identical traffic streams, so that we get the same utilization as in a single system with the total capacity. This principle was first exploited in Danish equipment (Johansen & Johansen & Rasmussen, 1991 [61]) for combining up to 30 individual 64 kbps ISDN connections for transfer of video traffic for maintenance of aircraft. Today, similar equipment is applied for combining a number of 2 Mbps connections to be used by ATM connections with larger bandwidth (IMA = Inverse Multiplexing for ATM) (Techguide, 2001 [113]), (Postigo-Boix & al. 2001 [97]). 2
6.6 Other methods based on state space
From a blocking point of view, the mean value and variance do not necessarily characterize the traffic in the optimal way; other parameters may describe the traffic better. When calculating the blocking with the ERT method we have two equations with two unknown variables (6.15 & 6.16). The Erlang loss system is uniquely defined by the number of channels and the offered traffic Ax. Therefore, it is not possible to generalize the method to take account of more than two moments (mean & variance).
[Figure 6.10 appears here: traffic congestion [%] (vertical axis, 0–28) as a function of peakedness Z (horizontal axis, 0–10), with one curve for each of the methods BPP, Sanders, ERT, and F-H.]
Figure 6.10: Traffic congestion as a function of peakedness, evaluated by different methods for a system with 30 channels offered 20.3373 erlang. When Z = 1 this corresponds to a blocking probability of 1%. We notice that the BPP method is a worst-case method, whereas Fredericks & Hayward's method yields the minimum blocking.
6.6.1 BPP traffic models
The BPP traffic models describe the traffic by two parameters, mean value and peakedness, and are thus natural candidates for modeling traffic given by two parameters. Historically, however, the concept and definition of traffic congestion has, due to earlier definitions of offered traffic, been confused with call congestion. As seen from Fig. 5.7, only the traffic congestion makes sense for overflow calculations. With proper application of the traffic congestion, the BPP model is very applicable.
Example 6.6.1: BPP traffic model
If we apply the BPP model to the overflow traffic in Example 6.4.3, we have A = 6.6095 and Z = 1.4038. This corresponds to a Pascal traffic with S = 16.37 sources and β = 0.2876. The traffic congestion becomes 20.52%, corresponding to a lost traffic of 1.3563 erlang, or a blocking probability for the system equal to E = 1.3563/24 = 5.65%. This result is quite accurate. 2
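The parameter fit used in this example can be reproduced in a few lines. Assuming the Pascal relations Z = 1/(1 − β) and A = Sβ/(1 − β), which reproduce the numbers above, the mapping from (A, Z) to (S, β) is (function name is ours):

```python
def pascal_parameters(mean: float, peakedness: float) -> tuple[float, float]:
    """Match a Pascal (negative binomial) traffic model to a stream with
    given mean A and peakedness Z > 1, assuming Z = 1/(1 - beta) and
    A = S * beta / (1 - beta)."""
    beta = (peakedness - 1.0) / peakedness   # beta = (Z - 1)/Z
    sources = mean / (peakedness - 1.0)      # S = A/(Z - 1)
    return sources, beta

# Example 6.6.1 data: A = 6.6095, Z = 1.4038
s, beta = pascal_parameters(6.6095, 1.4038)
```

Evaluating the Pascal loss model itself (to obtain the 20.52% traffic congestion) additionally requires the BPP state probabilities of Chap. 5 and is not sketched here.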
6.6.2 Sanders’ method
Sanders & Haemers & Wilcke (1983 [108]) have proposed another simple and interesting equivalence method, also based on the state space; we will name it Sanders' method. Like Fredericks & Hayward's method, it is based on a transformation of the state probabilities so that the peakedness becomes equal to one. The method transforms a non-Poisson traffic with (mean, variance) = (m1, v) into a traffic stream with peakedness one by adding a constant (zero-variance) traffic stream with mean v − m1, so that the total traffic has mean equal to variance v. The constant traffic stream occupies v − m1 channels permanently (with no loss), and we increase the number of channels by this amount. In this way we get a system with n + (v − m1) channels offered m1 + (v − m1) = v erlang. The peakedness becomes one, and the blocking probability is obtained using Erlang's B-formula. We find the traffic lost from the equivalent system. To obtain the traffic congestion C of the original system, this lost traffic is divided by the originally offered traffic m1, as the blocking probability relates to the originally offered traffic.
The method is applicable to both smooth (m1 > v) and bursty (m1 < v) traffic, and it requires only the evaluation of the Erlang-B formula with a continuous number of channels.
Example 6.6.2: Sanders' method
If we apply Sanders' method to Example 6.4.3, we increase both the number of channels and the offered traffic by v − m1 = 2.6691 (channels/erlang). We thus have 9.2786 erlang offered to 10.6691 channels. From Erlang's B-formula we find the lost traffic 1.3690 erlang, which is on the safe side but close to the results obtained above. It corresponds to a blocking probability E = 1.3690/24 = 5.70%. 2
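Sanders' method is easy to script. The sketch below bundles a simple continuous-channel Erlang-B evaluation (trapezoidal integration of a standard integral representation; the quadrature parameters are heuristic) with the transformation described above; the function names are our own.

```python
import math

def erlang_b_cont(x: float, a: float) -> float:
    """Continuous-n Erlang-B via 1/E = a * int_0^inf e^{-a t}(1+t)^x dt,
    evaluated with a plain trapezoidal rule (a rough sketch)."""
    steps = 200_000
    t_max = (x / a + 1.0) * 50.0
    h = t_max / steps
    s = sum((0.5 if i in (0, steps) else 1.0) *
            math.exp(-a * i * h + x * math.log1p(i * h))
            for i in range(steps + 1))
    return 1.0 / (a * h * s)

def sanders_lost_traffic(n: float, mean: float, variance: float) -> float:
    """Sanders' method: add a constant stream of (v - m1) erlang that
    permanently occupies v - m1 extra channels, then apply Erlang-B
    to v erlang offered to n + (v - m1) channels."""
    extra = variance - mean
    return variance * erlang_b_cont(n + extra, variance)

# Example 6.6.2 data: m1 = 6.6095, v = Z * m1 = 1.4038 * 6.6095, n = 8
lost = sanders_lost_traffic(8, 6.6095, 1.4038 * 6.6095)
```

Dividing the lost traffic by the originally offered 24 erlang reproduces the blocking probability of the example.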
6.6.3 Berkeley’s method
To get an ERT method based on only one parameter, we can in principle keep either n or A fixed. Experience shows that we obtain the best results by keeping the number of channels fixed, nx = n. We are then only able to ensure that the mean value of the overflow traffic is correct. This method is called Berkeley's equivalence method (1934). Wilkinson-Bretschneider's method requires a certain amount of computation (computers), whereas Berkeley's method is based on Erlang's B-formula only. Berkeley's method is only applicable to systems where the primary groups all have the same number of channels.
Example 6.6.3: Group divided into primary and overflow group
If we apply Berkeley's method to Example 6.3.1, then we get the exact solution. The idea of the method originates from this special case. 2
Example 6.6.4: Berkeley's method
We consider Example 6.4.3 again. To apply Berkeley's method correctly, we should have the same number of channels in all three micro-cells. Let us assume all micro-cells have 8 channels (and not 16, 8, 0, respectively). To obtain the overflow traffic 6.6095 erlang, the equivalent offered traffic is 13.72 erlang to the 8 primary channels. The equivalent system then has a traffic of 13.72 erlang offered to (8 + 8 =) 16 channels. The lost traffic obtained from the Erlang-B formula becomes 1.4588 erlang, corresponding to a blocking probability of 6.08%, which is a little larger than the values obtained by the other methods. In general, Berkeley's method will be on the safe side. 2
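Berkeley's method only needs the classical Erlang-B recursion plus a one-dimensional root search for the equivalent offered traffic. A minimal sketch (the bisection bracket and the function names are our own choices):

```python
def erlang_b(n: int, a: float) -> float:
    """Classical Erlang-B recursion for an integral number of channels."""
    inv = 1.0
    for k in range(1, n + 1):
        inv = 1.0 + inv * k / a
    return 1.0 / inv

def berkeley(primary: int, total: int, overflow_mean: float) -> tuple[float, float]:
    """Berkeley's method: find the equivalent offered traffic A* whose
    overflow from the primary group has the given mean (the overflow
    A*E_n(A*) is monotone in A*, so bisection suffices), then read the
    lost traffic off A* offered to the total group."""
    lo, hi = overflow_mean, 100.0 * (overflow_mean + primary)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * erlang_b(primary, mid) < overflow_mean:
            lo = mid
        else:
            hi = mid
    a_eq = 0.5 * (lo + hi)
    return a_eq, a_eq * erlang_b(total, a_eq)

# Example 6.6.4 data: overflow mean 6.6095 erlang from 8 primary
# channels, equivalent system with 8 + 8 = 16 channels.
a_eq, lost = berkeley(8, 16, 6.6095)
```

The returned equivalent traffic and lost traffic match the 13.72 erlang and roughly 1.46 erlang of the example.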
6.6.4 Comparison of state-based methods
In Fig. 6.10 we compare four different state-based methods. The BPP method is on the safe side, whereas Fredericks & Hayward's method is the most optimistic, having the lowest blocking probability. We cannot specify which method is the best one; this depends on the actual system generating the overflow traffic, which in general is a superposition of many traffic streams.
6.7 Methods based on arrival processes
The models in Chaps. 4 & 5 are all characterized by a Poisson arrival process with state-dependent intensity, whereas the service times are exponentially distributed with the same mean value for all (homogeneous) servers. As these models are all independent of the service time distribution (insensitive, i.e. the state probabilities depend only on the mean value of the service time distribution), we may only generalize the models by considering more general arrival processes. By using general arrival processes the insensitivity property is lost, and the service time distribution becomes important. As we only have one arrival process, but many service processes (one for each of the n servers), we in general assume exponential service times to avoid complex models.
6.7.1 Interrupted Poisson Process
In Sec. 3.7 we considered Kuczura's Interrupted Poisson Process (IPP) (Kuczura, 1977 [79]), which is characterized by three parameters and has been widely used for modeling overflow traffic. If we consider a fully accessible group with n servers, which are offered calls arriving according to an IPP (cf. Fig. 3.9) with exponentially distributed service times, then we can construct a state transition diagram as shown in Fig. 6.11. The diagram is two-dimensional. State [i, j] denotes that there are i calls being served (i = 0, 1, . . . , n) and that the arrival process is in phase j (j = a: arrival process on, j = b: arrival process off). By using the
[Figure 6.11 appears here: a two-row state transition diagram with states 0a, 1a, . . . , na (arrival process on) and 0b, 1b, . . . , nb (arrival process off); arrival rate λ along the a-row, departure rates μ, 2μ, . . . , nμ in both rows, and phase transitions γ (a → b) and ω (b → a).]
Figure 6.11: State transition diagram for a fully accessible loss system with n servers, an IPP arrival process (cf. Fig. 3.9), and exponentially distributed service times (μ).
node balance equations we find the equilibrium state probabilities p(i, j). The time congestion E becomes:
E = p(n, a) + p(n, b) . (6.29)
Call congestion B becomes:
B = p(n, a) / Σ_{i=0}^{n} p(i, a) ≥ E .   (6.30)
From the state transition diagram we have γ · pon = ω · poff. Furthermore, pon + poff = 1. From this we get:

pon = Σ_{i=0}^{n} p(i, a) = ω / (ω + γ) ,

poff = Σ_{i=0}^{n} p(i, b) = γ / (ω + γ) .
Traffic congestion C is defined as the proportion of the offered traffic which is lost. The offered traffic is equal to:

A = pon / (pon + poff) · λ/μ = ω / (ω + γ) · λ/μ .
The carried traffic is:
Y = Σ_{i=0}^{n} i · {p(i, a) + p(i, b)} .   (6.31)
From this we obtain
C = (A − Y) / A .   (6.32)
The traffic congestion will be equal to the call congestion, as the arrival process is a renewal process, but this is difficult to derive from the above. As shown in Sec. 3.7.1, the inter-arrival times are hyper-exponentially distributed with two phases (H2). If we apply a Markov-modulated Poisson process (MMPP), then in principle we may get any number of parameters to model the inter-arrival times.
Example 6.7.1: Calculating state probabilities for IPP models
The state probabilities of Fig. 6.11 can be obtained by solving the linear balance equations. Kuczura (1973, [78]) derived explicit expressions for the state probabilities, but they are complex and not fit for numerical evaluation of large systems. The way to calculate the state probabilities in a very accurate way is to use the principles described in Sec. 4.4.1:
• let p(n, b) = 1;

• by using the node equation for state [n, b] we obtain the value of p(n, a) relative to p(n, b), and normalize the two state probabilities so they add to one;

• by using the node equation for state [n, a] we obtain p(n−1, a) relative to the previous states, and normalize the state probabilities obtained so far;

• by using the node equation for state [n−1, b] we obtain p(n−1, b) and normalize all the obtained state probabilities;

• in this way we zigzag down to state [0, a] and obtain normalized probabilities for all states.
The relative values of, for example, p(0, a) and p(0, b) depend on the number of channels n. Thus we cannot truncate the state probabilities and re-normalize for a given number of channels; we have to calculate all state probabilities from scratch for every number of channels. 2
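As a cross-check of the zigzag procedure, the small state space of Fig. 6.11 can also be handled by solving the global balance equations directly. The sketch below builds the generator of the 2(n + 1) states and uses plain Gaussian elimination; this is our own illustrative formulation, not the recursion described above, and the function name is ours.

```python
def ipp_loss_system(n, lam, mu, gamma, omega):
    """Equilibrium analysis of the n-server loss system with IPP arrivals
    (Fig. 6.11): solve pi Q = 0 with normalization, then return the
    congestion measures (E, B, C) as in (6.29)-(6.32)."""
    m = 2 * (n + 1)
    idx = lambda i, off: 2 * i + off            # off = 0: phase a, 1: phase b
    q = [[0.0] * m for _ in range(m)]
    for i in range(n + 1):
        ia, ib = idx(i, 0), idx(i, 1)
        q[ia][ib] += gamma                      # on -> off
        q[ib][ia] += omega                      # off -> on
        if i > 0:                               # departures
            q[ia][idx(i - 1, 0)] += i * mu
            q[ib][idx(i - 1, 1)] += i * mu
        if i < n:                               # arrivals only while "on"
            q[ia][idx(i + 1, 0)] += lam
    for s in range(m):
        q[s][s] = -sum(q[s][t] for t in range(m) if t != s)
    # balance equations pi Q = 0; replace the last one by sum(pi) = 1
    a = [[q[j][k] for j in range(m)] for k in range(m - 1)] + [[1.0] * m]
    b = [0.0] * (m - 1) + [1.0]
    for col in range(m):                        # Gaussian elim., partial pivoting
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            b[r] -= f * b[col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
    pi = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = b[r] - sum(a[r][c] * pi[c] for c in range(r + 1, m))
        pi[r] = s / a[r][r]
    p_on = sum(pi[idx(i, 0)] for i in range(n + 1))
    time_cong = pi[idx(n, 0)] + pi[idx(n, 1)]                    # (6.29)
    call_cong = pi[idx(n, 0)] / p_on                             # (6.30)
    offered = omega / (omega + gamma) * lam / mu
    carried = sum(i * (pi[idx(i, 0)] + pi[idx(i, 1)]) for i in range(n + 1))
    traffic_cong = (offered - carried) / offered                 # (6.31)-(6.32)
    return time_cong, call_cong, traffic_cong
```

In the limit γ → 0 the process is never interrupted and the results collapse to the Erlang loss system; the text's observation that traffic congestion equals call congestion for this renewal arrival process provides another built-in check.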
[Figure 6.12 appears here: a two-row state transition diagram with states 0a, 1a, . . . , na and 0b, 1b, . . . , nb; arrivals at rate pλ1 from the a-row and λ2 from the b-row, phase changes at rate qλ1 (a → b), and departure rates μ, 2μ, . . . , nμ in both rows.]
Figure 6.12: State transition diagram for a fully accessible loss system with n servers, Cox–2 arrival processes (cf. Fig. 2.13), and exponentially distributed service times (µ).
6.7.2 Cox–2 arrival process
In Sec. 3.7 we noticed that a Cox–2 arrival process is more general than an IPP (Kuczura, 1977 [79]). If we consider Cox–2 arrival processes as shown in Fig. 2.13, then we get the state transition diagram shown in Fig. 6.12. From this we find, under the assumption of statistical equilibrium, the state probabilities and the following performance measures.
Time congestion E:

E = p(na) + p(nb) . (6.33)
Call congestion B:

B = (p·λ1·p(na) + λ2·p(nb)) / (p·λ1·Σ_{i=0}^{n} p(ia) + λ2·Σ_{i=0}^{n} p(ib)) . (6.34)
Traffic congestion C:

The offered traffic is the average number of call attempts per mean service time. The mean inter-arrival time is (Fig. 2.13):

ma = 1/λ1 + (1−p)·(1/λ2) = (λ2 + (1−p)·λ1) / (λ1·λ2) .

The offered traffic then becomes A = (ma · µ)^{−1}. The carried traffic Y is given by (6.31) applied to Fig. 6.12, and then we find the traffic congestion C by (6.32).
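As a numerical sketch (the parameter values below are assumptions for illustration, not values from the text), the mean inter-arrival time and the offered traffic can be computed directly:

```python
# Illustrative Cox-2 parameters (assumed): with probability p an
# arrival occurs directly after phase 1 (rate lam1); otherwise
# phase 2 (rate lam2) is also traversed before the arrival.
p, lam1, lam2, mu = 0.3, 2.0, 0.5, 1.0

# Mean inter-arrival time: phase 1 is always visited, phase 2 with
# probability 1 - p.
m_a = 1.0 / lam1 + (1.0 - p) / lam2    # 0.5 + 1.4 = 1.9

# Offered traffic: call attempts per mean service time, A = 1/(m_a * mu).
A = 1.0 / (m_a * mu)                   # about 0.526 erlang
```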
If we generalize the arrival process to a Cox–k arrival process, then the state transition diagram is still two-dimensional. By the application of Cox distributions we can in principle take any number of parameters into consideration.
If we generalize the service time to a Cox–k distribution, then the state transition diagram becomes much more complex for n > 1, because we have a service process for each server but only one arrival process. Therefore, in general we always generalize the arrival process and assume exponentially distributed service times.
Chapter 7
Multi-Dimensional Loss Systems
In this chapter we generalize the classical teletraffic theory to deal with service-integrated systems (e.g. B-ISDN). Every class of service corresponds to a traffic stream. Several traffic streams are offered to the same group of n channels.
In Sec. 7.1 we consider the classical multi-dimensional Erlang-B loss formula. This is an example of a reversible Markov process, which is considered in more detail in Sec. 7.2. In Sec. 7.3 we look at more general loss models and strategies, including service protection (maximum allocation) and multi-rate BPP traffic. The models all have the so-called product-form property, and the numerical evaluation is very simple, using either the convolution algorithm for loss systems, which aggregates traffic streams (Sec. 7.4), or state-based algorithms, which aggregate the state space (Sec. 7.6).
All models considered are based on flexible channel/slot allocation, which means that if a call requests d > 1 channels, then these channels need not be adjacent. The models may be generalized to arbitrary circuit-switched networks with direct routing, where we calculate end-to-end blocking probabilities (Chap. 8). All models considered are insensitive to the service time distribution, and thus they are very robust for applications.
7.1 Multi-dimensional Erlang-B formula
We consider a group of n trunks (channels, slots), which is offered two independent PCT-I traffic streams: (λ1, µ1) and (λ2, µ2). The offered traffic becomes A1 = λ1/µ1 and A2 = λ2/µ2, respectively, and the total offered traffic is A = A1 + A2. In this section each connection requests one channel.
Let (x1, x2) denote the state of the system, i.e. x1 is the number of channels used by stream 1, and x2 is the number of channels used by stream 2. We have the following restrictions:
0 ≤ x1 ≤ n ,
0 ≤ x2 ≤ n , (7.1)
0 ≤ x1 + x2 ≤ n .
The state transition diagram is shown in Fig. 7.1. Under the assumption of statistical equilibrium, the state probabilities are obtained by solving the global balance equations for each node (node equations). In total we have (n+1)(n+2)/2 equations. The system has a unique solution, so if we somehow find a solution, then we know that it is the correct solution. Many models can, however, be solved in a much simpler way.
As we shall see in the next section, this diagram corresponds to a reversible Markov process, which has local balance, and furthermore the solution has product form. We can easily show that the global balance equations are satisfied by the following state probabilities, which may be written in product form:
p(x1, x2) = Q · p1(x1) · p2(x2) = Q · (A1^{x1} / x1!) · (A2^{x2} / x2!) , (7.2)
where p1(x1) and p2(x2) are one-dimensional truncated Poisson distributions for traffic stream one, respectively two. Q is a normalization constant, and (x1, x2) must fulfil the above restrictions (7.1). As we have Poisson arrival processes, the PASTA property (Poisson Arrivals See Time Averages) is valid, and time, call, and traffic congestion for both traffic streams are all equal to p(x1 + x2 = n).
By the binomial expansion (2.36), or by convolving two Poisson distributions, we find the following aggregated state probabilities, where Q is obtained by normalization:
p(x1 + x2 = x) = Q · Σ_{x1=0}^{x} p1(x1) · p2(x − x1) (7.3)

= Q · Σ_{x1=0}^{x} (A1^{x1} / x1!) · (A2^{x−x1} / (x − x1)!) (7.4)

= Q · (1/x!) · Σ_{x1=0}^{x} C(x, x1) · A1^{x1} · A2^{x−x1} (7.5)

= Q · A^x / x! , (7.6)
Figure 7.1: Two-dimensional state transition diagram for a loss system with n channels which are offered two PCT–I traffic streams. This is equivalent to a state transition diagram for a loss system M/H2/n, where the hyper-exponential distribution H2 is given by (7.8).
where A = A1 + A2, and the normalization constant is obtained by: Q^{−1} = Σ_{i=0}^{n} A^i / i! . This is the truncated Poisson distribution (4.9).
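As a numerical illustration, the aggregation (7.3)–(7.6) can be checked directly; the parameter values below are assumptions for the sketch, not values from the text:

```python
from math import factorial

# Assumed illustrative parameters (not from the text).
A1, A2, n = 2.0, 3.0, 8

# Unnormalized product-form probabilities (7.2) on the truncated
# state space x1 + x2 <= n of (7.1).
q2 = {(x1, x2): A1**x1 / factorial(x1) * A2**x2 / factorial(x2)
      for x1 in range(n + 1) for x2 in range(n + 1 - x1)}
Q = 1.0 / sum(q2.values())

# Aggregate over the global state x = x1 + x2.
agg = [Q * sum(v for (x1, x2), v in q2.items() if x1 + x2 == x)
       for x in range(n + 1)]

# Truncated Poisson distribution with A = A1 + A2.
A = A1 + A2
erl = [A**x / factorial(x) for x in range(n + 1)]
s = sum(erl)
erl = [e / s for e in erl]

# The two distributions agree term by term.
assert all(abs(a - e) < 1e-12 for a, e in zip(agg, erl))
```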
We may also interpret this model as an Erlang loss system with one Poisson arrival process and hyper-exponentially distributed holding times, as follows. The total arrival process is a superposition of two Poisson processes and thus a Poisson process itself with arrival rate:
λ = λ1 + λ2 . (7.7)
The holding time distribution is obtained by weighting the two exponential distributions according to the relative number of calls per time unit, and becomes a hyper-exponential distribution (random variables in parallel, Sec. 2.3.2):
f(t) = (λ1 / (λ1 + λ2)) · µ1 · e^{−µ1 t} + (λ2 / (λ1 + λ2)) · µ2 · e^{−µ2 t} . (7.8)
The mean service time is:
m1 = (λ1 / (λ1 + λ2)) · (1/µ1) + (λ2 / (λ1 + λ2)) · (1/µ2) = (A1 + A2) / (λ1 + λ2) ,

m1 = A / λ , (7.9)
which is in agreement with the definition of offered traffic (1.2).
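The identity (7.9) is easy to verify numerically; the parameter values below are illustrative assumptions, not values from the text:

```python
# Illustrative check of (7.9): the mean of the hyper-exponential
# holding time (7.8) equals A / lambda. Parameter values are assumed.
lam1, mu1 = 1.5, 0.75    # stream 1: arrival rate and service rate
lam2, mu2 = 0.5, 2.0     # stream 2

A1, A2 = lam1 / mu1, lam2 / mu2    # offered traffic per stream
lam, A = lam1 + lam2, A1 + A2      # total arrival rate and traffic

# Weight each exponential mean by its share of the calls.
m1 = (lam1 / lam) * (1.0 / mu1) + (lam2 / lam) * (1.0 / mu2)

assert abs(m1 - A / lam) < 1e-12   # both equal 1.125 here
```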
Thus we have shown that Erlang's loss model is also valid for hyper-exponentially distributed holding times. This is a special case of the general insensitivity property of Erlang's B-formula.
We may generalize the above model to N traffic streams:
p(x1, x2, . . . , xN) = Q · p1(x1) · p2(x2) · . . . · pN(xN)

= Q · (A1^{x1} / x1!) · (A2^{x2} / x2!) · · · (AN^{xN} / xN!) , 0 ≤ xj ≤ n , Σ_{j=1}^{N} xj ≤ n , (7.10)
which is the general multi-dimensional Erlang-B formula. By the multinomial theorem (2.96) this can be reduced to:
p(x1 + x2 + . . . + xN = x) = Q · (A1 + A2 + . . . + AN)^x / x!

= Q · A^x / x! , where A = Σ_{j=1}^{N} Aj .
The global state probabilities can be calculated by the following recursion, where q(x) denotes the relative state probabilities and p(x) denotes the absolute state probabilities. From the cut equations of Erlang's loss system we have:
q(x) = (1/x) · A · q(x−1) = (1/x) · Σ_{j=1}^{N} Aj · q(x−1) , q(0) = 1 , (7.11)

p(x) = q(x) / Q(n) , 0 ≤ x ≤ n , where Q(n) = Σ_{i=0}^{n} q(i) . (7.12)
If we use the recursion with normalization in each step (Sec. 4.4), then we get the recursion formula for Erlang-B. For all services the time congestion is E = p(n), and as the PASTA property is valid, this is also equal to the call and traffic congestion. Multi-dimensional systems were first mentioned by Erlang and more thoroughly dealt with by Jensen in the Erlangbook (Jensen, 1948 [57]).
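The recursion (7.11)–(7.12) can be sketched in a few lines; this is an illustrative implementation (the function name and parameter values are my own), not code from the book's software:

```python
def erlang_global_probs(As, n):
    """Global state probabilities p(x), 0 <= x <= n, for N independent
    PCT-I streams offered to n channels, via the recursion
    q(x) = (1/x) * A * q(x-1), q(0) = 1, with A = sum of the Aj (7.11),
    normalized at the end by Q(n) = sum of the q(i) (7.12)."""
    A = sum(As)
    q = [1.0]
    for x in range(1, n + 1):
        q.append(A / x * q[-1])
    Qn = sum(q)
    return [v / Qn for v in q]

# Two streams, A1 = 2 and A2 = 3 erlang, on n = 8 channels (assumed).
p = erlang_global_probs([2.0, 3.0], n=8)

# Time congestion E = p(n); by PASTA this also equals the call and
# traffic congestion for every stream.
E = p[-1]
```

For large n, normalizing in every recursion step as in Sec. 4.4 is numerically safer and yields the classical Erlang-B recursion.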
Example 7.1.1: Infinite server (IS) system

If the number of channels is infinite, then we get:
p(x1, x2, . . . , xN) = p1(x1) · p2(x2) · . . . · pN(xN)

= (A1^{x1} / x1! · e^{−A1}) · (A2^{x2} / x2! · e^{−A2}) · . . . · (AN^{xN} / xN! · e^{−AN}) (7.13)
By using the multinomial expansion (2.94), the global state probabilities obtained by aggregating the detailed state probabilities become Poisson distributed (4.6):
p(x1 + x2 + . . . + xN = x) = ((A1 + A2 + . . . + AN)^x / x!) · e^{−(A1+A2+...+AN)}

= (A^x / x!) · e^{−A} ,
where the mean value is A = A1 + A2 + . . . + AN. The product of Poisson distributions is already normalized because we do not truncate the state space. □
7.2 Reversible Markov processes
In the previous section we considered a two-dimensional state transition diagram. For an increasing number of traffic streams the number of states (and thus equations) increases very rapidly. However, we may simplify the problem by exploiting the structure and properties of the state transition diagram. Let us consider the two-dimensional state transition diagram shown in Fig. 7.2. The process is reversible if there is no circulation flow in the diagram. Thus, if we consider four neighboring states, then the flow in the clockwise direction must equal the flow in the opposite direction (Kingman, 1969 [72]), (Sutton, 1980 [111]). From Fig. 7.2 we have the following average number of jumps per time unit:
Clockwise:
[x1, x2] → [x1, x2 + 1] : p(x1, x2) · λ2(x1, x2)
[x1, x2 + 1] → [x1 + 1, x2 + 1] : p(x1, x2+1) · λ1(x1, x2+1)
[x1 + 1, x2 + 1] → [x1 + 1, x2] : p(x1+1, x2+1) · µ2(x1+1, x2+1)
[x1 + 1, x2] → [x1, x2] : p(x1+1, x2) · µ1(x1+1, x2) ,
Counter clockwise:
[x1, x2] → [x1 + 1, x2] : p(x1, x2) · λ1(x1, x2)
[x1 + 1, x2] → [x1 + 1, x2 + 1] : p(x1+1, x2) · λ2(x1+1, x2)
[x1 + 1, x2 + 1] → [x1, x2 + 1] : p(x1+1, x2+1) · µ1(x1+1, x2+1)
[x1, x2 + 1] → [x1, x2] : p(x1, x2+1) · µ2(x1, x2+1) .
We can reduce both expressions by the state probabilities and then obtain the conditions given by the following theorem.
Theorem 7.1 (Kolmogorov's criteria) A necessary and sufficient condition for reversibility is that the following two flows are equal:
Clockwise: λ2(x1, x2) · λ1(x1, x2+1) · µ2(x1+1, x2+1) · µ1(x1+1, x2) ,
Counter clockwise: λ1(x1, x2) · λ2(x1+1, x2) · µ1(x1+1, x2+1) · µ2(x1, x2+1) .
Figure 7.2: Kolmogorov's criteria: a necessary and sufficient condition for reversibility of a two-dimensional Markov process is that the circulation flow among four neighbouring states in a square equals zero: flow clockwise = flow counter-clockwise (Theorem 7.1).
If these two expressions are equal, then there is local balance or detailed balance. A necessary condition for reversibility is thus that if there is a flow (an arrow) from state x1 to state x2, then there must also be a flow (an arrow) from state x2 to state x1, and the flows must be equal. It can be shown that this is also a sufficient condition. We may then apply cut equations locally between any two connected states. For example, we get from Fig. 7.2:
p(x1, x2) · λ1(x1, x2) = p(x1 + 1, x2) · µ1(x1 + 1, x2) . (7.14)
We can express any state probability p(x1, x2) by the state probability p(0, 0) by choosing any path between the two states (Kolmogorov's criteria). If we for example choose the path:
(0, 0), (1, 0), . . . , (x1, 0), (x1, 1), . . . , (x1, x2) ,
then we obtain the following balance equation:
p(x1, x2) = (λ1(0, 0) / µ1(1, 0)) · (λ1(1, 0) / µ1(2, 0)) · · · (λ1(x1−1, 0) / µ1(x1, 0)) · (λ2(x1, 0) / µ2(x1, 1)) · (λ2(x1, 1) / µ2(x1, 2)) · · · (λ2(x1, x2−1) / µ2(x1, x2)) · p(0, 0) .
State probability p(0, 0) is obtained by normalization of the total probability mass.
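This path-product construction can be sketched for the reversible case (7.15)–(7.16) with constant arrival rates; the parameter values are illustrative assumptions, and the check confirms that the result is path-independent, coinciding with the product form (7.2):

```python
from math import factorial

# Assumed illustrative parameters: two PCT-I streams on n channels,
# constant arrival rates and linear departure rates (7.15)-(7.16).
lam1, mu1, lam2, mu2, n = 2.0, 1.0, 1.5, 0.5, 6

def q(x1, x2):
    """Relative probability along the path (0,0) -> (x1,0) -> (x1,x2)."""
    v = 1.0
    for i in range(1, x1 + 1):
        v *= lam1 / (i * mu1)   # horizontal step: lam1 / mu1(i, 0)
    for j in range(1, x2 + 1):
        v *= lam2 / (j * mu2)   # vertical step: lam2 / mu2(x1, j)
    return v

states = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]
Q = sum(q(i, j) for i, j in states)           # 1 / p(0, 0)
p = {(i, j): q(i, j) / Q for i, j in states}

# Reversibility makes the path product path-independent: it agrees
# with the two-dimensional product form (7.2).
A1, A2 = lam1 / mu1, lam2 / mu2
for (i, j), v in p.items():
    ref = (A1**i / factorial(i)) * (A2**j / factorial(j)) / Q
    assert abs(v - ref) < 1e-12
```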
The condition for reversibility will be fulfilled in many cases, for example when:
λ1(x1, x2) = λ1(x1) , µ1(x1, x2) = x1 · µ1 , (7.15)
λ2(x1, x2) = λ2(x2) , µ2(x1, x2) = x2 · µ2 . (7.16)
If we consider a multi-dimensional loss system with N traffic streams, then any traffic stream may be a state-dependent Poisson process, in particular BPP (Bernoulli, Poisson, Pascal) traffic streams. For N-dimensional systems the conditions for reversibility are analogous to Theorem 7.1: Kolmogorov's criteria must still be fulfilled for all possible paths. In practice we experience no problems, because the solution obtained under the assumption of reversibility will be the correct solution if and only if the node balance equations are fulfilled. In the following section we use this as the basis for introducing general advanced multi-service traffic models, which are robust and easy to deal with.
7.3 Multi-Dimensional Loss Systems
In this section we consider generalizations of the classical teletraffic theory to cover several traffic streams (classes, services) offered to a link with a fixed bandwidth, which is expressed in channels of basic bandwidth units (BBU). Each traffic stream may have individual parameters and may be a state-dependent Poisson arrival process with multi-rate traffic and class limitations. This general class of models is insensitive to the holding time distribution, which may be class dependent with individual parameters for each class. We introduce the generalizations one at a time and present a small case study to illustrate the basic ideas.
7.3.1 Class limitation
In comparison with the case considered in Sec. 7.1, we now restrict the number of simultaneous calls for each traffic stream (class). Thus we do not have full accessibility: unlike overflow systems, where we physically only have access to a limited number of specific channels, we now have access to all channels, but at any instant we may only occupy a maximum number of them. This may be used for the purpose of service protection (virtual circuit protection = class limitation = threshold priority policy). We thus introduce restrictions on the number of simultaneous calls in class j as follows:
0 ≤ xj ≤ nj ≤ n , j = 1, 2, . . . , N , (7.17)

where Σ_{j=1}^{N} xj ≤ n and Σ_{j=1}^{N} nj > n .
If the latter restriction is not fulfilled, then we get a system with separate groups, corresponding to N ordinary independent one-dimensional loss systems. Due to these restrictions the state transition diagram is truncated. This is shown for two traffic streams in Fig. 7.3.
Figure 7.3: Structure of the state transition diagram for two-dimensional traffic processes with class limitations (cf. (7.17)). When calculating the equilibrium probabilities, state (x1, x2) can be expressed by state (x1, x2 − 1), and recursively by states (x1, 0), (x1 − 1, 0), and finally by (0, 0) (cf. (7.15)).
We notice that the truncated state transition diagram is still reversible, and that the values of p(x1, x2) relative to the value of p(0, 0) are unchanged by the truncation. Only the normalization constant is modified. In fact, due to the local balance property we can remove any state without changing the above properties. We may consider more general class limitations on subsets of traffic streams, so that any traffic stream has a minimum (guaranteed) number of allocated channels.
7.3.2 Generalized traffic processes
We are not restricted to PCT–I traffic only as in Sec. 7.1. Every traffic stream may be a state-dependent Poisson arrival process with a linear state-dependent death (departure) rate (cf. (7.15) and (7.16)). The system still fulfils the reversibility conditions given by Theorem 7.1. The product form is valid for BPP traffic streams and for more general state-dependent Poisson processes. If all traffic streams are Engset (Binomial) processes, then we get the multi-dimensional Engset formula (Jensen, 1948 [57]). As mentioned above, the system is insensitive to the holding time distributions. Every traffic stream may have its own individual holding time distribution with individual mean value.
7.3.3 Multi-rate traffic
In service-integrated systems the bandwidth requested depends on the type of service. We choose a Basic Bandwidth Unit (BBU) and split the available bandwidth into n BBUs. The BBU is called a channel, a slot, a server, etc. The smaller the basic bandwidth unit is, the more accurately we may model different services on a link, but the state space increases with finer granularity.
Thus a voice telephone call may require only one channel (slot), whereas for example a video connection may require d channels simultaneously. Therefore we get the capacity restrictions:
0 ≤ xj = ij · dj ≤ nj ≤ n , j = 1, 2, . . . , N , (7.18)
and
0 ≤ Σ_{j=1}^{N} ij · dj ≤ n ,   (7.19)
where ij is the actual number of type-j calls (connections) and xj is the number of channels (BBUs) occupied by type j. The resulting state transition diagram will still be reversible and have product form. The restrictions correspond, for example, to the physical model shown in Fig. 7.5.
Offered traffic Aj is usually defined as the traffic carried when the capacity is unlimited. If we measure the carried traffic Yj as the average number of busy channels, then the lost traffic measured in channels becomes:
Aℓ = Σ_{j=1}^{N} Aj dj − Σ_{j=1}^{N} Yj ,   (7.20)
where we as usual define Aj = λj/µj.
Example 7.3.1: Basic bandwidth units
For a 640 Mbps link we may choose BBU = 64 Kbps, corresponding to one voice channel. Then the total capacity becomes n = 10,000 channels.
For a UMTS CDMA system with chip rate 3.84 Mcps, one chip is one bit from the direct sequence spread spectrum code. We can choose the BBU as a multiple of 1 cps. In practice the BBU depends on the code length. A 10–bit code allows for a granularity of 1024 channels, and the BBU becomes 3.75 Kcps. (We consider gross rates.)
For variable bit rate (VBR) services we may statistically define an effective bandwidth, which is the capacity we need to reserve on a link with a given total capacity to fulfill a certain grade-of-service.
□
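The capacity figures in Example 7.3.1 can be checked with two lines of arithmetic (a trivial sketch):

```python
# Sanity check of the capacity figures in Example 7.3.1.
link_bps = 640_000_000            # 640 Mbps link
bbu_bps = 64_000                  # BBU = one 64 Kbps voice channel
n = link_bps // bbu_bps
print(n)                          # 10000 channels

chip_rate_cps = 3_840_000         # UMTS chip rate: 3.84 Mcps
channels = 2 ** 10                # 10-bit code -> granularity of 1024 channels
print(chip_rate_cps / channels)   # 3750.0 cps = 3.75 Kcps per BBU
```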
Example 7.3.2: Ronnblom's model
The first example of a multi-rate traffic model was published by Ronnblom (1958 [107]). The paper
Stream 1: PCT–I traffic                  Stream 2: PCT–II traffic

λ1 = 2 calls/time unit                   S2 = 4 sources
                                         γ2 = 1/3 calls/time unit/idle source
µ1 = 1 (time units^−1)                   µ2 = 1 (time units^−1)
                                         β2 = γ2/µ2 = 1/3 erlang per idle source
Z1 = 1 (peakedness)                      Z2 = 1/(1 + β2) = 3/4 (peakedness)
d1 = 1 channel/call                      d2 = 2 channels/call
A1 = λ1/µ1 = 2 erlang                    A2 = S2 · β2/(1 + β2) = 1 erlang
n1 = 6 = n                               n2 = 6 = n
Table 7.1: Two traffic streams: a Poisson traffic process (Example 4.5.1) and a Binomial traffic process (Example 5.5.1) are offered to the same trunk group.
considers a PABX telephone exchange with both-way channels carrying both external (outgoing and incoming) traffic and internal traffic. An external call occupies only one channel. An internal call occupies both an outgoing channel and an incoming channel and thus requires two channels simultaneously. It was shown by Ronnblom that this model has product form. □
Example 7.3.3: Two traffic streams
We now illustrate the above models by a small instructive case study. The principles and procedures are the same as for the general case considered later by the convolution algorithm (Sec. 7.4.1). We consider a trunk group of 6 channels which is offered two traffic streams, specified in Tab. 7.1. We notice that the second traffic stream is a multi-rate traffic stream. We may at most have three type-2 calls in our system. For state probabilities we need only specify offered traffic, not individual values of arrival rates and service rates. The offered traffic is as usual defined as the traffic carried by an infinite trunk group. For multi-rate traffic we have to consider traffic measured either in connections or in channels.
We get the two-dimensional state transition diagram shown in Fig. 7.4. The total sum of all relative state probabilities equals 20.1704, so by normalization we find p(0, 0) = 0.0496 and we get state probabilities and marginal state probabilities p(x1, ·) and p(·, x2) (Table 7.2). The global state probabilities are shown in Table 7.3.
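These figures can be reproduced directly from the product form (7.15): assign state (x1, x2) the relative weight q1(x1) · q2(x2) on the truncated space and normalize. A minimal sketch (the function name and structure are mine, not from the text):

```python
from math import comb, factorial

def example_733_probs(n=6, A1=2.0, S2=4, beta2=1/3, d2=2):
    """Relative product-form weights q(x1, x2) = q1(x1) * q2(x2) on the
    truncated space x1 + x2 <= n, then normalized (cf. (7.15))."""
    q = {}
    for i2 in range(S2 + 1):                       # i2 = number of type-2 calls
        x2 = d2 * i2                               # channels used by stream 2
        if x2 > n:
            break
        w2 = comb(S2, i2) * beta2 ** i2            # Engset (Binomial) weight
        for x1 in range(n - x2 + 1):
            q[x1, x2] = (A1 ** x1 / factorial(x1)) * w2   # Poisson weight
    total = sum(q.values())                        # = 20.1704 relative to q(0,0) = 1
    return {state: w / total for state, w in q.items()}

p = example_733_probs()
print(round(p[0, 0], 4))                                              # 0.0496
glob = [sum(v for (x1, x2), v in p.items() if x1 + x2 == x) for x in range(7)]
print([round(g, 4) for g in glob])   # global state probabilities of Table 7.3
```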
Performance measures for traffic stream 1 (PCT-I traffic):
Due to the PASTA-property time congestion (E1), call congestion (B1), and traffic congestion (C1)
[Figure 7.4 appears here: the two-dimensional state transition diagram with states (x1, x2), x1 = 0, . . . , 6 and x2 = 0, 2, 4, 6, x1 + x2 ≤ 6; the PCT–I transitions run horizontally and the PCT–II transitions vertically, and the relative state probabilities q(x1, x2) are noted below the diagram.]
Figure 7.4: Example 7.3.3: Six channels are offered both a Poisson traffic stream (PCT–I) (horizontal states) and an Engset traffic stream (PCT–II) (vertical states). The parameters are specified in Tab. 7.1. If we allocate state (0, 0) the relative probability one, then we find by exploiting local balance the relative state probabilities q(x1, x2) shown below the state transition diagram.
p(x1, x2)   x1 = 0   x1 = 1   x1 = 2   x1 = 3   x1 = 4   x1 = 5   x1 = 6   p(· , x2)

x2 = 6      0.0073                                                          0.0073
x2 = 4      0.0331   0.0661   0.0661                                        0.1653
x2 = 2      0.0661   0.1322   0.1322   0.0881   0.0441                      0.4627
x2 = 0      0.0496   0.0992   0.0992   0.0661   0.0331   0.0132   0.0044    0.3647

p(x1, ·)    0.1561   0.2975   0.2975   0.1542   0.0771   0.0132   0.0044    1.0000
Table 7.2: Detailed state probabilities for the system specified in Table 7.1.
p(0) = p(0, 0) = 0.0496
p(1) = p(1, 0) = 0.0992
p(2) = p(0, 2) + p(2, 0) = 0.1653
p(3) = p(1, 2) + p(3, 0) = 0.1983
p(4) = p(0, 4) + p(2, 2) + p(4, 0) = 0.1983
p(5) = p(1, 4) + p(3, 2) + p(5, 0) = 0.1675
p(6) = p(0, 6) + p(2, 4) + p(4, 2) + p(6, 0) = 0.1219
Table 7.3: Global state probabilities for the system specified in Table 7.1.
are identical. We find the time congestion E1:
E1 = p(6, 0) + p(4, 2) + p(2, 4) + p(0, 6)
= p(6) ,
E1 = B1 = C1 = 0.1219 ,
Y1 = 1.7562 .
Performance measures for stream 2 (PCT-II traffic):
Time congestion E2 (proportion of time the system is blocked for stream 2) becomes:
E2 = p(0, 6) + p(1, 4) + p(2, 4) + p(3, 2) + p(4, 2) + p(5, 0) + p(6, 0)
= p(5) + p(6) ,
E2 = 0.2894 .
Call congestion B2 (proportion of call attempts blocked for stream 2):
The total number of call attempts per time unit is obtained from the marginal distribution in
Table 7.2:
xt = Σ_{i=0}^{6} λ2(i) · p(·, i)
   = (4/3) · 0.3647 + (3/3) · 0.4627 + (2/3) · 0.1653 + (1/3) · 0.0073
   = 1.0616 .
The number of blocked call attempts per time unit becomes (Fig. 7.4):
xℓ = (4/3) · {p(5, 0) + p(6, 0)} + (3/3) · {p(3, 2) + p(4, 2)} + (2/3) · {p(1, 4) + p(2, 4)} + (1/3) · p(0, 6)
   = 0.2462 .
Hence:
B2 = xℓ/xt = 0.2320 .
Traffic congestion C2 (Proportion of offered traffic blocked):
The carried traffic, measured in the unit [channel], is obtained from the marginal distribution in Table 7.2:
Y2 = Σ_{i=0}^{6} i · p(·, i) = 2 · 0.4627 + 4 · 0.1653 + 6 · 0.0073 = 1.6306 erlang .
The offered traffic, measured in the unit [channel], is d2 · A2 = 2 erlang (Tab. 7.1). Hence we get:
C2 = (2 − 1.6306)/2 = 0.1847 .
□
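The performance measures of Example 7.3.3 can also be recomputed exactly from the product-form weights. Because exact state probabilities are used instead of the four-decimal table values, the last digit may differ slightly from the hand calculation (a sketch; variable names are mine):

```python
from math import comb, factorial

n, A1, S2, gamma2, d2 = 6, 2.0, 4, 1 / 3, 2
q = {}                                               # relative product-form weights
for i2 in range(S2 + 1):                             # i2 = number of type-2 calls
    x2 = d2 * i2
    if x2 > n:
        break
    w2 = comb(S2, i2) * gamma2 ** i2                 # beta2 = gamma2/mu2 = 1/3
    for x1 in range(n - x2 + 1):
        q[x1, x2] = A1 ** x1 / factorial(x1) * w2
Q = sum(q.values())
p = {s: w / Q for s, w in q.items()}

def lam2(i2):                                        # state-dependent Engset arrival rate
    return (S2 - i2) * gamma2

# Stream 1 (PCT-I): E1 = B1 = C1 = p(6) by the PASTA property.
E1 = sum(v for (x1, x2), v in p.items() if x1 + x2 == n)
Y1 = sum(x1 * v for (x1, x2), v in p.items())
# Stream 2 (PCT-II, d2 = 2): blocked when fewer than d2 channels are idle.
E2 = sum(v for (x1, x2), v in p.items() if x1 + x2 > n - d2)
xt = sum(lam2(x2 // d2) * v for (x1, x2), v in p.items())
xl = sum(lam2(x2 // d2) * v for (x1, x2), v in p.items() if x1 + x2 > n - d2)
B2 = xl / xt
Y2 = sum(x2 * v for (x1, x2), v in p.items())
C2 = (2.0 - Y2) / 2.0                                # offered traffic d2*A2 = 2 erlang
print(round(E1, 4), round(Y1, 4))                    # 0.1219 1.7562
print(round(E2, 4), round(B2, 4), round(C2, 4))      # 0.2894 0.232 0.1847
```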
The above example has only 2 streams and 6 channels, and the total number of states equals 16 (Fig. 7.4). When the number of traffic streams and channels increases, the number of states grows very fast and we become unable to evaluate the system by calculating the individual state probabilities. In the following section we introduce two classes of algorithms for loss systems which eliminate this problem by aggregation of states.
[Figure 7.5 appears here: N BPP traffic streams with parameters (λ1, Z1, d1), . . . , (λN, ZN, dN) and class limits n1, . . . , nN, offered from local exchanges Li via a transit exchange T to a destination exchange H over a trunk group of n channels.]
Figure 7.5: Generalization of the classical teletraffic model to BPP traffic and multi-rate traffic. The parameters λj and Zj describe the BPP traffic, and dj denotes the number of slots required per connection.
7.4 Convolution Algorithm for loss systems
We now consider a trunk group with a total of n homogeneous channels. Homogeneous means that they have the same service rate. The channel group is offered N different services, also called streams or classes. A call (connection) of type j requires dj channels (slots) during the whole service time, i.e. all dj channels are occupied and released simultaneously. If fewer than dj channels are idle, then the call attempt is blocked (BCC = blocked calls cleared). We define the state of the system as (x1, x2, . . . , xN), where xj is the number of channels occupied by type j, which must fulfil the restrictions (7.18) and (7.19).
The arrival processes are general state-dependent Poisson processes. For the j'th arrival process the arrival intensity in state xj = ij · dj, when ij calls (connections) of type j are being served, is λj(ij). We may restrict the number ij of simultaneous calls of type j so that:
0 ≤ xj = ij · dj ≤ nj ≤ n .
It is natural to require that xj is an integral multiple of dj, i.e. xj/dj = ij. This model describes for example the system shown in Fig. 7.5.
The above system fulfills the conditions for reversibility and product form:
p(x1, x2, · · · , xN) = p1(x1) · p2(x2) · . . . · pN(xN) ,
where the restrictions (7.18) and (7.19) must be fulfilled. Product form is equivalent to independence between the state probabilities of the traffic streams, and therefore we may
convolve the traffic streams to get the global state probabilities. To aggregate the traffic streams we should express the states in the same bandwidth unit, which we call the Basic Bandwidth Unit (BBU). Here a BBU is one channel.
We thus express the state probability of stream j as:
pj = {pj(0), pj(1), pj(2), . . . , pj(nj)} ,

where pj(i) = 0 when i ≠ k · dj ,   k = 0, 1, . . . , ⌊nj/dj⌋ .
The system mentioned above can be evaluated in an efficient way by the convolution algorithm, first introduced in (Iversen, 1987 [45]).
7.4.1 The convolution algorithm
The algorithm is described by the following three steps:
• Step 1: One–dimensional state probabilities:
Calculate the state probabilities of each traffic stream as if it were alone in the system, i.e. we consider classical loss systems as described in Chaps. 4 & 5. For traffic stream j we find:
pj = pj(0), pj(1), . . . , pj(nj) , j = 1, 2, . . . , N . (7.21)
Only the relative values of pj(x) are of importance, so we may choose qj(0) = 1 and calculate the values of qj(xj) relative to qj(0). If during the recursion a term qj(xj) becomes greater than K (e.g. 10^10), then we may divide all values qj(xj), 0 ≤ xj ≤ x, by K and calculate the following values relative to these re-scaled values. To avoid any numerical problems in the following it is advisable to normalize the relative state probabilities so that:
pj(x) = qj(x)/Qj ,   x = 0, 1, . . . , nj ,   where   Qj = Σ_{i=0}^{nj} qj(i) .
As described in Sec. 4.4 we may normalize at each step to avoid any numerical problems.
• Step 2: Aggregation of traffic streams:
By successive convolutions (convolution operator ∗) we calculate the aggregated state probabilities for the total system except traffic stream j:
qN/j = {qN/j(0), qN/j(1), . . . , qN/j(n)}
     = p1 ∗ p2 ∗ · · · ∗ pj−1 ∗ pj+1 ∗ · · · ∗ pN .   (7.22)
We first convolve p1 and p2 to obtain p12, which is convolved with p3 to obtain p123, and so on. Both the commutative and the associative laws are valid for the convolution operator, defined in the usual way (Sec. 2.3):
pi ∗ pj = { pi(0) · pj(0), Σ_{x=0}^{1} pi(x) · pj(1 − x), . . . , Σ_{x=0}^{u} pi(x) · pj(u − x) } ,   (7.23)

where we stop at

u = min{ni + nj, n} .   (7.24)
Notice that we truncate the state space at state u. Even if pi and pj are normalized, the result of a convolution is in general not normalized due to the truncation. It is recommended to normalize after every convolution to avoid numerical problems, both during this step and the following ones.
• Step 3: Performance measures:
Above we have reduced the state space to two traffic streams, qN/j and pj, and we have product form between these. Thus the problem is reduced to a two-dimensional state transition diagram as e.g. shown in Fig. 7.3.
For stream j we know the state probabilities, arrival rates, and departure rates in every state. For the aggregated stream qN/j we only know the state probabilities; the transition rates between the states are complex, and we do not need them in the following. We calculate time congestion Ej, call congestion Bj, and traffic congestion Cj of stream j from the reduced two-dimensional state transition diagram. This is done during the convolution:
pN = qN/j ∗ pj .

This convolution results in:
qN(x) = Σ_{xj=0}^{x} qN/j(x − xj) · pj(xj) = Σ_{xj=0}^{x} pj(xj | x) ,   (7.25)
where for pj(xj | x), x is the total number of busy channels and xj is the number of channels occupied by stream j. Steps 2 – 3 are repeated for every traffic stream. In the following we derive formulæ for Ej, Bj, and Cj.
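The truncated convolution (7.23)–(7.24), with the renormalization recommended in Step 2, can be sketched as follows (the function is an illustration, not code from the text):

```python
def convolve(p, q, n):
    """Truncated convolution (7.23)-(7.24): entry u is sum_x p[x] * q[u - x],
    kept only for u <= n, then renormalized (truncation loses probability mass)."""
    m = min(len(p) + len(q) - 1, n + 1)
    r = [sum(p[x] * q[u - x]
             for x in range(max(0, u - len(q) + 1), min(u, len(p) - 1) + 1))
         for u in range(m)]
    s = sum(r)
    return [v / s for v in r]

# Two single-channel streams, each idle/busy with probability 1/2, truncated at n = 1:
print(convolve([0.5, 0.5], [0.5, 0.5], 1))   # [0.333..., 0.666...] after renormalization
```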
Time congestion Ej for traffic stream j becomes:
Ej = (1/Q) · Σ_{x ∈ SEj} pj(xj | x) ,   (7.26)

where

SEj = {(xj, x) | xj ≤ x ≤ n ∧ ((xj > nj − dj) ∨ (x > n − dj))} .
The summation over SEj extends over all states (xj, x) where calls belonging to class j are blocked. The set xj > nj − dj corresponds to the states where traffic stream j has utilized
QN/j(n) p(0 | n) 0 0 . . . 0
QN/j(n− 1) p(0 | n− 1) p(1 | n) 0 . . . 0
QN/j(n− 2) p(0 | n− 2) p(1 | n− 1) p(2 | n) . . . 0
. . . . . . . . . . . . . . . . . .
. . . p(0 | n−nj+1) p(1 | n−nj+2) p(2 | n−nj+3) . . . 0
QN/j(n− nj) p(0 | n− nj) p(1 | n−nj+1) p(2 | n−nj+2) . . . p(nj | n)
. . . p(0 | n−nj−1) p(1 | n− nj) p(2 | n−nj+1) . . . p(nj | n− 1)
. . . . . . . . . . . . . . . . . .
QN/j(2) p(0 | 2) p(1 | 3) p(2 | 4) . . . p(nj | nj+2)
QN/j(1) p(0 | 1) p(1 | 2) p(2 | 3) . . . p(nj | nj+1)
QN/j(0) p(0 | 0) p(1 | 1) p(2 | 2) . . . p(nj | nj)
pj(0) pj(1) pj(2) . . . pj(nj)
Table 7.4: Convolution algorithm. Exploiting product form we convolve qN/j(x) and pj(x) to obtain the global distribution, adding the contributions along the diagonals, and normalize. During this convolution we obtain the detailed performance measures for stream j. Rows have a fixed number of channels occupied by the aggregated stream N/j, and columns have a fixed number xj of channels occupied by stream j.
its quota, and (x > n − dj) corresponds to states with fewer than dj idle channels. Q is the normalization constant:
Q = Σ_{i=0}^{n} qN(i) .
At this stage we usually have normalized the state probabilities so that Q = 1. The truncated state space is shown in Table 7.4, and the global state probability
qN(i) = Σ_{k=0}^{i} qN/j(k) · qj(i − k)
is the total probability mass on diagonal i.
Call congestion Bj for traffic stream j is the ratio between the number of blocked call attempts for traffic stream j and the total number of call attempts for traffic stream j, both e.g. per time unit. We find:
Bj = [ Σ_{SEj} λj(xj) · pj(xj | x) ] / [ Σ_{x=0}^{n} Σ_{xj=0}^{x} λj(xj) · pj(xj | x) ] .   (7.27)
Traffic congestion Cj for traffic stream j: We define as usual the offered traffic as the traffic carried by an infinite trunk group. The carried traffic for traffic stream j is:
Yj = Σ_{x=0}^{n} Σ_{xj=0}^{x} xj · pj(xj | x) .   (7.28)
Thus we find:

Cj = (Aj − Yj)/Aj .
Above we have included states which are outside the state space; these take the value zero.
Thus we can find the detailed performance measures for stream j, because we know the arrival rate and service rate of stream j for every state in the reduced state transition diagram in Table 7.4. For the aggregated stream we are able to calculate the total carried traffic and thus the aggregated traffic congestion. But we are not able to calculate time congestion or call congestion, because we do not know the state transitions for the aggregated stream N/j. We only know the state probabilities and that the product form is valid.
The algorithm was first implemented in the PC tool ATMOS (Listov–Saabye & Iversen, 1989 [83]). The storage requirement is proportional to n, as we may calculate the state probabilities of a traffic stream when they are needed. In practice we use storage proportional to n · N, because we save intermediate results of the convolutions for later re-use. It can be shown (Iversen & Stepanov, 1997 [47]) that we need (4 · N − 6) convolutions when we calculate traffic characteristics for all N traffic streams. Thus the calculation time is linear in N and quadratic in n.
Example 7.4.1: De-convolution
In principle we may obtain qN/j from qN by de-convolving pj and then calculate the performance measures during the re-convolution of pj and qN/j. In this way we need not repeat all the convolutions (7.22) for each traffic stream. However, when implementing this approach we get numerical problems. The convolution is from a numerical point of view very stable, and therefore the de-convolution will be unstable. Nevertheless, we may apply de-convolution in some cases, for instance when the traffic sources are on/off sources. □
Example 7.4.2: Three traffic streams
We first illustrate the algorithm with a small example, where we go through the calculations in every detail. We consider a system with 6 channels and 3 traffic streams. In addition to the two streams in Example 7.3.3 we add a Pascal stream with class limitation as shown in Tab. 7.5 (cf. Example 5.7.1). We want to calculate the performance measures of traffic stream 3.
• Step 1: We calculate the state probabilities pj(x), (x = 0, 1, . . . , nj) of each traffic stream j (j = 1, 2, 3) as if it were alone. The results are given in Tab. 7.6.
Stream 3: Pascal traffic (Negative Binomial)
S3 = −2 sources
γ3 = −1/3 calls/time unit
µ3 = 1 (time unit−1)
β3 = γ3/µ3 = −1/3 erlang per idle source
Z3 = 1/(1 + β3) = 3/2
d3 = 1 channel/call
A3 = S3 · (1− Z3) = 1 erlang
n3 = 4 (max. # of simultaneous calls)
Table 7.5: A Pascal traffic stream (Example 5.7.1) is offered to the same trunk as the twotraffic streams of Tab. 7.1.
• Step 2: We evaluate the convolution of p1(x1) with p2(x2), p1 ∗ p2 = p12(x12), truncate the state space at n = 6, and normalize the probabilities so that we obtain p12 as shown in Tab. 7.6. Notice that this is the result obtained in Example 7.3.3.
• Step 3: We convolve p12(x12) with p3(x3), truncate at n, and obtain q123(x123) as shown inTab. 7.6.
 State    Probabilities          q12(x)   Normal.            q123(x)  Normal.
   x      p1(x)    p2(x)         p1 ∗ p2  p12(x)    p3(x)    p12 ∗ p3 p123(x)

   0      0.1360   0.3176        0.0432   0.0496    0.4525   0.0224   0.0259
   1      0.2719   0.0000        0.0864   0.0992    0.3017   0.0599   0.0689
   2      0.2719   0.4235        0.1440   0.1653    0.1508   0.1122   0.1293
   3      0.1813   0.0000        0.1727   0.1983    0.0670   0.1579   0.1819
   4      0.0906   0.2118        0.1727   0.1983    0.0279   0.1825   0.2104
   5      0.0363   0.0000        0.1459   0.1675    0.0000   0.1794   0.2067
   6      0.0121   0.0471        0.1062   0.1219    0.0000   0.1535   0.1769

 Total    1.0000   1.0000        0.8711   1.0000    1.0000   0.8678   1.0000
Table 7.6: Convolution algorithm applied to Example 7.4.2. The state probabilities for the indi-vidual traffic streams have been calculated in the examples 4.5.1, 5.5.1 and 5.7.1.
Time congestion E3 is obtained from the detailed state probabilities. Traffic stream 3 (single–slottraffic) experiences time congestion, both when all six channels are busy and when the traffic stream
occupies 4 channels (maximum allocation). From the detailed state probabilities we get:
E3 = [ q123(6) + p3(4) · (p12(0) + p12(1)) ] / 0.8678
   = [ 0.1535 + 0.0279 · (0.0496 + 0.0992) ] / 0.8678 ,

E3 = 0.1817 .
Notice that the state p3(4) ·p12(2) is included in state q123(6). The carried traffic for traffic stream3 is obtained during the convolution of p3(i) and p12(j) and becomes:
Y3 = (1/0.8678) · Σ_{x3=1}^{4} [ x3 · p3(x3) · Σ_{x12=0}^{6−x3} p12(x12) ] ,

Y3 = 0.6174 / 0.8678 = 0.7115 .
As the offered traffic is A3 = 1, the traffic congestion becomes:

C3 = (1 − 0.7115) / 1 = 0.2885 .
The call congestion becomes:

B3 = xl / xt ,

where xl is the number of lost calls per time unit and xt is the total number of call attempts per time unit. Using the normalized probabilities from Tab. 7.6 and the arrival rates λ3(i) = (S3 − i) · γ3 we get:
xl = λ3(0) · p3(0) · p12(6)
   + λ3(1) · p3(1) · p12(5)
   + λ3(2) · p3(2) · p12(4)
   + λ3(3) · p3(3) · p12(3)
   + λ3(4) · p3(4) · (p12(2) + p12(1) + p12(0)) ,
xl = 0.2503 .
xt = λ3(0) · p3(0) · Σ_{j=0}^{6} p12(j)
   + λ3(1) · p3(1) · Σ_{j=0}^{5} p12(j)
   + λ3(2) · p3(2) · Σ_{j=0}^{4} p12(j)
   + λ3(3) · p3(3) · Σ_{j=0}^{3} p12(j)
   + λ3(4) · p3(4) · Σ_{j=0}^{2} p12(j) ,

xt = 1.1763 .
We thus get:

B3 = xl / xt = 0.2503 / 1.1763 = 0.2128 .
In a similar way, by interchanging the order in which the traffic streams are convolved, we find the performance measures of streams 1 and 2. The total number of micro-states in this example is 47. With the convolution method we reduce the number of states so that we never need more than two vectors of n + 1 = 7 states each, i.e. 14 states in total.
Using the ATMOS tool we get the results shown in Tab. 7.7 and Tab. 7.8. The total congestion can be split into congestion due to class limitation (nj) and congestion due to the limited number of channels (n). 2
 Input: total number of channels n = 6

 j   Offered      Peakedness   Max. alloc.   Slot size   Mean holding   Sources   Beta
     traffic Aj   Zj           nj            dj          time 1/μj      Sj        βj

 1   2.0000       1.00         6             1           1.00           ∞         0
 2   1.0000       0.75         6             2           1.00           4         0.3333
 3   1.0000       1.50         4             1           1.00          −2        −0.3333
Table 7.7: Input data to ATMOS for Example 7.4.2 with three traffic streams.
 Output

 j       Call congestion Bj   Traffic congestion Cj   Time congestion Ej   Carried traffic Yj

 1       1.769 200E-01        1.769 200E-01           1.769 200E-01        1.646 160
 2       3.346 853E-01        2.739 344E-01           3.836 316E-01        1.452 131
 3       2.127 890E-01        2.884 898E-01           1.817 079E-01        0.711 510

 Total                        2.380 397E-01                                3.809 801
Table 7.8: Output data from ATMOS for the input data in Tab. 7.7.
Example 7.4.3: Large-scale example
To illustrate the tool ATMOS, Tab. 7.9 and Tab. 7.10 show an example with 1536 trunks and 24 traffic streams. We notice that the time congestion is independent of the peakedness Zj and proportional to the slot-size dj, because we often have:
p(n) ≈ p(n − 1) ≈ . . . ≈ p(n − dj)   for dj ≪ n .   (7.29)
This is obvious, as the time congestion only depends on the global state probabilities. The call congestion is almost equal to the time congestion and depends only weakly upon the slot-size. This is also to be expected, as the call congestion equals the time congestion with one source removed (arrival theorem). In the rightmost column of the output table we show the relative traffic congestion divided by (dj · Zj), using single-slot Poisson traffic as the reference value (dj = Zj = 1). We notice that the traffic congestion is approximately proportional to dj · Zj, which is the usual assumption when using the Equivalent Random Traffic (ERT) method (Sec. 6.4.3). The mean value of the offered traffic increases linearly with the slot-size, whereas the variance increases with the square of the slot-size; the peakedness (variance/mean) ratio of multi-rate traffic thus increases linearly with the slot-size. We therefore notice that the traffic congestion is much more relevant than the time congestion and the call congestion for characterizing the performance of the system. In Example 7.5.1 below we calculate the total traffic congestion using Fredericks & Hayward's method for multi-rate traffic (Sec. 7.5). 2
7.5 Fredericks & Hayward's method
Basharin & Kurenkov have extended Fredericks & Hayward's method (Sec. 6.5) to include multi-slot (multi-rate) traffic. Let every connection require d channels during the whole holding time, from start to termination. Then by splitting this traffic into d identical sub-streams (Sec. 6.4), each call will use a single channel in each of the d sub-groups, and we get d identical systems with single-slot traffic.
If a call uses 1 channel instead of d channels, then the mean value of the traffic becomes d times smaller and the variance d² times smaller (change of scale, Example 2.3.3). Therefore, the peakedness becomes d times smaller. If furthermore the arrival process has a peakedness Z, then by
 Input: total number of channels n = 1536

 j    Offered traf.   Peakedness   Max. sim. #   Channels/call   mht    Sources
      Aj              Zj           nj            dj              1/μj   Sj         βj

 1    64.000          0.200        1536          1               1.000   80.000     4.000
 2    64.000          0.500        1536          1               1.000  128.000     1.000
 3    64.000          1.000        1536          1               1.000        ∞     0.000
 4    64.000          2.000        1536          1               1.000  −64.000    −0.500
 5    64.000          4.000        1536          1               1.000  −21.333    −0.750
 6    64.000          8.000        1536          1               1.000   −9.143    −0.875

 7    32.000          0.200        1536          2               1.000   40.000     4.000
 8    32.000          0.500        1536          2               1.000   64.000     1.000
 9    32.000          1.000        1536          2               1.000        ∞     0.000
 10   32.000          2.000        1536          2               1.000  −32.000    −0.500
 11   32.000          4.000        1536          2               1.000  −10.667    −0.750
 12   32.000          8.000        1536          2               1.000   −4.571    −0.875

 13   16.000          0.200        1536          4               1.000   20.000     4.000
 14   16.000          0.500        1536          4               1.000   32.000     1.000
 15   16.000          1.000        1536          4               1.000        ∞     0.000
 16   16.000          2.000        1536          4               1.000  −16.000    −0.500
 17   16.000          4.000        1536          4               1.000   −5.333    −0.750
 18   16.000          8.000        1536          4               1.000   −2.286    −0.875

 19    8.000          0.200        1536          8               1.000   10.000     4.000
 20    8.000          0.500        1536          8               1.000   16.000     1.000
 21    8.000          1.000        1536          8               1.000        ∞     0.000
 22    8.000          2.000        1536          8               1.000   −8.000    −0.500
 23    8.000          4.000        1536          8               1.000   −2.667    −0.750
 24    8.000          8.000        1536          8               1.000   −1.143    −0.875
Table 7.9: Input data for Example 7.4.3 with 24 traffic streams and 1536 channels. The maximumnumber of simultaneous calls of type j (nj) is in this example n = 1536 (full accessibility), and mhtis an abbreviation for mean holding time.
 Output

 j    Call congestion   Traffic congestion   Time congestion   Carried traffic   Rel. value
      Bj                Cj                   Ej                Yj                Cj/(dj·Zj)

 1    6.187 744E-03     1.243 705E-03        6.227 392E-03     63.920 403        0.9986
 2    6.202 616E-03     3.110 956E-03        6.227 392E-03     63.800 899        0.9991
 3    6.227 392E-03     6.227 392E-03        6.227 392E-03     63.601 447        1.0000
 4    6.276 886E-03     1.247 546E-02        6.227 392E-03     63.201 570        1.0017
 5    6.375 517E-03     2.502 346E-02        6.227 392E-03     62.398 499        1.0046
 6    6.570 378E-03     5.025 181E-02        6.227 392E-03     60.783 884        1.0087

 7    1.230 795E-02     2.486 068E-03        1.246 554E-02     63.840 892        0.9980
 8    1.236 708E-02     6.222 014E-03        1.246 554E-02     63.601 791        0.9991
 9    1.246 554E-02     1.246 554E-02        1.246 554E-02     63.202 205        1.0009
 10   1.266 184E-02     2.500 705E-02        1.246 554E-02     62.399 549        1.0039
 11   1.305 003E-02     5.023 347E-02        1.246 554E-02     60.785 058        1.0083
 12   1.379 446E-02     1.006 379E-01        1.246 554E-02     57.559 172        1.0100

 13   2.434 998E-02     4.966 747E-03        2.497 245E-02     63.682 128        0.9970
 14   2.458 374E-02     1.244 484E-02        2.497 245E-02     63.203 530        0.9992
 15   2.497 245E-02     2.497 245E-02        2.497 245E-02     62.401 763        1.0025
 16   2.574 255E-02     5.019 301E-02        2.497 245E-02     60.787 647        1.0075
 17   2.722 449E-02     1.006 755E-01        2.497 245E-02     57.556 771        1.0104
 18   2.980 277E-02     1.972 682E-01        2.497 245E-02     51.374 835        0.9899

 19   4.766 901E-02     9.911 790E-03        5.009 699E-02     63.365 645        0.9948
 20   4.858 283E-02     2.489 618E-02        5.009 699E-02     62.406 645        0.9995
 21   5.009 699E-02     5.009 699E-02        5.009 699E-02     60.793 792        1.0056
 22   5.303 142E-02     1.007 214E-01        5.009 699E-02     57.553 828        1.0109
 23   5.818 489E-02     1.981 513E-01        5.009 699E-02     51.318 316        0.9942
 24   6.525 455E-02     3.583 491E-01        5.009 699E-02     41.065 660        0.8991

 Total                  5.950 135E-02                          1444.605
Table 7.10: Output for Example 7.4.3 with input data given in Tab. 7.9. As shown in Example 7.5.1, Fredericks & Hayward's method results in a total congestion equal to 6.114%. The total traffic congestion 5.950% is obtained from the total carried traffic and the offered traffic.
splitting into d ·Z traffic streams the traffic process becomes a single-slot traffic process withpeakedness one, which we evaluate by Erlang’s B-formula.
(n, A, Z, d) ∼ ( n/(dZ), A/(dZ), 1, 1 ) ∼ ( n/d, A/d, Z, 1 )
             ∼ ( n/Z, A/Z, 1, d )      ∼ ( n, A/Z, 1, d·Z ) .   (7.30)
The last equivalence shows that by increasing the bandwidth d by the factor Z, we may keep the number of channels n constant and obtain an arrival process with Z = 1. If more traffic streams are offered to the same group, we may thus keep the number of channels fixed. The bandwidth d·Z is in general not an integer. We should then choose a basic bandwidth unit (BBU) so that both n and d·Z become approximate integer multiples of this unit. The smaller the bandwidth unit (granularity) is chosen, the better the approximation becomes. However, it is recommended to aggregate all traffic streams into one single-slot Poisson traffic stream and calculate the total traffic congestion; this may then be split up into traffic congestion for each stream, as shown in Example 7.4.3.
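The scaling in (7.30) combined with Erlang's B-formula can be sketched as follows (the function names are ours; for simplicity we round n/(d·Z) to an integer, whereas a careful implementation would use a continuous Erlang B based on the incomplete gamma function or interpolate between integer values):

```python
def erlang_b(n, a):
    """Erlang B blocking by the standard recursion
    E_x = a*E_{x-1} / (x + a*E_{x-1}), starting from E_0 = 1."""
    e = 1.0
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

def hayward_multirate(n, a, z, d):
    """Fredericks & Hayward estimate for peaked multi-slot traffic:
    scale channels and offered traffic by d*z and apply Erlang B,
    following the first equivalence in (7.30)."""
    scale = d * z
    return erlang_b(int(round(n / scale)), a / scale)
```

For d = Z = 1 this reduces to the plain Erlang B blocking of the original group, as expected.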
Example 7.5.1: Multi-slot traffic
In Example 7.4.3 we considered a trunk group with 1536 channels offered 24 traffic streams with individual slot-sizes and peakedness. The exact total traffic congestion is 5.950%. If we calculate the peakedness of the aggregated offered traffic, we find peakedness Z = 9.8125 and a total mean value of 1536 erlang. Fredericks & Hayward's method results in a total traffic congestion equal to 6.114%, which thus is a conservative estimate (worst case) of the exact value 5.950%. 2
7.6 State space based algorithms
The convolution algorithm is based on aggregation of traffic streams, where we end up witha traffic stream which is the aggregation of all traffic streams except the one which we areinterested in. Another approach is to aggregate the state space into global state probabilities.
7.6.1 Fortet & Grandjean (Kaufman & Roberts) algorithm
In case of Poisson arrival processes the algorithm becomes very simple by generalizing (7.11).Let pj(x) denote the contribution of stream j to the global state probability p(x):
p(x) = Σ_{j=1}^{N} pj(x) .   (7.31)
Thus the average number of channels occupied by stream j when the system is in global state x is x · pj(x). Let traffic stream j have slot-size dj. Due to reversibility we have local balance for every traffic type. The local balance equation becomes:

λj · p(x − dj) = (x/dj) · pj(x) · μj ,   x = dj, dj + 1, . . . , n .   (7.32)
The left-hand side is the flow from global state [x − dj] to state [x] due to arrivals of type j. The right-hand side is the flow from state [x] to state [x − dj] due to departures of type j calls. The average number of channels occupied by stream j in global state x need not be an integer, because it is a weighted sum over several state probabilities. From (7.32) we get:
pj(x) = (1/x) · dj Aj · p(x − dj) .   (7.33)
The total state probability p(x) is obtained by summing up over all traffic streams (7.31):
p(x) = (1/x) · Σ_{j=1}^{N} dj Aj · p(x − dj) ,   p(x) = 0 for x < 0 .   (7.34)
This is Fortet & Grandjean's algorithm (Fortet & Grandjean, 1964 [33]). The algorithm is usually called Kaufman & Roberts' algorithm, as it was re-discovered by these authors in 1981 (Kaufman, 1981 [66]; Roberts, 1981 [103]).
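The recursion (7.34) translates directly into code. A minimal sketch (our function name) for multi-rate Poisson traffic:

```python
def kaufman_roberts(n, streams):
    """Normalized global state probabilities for multi-rate Poisson
    traffic by the recursion (7.34).
    streams: list of (d_j, A_j) pairs (slot-size, offered traffic)."""
    q = [0.0] * (n + 1)
    q[0] = 1.0  # relative value; normalized below
    for x in range(1, n + 1):
        q[x] = sum(d * a * q[x - d] for d, a in streams if x >= d) / x
    s = sum(q)
    return [v / s for v in q]
```

The time congestion of stream j is then obtained from the last dj global states, `sum(p[n - d_j + 1:])`.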
7.6.2 Generalized algorithm
The above model can easily be generalized to BPP traffic (Iversen, 2005 [49]):

(x · pj(x)/dj) · μj = p(x − dj) · Sj γj − pj(x − dj) · ((x − dj)/dj) · γj .   (7.35)
On the right-hand side the first term assumes that all Sj type-j sources are idle during one time unit. As on the average

((x − dj)/dj) · pj(x − dj)

type-j sources are busy in global state x − dj, we reduce the first term by the second term to get the right value. Thus we get:
p(x) = 0                       for x < 0 ,
p(x) = p(0)                    for x = 0 ,
p(x) = Σ_{j=1}^{N} pj(x)       for x = 1, 2, . . . , n ,   (7.36)

where

pj(x) = (dj/x) · (Sj γj/μj) · p(x − dj) − ((x − dj)/x) · (γj/μj) · pj(x − dj) ,   (7.37)

pj(x) = 0   for x < dj .   (7.38)
The state probability p(0) is obtained from the normalization condition:

Σ_{i=0}^{n} p(i) = p(0) + Σ_{i=1}^{n} Σ_{j=1}^{N} pj(i) = 1 ,   (7.39)
as pj(0) = 0, whereas p(0) ≠ 0. Above we have used the parameters (Sj, βj) to characterize the traffic streams. Alternatively we may use (Aj, Zj), related to (Sj, βj) by the formulæ (5.22) – (5.25). Then (7.37) becomes:
pj(x) = (dj/x) · (Aj/Zj) · p(x − dj) − ((x − dj)/x) · ((1 − Zj)/Zj) · pj(x − dj) .   (7.40)
For Poisson arrivals we of course get (7.34). In practical evaluation of the formula we use normalization in each step, as described in Sec. 4.4.1. This results in a very accurate and effective algorithm. The number of operations and the memory requirement are very small, as we only need to store the dj previous state probabilities of traffic stream j and the max_j dj previous values of the global state probabilities. The number of operations is linear in the number of channels and the number of traffic streams, so the algorithm is extremely effective.
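A compact sketch of the recursion (7.40) in Python (our naming; for brevity it normalizes once at the end rather than in each step, which is safe only for small n):

```python
def bpp_state_probs(n, streams):
    """Recursion (7.40) for BPP traffic.
    streams: list of (d_j, A_j, Z_j) triples.
    Returns normalized global probabilities p and the per-stream
    contributions pj (used for the carried-traffic sums)."""
    p = [0.0] * (n + 1)
    pj = [[0.0] * (n + 1) for _ in streams]
    p[0] = 1.0  # relative value; normalized at the end
    for x in range(1, n + 1):
        for j, (d, a, z) in enumerate(streams):
            if x >= d:  # pj(x) = 0 for x < d_j, eq. (7.38)
                pj[j][x] = (d / x) * (a / z) * p[x - d] \
                         - ((x - d) / x) * ((1 - z) / z) * pj[j][x - d]
        p[x] = sum(row[x] for row in pj)
    s = sum(p)
    return [v / s for v in p], [[v / s for v in row] for row in pj]
```

For the two streams of Example 7.3.3 (d = 1, A = 2, Z = 1 and d = 2, A = 1, Z = 0.75) with n = 6, this gives E1 = p(6) ≈ 0.1219.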
Performance measures
By this algorithm we are able to obtain performance measures for each individual trafficstream.
Time congestion:
Call attempts of stream j require dj idle channels and are blocked with probability:

Ej = Σ_{i=n−dj+1}^{n} p(i) .   (7.41)
Traffic congestion:
From the state probabilities pj(x) we get the total carried traffic of stream j:

Yj = Σ_{x=1}^{n} x · pj(x) .   (7.42)
Thus the traffic congestion of stream j becomes:
Cj =Aj · dj − YjAj · dj
. (7.43)
The total carried traffic is:

Y = Σ_{j=1}^{N} Yj ,   (7.44)
so the total traffic congestion becomes:

C = (A − Y) / A ,   (7.45)

where A is the total offered traffic measured in channels:

A = Σ_{j=1}^{N} dj Aj .
Call congestion:
This is obtained from the traffic congestion by using (5.49):

Bj = (1 + βj) Cj / (1 + βj Cj) .   (7.46)
The total call congestion cannot be obtained by this formula, as we do not have a global value of β. But from the individual carried traffic and individual call congestion we may find the total number of offered and accepted calls of each stream, and from these we get the total call congestion.
Example 7.6.1: Generalized algorithm
We evaluate Example 7.3.3 by the generalized algorithm. For the Poisson traffic (stream 1) we have d = 1, A = 2, and Z = 1, and thus get the relative state probabilities:

q1(x) = (2/x) · q(x − 1) ,   q1(0) = 0 ,   q(0) = 1 .
The total relative state probability is q(x) = q1(x) + q2(x). For the Engset traffic (stream 2) we have d = 2, A = 1, and Z = 0.75. We then get:

q2(x) = (2/x) · (1/0.75) · q(x − 2) − ((x − 2)/x) · (1/3) · q2(x − 2) ,   q2(0) = q2(1) = 0 .
Table 7.11 shows the non-normalized relative state probabilities when the relative probability of state zero is set to one. Table 7.12 shows the normalized state probabilities and the carried traffic of each stream in each state. In a computer program we would normalize the state probabilities after each iteration (each time the number of channels is increased by one) and calculate the aggregated carried traffic of each stream; this traffic value should of course also be normalized in each step. In this way we only need to store the previous dj values and the carried traffic of each traffic stream. We get the following performance measures, which of course are the same as those obtained by the convolution algorithm.
E1 = p(6) = 0.1219

E2 = p(5) + p(6) = 0.2894

C1 = (2·1 − 1.7562) / (2·1) = 0.1219

C2 = (1·2 − 1.6306) / (1·2) = 0.1847

B1 = (1 + 0) · 0.1219 / (1 + 0 · 0.1219) = 0.1219

B2 = (1 + 1/3) · 0.1847 / (1 + (1/3) · 0.1847) = 0.2320
 State   Poisson                        Engset                                                 Total
   x     q1(x) = (2/x)·q(x−1)           q2(x) = (2/x)·(4/3)·q(x−2) − ((x−2)/x)·(1/3)·q2(x−2)   q(x)

   0     0                              0                                                      1
   1     (2/1)·1 = 2                    0                                                      2
   2     (2/2)·2 = 2                    (2/2)·(4/3)·1 − (0/2)·(1/3)·0 = 4/3                    10/3
   3     (2/3)·(10/3) = 20/9            (2/3)·(4/3)·2 − (1/3)·(1/3)·0 = 16/9                   4
   4     (2/4)·4 = 2                    (2/4)·(4/3)·(10/3) − (2/4)·(1/3)·(4/3) = 2             4
   5     (2/5)·4 = 8/5                  (2/5)·(4/3)·4 − (3/5)·(1/3)·(16/9) = 16/9              152/45
   6     (2/6)·(152/45) = 152/135       (2/6)·(4/3)·4 − (4/6)·(1/3)·2 = 180/135                332/135

 Total                                                                                         2723/135

Table 7.11: Example 7.6.1: relative state probabilities for Example 7.3.3 evaluated by the generalized algorithm.
State Poisson Engset Total
x p1(x) x · p1(x) p2(x) x · p2(x) p(x) x · p(x)
0 0.0000 0.0000 0.0000 0.0000 0.0496 0.0000
1 0.0992 0.0992 0.0000 0.0000 0.0992 0.0992
2 0.0992 0.1983 0.0661 0.1322 0.1653 0.3305
3 0.1102 0.3305 0.0881 0.2644 0.1983 0.5949
4 0.0992 0.3966 0.0992 0.3966 0.1983 0.7932
5 0.0793 0.3966 0.0881 0.4407 0.1675 0.8373
6 0.0558 0.3349 0.0661 0.3966 0.1219 0.7315
Total 1.7562 1.6306 1.0000 3.3867
Table 7.12: Example 7.6.1: absolute state probabilities and carried traffic yi(x) = x · pi(x)for Example 7.3.3 evaluated by the generalized algorithm.
2
7.6.3 Batch Poisson arrival process
When we have batch Poisson traffic streams, the state-based algorithm is modified by exploiting the analogy with the Pascal distribution. Inserting A (5.76) and Z (5.77) we get:
pj(x) = (dj/x) · (λj/μj) · p(x − dj) + (1 − pj) · ((x − dj)/x) · pj(x − dj) ,   0 ≤ x ≤ n ,   (7.47)
where pj, λj, and μj are the parameters of the batch Poisson process. Thus the state-based algorithm for BPP (Binomial, Poisson, Pascal) traffic is generalized to include the batch Poisson process in a simple way. This section is to be elaborated in further detail, in particular the performance measures.
7.7 Final remarks
The convolution algorithm for loss systems was first published in (Iversen, 1987 [45]). A similar approach to a less general model was published in two papers by Ross & Tsang (1990 [105]), (1990 [106]) without reference to this original paper from 1987, even though it was known to the authors.
The generalized algorithm in Sec. 7.6.2 is new (Iversen, 2007 [50]) and includes Delbrouck's algorithm (Delbrouck, 1983 [23]), which is more complex to evaluate. Compared with all other algorithms, the generalized algorithm requires much less memory and far fewer operations. By normalizing the state probabilities in each iteration we get a very accurate and simple algorithm. In principle, we may apply the generalized algorithm for BPP traffic to calculate the global state probabilities of (N − 1) traffic streams, and then use the convolution algorithm to calculate the performance measures of the remaining traffic stream we want to evaluate.
The convolution algorithm allows for minimum and maximum allocation of channels to eachtraffic stream, but it does not allow for restrictions based on global states. It also allows forarbitrary state-dependent arrival processes.
The generalized algorithm does not keep account of the number of calls of the individualtraffic stream, but allows for restrictions based on global states, e.g. trunk reservation.
Updated 2010-03-23
Chapter 8
Dimensioning of telecom networks
Network planning includes designing, optimizing, and operating telecommunication networks.In this chapter we will consider traffic engineering aspects of network planning. In Sec. 8.1we introduce traffic matrices and the fundamental double factor method (Kruithof’s method)for updating traffic matrices according to forecasts. The traffic matrix contains the basicinformation for choosing the topology (Sec. 8.2) and traffic routing (Sec. 8.3).
In Sec. 8.4 we consider approximate calculation of end-to-end blocking probabilities and describe the Erlang fix-point method (reduced load method). Sec. 8.5 generalizes the convolution algorithm introduced in Chap. 7 to networks, with exact calculation of end-to-end blocking in virtual circuit switched networks with direct routing. The model allows for multi-slot BPP traffic with minimum and maximum allocation. The same model can be applied to hierarchical cellular wireless networks with overlapping cells and to optical WDM networks. In Sec. 8.6 we consider service-protection mechanisms. Finally, in Sec. 8.7 we consider optimization of telecommunication networks by applying Moe's principle.
8.1 Traffic matrices
To specify the traffic demand in an area with K exchanges we should know the K² traffic values Aij (i, j = 1, . . . , K), as given in the traffic matrix shown in Tab. 8.1. The traffic matrix assumes that we know the location areas of the exchanges. Knowing the traffic matrix, we have the following two interdependent tasks:
• Decide on the topology of the network (which exchanges should be interconnected ?)
• Decide on the traffic routing (how do we exploit a given topology ?)
                                   TO
 FROM      1    ···    i    ···    j    ···    K      Ai· = Σ_{k=1}^{K} Aik

 1        A11   ···   A1i   ···   A1j   ···   A1K     A1·
 ···      ···         ···         ···         ···     ···
 i        Ai1   ···   Aii   ···   Aij   ···   AiK     Ai·
 ···      ···         ···         ···         ···     ···
 j        Aj1   ···   Aji   ···   Ajj   ···   AjK     Aj·
 ···      ···         ···         ···         ···     ···
 K        AK1   ···   AKi   ···   AKj   ···   AKK     AK·

 A·j = Σ_{k=1}^{K} Akj :
          A·1   ···   A·i   ···   A·j   ···   A·K     Σ_{i=1}^{K} Ai· = Σ_{j=1}^{K} A·j
The traffic matrix has the following elements:

Aij : the traffic from i to j.
Aii : the internal traffic in exchange i.
Ai· : the total outgoing (originating) traffic from i.
A·j : the total incoming (terminating) traffic to j.
Table 8.1: A traffic matrix. The total incoming traffic is equal to the total outgoing traffic.
8.1.1 Kruithof’s double factor method
Let us assume that we know the actual traffic matrix and that we have a forecast of the future row sums O(i) (originating) and column sums T(i) (terminating), i.e. the total outgoing and incoming traffic of each exchange. This traffic prognosis may be obtained from subscriber forecasts for the individual exchanges. By means of Kruithof's double factor method (Kruithof, 1937 [77]) we are able to estimate the future individual values Aij of the traffic matrix. The procedure is to adjust the individual values Aij so that they agree with the new row/column sums:
Aij ⇐ Aij · (S1/S0) ,   (8.1)
where S0 is the actual sum and S1 the new (target) sum of the row or column considered. If we start by adjusting Aij with respect to the new row sums, then the row sums will agree, but the column sums will not agree with the wanted values. Therefore, the next step is to adjust the obtained values Aij with respect to the column sums so that these agree; this in turn implies that
the row sums no longer agree. By alternately adjusting the row and column sums, the values obtained will after a few iterations converge towards unique values. The procedure is best illustrated by the example given below.
Example 8.1.1: Application of Kruithof's double factor method
We consider a telecommunication network having two exchanges. The present traffic matrix is given as:

          1     2   Total
   1     10    20     30
   2     30    40     70
 Total   40    60    100
The prognosis for the total originating and terminating traffic of each exchange is:

          1     2   Total
   1                  45
   2                 105
 Total   50   100    150
The task is then to estimate the individual values of the matrix by means of the double factormethod.
Iteration 1: Adjust the row sums. We multiply the first row by (45/30) and the second row by (105/70) and get:

          1     2   Total
   1     15    30     45
   2     45    60    105
 Total   60    90    150
The row sums are now correct, but the column sums are not.
Iteration 2: Adjust the column sums:

            1       2    Total
   1     12.50   33.33    45.83
   2     37.50   66.67   104.17
 Total   50.00  100.00   150.00
We now have the correct column sums, whereas the row sums deviate a little. We continue by alternately adjusting the row and column sums:
Iteration 3:

            1       2    Total
   1     12.27   32.73    45.00
   2     37.80   67.20   105.00
 Total   50.07   99.93   150.00
Iteration 4:

            1       2    Total
   1     12.25   32.75    45.00
   2     37.75   67.25   105.00
 Total   50.00  100.00   150.00
After four iterations both the row and the column sums agree to two decimals. 2
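The iterations above are easy to automate. A minimal sketch (our naming) of the double factor method, known in statistics as iterative proportional fitting:

```python
def kruithof(matrix, row_targets, col_targets, iters=20):
    """Kruithof's double factor method: alternately scale rows and
    columns of the traffic matrix to the forecast sums."""
    m = [row[:] for row in matrix]
    for _ in range(iters):
        for i, target in enumerate(row_targets):   # adjust row sums
            s = sum(m[i])
            m[i] = [v * target / s for v in m[i]]
        for j, target in enumerate(col_targets):   # adjust column sums
            s = sum(row[j] for row in m)
            for row in m:
                row[j] *= target / s
    return m
```

Applied to the matrix of Example 8.1.1, `kruithof([[10, 20], [30, 40]], [45, 105], [50, 100])` converges to approximately [[12.25, 32.75], [37.75, 67.25]], as found above.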
There are other methods for estimating the future individual traffic values Aij, but Kruithof’sdouble factor method has some important properties (Bear, 1988 [5]):
• Uniqueness. Only one solution exists for a given forecast.
• Reversibility. The resulting matrix can be reversed to the initial matrix with the sameprocedure.
• Transitivity. The resulting matrix is the same whether it is obtained in one step or via a series of intermediate transformations (for instance one 5-year forecast, or five 1-year forecasts).
• Invariance as regards the numbering of exchanges. We may change the numbering ofthe exchanges without influencing the results.
• Fractionizing. The single exchanges can be split into sub-exchanges or be aggregatedinto larger exchanges without influencing the result. This property is not exactly ful-filled for Kruithof’s double factor method, but the deviations are small.
8.2 Topologies
In Chap. 1 we have described the basic topologies as star net, mesh net, ring net, hierarchicalnet and non-hierarchical net.
8.3 Routing principles
This is an extensive subject, including among other things alternative traffic routing and load balancing. A detailed description of the subject is given in (Ash, 1998 [3]).
8.4 Approximate end-to-end calculations methods
If we assume the links of a network are independent, then it is easy to calculate the end-to-endblocking probability. By means of the classical formulæ we calculate the blocking probabilityof each link. If we denote the blocking probability of link i by Ei, then we find the end-to-endblocking probability for a call attempt on route j as follows:
Ej = 1 − Π_{i∈R} (1 − Ei) ,   (8.2)
where R is the set of links included in the route of the call. This value is a worst case, because the traffic is smoothed by the blocking on each link and therefore experiences less congestion on the last link of a route.
For small blocking probabilities we have:
Ej ≈ Σ_{i∈R} Ei .   (8.3)
8.4.1 Fix-point method
A call usually occupies channels on several links, and in general the traffic on the individual links of a network will be correlated. The blocking probabilities experienced by a call attempt on the individual links will therefore also be correlated. Erlang's fix-point method is an attempt to take this into account.
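A minimal sketch of the reduced-load iteration (our naming and data layout, not the book's): each link's offered traffic is thinned by the blocking on the other links of every route crossing it, and the link blockings are recomputed with Erlang's B-formula until they stabilize.

```python
def erlang_b(n, a):
    """Erlang B blocking by the standard recursion."""
    e = 1.0
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

def erlang_fixed_point(links, routes, iters=100):
    """Reduced-load (Erlang fix-point) approximation.
    links:  dict {link_name: number_of_channels}
    routes: list of (offered_traffic, [link_names]) with single-slot calls.
    Iterates E_i = ErlangB(n_i, reduced offered traffic on link i)."""
    E = {l: 0.0 for l in links}
    for _ in range(iters):
        for l, n in links.items():
            a = 0.0
            for traffic, path in routes:
                if l in path:
                    thin = 1.0
                    for k in path:      # thin by blocking on the other links
                        if k != l:
                            thin *= 1.0 - E[k]
                    a += traffic * thin
            E[l] = erlang_b(n, a)
    return E
```

For a single link the iteration reduces to a plain Erlang B calculation; for networks, successive substitution as above usually converges quickly.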
8.5 Exact end-to-end calculation methods
Circuit switched telecommunication networks with direct routing have the same complexity as queueing networks with multiple chains (Sec. 12.8 and Tab. 12.3). It is necessary to keep account of the number of busy channels on each link. Therefore, the maximum number of states becomes:

Π_{i=1}^{K} (ni + 1) .   (8.4)
            Route
 Link     1     2    ···    N      Number of channels

 1       d11   d21   ···   dN1     n1
 2       d12   d22   ···   dN2     n2
 ···     ···   ···   ···   ···     ···
 K       d1K   d2K   ···   dNK     nK
Table 8.2: In a circuit switched telecommunication network with direct routing, dij denotes the slot-size (bandwidth demand) of route j upon link i (cf. Tab. 12.3).
8.5.1 Convolution algorithm
The convolution algorithm described in Chap. 7 can be applied directly to networks with direct routing, because there is product form among the routes. The convolution becomes multi-dimensional, the dimension being the number of links in the network. The truncation of the state space becomes more complex, and the number of states grows very quickly.
8.6 Load control and service protection
In a telecommunication network with many users competing for the same resources (multiple access), it is important to specify the service demands of the users and to ensure that the GoS is fulfilled under normal service conditions. In most systems it can be ensured that preferential subscribers (police, medical services, etc.) get higher priority than ordinary subscribers when they make call attempts. During normal traffic conditions we want to ensure that all subscribers, for all types of calls (local, domestic, international), have approximately the same
service level, e.g. 1% blocking. During overload situations the call attempts of some groups of subscribers should not be completely blocked while other groups of subscribers at the same time experience low blocking. We aim at "the collective misery".
Historically, this has been fulfilled because of the decentralized structure and the application of limited accessibility (grading), which from a service protection point of view are still applicable and useful.
Digital systems and networks have an increased complexity, and without preventive measures the carried traffic as a function of the offered traffic will typically have a form similar to that of the Aloha system (Fig. 3.6). To ensure that a system continues to operate at maximum capacity during overload, various strategies are introduced. In stored program controlled systems (exchanges) we may introduce call-gapping and allocate priorities to the tasks (Chap. 10). In telecommunication networks two strategies are common: trunk reservation and virtual channel protection.
[Figure: exchanges A and B with transit exchange T; the direct A–B connection is the primary (high-usage) route, and the path via T is the service-protecting last-choice route.]
Figure 8.1: Alternative traffic routing (cf. example 8.6.2). Traffic from A to B is partly carriedon the direct route (primary route = high usage route), partly on the secondary route viathe transit exchange T.
8.6.1 Trunk reservation
In hierarchical telecommunication networks with alternative routing we want to protect the primary traffic against overflow traffic. If we consider part of a network (Fig. 8.1), then the direct traffic A–T will compete with the overflow traffic from A–B for idle channels on the trunk group A–T. As the traffic A–B already has a direct route, we want to give the traffic A–T priority to the channels on the link A–T. This can be done by introducing trunk (channel) reservation: we allow the A–B traffic to access the A–T channels only if more than r channels are idle on A–T (r = reservation parameter). In this way the traffic A–T gets higher priority to the A–T channels. If all calls have the same mean holding time (μ1 = μ2 = μ) and
PCT-I traffic with single slot traffic, then we can easily set up a state transition diagram andfind the blocking probability.
If the individual traffic streams have different mean holding times, or if we consider Binomial & Pascal traffic, then we have to set up an N-dimensional state transition diagram, which will be non-reversible: in some states, calls of a type that was accepted earlier in lower states may depart but can no longer be accepted, and thus the process is non-reversible. We cannot apply the convolution algorithm developed in Sec. 7.4 to this case, but the generalized algorithm in Sec. 7.6.2 is easily modified by letting pi(x) = 0 when x ≥ n − ri.
An essential disadvantage of trunk reservation is that it is a local strategy, which only considers one trunk group (link), not the total end-to-end connection. Furthermore, it is a one-way mechanism which protects one traffic stream against the other, but not vice versa. Therefore, it cannot be applied for mutual protection of connections and services in broadband networks.
Example 8.6.1: Guard channels
In a wireless mobile communication system we may ensure a lower blocking probability for hand-over calls than for new call attempts by reserving the last idle channel (called the guard channel) for hand-over calls. 2
8.6.2 Virtual channel protection
In a service-integrated system it is necessary to protect all services mutually against each other and to guarantee a certain grade-of-service. This can be obtained by (a) a minimum allocation of bandwidth, which ensures a certain minimum service, and (b) a maximum allocation, which both allows for the advantages of statistical multiplexing and ensures that a single service does not dominate. This strategy has the fundamental product form, and the state probabilities are insensitive to the service time distribution. Also, the GoS is guaranteed not only on a link basis, but end-to-end.
8.7 Moe’s principle
Theorem 8.1 Moe’s principle: the optimal resource allocation is obtained by a simulta-neous balancing of marginal incomes and marginal costs over all sectors.
In this section we present the basic principles published by Moe in 1924. We consider a system with sectors which consume resources (equipment) to produce items (traffic). The problem can be split into two parts:
8.7. MOE’S PRINCIPLE 225
a. Given that a limited amount of resources is available, how should we distribute these among the sectors?
b. How many resources should be allocated in total?
The principles are applicable in general to all kinds of production. In our case the resources correspond to cables and switching equipment, and the production consists of carried traffic.
A sector may, for example, be a link to an exchange. The problem may be the dimensioning of links between a certain exchange and its neighbouring exchanges, to which there are direct connections. The problem then is:
a. How much traffic should be carried on each link, when a total fixed amount of traffic is carried?
b. How much traffic should be carried in total?
Question a is solved in Sec. 8.7.1 and question b in Sec. 8.7.2. We carry through the derivations for continuous variables because these are easier to work with. Similar derivations can be carried through for discrete variables, corresponding to a number of channels. This is Moe's principle (Jensen, 1950 [58]).
8.7.1 Balancing marginal costs
Let us assume that from a given exchange we have direct connections to k other exchanges. The cost of the connection to exchange i is assumed to be a linear function of the number of channels:
\[
C_i = c_{0i} + c_i \cdot n_i , \qquad i = 1, 2, \ldots, k . \qquad (8.5)
\]
The total cost of cables then becomes:
\[
C(n_1, n_2, \ldots, n_k) = C_0 + \sum_{i=1}^{k} c_i \cdot n_i , \qquad (8.6)
\]
where C0 is a constant.
The total carried traffic is a function of the number of channels:
\[
Y = f(n_1, n_2, \ldots, n_k) . \qquad (8.7)
\]
As we always operate with limited resources we will have:
\[
\frac{\partial f}{\partial n_i} = D_i f > 0 . \qquad (8.8)
\]
In a pure loss system D_i f corresponds to the improvement function, which is always positive for a finite number of channels because of the convexity of Erlang's B-formula.
We want to minimize C for a given total carried traffic Y :
\[
\min C \quad \text{given} \quad Y = f(n_1, n_2, \ldots, n_k) . \qquad (8.9)
\]
By applying the Lagrange multiplier (shadow price) ϑ and introducing G = C − ϑ · f, this is equivalent to:
\[
\min \; G(n_1, n_2, \ldots, n_k) = \min \; C(n_1, n_2, \ldots, n_k) - \vartheta \left[ f(n_1, n_2, \ldots, n_k) - Y \right] . \qquad (8.10)
\]
A necessary condition for the minimum solution is:
\[
\frac{\partial G}{\partial n_i} = c_i - \vartheta \cdot \frac{\partial f}{\partial n_i} = c_i - \vartheta \cdot D_i f = 0 , \qquad i = 1, 2, \ldots, k , \qquad (8.11)
\]

or

\[
\frac{1}{\vartheta} = \frac{D_1 f}{c_1} = \frac{D_2 f}{c_2} = \cdots = \frac{D_k f}{c_k} . \qquad (8.12)
\]
A necessary condition for the optimal solution is thus that the marginal increase of the carried traffic when adding a channel (the improvement function), divided by the cost of a channel, must be identical for all trunk groups (4.49).
By means of the second-order derivatives it is possible to establish sufficient conditions for the optimum, which is done in “Moe's Principle” (Jensen, 1950 [58]). The improvement functions we deal with will always fulfil these conditions.
If we also have different incomes g_i for the individual trunk groups (directions), then we have to include an additional weight factor, and in the result (8.12) we replace c_i by c_i/g_i.
8.7.2 Optimum carried traffic
Let us consider the case where the carried traffic, as a function of the number of channels (8.7), is Y. If we denote the revenue by R(Y) and the costs by C(Y) (8.6), then the profit becomes:
\[
P(Y) = R(Y) - C(Y) . \qquad (8.13)
\]
A necessary condition for optimal profit is:
\[
\frac{dP(Y)}{dY} = 0 \quad \Rightarrow \quad \frac{dR}{dY} = \frac{dC}{dY} , \qquad (8.14)
\]
i.e. the marginal income should be equal to the marginal cost.
Using:
\[
P(n_1, n_2, \ldots, n_k) = R\bigl(f(n_1, n_2, \ldots, n_k)\bigr) - \left( C_0 + \sum_{i=1}^{k} c_i \cdot n_i \right) , \qquad (8.15)
\]

the optimal solution is obtained for:

\[
\frac{\partial P}{\partial n_i} = \frac{dR}{dY} \cdot D_i f - c_i = 0 , \qquad i = 1, 2, \ldots, k , \qquad (8.16)
\]

which by using (8.12) gives:

\[
\frac{dR}{dY} = \vartheta . \qquad (8.17)
\]
The factor ϑ given by (8.12) is the ratio between the cost of one channel and the traffic which can be carried additionally if the link is extended by one channel. Thus we should add channels to the link until the marginal income equals the marginal cost ϑ (4.51).
Example 8.7.1: Optimal capacity allocation
We consider two links (trunk groups) where the offered traffic is 3 erlang and 15 erlang, respectively. The channels of the two systems have the same cost, and a total of 25 channels is available. How should we distribute the 25 channels between the two links?
From (8.12) we notice that the improvement functions should have the same value in the two directions. Therefore we proceed using a table:
         A1 = 3 erlang        A2 = 15 erlang
    n1   F1,n(A1)        n2   F1,n(A2)
     3   0.4201          17   0.4048
     4   0.2882          18   0.3371
     5   0.1737          19   0.2715
     6   0.0909          20   0.2108
     7   0.0412          21   0.1573
For n1 = 5 and n2 = 20 we use all 25 channels. This results in a congestion of 11.0% and 4.6%, respectively, i.e. higher congestion for the smaller trunk group. □
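The table can be reproduced, and the allocation automated, by a small greedy procedure. This is a sketch under the example's assumptions (equal channel costs, improvement function F_{1,n}(A) = A · (E_{1,n}(A) − E_{1,n+1}(A))); the function names are illustrative:

```python
# Illustrative sketch: greedy channel allocation by Moe's principle,
# reproducing Example 8.7.1 with equal channel costs.

def erlang_b(a, n):
    """Erlang's B-formula E_{1,n}(A) by the standard recursion."""
    b = 1.0
    for i in range(1, n + 1):
        b = a * b / (i + a * b)
    return b

def improvement(a, n):
    """Improvement function F_{1,n}(A) = A (E_{1,n}(A) - E_{1,n+1}(A)):
    the extra traffic carried when channel n+1 is added."""
    return a * (erlang_b(a, n) - erlang_b(a, n + 1))

def allocate(offered, total):
    """Add channels one at a time where the improvement (per unit cost,
    here equal costs) is largest -- the balancing condition (8.12)."""
    n = [0] * len(offered)
    for _ in range(total):
        j = max(range(len(offered)), key=lambda i: improvement(offered[i], n[i]))
        n[j] += 1
    return n

n1, n2 = allocate([3.0, 15.0], 25)     # gives n1 = 5, n2 = 20
```

Because the improvement function is strictly decreasing in n (convexity of Erlang's B-formula), the greedy choice coincides with the balanced optimum of (8.12).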
Example 8.7.2: Triangle optimization
This is a classical optimization of a triangle network using alternative traffic routing (Fig. 8.1). From A to B we have a traffic demand equal to A erlang. The traffic is partly carried on the direct route (primary route) from A to B, and partly on an alternative route (secondary route) A → T → B, where T is a transit exchange. There are no other routing possibilities. The cost of a direct connection is cd, and that of a secondary connection is ct.
How much traffic should be carried in each of the two directions? The route A → T → B already carries traffic to and from other destinations, and we denote the marginal utilization of a channel
on this route by a. We assume that it is independent of the additional traffic which is blocked from A → B.
According to (8.12), the minimum conditions become:
\[
\frac{F_{1,n}(A)}{c_d} = \frac{a}{c_t} .
\]
Here, n is the number of channels on the primary route. This means that the costs should be the same whether we route an “additional” call via the direct route or via the alternative route. If one route were cheaper than the other, then we would route more traffic in the cheaper direction. □
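The balancing condition of the example suggests a simple sizing rule for the direct route: keep adding direct channels while an extra direct channel carries more traffic per unit cost than a channel on the alternative route. This is a sketch with made-up parameter values (cost ratio and marginal utilization a are illustrative):

```python
# Illustrative sketch of Example 8.7.2: size the direct route A-B by the
# marginal condition F_{1,n}(A)/cd versus a/ct.

def erlang_b(a, n):
    """Erlang's B-formula E_{1,n}(A) by the standard recursion."""
    b = 1.0
    for i in range(1, n + 1):
        b = a * b / (i + a * b)
    return b

def direct_route_size(traffic, cd, ct, a_marginal):
    """Smallest n at which F_{1,n}(traffic)/cd has dropped to a/ct or below;
    the improvement function is strictly decreasing, so the loop stops."""
    n = 0
    while (traffic * (erlang_b(traffic, n) - erlang_b(traffic, n + 1)) / cd
           > a_marginal / ct):
        n += 1
    return n

n_direct = direct_route_size(traffic=10.0, cd=1.0, ct=2.0, a_marginal=0.8)
```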
As the traffic values applied as a basis for dimensioning are obtained by traffic measurements, they are encumbered with unreliability due to a limited sample, a limited measuring period, the measuring principle, etc. As shown in Chap. 13, the unreliability is approximately inversely proportional to the measured traffic volume. By measuring over the same time period for all links, we get the highest uncertainty for small links (trunk groups), which is partly compensated by the above-mentioned overload sensitivity, which is smallest for small trunk groups. As a representative value we typically choose the measured mean value plus the standard deviation multiplied by a constant, e.g. 1.0.
Furthermore, it should be emphasized that we dimension the network for the traffic which is to be carried 1–2 years from now. The value used for dimensioning is thus additionally encumbered by forecast uncertainty. We have not included the fact that part of the equipment may be out of operation because of technical failures.
ITU–T recommends that the traffic is measured during all busy hours of the year, and that we choose n such that, using the mean value of the 30 largest and of the 5 largest observations, respectively, we obtain the following blocking probabilities:

\[
E_n\!\left(\bar{A}_{30}\right) \le 0.01 , \qquad E_n\!\left(\bar{A}_{5}\right) \le 0.07 . \qquad (8.18)
\]
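A sketch (not from the recommendation itself) of how the smallest n satisfying both criteria of (8.18) could be found; the traffic values are made-up examples:

```python
# Illustrative sketch: smallest n meeting both ITU-T criteria of (8.18).
# a30 and a5 are the mean values of the 30 and 5 largest busy-hour
# traffic observations.

def erlang_b(a, n):
    """Erlang's B-formula E_{1,n}(A) by the standard recursion."""
    b = 1.0
    for i in range(1, n + 1):
        b = a * b / (i + a * b)
    return b

def dimension(a30, a5, n_max=10_000):
    """Return the smallest n with E_n(a30) <= 0.01 and E_n(a5) <= 0.07."""
    for n in range(1, n_max + 1):
        if erlang_b(a30, n) <= 0.01 and erlang_b(a5, n) <= 0.07:
            return n
    raise ValueError("no n <= n_max satisfies the criteria")

n = dimension(a30=25.0, a5=30.0)
```

Since Erlang's B-formula decreases in n, both criteria are monotone and the first n found is the minimal one.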
The above service criteria can be applied directly to the individual trunk groups. In practice, we aim at a blocking probability from A-subscriber to B-subscriber which is the same for all types of calls. With stored program controlled exchanges the trend is a continuous supervision of the traffic on all expensive and international routes.
In conclusion, we may say that the traffic value used for dimensioning is encumbered with uncertainty. In large trunk groups the application of a non-representative traffic value may have serious consequences for the grade-of-service level. In recent years there has been an increasing interest in adaptive traffic-controlled routing (traffic network management), which can be introduced in stored program controlled digital systems. With this technology we may in principle choose the optimal strategy for traffic routing under any traffic scenario.
Chapter 9
Markovian queueing systems
In this chapter we consider traffic to a system with n identical servers, full accessibility, and a queue with an infinite number of waiting positions. When all n servers are busy, an arriving customer joins the queue and waits until a server becomes idle. No customer can be waiting in the queue when a server is idle (full accessibility). We consider the same two traffic models as in Chaps. 4 & 5.
1. Poisson arrival process (an infinite number of sources) and exponentially distributed service times (PCT-I). This is the most important queueing system, called Erlang's delay system. In this system the carried traffic equals the offered traffic, as no customers are blocked. The probability of delay, mean queue length, mean waiting time, carried traffic per channel, and improvement functions are dealt with in Sec. 9.2. In Sec. 9.3 Moe's principle is applied for optimizing the system. The waiting time distribution is derived for the basic queueing discipline, First-Come First-Served (FCFS), in Sec. 9.4. In Sec. 9.5 we summarize the results for the important single-server system M/M/1.
2. A limited number of sources and exponentially distributed service times (PCT-II). This is Palm's machine repair model (the machine interference problem), which is dealt with in Sec. 9.6. This model is widely applied for dimensioning of computer systems, terminal systems, flexible manufacturing systems (FMS), etc. Palm's machine repair model is optimized in Sec. 9.7. The waiting time distribution for Palm's model with FCFS queueing discipline is derived in Sec. 9.8.
9.1 Erlang’s delay system M/M/n
Let us consider a queueing system M/M/n with Poisson arrival process (M), exponential service times (M), n servers, and an infinite number of waiting positions. The state of the system is defined as the total number of customers in the system (either being served or
Figure 9.1: State transition diagram of the M/M/n delay system having n servers and an unlimited number of waiting positions. The arrival rate is λ in every state; the departure rate is i · µ in state i for i ≤ n and n · µ for i > n.
waiting in the queue). We are interested in the steady state probabilities of the system. By the procedure described in Sec. 4.4 we set up the state transition diagram shown in Fig. 9.1. Assuming statistical equilibrium, the cut equations become:
\[
\begin{aligned}
\lambda \cdot p(0) &= \mu \cdot p(1) , \\
\lambda \cdot p(1) &= 2\mu \cdot p(2) , \\
&\;\;\vdots \\
\lambda \cdot p(i) &= (i+1)\,\mu \cdot p(i+1) , \\
&\;\;\vdots \\
\lambda \cdot p(n-1) &= n\mu \cdot p(n) , \\
\lambda \cdot p(n) &= n\mu \cdot p(n+1) , \\
&\;\;\vdots \\
\lambda \cdot p(n+j) &= n\mu \cdot p(n+j+1) . \\
\end{aligned}
\qquad (9.1)
\]
As A = λ/µ is the offered traffic, we get:
\[
p(i) =
\begin{cases}
p(0) \cdot \dfrac{A^i}{i!} , & 0 \le i \le n , \\[2ex]
p(n) \cdot \left(\dfrac{A}{n}\right)^{\!i-n} = p(0) \cdot \dfrac{A^i}{n! \, n^{\,i-n}} , & i \ge n .
\end{cases}
\qquad (9.2)
\]
By normalization of the state probabilities we obtain p(0) :
\[
1 = \sum_{i=0}^{\infty} p(i) ,
\]

\[
1 = p(0) \cdot \left\{ 1 + A + \frac{A^2}{2!} + \cdots + \frac{A^n}{n!} \left( 1 + \frac{A}{n} + \frac{A^2}{n^2} + \cdots \right) \right\} .
\]
The innermost bracket contains a geometric series with ratio A/n. Statistical equilibrium is obtained only for:
A < n . (9.3)
Otherwise, the queue will continue to increase towards infinity. We obtain:
\[
p(0) = \frac{1}{\displaystyle\sum_{i=0}^{n-1} \frac{A^i}{i!} \; + \; \frac{A^n}{n!} \cdot \frac{n}{n-A}} , \qquad A < n , \qquad (9.4)
\]
and equations (9.2) and (9.4) yield the steady state probabilities p(i), i > 0.
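As an illustrative sketch (not part of the book), (9.2) and (9.4) can be evaluated directly; summing the geometric tail makes the normalization check exact:

```python
# Illustrative sketch: state probabilities of M/M/n from (9.2) and (9.4).

from math import factorial

def mmn_p0(a, n):
    """p(0) from (9.4); statistical equilibrium requires a < n."""
    assert a < n
    s = sum(a**i / factorial(i) for i in range(n))   # terms i = 0..n-1
    s += a**n / factorial(n) * n / (n - a)           # geometric tail
    return 1.0 / s

def mmn_p(a, n, i):
    """p(i) from (9.2)."""
    p0 = mmn_p0(a, n)
    if i <= n:
        return p0 * a**i / factorial(i)
    return p0 * a**n / factorial(n) * (a / n) ** (i - n)

# Normalization: states 0..n-1 plus the geometric tail from state n.
a, n = 8.0, 10
total = sum(mmn_p(a, n, i) for i in range(n)) + mmn_p(a, n, n) * n / (n - a)
```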
Figure 9.2: Erlang's C-formula for the delay system M/M/n. The probability E_{2,n}(A) of a positive waiting time is shown as a function of the offered traffic A for n = 1, 2, 5, 10, and 15 servers.
9.2 Traffic characteristics of delay systems
For evaluation of the performance of the system, several characteristics have to be considered. They are expressed in terms of the steady-state probabilities.
9.2.1 Erlang’s C-formula
The stationary Poisson arrival process is independent of the state of the system, and therefore the probability that an arbitrary arriving customer has to wait in the queue equals the proportion of time all servers are occupied (the PASTA property: Poisson Arrivals See Time Averages). The waiting time is a random variable denoted by W. For an arbitrary arriving customer we have:
\[
E_{2,n}(A) = P\{W > 0\}
= \frac{\displaystyle\sum_{i=n}^{\infty} \lambda \, p(i)}{\displaystyle\sum_{i=0}^{\infty} \lambda \, p(i)}
= \sum_{i=n}^{\infty} p(i)
= p(n) \cdot \frac{n}{n-A} . \qquad (9.5)
\]
Erlang’s C–formula (1917):
\[
E_{2,n}(A) = \frac{\dfrac{A^n}{n!} \cdot \dfrac{n}{n-A}}
{1 + A + \dfrac{A^2}{2!} + \cdots + \dfrac{A^{n-1}}{(n-1)!} + \dfrac{A^n}{n!} \cdot \dfrac{n}{n-A}} ,
\qquad A < n . \qquad (9.6)
\]
This probability of delay depends only on the offered traffic A = λ/µ, not on the parameters λ and µ individually. The formula has several names: Erlang's C-formula, Erlang's second formula, or Erlang's formula for waiting time systems. It appears with various notations in the literature:

\[
E_{2,n}(A) = D = D_n(A) = P\{W > 0\} .
\]
As customers are either served immediately or put into the queue, the probability that a customer is served immediately becomes:

\[
S_n = 1 - E_{2,n}(A) = p(0) + p(1) + \cdots + p(n-1) .
\]
The carried traffic Y equals the offered traffic A, as no customers are rejected and the arrival process is a Poisson process:
\[
\begin{aligned}
Y &= \sum_{i=1}^{n} i \cdot p(i) + \sum_{i=n+1}^{\infty} n \cdot p(i) \qquad\qquad (9.7) \\
  &= \sum_{i=1}^{n} \frac{\lambda}{\mu} \cdot p(i-1) + \sum_{i=n+1}^{\infty} \frac{\lambda}{\mu} \cdot p(i-1)
   = \frac{\lambda}{\mu} \cdot \sum_{i=0}^{\infty} p(i) , \\
Y &= \frac{\lambda}{\mu} = A .
\end{aligned}
\]
Here we have exploited the cut balance equation between state [i− 1] and state [i].
The queue length is a random variable L. The probability of having customers in the queue at a random point of time is:
\[
P\{L > 0\} = \sum_{i=n+1}^{\infty} p(i) = p(n) \cdot \frac{A/n}{1 - A/n} ,
\]

\[
P\{L > 0\} = \frac{A}{n-A} \cdot p(n) = \frac{A}{n} \cdot E_{2,n}(A) , \qquad (9.8)
\]
where we have used (9.5).
9.2.2 Numerical evaluation
Erlang’s C-formula (9.6) is similar to Erlang’s B-formula (4.10) except for the factor n/(n−A)in the last term. As we have very accurate recursive algorithm for numerical evaluation ofErlang’s B-formula (4.29) we use the following relationship for obtaining numerical values ofthe C-formula:
\[
E_{2,n}(A) = \frac{n \cdot E_{1,n}(A)}{n - A \left( 1 - E_{1,n}(A) \right)} = \frac{E_{1,n}(A)}{1 - y} , \qquad A < n , \qquad (9.9)
\]

where y is the carried traffic per channel in the corresponding loss system (4.13):

\[
y = \frac{A \left( 1 - E_{1,n}(A) \right)}{n} > 0 .
\]

We notice that:

\[
E_{1,n}(A) < E_{2,n}(A) .
\]
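A sketch of this numerical procedure: evaluate Erlang's B-formula by its recursion and then apply (9.9). Function names and the traffic values are illustrative:

```python
# Sketch: numerically stable Erlang C via the Erlang B recursion (4.29)
# and relation (9.9).

def erlang_b(a, n):
    """E_{1,n}(A): time congestion of the M/M/n loss system."""
    b = 1.0
    for i in range(1, n + 1):
        b = a * b / (i + a * b)
    return b

def erlang_c(a, n):
    """E_{2,n}(A): probability of delay in M/M/n, valid for a < n."""
    b = erlang_b(a, n)
    return n * b / (n - a * (1.0 - b))

d = erlang_c(8.0, 10)   # probability of delay for A = 8 erlang, n = 10
```

For n = 1 the formula reduces to E_{2,1}(A) = A, the server utilization of M/M/1, which is a convenient sanity check.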
..................................
.......................................
...............................................
..........................................................
...........................................................................
.....................................................................................................
..........................................................................................................................................
.........................................................................................................................................................................................................
............................................................................................................................................
Offered traffic A
E2(A)Utilization y
0.0010.0020.0050.01
0.02
0.05
0.1
0.2
0.5
Figure 9.3: The average utilization per channel y for a fixed probability of delay E2,n(A) asa function of the number of channels n.
For A ≥ n, we have E2,n(A) = 1 as all customers are delayed.
By using the general approach described in Sec. 4.4.1, we observe from the denominator of (9.6) that the terms for state [0] to state [n − 1] are the same as for Erlang's loss system. The last term, which includes all states from [n] to ∞, is obtained from the term of state [n − 1] by multiplying by

$$\frac{A}{n}\cdot\frac{n}{n-A} \;=\; \frac{A}{n-A}\,.$$
So a direct recurrence is obtained by using the recursion for Erlang-B up to state [n− 1] and
then finding E2,n(A) by the final step:

$$E_{2,n}(A) \;=\; \frac{\dfrac{A}{n-A}\,E_{1,n-1}(A)}{1+\dfrac{A}{n-A}\,E_{1,n-1}(A)}\,,$$

$$E_{2,n}(A) \;=\; \frac{A\,E_{1,n-1}(A)}{n-A\,\{1-E_{1,n-1}(A)\}}\,. \qquad (9.10)$$
Thus we use the same recursion as for the Erlang-B formula except for the last step. The two formulæ (9.9) and (9.10) are of course equivalent, but the latter requires one iteration less.
Erlang's C-formula may in an elegant way be expressed by the B-formula, as noticed by B. Sanders:

$$\frac{1}{E_{2,n}(A)} \;=\; \frac{1}{E_{1,n}(A)} - \frac{1}{E_{1,n-1}(A)}\,. \qquad (9.11)$$
Erlang's C-formula has been tabulated in many books and tables, i.a. in Moe's Principle (Jensen, 1950 [58]), and is shown in Fig. 9.2, Fig. 9.3, and Fig. 9.4. We notice that for a given value of E2,n(A), the utilization of each channel increases as the number of channels n increases (economy of scale).
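The recursion (9.10) and the identity (9.11) are easy to program. The following minimal Python sketch (the function names are mine, not from the book) computes Erlang B by its standard recursion and takes the single extra step for Erlang C:

```python
def erlang_b(A, n):
    """Erlang-B blocking E_{1,n}(A) by the standard recursion
    E_{1,0} = 1,  E_{1,i}(A) = A*E_{1,i-1} / (i + A*E_{1,i-1})."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def erlang_c(A, n):
    """Erlang-C delay probability via (9.10): run the B-recursion
    up to state n-1, then take the single final step."""
    if A >= n:
        return 1.0                        # all customers are delayed
    E1 = erlang_b(A, n - 1)
    return A * E1 / (n - A * (1.0 - E1))

# Sanders' identity (9.11): 1/E2,n = 1/E1,n - 1/E1,n-1
A, n = 8.0, 10
lhs = 1.0 / erlang_c(A, n)
rhs = 1.0 / erlang_b(A, n) - 1.0 / erlang_b(A, n - 1)
assert abs(lhs - rhs) < 1e-9
```

For A = 8 erlang and n = 10 channels this gives E2,10(8) ≈ 0.41, consistent with Fig. 9.4.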
9.2.3 Mean queue lengths
We distinguish between the queue length at an arbitrary point of time and the queue length when there are customers waiting in the queue.
Mean queue length at a random point of time
The queue length L at an arbitrary point of time is called the virtual queue length. This is also the queue length experienced by an arbitrary customer, as the PASTA property is valid due to the Poisson arrival process (time average = call average). We obtain the mean queue length Ln = E{L} at an arbitrary point of time from the state probabilities:

$$L_n \;=\; 0\cdot\sum_{i=0}^{n} p(i) \;+\; \sum_{i=n+1}^{\infty}(i-n)\,p(i)
\;=\; \sum_{i=n+1}^{\infty}(i-n)\,p(n)\left(\frac{A}{n}\right)^{\!i-n}$$
[Figure: E2,n(A) versus offered traffic per channel A/n for n = 1, 2, 5, 10, 20, 50, 100 servers.]

Figure 9.4: Erlang's C-formula for the delay system M/M/n. The probability E2,n(A) for a positive waiting time is shown as a function of the offered traffic A/n per channel for different values of the number of servers n. This figure is a re-scaling of Fig. 9.2.
$$L_n \;=\; p(n)\sum_{i=1}^{\infty} i\left(\frac{A}{n}\right)^{\!i}
\;=\; p(n)\,\frac{A}{n}\sum_{i=1}^{\infty}\frac{\partial}{\partial(A/n)}\left(\frac{A}{n}\right)^{\!i}.$$
As we have A/n ≤ c < 1, the series is uniformly convergent, and the differentiation operator may be put outside the summation:
$$L_n \;=\; p(n)\,\frac{A}{n}\,\frac{\partial}{\partial(A/n)}\left\{\frac{A/n}{1-(A/n)}\right\}
\;=\; p(n)\,\frac{A/n}{\{1-(A/n)\}^{2}}
\;=\; p(n)\cdot\frac{n}{n-A}\cdot\frac{A}{n-A}\,,$$

$$L_n \;=\; E_{2,n}(A)\cdot\frac{A}{n-A}\,. \qquad (9.12)$$
The average queue length is the traffic carried by the queueing positions, and therefore it is also called the waiting time traffic.
Mean queue length, given the queue is greater than zero
The time average is also in this case equal to the call average. The conditional mean queue length becomes:
$$L_{nq} \;=\; \frac{\displaystyle\sum_{i=n+1}^{\infty}(i-n)\,p(i)}{\displaystyle\sum_{i=n+1}^{\infty}p(i)}
\;=\; \frac{p(n)\cdot\dfrac{n}{n-A}\cdot\dfrac{A}{n-A}}{p(n)\cdot\dfrac{A}{n-A}}
\;=\; \frac{n}{n-A}\,. \qquad (9.13)$$
By applying (9.8) and (9.12), this is of course the same as:

$$L_{nq} \;=\; \frac{L_n}{p\{L>0\}}\,,$$
where L is the random variable for queue length.
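The relations above are easy to check numerically. In the sketch below the value of E2 is an assumed Erlang-C value for n = 10 channels and A = 8 erlang (not computed here); note that the ratio Ln/p{L > 0} equals n/(n − A) regardless of the value of E2:

```python
# Check of (9.12)-(9.13); E2 is an assumed delay probability for n = 10, A = 8.
n, A = 10, 8.0
E2 = 0.4092
Ln = E2 * A / (n - A)        # (9.12): mean queue length at a random point of time
pL = E2 * A / n              # p{L > 0} = sum of p(i) for i > n = E2 * A/n
Lnq = Ln / pL                # conditional mean queue length
assert abs(Lnq - n / (n - A)) < 1e-9   # (9.13): n/(n-A) = 5
```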
9.2.4 Mean waiting times
Also here two quantities are of interest: the mean waiting time W for all customers, and the mean waiting time w for the customers experiencing a positive waiting time. The first one is an indicator of the service level of the whole system, whereas the second one is of importance for the customers who are delayed. Time averages will be equal to call averages because of the PASTA property.
Mean waiting time W for all customers
Little's theorem tells us that the average queue length is equal to the arrival intensity multiplied by the mean waiting time:
Ln = λ ·Wn , (9.14)
where Ln = Ln(A) and Wn = Wn(A). Inserting Ln from (9.12) we get:

$$W_n \;=\; \frac{L_n}{\lambda} \;=\; \frac{1}{\lambda}\cdot E_{2,n}(A)\cdot\frac{A}{n-A}\,.$$
As A = λ s, where s = 1/µ is the mean service time, we get:

$$W_n \;=\; E_{2,n}(A)\cdot\frac{s}{n-A}\,. \qquad (9.15)$$
Mean waiting time w for delayed customers
The total waiting time is the same in both cases and may either be averaged over all customers (Wn) or only over the customers which experience a positive waiting time, wn (2.28):

$$W_n \;=\; w_n\cdot E_{2,n}(A)\,, \qquad (9.16)$$

$$w_n \;=\; \frac{s}{n-A}\,. \qquad (9.17)$$
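The relations (9.15)-(9.17), together with Little's law, can be verified in a few lines; the value of E2 below is assumed (an Erlang-C value for n = 10, A = 8), not computed:

```python
# Checks of (9.15)-(9.17) for n = 10 channels, A = 8 erlang, s = 1 time unit.
n, A, s = 10, 8.0, 1.0
E2 = 0.4092                   # assumed delay probability E_{2,n}(A)
wn = s / (n - A)              # (9.17): mean wait of delayed customers = 0.5
Wn = E2 * s / (n - A)         # (9.15): mean wait over all customers
assert abs(Wn - wn * E2) < 1e-12               # (9.16)
assert abs((A / s) * Wn - E2 * A / (n - A)) < 1e-12   # Little: lambda*Wn = Ln (9.12)
```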
Example 9.2.1: Mean waiting time w when A → 0
Notice that as A → 0, we get wn = s/n (9.17). If a customer experiences a waiting time (which seldom happens when A → 0), then this customer will be the only one in the queue. The customer must wait until a server becomes idle. This happens after an exponentially distributed time interval with mean value s/n. So wn never becomes less than s/n. 2
9.2.5 Improvement functions for M/M/n
The marginal improvement when we add one server can be expressed in several ways:
• The decrease in the proportion of total traffic (= the proportion of all customers) that experiences delay is given by:

$$F_{2,n}(A) \;=\; A\,\{E_{2,n}(A)-E_{2,n+1}(A)\}\,. \qquad (9.18)$$
• The decrease in mean queue length (traffic carried by the waiting positions) becomes:
FL,n(A) = Ln(A)− Ln+1(A) . (9.19)
• The decrease in mean waiting time Wn(A) for all customers:
$$F_{W,n}(A) \;=\; W_n(A)-W_{n+1}(A) \;=\; \frac{1}{\lambda}\,F_{L,n}(A)\,, \qquad (9.20)$$
where we have used Little's law (9.14). If we choose the mean service time as time unit, then λ = A. We consider Wn(A) below.
Both (9.18) and (9.19) are tabulated in Moe's Principle (Jensen, 1950 [58]) and are simple to evaluate by a calculator or computer.
9.3 Moe’s principle for delay systems
Moe first derived his principle for queueing systems. He studied the subscribers' waiting times for an operator at the manual exchanges of the Copenhagen Telephone Company.
Let us consider k independent queueing systems. A customer served by all k systems has the total average waiting time W = Σi Wi, where Wi is the mean waiting time of the i'th system, which has ni servers and is offered the traffic Ai. The cost of a channel is ci, possibly plus a constant cost, which is included in the constant C0 below. Thus the total cost of channels becomes:

$$C \;=\; C_0+\sum_{i=1}^{k} n_i\,c_i\,.$$
If the waiting time is also considered as a cost, then the total cost to be minimized becomes f = f(n1, n2, . . . , nk). This is to be minimized as a function of the number of channels ni in the individual systems. The allocation of channels to the individual systems is determined by:

$$\min f(n_1,n_2,\ldots,n_k) \;=\; \min\left\{C_0+\sum_i n_i\,c_i+\vartheta\left(\sum_i W_i - W\right)\right\}, \qquad (9.21)$$
where ϑ (theta) is Lagrange's multiplier (the shadow price).
As the ni are integers, a necessary condition for a minimum, which in this case can be shown also to be a sufficient condition, becomes:

$$0 < f(n_1,n_2,\ldots,n_i-1,\ldots,n_k)-f(n_1,n_2,\ldots,n_i,\ldots,n_k)\,,$$
$$0 \ge f(n_1,n_2,\ldots,n_i,\ldots,n_k)-f(n_1,n_2,\ldots,n_i+1,\ldots,n_k)\,, \qquad (9.22)$$
which corresponds to:

$$W_{n_i-1}(A_i)-W_{n_i}(A_i) > \frac{c_i}{\vartheta}\,,$$
$$W_{n_i}(A_i)-W_{n_i+1}(A_i) \le \frac{c_i}{\vartheta}\,, \qquad (9.23)$$
where $W_{n_i}(A_i)$ is given by (9.15).
Expressed by the improvement function for the waiting time FW,n(A) (9.20), the optimal solution becomes:

$$F_{W,n_i-1}(A_i) > \frac{c_i}{\vartheta} \ge F_{W,n_i}(A_i)\,, \qquad i=1,2,\ldots,k\,. \qquad (9.24)$$
The function FW,n(A) is tabulated in Moe's Principle (Jensen, 1950 [58]). Similar optimizations can be carried out for other improvement functions.
Example 9.3.1: Delay system
We consider two different M/M/n queueing systems. The first one has a mean service time of 100 s and the offered traffic is 20 erlang. The cost ratio c1/ϑ is equal to 0.01. The second system has a mean service time equal to 10 s and the offered traffic is 2 erlang. The cost ratio equals c2/ϑ = 0.1. A table of the improvement function FW,n(A) gives:

n1 = 32 channels and n2 = 5 channels.

The mean waiting times are:

W1 = 0.075 s,
W2 = 0.199 s.

This shows that a customer who is served by both systems experiences a total mean waiting time equal to 0.274 s, and that the system with fewer channels contributes more to the mean waiting time. 2
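The example can be reproduced numerically. A sketch (function names are mine), under the assumption that the cost ratios are compared against the queue-length improvement FL,n(A) = λ FW,n(A) with the mean service time as time unit, which reproduces the numbers of the example:

```python
def erlang_c(A, n):
    """Delay probability E_{2,n}(A): Erlang-B recursion up to state n-1,
    then the final step (9.10)."""
    E = 1.0
    for i in range(1, n):
        E = A * E / (i + A * E)
    return A * E / (n - A * (1.0 - E))

def FL(A, n):
    """Improvement in mean queue length (9.19): F_{L,n}(A) = L_n - L_{n+1}."""
    L = lambda m: erlang_c(A, m) * A / (m - A)   # (9.12)
    return L(n) - L(n + 1)

def dimension(A, cost_ratio):
    """Smallest n satisfying F_{L,n}(A) <= c_i/theta, cf. (9.24)."""
    n = int(A) + 1
    while FL(A, n) > cost_ratio:
        n += 1
    return n

n1 = dimension(20.0, 0.01)                       # 32 channels
n2 = dimension(2.0, 0.1)                         #  5 channels
W1 = erlang_c(20.0, n1) * 100.0 / (n1 - 20.0)    # about 0.075 s
W2 = erlang_c(2.0, n2) * 10.0 / (n2 - 2.0)       # about 0.199 s
```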
The cost of waiting is related to the cost ratio. By investing one monetary unit more in the above system, we reduce the costs by the same amount independent of the queueing system in which we increase the investment (capacity). We should go on investing more as long as we make a profit.

Moe's investigations during the 1920's showed that the mean waiting time for subscribers at small exchanges with few operators should be larger than the mean waiting time at larger exchanges with many operators.
9.4 Waiting time distribution for M/M/n, FCFS
Queueing systems in which the service discipline depends only upon the arrival times all have the same mean waiting time. In this case the strategy only influences the distribution of waiting times among the individual customers. The derivation of the waiting time distribution is simple in the case of an ordered queue, FCFS = First-Come First-Served. This discipline is also called FIFO, First-In First-Out. Customers arriving first at the system will be served first, but if there are multiple servers they may not necessarily leave the server first. So FIFO refers to the time of leaving the queue and initiating service.
Let us consider an arbitrary customer. Upon arrival at the system, the customer is either served immediately or has to wait in the queue (9.6).
We now assume that the customer considered has to wait in the queue, i.e. the system may be in state [n + k], (k = 0, 1, 2, . . .), where k is the number of occupied waiting positions just before the arrival of the customer.
Our customer has to wait until k + 1 customers have completed their service before an idle server becomes accessible. When all n servers are working, the system completes customers with a constant rate nµ, i.e. the departure process is a Poisson process with this intensity.
We exploit the relationship between the number representation and the interval representation (3.4): the probability p{W ≤ t} = F(t) of experiencing a positive waiting time less than or equal to t is equal to the probability that at least (k + 1) customers depart during the interval t in a Poisson departure process with intensity nµ (3.21):
$$F(t\mid k) \;=\; \sum_{i=k+1}^{\infty}\frac{(n\mu t)^{i}}{i!}\;e^{-n\mu t}\,. \qquad (9.25)$$
The above was based on the assumption that our customer has to wait in the queue. The conditional probability that our customer upon arrival observes all n servers busy and k waiting customers (k = 0, 1, 2, . . .) is:
$$p_w(k) \;=\; \frac{\lambda\,p(n+k)}{\lambda\displaystyle\sum_{i=0}^{\infty}p(n+i)}
\;=\; \frac{p(n)\left(\dfrac{A}{n}\right)^{\!k}}{p(n)\displaystyle\sum_{i=0}^{\infty}\left(\dfrac{A}{n}\right)^{\!i}}
\;=\; \left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{\!k}, \qquad k=0,1,\ldots\,. \qquad (9.26)$$
This is a geometric distribution including the zero class (Tab. 3.1). The unconditional waiting time distribution then becomes:
$$F(t) \;=\; \sum_{k=0}^{\infty}p_w(k)\,F(t\mid k)\,, \qquad (9.27)$$

$$F(t) \;=\; \sum_{k=0}^{\infty}\left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{\!k}\sum_{i=k+1}^{\infty}\frac{(n\mu t)^{i}}{i!}\,e^{-n\mu t}
\;=\; e^{-n\mu t}\sum_{i=1}^{\infty}\frac{(n\mu t)^{i}}{i!}\sum_{k=0}^{i-1}\left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{\!k},$$
as we may interchange the two summations when all terms are positive probabilities. The
inner summation is a geometric progression:

$$\sum_{k=0}^{i-1}\left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{\!k}
\;=\; \left(1-\frac{A}{n}\right)\sum_{k=0}^{i-1}\left(\frac{A}{n}\right)^{\!k}
\;=\; \left(1-\frac{A}{n}\right)\cdot\frac{1-(A/n)^{i}}{1-(A/n)}
\;=\; 1-\left(\frac{A}{n}\right)^{\!i}.$$
Inserting this we obtain:
$$F(t) \;=\; e^{-n\mu t}\sum_{i=1}^{\infty}\frac{(n\mu t)^{i}}{i!}\left\{1-\left(\frac{A}{n}\right)^{\!i}\right\}
\;=\; e^{-n\mu t}\left\{\sum_{i=0}^{\infty}\frac{(n\mu t)^{i}}{i!}-\sum_{i=0}^{\infty}\frac{(n\mu t)^{i}}{i!}\left(\frac{A}{n}\right)^{\!i}\right\}
\;=\; e^{-n\mu t}\left\{e^{n\mu t}-e^{A\mu t}\right\},$$

$$F(t) \;=\; 1-e^{-(n-A)\mu t}\,,$$

$$F(t) \;=\; 1-e^{-(n\mu-\lambda)t}\,, \qquad n>A\,,\; t>0\,, \qquad (9.28)$$
i.e. an exponential distribution.
Apparently we have a paradox: when arriving at a system with all servers busy one may:
1. Count the number k of waiting customers ahead. The total waiting time will then be Erlang-(k+1) distributed.
2. Close the eyes. Then the waiting time becomes exponentially distributed.
The interpretation of this is that a weighted sum of Erlang distributions with geometrically distributed weight factors is equivalent to an exponential distribution. In Fig. 9.6 the phase diagram for (9.27) is shown, and we notice immediately that it can be reduced to a single exponential distribution (Sec. 2.3.3 & Fig. 2.12). Formula (9.28) confirms that the mean waiting time wn for customers who have to wait in the queue becomes as shown in (9.17).
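This equivalence can also be checked numerically: summing the geometric weights (9.26) against the Erlang tails (9.25), truncated at K waiting customers (the truncation point K is my choice), reproduces the exponential distribution (9.28). A sketch:

```python
import math

def F_mixture(t, A, n, mu, K=200):
    """F(t) from (9.27): geometric weights (9.26) times the tails (9.25),
    truncated at K terms; Poisson terms are built incrementally."""
    rho = A / n
    rate = n * mu
    q = math.exp(-rate * t)      # Poisson probability of 0 departures in t
    cdf = q                      # running P{<= k departures}
    total = 0.0
    for k in range(K):
        w = (1.0 - rho) * rho**k        # p_w(k), geometric incl. the zero class
        total += w * (1.0 - cdf)        # F(t|k) = P{at least k+1 departures}
        q *= rate * t / (k + 1)
        cdf += q
    return total

A, n, mu, t = 8.0, 10, 1.0, 0.7
exact = 1.0 - math.exp(-(n - A) * mu * t)   # (9.28)
assert abs(F_mixture(t, A, n, mu) - exact) < 1e-9
```

With ρ = A/n = 0.8, the truncation error after 200 terms is of the order 0.8^200 and thus negligible.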
The waiting time distribution for all customers (i.e. an arbitrary customer) becomes (2.27):
$$F_s(t) \;=\; 1-E_{2,n}(A)\cdot e^{-(n-A)\mu t}\,, \qquad A<n\,,\; t\ge 0\,, \qquad (9.29)$$
and the mean value of this distribution is Wn, in agreement with (9.15). The results may be derived in an easier way by means of generating functions.
[Figure: waiting time density functions on a logarithmic scale for the disciplines SIRO, FCFS, and LCFS, A = 8, n = 10.]

Figure 9.5: Density function for the waiting time distribution for the queueing disciplines FCFS, LCFS, and SIRO (RANDOM). For all three cases the mean waiting time for delayed calls is 5 time-units. The form factor is 2 for FCFS, 3.33 for LCFS, and 10 for SIRO. The number of servers is 10 and the offered traffic is 8 erlang. The mean service time is s = 10 time-units.
9.5 Single server queueing system M/M/1
This is the system appearing most often in the literature. The state probabilities (9.2) are given by a geometric series:

$$p(i) \;=\; (1-A)\cdot A^{i}\,, \qquad i=0,1,2,\ldots\,, \qquad (9.30)$$

as p(0) = 1 − A. The mean value of the state probabilities is m1 = A/(1 − A).
The probability of delay becomes:

$$E_{2,1}(A) \;=\; A\,.$$
The mean queue length Ln (9.12) and the mean waiting time for all customers Wn (9.15)
[Figure: phase diagram showing that the geometrically weighted sum of Erlang-k distributions with phase rate nµ reduces to a single exponential distribution with rate nµ − λ.]

Figure 9.6: The waiting time distribution for M/M/n–FCFS becomes exponentially distributed with intensity (nµ − λ). The phase diagram to the left corresponds to a weighted sum of Erlang-k distributions (Sec. 2.3.3), as the termination rate out of all phases is nµ·(1 − A/n) = nµ − λ.
become:

$$L_1 = \frac{A^2}{1-A}\,, \qquad (9.31)$$

$$W_1 = \frac{A\,s}{1-A}\,. \qquad (9.32)$$
From this we observe that an increase in the offered traffic results in an increase of L1 by the third power, independent of whether the increase is due to an increased number of customers (λ) or an increased service time (s). The mean waiting time W1 increases by the third power of s, but only by the second power of λ. The mean waiting time w1 for delayed customers increases with the second power of s, and the first power of λ. An increased load due to more customers is thus better than an increased load due to longer service times. Therefore, it is important that the service times of a system do not increase during overload.
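A quick numerical sketch of this effect (not from the book; the rates are chosen only for illustration), evaluating W1 = A·s/(1−A) from (9.32) with A = λs for a doubled arrival rate versus a doubled service time:

```python
# Mean waiting time in M/M/1: W1 = A*s/(1 - A), Eq. (9.32), with A = lam*s.
def w1(lam, s):
    A = lam * s
    assert A < 1, "stability requires A < 1"
    return A * s / (1 - A)

base = w1(lam=0.2, s=1.0)         # A = 0.2 -> W1 = 0.25
more_calls = w1(lam=0.4, s=1.0)   # doubled arrival rate: A = 0.4
longer_jobs = w1(lam=0.2, s=2.0)  # doubled service time: A = 0.4

# Same offered traffic A = 0.4 in both cases, but doubling s also doubles
# the factor s in the numerator, so the waiting time is twice as large:
print(base, more_calls, longer_jobs)
```

Both changes give the same offered traffic, yet the longer service times produce twice the mean waiting time of the doubled arrival rate.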
Figure 9.7: State transition diagram for M/M/1.
9.5.1 Sojourn time for a single server
When there is only one server, the state probabilities (9.2) are given by a geometric series (9.30) for all i ≥ 0. Every customer spends an exponentially distributed time interval with intensity µ in every state. A customer who finds the system in state [ i ] shall stay in
9.6. PALM’S MACHINE REPAIR MODEL 245
the system an Erlang–(i+1) distributed time interval. Therefore, the sojourn time in the system (waiting time + service time), which is also called the response time, is exponentially distributed with intensity (µ − λ) (cf. Fig. 2.12):
$$F(t) = 1 - e^{-(\mu - \lambda)\,t}\,, \qquad \mu > \lambda\,, \quad t \ge 0\,. \qquad (9.33)$$
This is identical to the waiting time distribution of delayed customers. The mean sojourn time may be obtained directly using W1 from (9.32) and the mean service time s:
$$m_1 = W_1 + s = \frac{A\,s}{1-A} + s\,,$$

$$m_1 = \frac{s}{1-A} = \frac{1}{\mu - \lambda}\,, \qquad (9.34)$$
where µ = 1/s is the service rate. We notice that the mean sojourn time is equal to the mean waiting time for delayed customers (9.17). By Little's law, the mean sojourn time is also equal to the mean number of customers in the system (the mean of the state probabilities) divided by λ.
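The identity m1 = W1 + s = 1/(µ − λ) is easy to check numerically; a minimal sketch (the rates λ = 3 and µ = 5 are arbitrary):

```python
# M/M/1 mean sojourn time: m1 = W1 + s = s/(1-A) = 1/(mu - lam), Eq. (9.34).
lam, mu = 3.0, 5.0         # arrival rate and service rate (mu > lam)
s = 1 / mu                 # mean service time
A = lam * s                # offered traffic: A = 0.6 erlang

W1 = A * s / (1 - A)       # mean waiting time, Eq. (9.32): 0.3
m1 = W1 + s                # mean sojourn (response) time: 0.5

print(m1, 1 / (mu - lam))  # both equal 0.5 time units
```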
9.6 Palm’s machine repair model
This model belongs to the class of cyclic queueing systems and corresponds to a pure delay system with a limited number of customers (cf. the Engset case for loss systems).
The model was first considered by the Russian mathematician Gnedenko in 1933 and published in 1934. It became widely known when C. Palm published a paper in 1947 [93] in connection with a theoretical analysis of manpower allocation for servicing automatic machines. A number S of machines, which usually run automatically, are serviced by n repairmen. The machines may break down, and then they have to be serviced by a repairman before running again. The problem is to adjust the number of repairmen to the number of machines so that the total costs are minimized (or the profit optimized). The machines may be textile machines which stop when they run out of thread; the repairmen then have to replace the empty spool of a machine with a full one.
This Machine-Repair model or Machine Interference model was also considered by Feller (1950 [32]). The model corresponds to a simple closed queueing network and is successfully applied to solve traffic engineering problems in computer systems. Using Kendall's notation (Sec. 10.1), the queueing system is denoted M/M/n/S/S, where S is the number of customers and n is the number of servers.
The model is widely applicable. On the Web, the machines correspond to clients, whereas the repairmen correspond to servers. In computer terminal systems the machines correspond to terminals, and a repairman corresponds to a computer managing the terminals. In a
246 CHAPTER 9. MARKOVIAN QUEUEING SYSTEMS
computer system the machines may correspond to disc storages and the repairmen to input/output (I/O) channels. In the following we will consider a computer terminal system as the background for the development of the theory.
9.6.1 Terminal systems
Time division is an aid in offering optimal service to a large group of customers using, for example, terminals connected to a mainframe computer. The individual user should feel that he is the only user of the computer (Fig. 9.8).
Figure 9.8: Palm's machine-repair model. A computer system with S terminals (an interactive system) corresponds to a waiting time system with a limited number of sources.
The individual terminal alternates all the time between two states (Fig. 9.9):
• the user is thinking (working), or
• the user is waiting for a response from the computer.
The time interval during which the user is thinking is a random variable Tt with mean value mt. The time interval when the user is waiting for the response from the computer is called the response time R. This includes both the time interval Tw (mean value mw), during which the job waits for access to the computer, and the service time itself, Ts (mean value ms).
Tt + R is called the circulation time (Fig. 9.9). At the end of this time interval the terminal returns to the same state as it left at the beginning of the interval (recurrent event). In the following we are mainly interested in mean values, and the derivations are valid for all work-conserving queueing disciplines (Sec. 10.6.2).
Figure 9.9: The individual terminal may be in three different states. Either the user is working actively at the terminal (thinking), or he is waiting for response from the computer. The latter time interval (the response time) is divided into two phases: a waiting phase and a service phase.
9.6.2 State probabilities – single server
We now consider a system with S terminals connected to one computer (n = 1). The thinking time of each thinking terminal is for now assumed to be exponentially distributed with intensity γ = 1/mt, and the service (execution) time at the computer is also assumed to be exponentially distributed with intensity µ = 1/ms. When there is a queue at the computer, the terminals have to wait for service. Terminals being served or waiting in the queue have arrival intensity zero.
State [ i ] is defined as the state where there are i terminals in the queueing system (Fig. 9.8), i.e. the computer is either idle (i = 0) or working (i > 0), and (i−1) terminals are waiting when (i > 0).
The queueing system can be modeled by a pure birth & death process, and the state transition diagram is shown in Fig. 9.10. Statistical equilibrium always exists (the system is ergodic). The arrival intensity decreases as the queue length increases and becomes zero when all terminals are inside the queueing system.
The steady-state probabilities are found by applying cut equations to Fig. 9.10 and expressing all state probabilities in terms of p(S):
$$(S - i)\,\gamma \cdot p(i) = \mu \cdot p(i+1)\,, \qquad i = 0, 1, \ldots, S-1\,. \qquad (9.35)$$
By the additional normalization constraint that the sum of all probabilities must be equal to
Figure 9.10: State transition diagram for the queueing system shown in Fig. 9.8. State [ i ] denotes the number of terminals being either served or waiting, i.e. S − i denotes the number of terminals thinking.
one, we find, introducing ϱ = µ/γ:
$$p(S-i) = \frac{\varrho^i}{i!}\,p(S) = \frac{\varrho^i/i!}{\displaystyle\sum_{j=0}^{S} \varrho^j/j!}\,, \qquad i = 0, 1, \ldots, S\,, \qquad (9.36)$$

$$p(0) = E_{1,S}(\varrho)\,. \qquad (9.37)$$
This is the truncated Poisson distribution (4.9).
We may interpret the system as follows. A trunk group with S trunks (the terminals) is offered calls from the computer with exponentially distributed inter-arrival times (intensity µ). When all S trunks are busy (thinking), the computer is idle and the arrival intensity is zero, but we might just as well assume it still generates calls with intensity µ which are lost or overflow to another trunk group (the exponential distribution has no memory). The computer thus offers the traffic ϱ = µ/γ to the S trunks, and we obtain formula (9.37). Erlang's B-formula is valid for arbitrary holding times (Sec. 4.6.2), and therefore we have:
Theorem 9.1 The state probabilities (9.36) & (9.37) of the machine repair model with one computer and S terminals are valid for arbitrary thinking time distributions when the service times of the computer are exponentially distributed. Only the mean thinking time is of importance.
The ratio ϱ = µ/γ between the time a terminal on average is thinking, 1/γ, and the time the computer on average spends serving a terminal, 1/µ, is called the service ratio. The service ratio corresponds to the offered traffic A in Erlang's B-formula. The state probabilities are thus determined by the number of terminals S and the service ratio ϱ. The numerical evaluation of (9.36) & (9.37) is of course the same as for Erlang's B-formula (4.29).
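For numerical work, p(0) = E₁,S(ϱ) is best obtained from the standard Erlang-B recursion E₀ = 1, Eₓ = A·Eₓ₋₁/(x + A·Eₓ₋₁) (assuming this is the recursion denoted (4.29)); a minimal sketch, with S and ϱ chosen only as an illustration:

```python
# Erlang-B recursion: E_0 = 1, E_x = A*E_{x-1} / (x + A*E_{x-1}).
def erlang_b(A, n):
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

# Machine-repair model with one server: p(0) = E_{1,S}(rho), Eq. (9.37).
S, rho = 6, 5.0
p0 = erlang_b(rho, S)   # probability that the computer is idle
print(p0)               # ~0.1918
```

The recursion is numerically stable and needs only S multiplications, unlike the direct sum in (9.36), whose factorials overflow for large S.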
Example 9.6.1: Information system
We consider an information system which is organized as follows. All information is kept on 6 discs
which are connected to the same input/output data terminal, a multiplexer channel. The average seek time (positioning of the seek-arm) is 3 ms, and the average latency time to locate the file is 1 ms, corresponding to a rotation time of 2 ms. The time required for reading a file is exponentially distributed with a mean value of 0.8 ms. The disc storage is based on rotational positioning sensing, so that the channel is busy only during the reading. We want to find the maximum capacity of the system (number of requests per second).
The thinking time is 4 ms and the service time is 0.8 ms. The service ratio thus becomes 5, and Erlang's B-formula gives the value:
$$1 - p(0) = 1 - E_{1,6}(5) = 0.8082\,.$$
This corresponds to γmax = 0.8082/0.0008 = 1010 requests per second. This utilization cannot be exceeded. □
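The numbers in this example can be reproduced with the Erlang-B recursion; a sketch (the millisecond values are the ones stated in the example):

```python
# Example 9.6.1: S = 6 discs, mean thinking time 4 ms (3 ms seek + 1 ms
# latency), mean read (service) time 0.8 ms; the channel is the server.
def erlang_b(A, n):
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

mt, ms, S = 0.004, 0.0008, 6     # times in seconds
rho = mt / ms                    # service ratio: 5.0
util = 1 - erlang_b(rho, S)      # channel utilization: ~0.8082
gamma_max = util / ms            # max throughput: ~1010 requests/s
print(rho, util, gamma_max)
```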
9.6.3 Terminal states and traffic characteristics
The performance measures are easily obtained from the analogy with Erlang's classical loss system. Since p(0) = E₁,S(ϱ) by (9.37), the computer is working with probability 1 − E₁,S(ϱ). The average number of terminals being served by the computer (the utilization of the computer) is thus given by:
$$n_s = 1 - E_{1,S}(\varrho)\,. \qquad (9.38)$$
The average number of thinking terminals corresponds to the traffic carried in Erlang's loss system:
$$n_t = \frac{\mu}{\gamma}\,\bigl(1 - E_{1,S}(\varrho)\bigr) = \varrho\,\bigl(1 - E_{1,S}(\varrho)\bigr)\,. \qquad (9.39)$$
The average number of waiting terminals becomes:
$$n_w = S - n_s - n_t \qquad (9.40)$$
$$\phantom{n_w} = S - \bigl(1 - E_{1,S}(\varrho)\bigr) - \varrho\,\bigl(1 - E_{1,S}(\varrho)\bigr)$$
$$\phantom{n_w} = S - \bigl(1 - E_{1,S}(\varrho)\bigr)\,(1 + \varrho)\,. \qquad (9.41)$$
If we consider a random terminal at a random point of time, we get:
$$p\{\text{terminal served}\} = p_s = \frac{n_s}{S} = \frac{1 - E_{1,S}(\varrho)}{S}\,, \qquad (9.42)$$

$$p\{\text{terminal thinking}\} = p_t = \frac{n_t}{S} = \frac{\varrho\,\bigl(1 - E_{1,S}(\varrho)\bigr)}{S}\,, \qquad (9.43)$$

$$p\{\text{terminal waiting}\} = p_w = \frac{n_w}{S} = 1 - \frac{\bigl(1 - E_{1,S}(\varrho)\bigr)(1 + \varrho)}{S}\,. \qquad (9.44)$$
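A sketch putting (9.38)–(9.44) together (the values S = 6, ϱ = 5 are illustrative; the three mean numbers must add up to S, and the three probabilities to one):

```python
# Terminal-state measures from one Erlang-B evaluation, Eqs. (9.38)-(9.44).
def erlang_b(A, n):
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

S, rho = 6, 5.0
E = erlang_b(rho, S)

ns = 1 - E                       # mean number being served, Eq. (9.38)
nt = rho * (1 - E)               # mean number thinking,     Eq. (9.39)
nw = S - (1 - E) * (1 + rho)     # mean number waiting,      Eq. (9.41)

ps, pt, pw = ns / S, nt / S, nw / S   # Eqs. (9.42)-(9.44)
print(ns, nt, nw)
```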
We are also interested in the response time R, which has the mean value mr = mw + ms. By applying Little's theorem L = λW to the terminals, the waiting positions and the computer, respectively, we obtain (denoting the circulation rate of jobs by λ):
$$\frac{1}{\lambda} = \frac{m_t}{n_t} = \frac{m_w}{n_w} = \frac{m_s}{n_s} = \frac{m_r}{n_w + n_s}\,, \qquad (9.45)$$
or
$$m_r = \frac{n_w + n_s}{n_s} \cdot m_s = \frac{S - n_t}{n_s} \cdot m_s\,.$$
Making use of (9.45) and (9.38), i.e.

$$\frac{n_t}{n_s} = \frac{m_t}{m_s}\,,$$

we get:
$$m_r = \frac{S}{n_s} \cdot m_s - m_t\,,$$

$$m_r = \frac{S}{1 - E_{1,S}(\varrho)} \cdot m_s - m_t\,. \qquad (9.46)$$
Thus the mean response time is insensitive to the time distributions, as it is based on (9.38) and (9.45) (Little's law). However, E₁,S(ϱ) will depend on the types of distributions in the same way as the Erlang-B formula. If the service time of the computer is exponentially distributed (mean value ms = 1/µ), then E₁,S(ϱ) is given by (9.37). Fig. 9.11 shows the response time as a function of the number of terminals in this case.
If all time intervals are constant, the computer can work all the time, serving K terminals without any delay, when:

$$K = \frac{m_t + m_s}{m_s} = \varrho + 1\,. \qquad (9.47)$$
K is a suitable parameter to describe the point of saturation of the system. The average waiting time for an arbitrary terminal is obtained from (9.46):
$$m_w = m_r - m_s\,.$$
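A sketch combining (9.46) and (9.47), with ϱ = 30 as in Fig. 9.11 and the service time as the time unit; for large S the response time approaches the line S·ms − mt, which crosses zero at S = ϱ = 30:

```python
# Mean response time, Eq. (9.46): mr = S*ms/(1 - E_{1,S}(rho)) - mt.
def erlang_b(A, n):
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

ms, mt = 1.0, 30.0               # service time as time unit, so rho = 30
rho = mt / ms
K = rho + 1                      # saturation point, Eq. (9.47): 31 terminals

def mean_response(S):
    return S * ms / (1 - erlang_b(rho, S)) - mt

for S in (1, 10, 30, 50):
    print(S, mean_response(S))   # grows from ms towards the line S*ms - mt
```

A lone terminal never queues, so its mean response time is exactly one service time; well beyond the saturation point K the terminals effectively stand in line, and each extra terminal adds a full service time to the response.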
Example 9.6.2: Time-sharing computer
In a terminal system the computer sometimes becomes idle (waiting for terminals), and the terminals sometimes wait for the computer. Few terminals result in a low utilization of the computer, whereas many connected terminals waste the time of the users. Fig. 9.12 shows the waiting-time traffic in erlang, both for the computer and for a single terminal. An appropriate weighting by costs and summation of the waiting times for both the computer and for all terminals gives the total cost of waiting.
0 10 20 30 40 500
5
10
15
20
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
........
[Figure 9.11 plot: mean response time [µ⁻¹] versus number of terminals S, for ϱ = 30]
Figure 9.11: The actual average response time experienced by a terminal as a function of the number of terminals. The service factor is ϱ = 30. The average response time converges to a straight line, cutting the x-axis at S = 30 terminals. The average virtual response time for a system with S terminals is equal to the actual average response time for a system with S + 1 terminals (the arrival theorem, Theorem 5.1).
For the example in Fig. 9.12 we obtain the minimum total delay costs for about 45 terminals when the cost of waiting for the computer is one hundred times the cost of one terminal. At 31 terminals both the computer and each terminal spend 11.4 % of the time waiting. If the cost ratio is 31, then 31 is the optimal number of terminals. However, several other factors must also be taken into consideration. 2
Example 9.6.3: Traffic congestion
We may define the traffic congestion in the usual way (Sec. 1.9). The offered traffic is the traffic carried when there is no queue. The offered traffic per source is (5.10):

a = β / (1 + β) = ms / (mt + ms) .

The carried traffic per source is:

y = ms / (mt + mw + ms) .
252 CHAPTER 9. MARKOVIAN QUEUEING SYSTEMS
The traffic congestion becomes:

C = (a − y) / a = 1 − (mt + ms) / (mt + mw + ms) = mw / (mt + mw + ms) = pw .
In this case, with a finite number of sources, the traffic congestion equals the proportion of time spent waiting. For Erlang's waiting time system the traffic congestion is zero, because all offered traffic is carried. 2
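As a quick numerical check of the identity C = pw, the formulas above can be evaluated directly; the mean-time values below are illustrative only (not taken from the text):

```python
# Traffic congestion C for Palm's machine-repair model, from the mean
# thinking (mt), waiting (mw), and service (ms) times of a terminal.
# Illustrative values, chosen only for the demonstration.
mt, mw, ms = 30.0, 3.0, 1.0

a = ms / (mt + ms)         # offered traffic per source (5.10)
y = ms / (mt + mw + ms)    # carried traffic per source
C = (a - y) / a            # traffic congestion

pw = mw / (mt + mw + ms)   # proportion of time spent waiting
print(C, pw)               # the two values coincide, C = pw
```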
[Figure 9.12 plot: waiting time traffic [erlang] versus number of terminals S; one curve for the computer and one per terminal, ϱ = 30]
Figure 9.12: The waiting time traffic (the proportion of time spent waiting), measured in erlang, for the computer and for the terminals, respectively, in an interactive queueing system (service factor ϱ = 30).
9.6. PALM’S MACHINE REPAIR MODEL 253
9.6.4 Machine–repair model with n servers
The above model is easily generalized to n computers. The state transition diagram is shown in Fig. 9.13.
[Figure 9.13 diagram: birth-and-death chain over states 0, 1, . . . , n−1, n, n+1, . . . , S, with arrival rates Sγ, (S−1)γ, . . . , (S−n+1)γ, (S−n)γ, . . . , γ and departure rates µ, 2µ, . . . , (n−1)µ, nµ, nµ, . . . , nµ]
Figure 9.13: State transition diagram for the machine-repair model with S terminals and n computers.
The steady-state probabilities become:

p(i) = C(S, i) · (γ/µ)^i · p(0) , 0 ≤ i ≤ n ,

p(i) = ( (S−n)! / (S−i)! ) · ( γ/(nµ) )^(i−n) · p(n) , n ≤ i ≤ S , (9.48)

where C(S, i) denotes the binomial coefficient, and we have the normalization constraint:

Σ_{i=0}^{S} p(i) = 1 . (9.49)
We can show that the state probabilities are insensitive to the thinking time distribution, as in the case with one computer (we get a state-dependent Poisson arrival process).
An arbitrary terminal is, at a random point of time, in one of three possible states:

ps = P{the terminal is being served by a computer},
pw = P{the terminal is waiting for service},
pt = P{the terminal is thinking}.
We have:

ps = (1/S) · ( Σ_{i=0}^{n} i · p(i) + Σ_{i=n+1}^{S} n · p(i) ) , (9.50)

pt = ps · µ/γ , (9.51)

pw = 1 − ps − pt . (9.52)
The mean utilization of the computers becomes:

α = (ps · S) / n = ns / n . (9.53)

The mean waiting time for a terminal becomes:

W = (pw / ps) · (1/µ) . (9.54)
Sometimes pw is called the loss coefficient of the terminals, and similarly (1 − α) is called the loss coefficient of the computers (Fig. 9.12).
Example 9.6.4: Numerical example of economy of scale
The following numerical examples illustrate that we obtain the highest utilization for large values of n (and S). Let us consider a system with S/n = 30 and µ/γ = 30 for an increasing number of computers (in this case pt = α).
n          1        2        4        8        16
ps      0.0289   0.0300   0.0307   0.0313   0.0316
pw      0.1036   0.0712   0.0477   0.0311   0.0195
pt      0.8675   0.8989   0.9215   0.9377   0.9489
α       0.8675   0.8989   0.9215   0.9377   0.9489
W [µ⁻¹] 3.5805   2.3754   1.5542   0.9945   0.6155
2
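A sketch of how (9.48)–(9.54) can be evaluated numerically (the function name and the birth-and-death recurrence formulation are my own); for n = 2 and µ/γ = 30 it reproduces the second column of the table above:

```python
def machine_repair(S, n, gamma, mu):
    """State probabilities and terminal measures for the machine-repair
    model with S terminals and n computers, cf. (9.48)-(9.54)."""
    p = [1.0]                                  # unnormalized p(0)
    for i in range(1, S + 1):
        # birth-and-death recurrence equivalent to (9.48):
        # arrival rate (S-i+1)*gamma, departure rate min(i, n)*mu
        p.append(p[-1] * (S - i + 1) * gamma / (min(i, n) * mu))
    total = sum(p)
    p = [x / total for x in p]                 # normalization (9.49)
    ps = sum(min(i, n) * p[i] for i in range(S + 1)) / S   # (9.50)
    pt = ps * mu / gamma                       # (9.51)
    pw = 1.0 - ps - pt                         # (9.52)
    W = pw / ps / mu                           # mean waiting time (9.54)
    return ps, pw, pt, W

ps, pw, pt, W = machine_repair(S=60, n=2, gamma=1/30, mu=1.0)
print(ps, pw, pt, W)   # roughly 0.0300, 0.0712, 0.8989, 2.3754
```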
9.7 Optimizing the machine-repair model
In this section we optimize the machine-repair model in the same way as Palm did in 1947. We have noticed that the model with a single repairman is identical to Erlang's loss system, which we optimized in Chap. 4. We will thus see that the same model can be optimized in several ways.

We consider a terminal system with one computer and S terminals, and we want to find an optimal value of S. We assume the following cost structure:
ct = cost per terminal per time unit a terminal is thinking,
cw = cost per terminal per time unit a terminal is waiting,
cs = cost per terminal per time unit a terminal is served,
ca = cost of the computer per time unit.
9.7. OPTIMIZING THE MACHINE-REPAIR MODEL 255
The cost of the computer is supposed to be independent of the utilization and is split uniformly among all terminals.
[Figure 9.14 plot: total costs c0 [×100] versus number of terminals S]
Figure 9.14: The machine-repair model. The total costs given in (9.58) are shown as a function of the number of terminals for a service ratio ϱ = 25 and a cost ratio r = 1/25 (cf. Fig. 4.7).
The outcome (product) of the process is a certain thinking time at the terminals (production time).

The total cost c0 per time unit a terminal is thinking (producing) becomes:

pt · c0 = pt · ct + ps · cs + pw · cw + (1/S) · ca . (9.55)
We want to minimize c0. The service ratio ϱ = mt/ms is equal to pt/ps. Introducing the cost ratio r = cw/ca, we get:

c0 = ct + (ps/pt) · cs + ( pw · cw + (1/S) · ca ) / pt

   = ct + (1/ϱ) · cs + ca · ( r · pw + 1/S ) / pt , (9.56)
which is to be minimized as a function of S. Only the last term depends on the number of terminals, so we minimize:

min_S { (r · pw + 1/S) / pt } = min_S { (r · (nw/S) + 1/S) / (nt/S) }

= min_S { (r · nw + 1) / nt } (9.57)

= min_S { ( r · [S − (1 − E1,S(ϱ))(1 + ϱ)] + 1 ) / ( (1 − E1,S(ϱ)) · ϱ ) }

= min_S { (r · S + 1) / ( (1 − E1,S(ϱ)) · ϱ ) } − r · (1 + 1/ϱ) , (9.58)

where E1,S(ϱ) is Erlang's B-formula (9.36).
We notice that the minimum is independent of ct and cs, and that only the ratio r = cw/ca appears. The numerator corresponds to (4.47), whereas the denominator corresponds to the carried traffic in the corresponding loss system. Thus we minimize the cost per carried erlang in that loss system. An example is shown in Fig. 9.14. We notice that the result deviates from the result obtained by using Moe's principle for Erlang's loss system (Fig. 4.7), where we optimize the profit.
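A sketch of a numerical search for the optimum in (9.58), using the standard Erlang B recursion; the parameters mirror Fig. 9.14 (ϱ = 25, r = 1/25), and the function names are my own:

```python
def erlang_b(n, A):
    """Erlang's B-formula E_{1,n}(A) by the standard recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def relative_cost(S, rho, r):
    """The S-dependent part of (9.58): (r*S + 1) / ((1 - E_{1,S}(rho)) * rho)."""
    return (r * S + 1.0) / ((1.0 - erlang_b(S, rho)) * rho)

rho, r = 25.0, 1.0 / 25.0          # service ratio and cost ratio of Fig. 9.14
costs = {S: relative_cost(S, rho, r) for S in range(1, 61)}
S_opt = min(costs, key=costs.get)  # number of terminals minimizing (9.58)
print(S_opt, costs[S_opt])
```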
9.8 Waiting time distribution for M/M/n/S/S–FCFS
We consider a finite-source system with S sources and n channels. Both thinking times and service times are assumed to be exponentially distributed, with rates γ and µ, respectively. Due to the arrival theorem, an arriving call observes the state probabilities of a system with S−1 sources. We renumber the states so that the state is defined as the number of thinking sources. We denote the state probabilities of a system with S−1 sources by:
pS−1(i) , i = 0, 1, . . . , S − 1 . (9.59)
9.8. WAITING TIME DISTRIBUTION FOR M/M/N/S/S–FCFS 257
[Figure 9.15 diagrams: three birth-and-death chains; upper: states 0, . . . , S (system with S sources); middle: states 0, . . . , S−1 (system with S−1 sources), with birth rates nµ, . . . , nµ, (n−1)µ, . . . , µ and death rates γ, 2γ, . . . , (S−1)γ; lower: states 0, . . . , S−n−1 (Erlang loss subset), with birth rate nµ and death rates γ, 2γ, . . . , (S−n−1)γ]
Figure 9.15: The upper part shows the state transition diagram for the machine-repair model with S terminals and n computers, where the state of the system is defined as the number of thinking customers (cf. Fig. 9.13). The middle diagram shows the same model with S−1 sources, i.e. the states seen by an arriving customer according to the arrival theorem. The lower part shows the subset of states from state [0] to state [S−n−1], which corresponds to the diagram of an Erlang loss system with S−n−1 channels.
The probability of delay pw and the probability of immediate service px (pw + px = 1) become:

pw = Σ_{i=0}^{S−n−1} pS−1(i) , (9.60)

px = Σ_{i=S−n}^{S−1} pS−1(i) . (9.61)
We consider only delayed calls: an arriving call observes a system with S−1 sources and is delayed if it observes one of the states 0, 1, . . . , S−n−1 (9.60), where all servers are occupied. This part of the state transition diagram corresponds to an Erlang loss system with arrival rate nµ, service rate γ, i.e. an offered traffic A = nµ/γ, and S−n−1 servers. These probabilities may be calculated accurately as described in Sec. 4.4. Thus these conditional state probabilities are given by the truncated Poisson distribution (4.9):
pS−1,w(i) = ( A^i / i! ) / ( 1 + A + A²/2! + · · · + A^(S−1−n)/(S−1−n)! ) , i = 0, 1, . . . , S−1−n , (9.62)

where A = nµ/γ. The state probabilities (9.62) are a subset of (9.59). In state [0] no customers arrive, as they are all waiting or being served. In state [1] all servers are busy and S−n−2 customers are waiting, so the waiting time of an arriving customer is Erlang-(S−n−1) distributed. In state [S−n−1] all servers are busy but no one is waiting, so the waiting time is Erlang-1 distributed. In general, in state i (0 ≤ i ≤ S−1−n) the waiting time is Erlang-(S−n−i) distributed.
The Erlang-k distribution with intensity nµ is (3.21):

Fk(t) = ∫₀ᵗ ( (nµx)^(k−1) / (k−1)! ) · nµ · e^(−nµx) dx

= Σ_{j=k}^{∞} ( (nµt)^j / j! ) · e^(−nµt)

= 1 − Σ_{j=0}^{k−1} ( (nµt)^j / j! ) · e^(−nµt) . (9.63)
Thus for a given value t we can calculate the distribution function Fk(t) by calculating the first k terms (0, 1, . . . , k−1) of a Poisson distribution with parameter nµt. For small mean values this can be done directly. For large mean values it can be done in a numerically stable way, as shown for example in Example 4.4.1. The mean value of this Erlang-k distribution is k/(nµ).
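The Poisson-sum form of (9.63) translates directly into code; a minimal sketch (function name my own), suitable for small and moderate values of nµt:

```python
import math

def erlang_k_cdf(k, rate, t):
    """F_k(t) for an Erlang-k distribution of intensity `rate` (here n*mu),
    via the Poisson tail of (9.63): 1 - sum_{j<k} (rate*t)^j/j! * exp(-rate*t)."""
    x = rate * t
    term, acc = math.exp(-x), 0.0
    for j in range(k):          # accumulate the first k Poisson terms
        acc += term
        term *= x / (j + 1)
    return 1.0 - acc

print(erlang_k_cdf(1, 2.0, 1.0))   # k = 1 reduces to 1 - exp(-2)
```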
The compound waiting time distribution for delayed customers is obtained by summation over all states:

Fw(t) = Σ_{i=0}^{S−1−n} pS−1,w(i) · FS−n−i(t) , (9.64)

where pS−1,w(i) is given by (9.62) and Fk(t) by (9.63). Both can be calculated accurately, and thus the waiting time distribution is obtained from a finite number of terms.
The mean waiting time w for a delayed customer becomes:

w = Σ_{i=0}^{S−n−1} pS−1,w(i) · (S−n−i)/(nµ)

= (S−n)/(nµ) − (1/(nµ)) · Σ_{i=0}^{S−n−1} pS−1,w(i) · i ,

w = ( (S−n) − Y ) / (nµ) , (9.65)

where Y is the traffic carried in the above Erlang loss system (9.62) with S−n−1 servers. The mean waiting time for all customers then becomes:

W = pw · w , (9.66)
where pw is given above (9.60).
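A sketch combining the arrival-theorem chain of Fig. 9.15 with (9.60), (9.65), and (9.66) (function names my own); with the parameters of Example 9.8.1 below it reproduces pw = 0.833105, w = 2.851280, and W = 2.375414:

```python
def erlang_b(n, A):
    """Erlang's B-formula E_{1,n}(A) by the standard recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def finite_source_wait(S, n, mu, gamma):
    """Probability of delay (9.60), conditional mean waiting time (9.65),
    and mean waiting time for all customers (9.66) in M/M/n/S/S-FCFS."""
    # Chain with S-1 sources (arrival theorem); state i = number of
    # thinking sources, birth rate min(n, S-1-i)*mu, death rate i*gamma.
    p = [1.0]
    for i in range(1, S):
        p.append(p[-1] * min(n, S - i) * mu / (i * gamma))
    pw = sum(p[:S - n]) / sum(p)            # (9.60): all servers occupied
    A = n * mu / gamma                      # offered traffic of the loss system (9.62)
    Y = A * (1.0 - erlang_b(S - n - 1, A))  # carried traffic of that loss system
    w = ((S - n) - Y) / (n * mu)            # (9.65)
    return pw, w, pw * w                    # (9.66)

pw, w, W = finite_source_wait(S=60, n=2, mu=1.0, gamma=1/30)
print(pw, w, W)
```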
Example 9.8.1: Mean waiting times for (n, S, A) = (2, 60, 60)
We consider a system with n = 2 servers, S = 60 sources, and A = 60 erlang. We choose the mean service time as the time unit, 1/µ = 1. Thus 1/γ = 30 [time units]. From (9.65) we get:

w = ( (60 − 2) − 60 · (1 − E57(60)) ) / 2 = ( (60 − 2) − 60 · (1 − 0.128376) ) / 2 ,

w = 2.851280 [mean service times] .
An arriving customer is either delayed or served immediately. Above we considered the states (0, 1, . . . , 57) in which a customer is delayed. These state probabilities add to one when p(57) = E57(60) = 0.128376. We now express p(58) and p(59) in terms of p(57):

p(58) = p(57) · 60/58 = 0.132803 ,

p(59) = p(58) · 30/59 = 0.067527 .

Thus the state probabilities now add to 1.200330, and the normalized probabilities of delay and of immediate service become:

pw = 1 / 1.200330 = 0.833105 ,

px = 0.200330 / 1.200330 = 0.166895 .
The mean waiting time for all customers then becomes (9.66):

W = pw · w = 2.375414 [time units] ,
which is in agreement with Example 9.6.4. The circulation time becomes
tc = idle time + waiting time + service time
= 30 + 2.375414 + 1 = 33.375414 [time units] ,
and the state probabilities (time averages) (ps, pw, pt) become as given in Example 9.6.4. 2
Example 9.8.2: Mean waiting times for (n, S, A) = (2, 10, 10)
We consider a system with n = 2 servers, S = 10 sources, and A = 10 erlang. We choose the mean service time as the time unit, 1/µ = 1. Thus 1/γ = 5 [time units]. From (9.65) we get:

w = ( (10 − 2) − 10 · (1 − E7(10)) ) / 2 = ( (10 − 2) − 10 · (1 − 0.409041) ) / 2 ,

w = 1.045205 [mean service times] .
An arriving customer is either delayed or served immediately. Above we considered the states (0, 1, . . . , 7) in which a customer is delayed. These state probabilities add to one when p(7) = E7(10) = 0.409041. We now express p(8) and p(9) in terms of p(7):

p(8) = p(7) · 10/8 = 0.511301 ,

p(9) = p(8) · 5/9 = 0.284056 .

Thus the state probabilities now add to 1.795358, and the normalized probabilities of delay and of immediate service become:

pw = 1 / 1.795358 = 0.556992 ,

px = 0.795358 / 1.795358 = 0.443008 .
The mean waiting time for all customers then becomes (9.66):

W = pw · w = 0.582171 [time units] .
The circulation time becomes
tc = idle time + waiting time + service time
= 5 + 0.582171 + 1 = 6.582171 [time units] .
2
Chapter 10
Applied Queueing Theory
So far we have considered classical queueing systems, where all traffic processes are birth-and-death processes. They play a key role in queueing theory. The theory of loss systems has been applied successfully for many years within the field of telephony, whereas the theory of delay systems has been applied within the field of data and computer systems. To find a simple analytical solution we have to assume either a Poisson arrival process or exponentially distributed service times. In this chapter we mainly focus on the single-server queue.
In Sec. 10.1 we introduce Kendall's notation for queueing systems and describe queueing disciplines and priority systems. Sec. 10.2 mentions some general results and concepts such as Little's law, work conservation, and the load function. The important Pollaczek-Khintchine formula for M/G/1 is derived in Sec. 10.3, where we also list some results for the busy period and for moments of waiting time distributions. State probabilities of finite-buffer systems are obtained from infinite-buffer state probabilities by Keilson's formula.
The first paper on queueing theory was published by Erlang in 1908 and dealt with queueing systems with constant service time, M/D/n. This is more complex than Markovian systems. In Sec. 10.4 we deal with this system in detail and derive the state probabilities and the waiting time distribution for FCFS, expressed in terms of the state probabilities. A system with an Erlang-k arrival process, constant service time, and r servers is equivalent to a system with a Poisson arrival process, constant service time, and k · r servers. In Sec. 10.5 we consider single-server systems with exponential service times and general renewal arrival processes.
Sec. 10.6 considers several classes of customers with different priorities and different service time distributions. In Sec. 10.6.1 the parameters of the individual arrival processes and of the total arrival process are described. Kleinrock's conservation law is derived in Sec. 10.6.2. Mean waiting times assuming non-preemptive disciplines are derived in Sec. 10.6.6. As a special case we find the mean waiting time for the shortest-job-first queueing discipline (Sec. 10.6.4). Mean waiting times for the preemptive-resume queueing discipline are derived as well. Finally, we consider the round-robin and processor-sharing queueing disciplines in Sec. 10.7.
262 CHAPTER 10. APPLIED QUEUEING THEORY
10.1 Kendall’s classification of queueing models
In this section we shall introduce a compact notation for queueing systems, called Kendall's notation.
10.1.1 Description of traffic and structure
D.G. Kendall (1951 [69]) introduced the following notation for queueing models:
A/B/n

where

A = the arrival process,
B = the service time distribution,
n = the number of servers.
For traffic processes we use the following standard notations (cf. Sec. 2.4):
M ∼ Markov. Exponential time intervals (Poisson arrival process, exponentially distributed service times).
D ∼ Deterministic. Constant time intervals.
Ek ∼ Erlang-k distributed time intervals (E1 = M).
Hn ∼ Hyper-exponential of order n distributed time intervals.
Cox ∼ Cox-distributed time intervals.
Ph ∼ Phase-type distributed time intervals.
GI ∼ General Independent time intervals, renewal arrival process.
G ∼ General. Arbitrary distribution of time intervals (may include correlation).
Example 10.1.1: Ordinary queueing models
M/M/n : is a pure delay system with Poisson arrival process, exponentially distributed service times, and n servers. This is the classical Erlang delay system (Chap. 9).
GI/G/1 : is a general delay system with only one server. 2
The above mentioned notation is widely used in the literature. For a complete specification of a queueing system more information is required:
A/B/n/K/S/X
where:
K = the total capacity of the system, or sometimes only the number of waiting positions,
S = the population size (number of customers),
X = queueing discipline (Sec. 10.1.2).
K = n corresponds to a loss system, which is often denoted as A/B/n–Loss. A superscript b on A or B indicates group arrival (bulk arrival, batch arrival) or group service, respectively. An index c (clocked) may indicate that the system operates in discrete time. Full accessibility is usually assumed.
10.1.2 Queueing strategy: disciplines and organization
Customers waiting in a queue to be served can be selected for service according to manydifferent principles. We first consider the three classical queueing disciplines:
FCFS: First Come – First Served. This is also called an ordered queue, and it is the standard discipline in real life where customers are human beings. It is also denoted as FIFO: First In – First Out. Note that FIFO refers to the queue only, not to the total system. If we have more than one server, then a customer with a short service time may overtake a customer with a long waiting time even if we have a FIFO queue.
LCFS: Last Come – First Served. This corresponds to the stack principle. It is for instance used in warehouses, on shelves in shops, etc. This discipline is also denoted as LIFO: Last In – First Out.
SIRO: Service In Random Order. All customers waiting in the queue have the same probability of being chosen for service. This is also called RANDOM or RS (Random Selection).
The first two disciplines only take arrival times into consideration, while the third does not consider any criteria at all and thus requires no memory (contrary to the first two). All three can be implemented in simple technical systems. Within electro-mechanical telephone exchanges the queueing discipline SIRO was often used, as it corresponds (almost) to sequential hunting without homing. The total waiting time for all customers, and thus the mean waiting time, is the same for the three above-mentioned disciplines; the queueing discipline only decides how the waiting time is distributed among the customers. In, for example, a stored-program-controlled system there may be more complicated queueing disciplines. In queueing theory we in general assume that the total offered traffic is independent of the queueing discipline.
We often try to reduce the total waiting time. This can be done by using the service time ascriterion:
SJF: Shortest Job First (SJN = Shortest Job Next, SPF = Shortest Processing time First). This discipline assumes that we know the service time in advance, and it minimizes the total waiting time for all customers.
The above-mentioned disciplines take account of either the arrival times or the service times. A compromise between these disciplines is obtained with the following disciplines:
RR: Round Robin. A customer is served for at most a fixed interval (time slice or slot). If service is not completed during this interval, the customer returns to the queue, which is FCFS. When the time slice tends to zero we get:
PS: Processor Sharing. All customers share the service capacity equally.
FB: Foreground – Background. This discipline attempts to implement SJF without knowing the service times in advance. The server offers service to the customer who so far has received the least amount of service. When all customers have obtained the same amount of service, FB becomes identical to PS.
The last-mentioned disciplines are dynamic, as the selection of the next customer depends on the amount of service already received.
10.1.3 Priority of customers
In real life customers are often divided into N priority classes, where a customer belonging to class p has higher priority than a customer belonging to class p+1. We distinguish between two types of priority:
Non-preemptive = HOL: A new customer waits until a server becomes idle, even if a server is busy with a customer of lower priority. Furthermore, it waits until all customers of higher priority, and customers of the same priority that arrived earlier, have been served. This discipline is also called HOL = Head-Of-the-Line.
Preemptive: A customer being served is interrupted if a newly arriving customer has higher priority. We distinguish between:
– Preemptive resume = PR: the service is continued from where it was interrupted,
– Preemptive without re-sampling: the service restarts from the beginning with the same service time, and
– Preemptive with re-sampling: the service starts again with a new service time.
The two latter disciplines are applied in, for example, manufacturing systems and reliability. Within a single class, we have the disciplines mentioned in Sec. 10.1.2. In the queueing literature we meet many other strategies and symbols. GD denotes an arbitrary queueing discipline (general discipline).
The behavior of customers is also subject to modeling:
– Balking refers to queueing systems where customers may, with a queue-length dependent probability, give up joining the queue.
– Reneging or time-out refers to systems with impatient customers which abandon the queue without being served.
– Jockeying refers to systems where customers may jump from one (e.g. long) queue to another (e.g. shorter) queue to obtain service faster.
By combining all these options we get many possible models. In this chapter we shall only deal with the most important ones, and we mainly consider systems with one server.
Example 10.1.2: Stored Program Controlled (SPC) switching system
In SPC systems the tasks of the processors may, for example, be divided into ten priority classes. The priority is updated, for example, every 5th millisecond. Error messages from a processor have the highest priority, whereas routine control tasks have the lowest priority. Serving accepted calls has higher priority than detection of new call attempts. 2
10.2 General results in queueing theory

As mentioned earlier there are many different queueing models, but unfortunately only a few general results exist in queueing theory. The literature is very extensive, because many special cases are important in practice. In this section we look at the most important general results.
Little's theorem, presented in Sec. 3.3, is the most general result; it is valid for an arbitrary queueing system. The theorem is easy to apply and very useful in many cases.
Classical queueing models play a key role in queueing theory, because other systems often converge to them when the number of servers increases (Palm's theorem 3.1 in Sec. 3.7). The systems that deviate most from the classical models are those with a single server. However, these systems are also the simplest to deal with.
For waiting time systems we also distinguish between call averages and time averages. The virtual waiting time is the waiting time a customer would experience if it arrived at a random point of time (time average). The actual waiting time is the waiting time the real customers experience (call average). When we consider systems with FCFS queueing discipline and a Poisson arrival process, the virtual waiting time equals the actual waiting time due to the PASTA property: time averages are equal to call averages.
[Figure: the load function U(t) plotted against time; at the arrival epochs T1, . . . , T7 the function jumps by the service times s1, . . . , s7 and decreases linearly between arrivals.]
Figure 10.1: Load function U(t) for the single server queueing system GI/G/1.
10.2.1 Load function and work conservation
We introduce two concepts which are widely used in queueing theory.
Work conserving: A system is said to be work conserving if no server is idle when there is at least one job waiting, and if the service times are independent of the queueing discipline. This will not always be fulfilled in real systems: if the server is a human being, the service rate will often increase with the length of the queue, but after some time the server may become exhausted and the service rate decreases.
Load function: U(t) denotes the time it would take to serve all customers present in the system at time t (Fig. 10.1). At an arrival epoch U(t) jumps by the service time of the arriving customer, and between arrivals U(t) decreases with a slope depending on the number of working servers until it reaches zero, where it stays until the next arrival epoch. The mean value of the load function is denoted by U = E{U(t)}.

In a GI/G/1 queueing system U(t) is independent of the queueing discipline if the system is work conserving. For an FCFS queueing system the waiting time of a customer equals the value of the load function at its arrival epoch. If we denote the inter-arrival time T_{i+1} − T_i by a_i, then we have Lindley's equation:

U_{i+1} = max{0, U_i + s_i − a_i} ,   (10.1)

where U_i is the value of the load function at time T_i.
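Lindley's equation lends itself directly to simulation. The sketch below (function name and parameter values are our own, for illustration only) iterates (10.1) for an M/M/1 queue; for FCFS the waiting time of customer i equals U_i at its arrival epoch:

```python
import random

def lindley_waits(n, mean_service=1.0, mean_interarrival=1.25, seed=1):
    """Iterate Lindley's equation U_{i+1} = max(0, U_i + s_i - a_i) and
    return the FCFS waiting times of the first n customers (M/M/1 here)."""
    rng = random.Random(seed)
    u = 0.0
    waits = []
    for _ in range(n):
        waits.append(u)                               # wait = load at arrival
        s = rng.expovariate(1.0 / mean_service)       # service time s_i
        a = rng.expovariate(1.0 / mean_interarrival)  # inter-arrival time a_i
        u = max(0.0, u + s - a)
    return waits

# With A = 1.0/1.25 = 0.8 the simulated mean should approach the
# theoretical M/M/1 value W = A*s/(1-A) = 4 for long runs.
```

Exchanging the two `expovariate` calls for other samplers gives any GI/G/1 variant, since (10.1) makes no distributional assumptions.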
10.3 Pollaczek-Khintchine’s formula for M/G/1
In general, the mean waiting time for M/G/1 is given by:
Theorem 10.1 Pollaczek-Khintchine’s formula (1930–32):
W = A · s · ε / (2(1 − A)) ,   (10.2)

W = V / (1 − A) ,   (10.3)

where

V = A · (s/2) · ε = (λ/2) · m_2 .   (10.4)

W is the mean waiting time for all customers, s is the mean service time, A is the offered traffic, and ε is the form factor of the holding time distribution (2.13).
The more regular the service process is, the smaller the mean waiting time. The corresponding results for the arrival process are studied in Sec. 10.5. In real telephone traffic the form factor is often 4–6, in data traffic 10–100. Formula (10.2) is one of the most important results in queueing theory, and we will study it carefully. As a special case we have earlier derived the mean waiting time for M/M/1, where ε = 2 (Sec. 9.2.4). Later we consider M/D/1, where ε = 1 (Sec. 10.4).
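As a minimal numerical companion to (10.2), the sketch below (the function name is our own) evaluates the formula and encodes the two special cases mentioned in the text:

```python
def pk_mean_wait(A, s, eps):
    """Pollaczek-Khintchine mean waiting time (10.2) for M/G/1.

    A   : offered traffic A = lambda * s, must satisfy A < 1
    s   : mean service time
    eps : form factor of the service time distribution
          (1 for constant, 2 for exponential service times)
    """
    if not 0.0 <= A < 1.0:
        raise ValueError("the queue is stable only for 0 <= A < 1")
    return A * s * eps / (2.0 * (1.0 - A))

# M/M/1 (eps = 2) reproduces W = A*s/(1-A); M/D/1 (eps = 1) gives half of it.
```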
10.3.1 Derivation of Pollaczek-Khintchine’s formula
We consider the queueing system M/G/1 and wish to find the mean waiting time of an arbitrary customer. It is independent of the queueing discipline, and therefore we may in the following assume FCFS. Due to the Poisson arrival process (PASTA property) the actual waiting time of a customer is equal to the virtual waiting time.
The mean waiting time W for an arbitrary customer can be split up into two parts:
1. The average time it takes for the customer under service, if any, to be completed. When the new customer arrives at a random point of time, the mean residual service time is given by (2.34):

m_{1,r} = (s/2) · ε ,

where s and ε have the same meaning as in (10.2). As the arrival process is a Poisson process, the probability of finding a customer being served equals A, because for a single server system we always have p(0) = 1 − A (offered traffic = carried traffic). The contribution to the mean waiting time from a customer under service therefore becomes:

V = (1 − A) · 0 + A · (s/2) · ε = (λ/2) · m_2 .
2. The waiting time due to the customers already waiting in the queue (FCFS). On average the queue length is L. By Little's theorem we have:

L = λ · W ,

where L is the average number of customers in the queue at an arbitrary point of time, λ is the arrival intensity, and W is the mean waiting time we are looking for. For each customer in the queue we must on average wait s time units. The mean waiting time due to the customers in the queue therefore becomes:

L · s = λ · W · s = A · W .
Adding the two contributions, we obtain the total mean waiting time:

W = V + A · W ,

and therefore:

W = V / (1 − A) = A · s · ε / (2(1 − A)) ,

which is Pollaczek-Khintchine's formula (10.2). W is the mean waiting time for all customers, whereas the mean waiting time for delayed customers w becomes (A = D = the probability of delay, cf. (2.28)):

w = W / D = (s / (2(1 − A))) · ε .   (10.5)
The above derivation is correct, since the time average equals the call average when the arrival process is a Poisson process (PASTA property). It is interesting because it shows how ε enters into the formula.
10.3.2 Busy period for M/G/1
A busy period of a queueing system is the time interval from the instant all servers become busy until a server becomes idle again. For M/G/1 it is easy to calculate the mean value of a busy period.
At the instant the queueing system becomes empty, it has lost its memory due to the Poisson arrival process. These instants are regeneration points (equilibrium points), and the next event occurs according to a Poisson process with intensity λ.
We need only consider a cycle from the instant the server changes state from idle to busy until the next time it changes state from idle to busy. This cycle includes a busy period of duration T1 and an idle period of duration T0. Fig. 10.2 shows an example with constant service time. The proportion of time the system is busy then becomes:
[Figure: the state (idle/busy) of the single server plotted against time; arrivals are marked on the time axis and each service lasts h.]

Figure 10.2: Example of a sequence of events for the system M/D/1 with busy period T1 and idle period T0.
m_{T1} / (m_{T0} + m_{T1}) = A = λ · s .

From m_{T0} = 1/λ we get:

m_{T1} = s / (1 − A) .   (10.6)
During a busy period at least one customer is served.
10.3.3 Moments of M/G/1 waiting time distribution
If we only consider customers that are delayed, we are able to find the moments of the waiting time distribution for the classical queueing disciplines (Abate & Whitt, 1997 [1]).
FCFS : Denoting the i'th moment of the service time distribution by m_i, we can find the k'th moment of the waiting time distribution by the following recursion formula, where the mean service time is chosen as time unit (m_1 = s = 1):

m_{k,F} = (A / (1 − A)) · Σ_{j=1}^{k} C(k, j) · (m_{j+1} / (j+1)) · m_{k−j,F} ,   m_{0,F} = 1 ,   (10.7)

where C(k, j) denotes the binomial coefficient.
LCFS : From the above moments m_{k,F} of the FCFS waiting time distribution we can find the moments m_{k,L} of the LCFS waiting time distribution. The first three moments become:

m_{1,L} = m_{1,F} ,   m_{2,L} = m_{2,F} / (1 − A) ,   m_{3,L} = (m_{3,F} + 3 · m_{1,F} · m_{2,F}) / (1 − A)^2 .   (10.8)
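The recursion (10.7) is easy to program. The sketch below (our own naming) computes the FCFS moments and, from them, a LCFS moment via (10.8). As a check, exponential service with m_i = i! reproduces m_{1,F} = A/(1−A) and m_{2,F} = 2A/(1−A)^2:

```python
from math import comb, factorial

def wait_moments_fcfs(A, service_moments, K):
    """FCFS waiting time moments m_{k,F} of M/G/1 via recursion (10.7).

    A               : offered traffic (< 1)
    service_moments : [m_1, m_2, ..., m_{K+1}] with m_1 = s = 1 (time unit)
    K               : highest moment wanted
    Returns the list [m_{0,F}, m_{1,F}, ..., m_{K,F}].
    """
    m = [1.0] + list(service_moments)       # m[i] is the i'th service moment
    mF = [1.0]                              # m_{0,F} = 1
    for k in range(1, K + 1):
        total = sum(comb(k, j) * m[j + 1] / (j + 1) * mF[k - j]
                    for j in range(1, k + 1))
        mF.append(A / (1.0 - A) * total)
    return mF

# Exponential service times with mean 1 have m_i = i!
A = 0.5
mF = wait_moments_fcfs(A, [factorial(i) for i in range(1, 4)], 2)
m2L = mF[2] / (1.0 - A)        # second LCFS moment from eq. (10.8)
```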
10.3.4 Limited queue length: M/G/1/k
In real systems the queue length, for example the size of a buffer, will always be finite. Arriving customers are blocked when the buffer is full. In the Internet, for example, this strategy is applied in routers and is called the drop tail strategy. There exists a simple relation between the state probabilities p(i) (i = 0, 1, 2, . . .) of the infinite system M/G/1 and the state probabilities p_k(i) (i = 0, 1, 2, . . . , k) of M/G/1/k, where the total number of positions for customers is k, including the customer being served (Keilson, 1966 [67]):

p_k(i) = p(i) / (1 − A · Q_k) ,   i = 0, 1, . . . , k−1 ,   (10.9)

p_k(k) = (1 − A) · Q_k / (1 − A · Q_k) ,   (10.10)

where A < 1 is the offered traffic, and:

Q_k = Σ_{j=k}^{∞} p(j) .   (10.11)
There exist algorithms for calculating p(i) for arbitrary holding time distributions (M/G/1) based on embedded Markov chain analysis (Kendall, 1953 [70]); the same approach is used for GI/M/1.
We notice that p(i) only exists for A < 1, but for a finite buffer we also obtain statistical equilibrium for A > 1. In that case we cannot use the approach described in this section. For M/M/1/k we can use the finite state transition diagram, and for M/D/1/k we describe in Sec. 10.4.8 a simple approach which is applicable for general holding time distributions.
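For A < 1 the relations (10.9)–(10.11) are straightforward to apply once the infinite-buffer probabilities are available. The sketch below (names are ours) uses the known M/M/1 probabilities p(i) = (1−A)A^i as input, for which Keilson's relation reproduces the result (1−A)A^i/(1−A^{k+1}) known from the finite state transition diagram:

```python
def keilson_finite_buffer(p_inf, A, k):
    """State probabilities of M/G/1/k from the infinite-buffer
    probabilities via Keilson's relations (10.9)-(10.10).

    p_inf : function i -> p(i) of the infinite M/G/1 system
    A     : offered traffic (< 1)
    k     : total number of positions, including the server
    """
    Qk = 1.0 - sum(p_inf(j) for j in range(k))   # Q_k = sum_{j>=k} p(j)
    denom = 1.0 - A * Qk
    pk = [p_inf(i) / denom for i in range(k)]    # states 0..k-1, eq. (10.9)
    pk.append((1.0 - A) * Qk / denom)            # blocking state k, eq. (10.10)
    return pk

A, k = 0.8, 5
pk = keilson_finite_buffer(lambda i: (1.0 - A) * A ** i, A, k)
```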
10.4 Queueing systems with constant holding times
In this section we focus upon the queueing system M/D/n, FCFS. Systems with constant service times have the particular property that the customers leave the servers in the same order in which they are accepted for service.
10.4.1 Historical remarks on M/D/n
The very first paper on queueing theory was published by Erlang (1909 [29]). He dealt with a system with Poisson arrival process and constant service times. Intuitively, one would think that constant service times are easier to deal with than exponentially distributed ones, but this is definitely not the case. The exponential distribution is easy to deal with due to its lack of memory: the remaining life-time has the same distribution as the total life-time (Sec. 2.1.1), and therefore we can forget about the epoch (point of time) when the service time started. Constant holding times require that we remember the exact starting time.
Erlang was the first to analyse M/D/n, FCFS (Brockmeyer & al., 1948 [12]):

Erlang, 1909: n = 1 (errors for n > 1),
Erlang, 1917: n = 1, 2, 3, without proof,
Erlang, 1920: n arbitrary, explicit solutions for n = 1, 2, 3.
Erlang derived the waiting time distribution, but did not consider the state probabilities. Fry (1928 [35]) also dealt with M/D/1 and derived the state probabilities (Fry's equations of state) by using Erlang's principle of statistical equilibrium, whereas Erlang himself applied more theoretical methods based on generating functions.
Crommelin (1932 [21], 1934 [22]), a British telephone engineer, presented a general solution to M/D/n. He generalized Fry's equations of state to an arbitrary n and derived the waiting time distribution, now named Crommelin's distribution.
Pollaczek (1930–34) presented a very general time-dependent solution for arbitrary service time distributions. Under the assumption of statistical equilibrium he was able to obtain explicit solutions for exponentially distributed and constant service times. Also Khintchine (1932 [71]) dealt with M/D/n and derived the waiting time distribution.
10.4.2 State probabilities of M/D/1
Under the assumption of statistical equilibrium we now derive the state probabilities for M/D/1 in a simple way. The arrival intensity is denoted by λ and the constant holding time by h. As we consider a pure waiting time system with a single server we have:

Offered traffic = carried traffic = λ · h < 1 ,

i.e.

A = Y = λ · h = 1 − p(0) ,   (10.12)

as in every state except state zero the carried traffic is equal to one erlang.
To study this system, we consider two epochs (points of time) t and t + h at a distance of h. Every customer being served at epoch t (at most one) has left the server at epoch t + h. Customers arriving during the interval (t, t+h) are still in the system at epoch t + h (waiting or being served).
The arrival process is a Poisson process. Hence we have a Poisson distributed number of arrivals in the time interval (t, t+h) of duration h:

p(j, h) = P{j arrivals within h} = ((λh)^j / j!) · e^{−λh} ,   j = 0, 1, 2, . . .   (10.13)
The probability of being in a given state at epoch t + h is obtained from the state at epoch t by taking account of all arrivals and departures during (t, t+h). By looking at these epochs we obtain a Markov chain embedded in the original traffic process (Fig. 10.3).
We obtain Fry’s equations of state for n = 1 (Fry, 1928 [35]):
p_{t+h}(i) = [p_t(0) + p_t(1)] · p(i, h) + Σ_{j=2}^{i+1} p_t(j) · p(i−j+1, h) ,   i = 0, 1, . . .   (10.14)
Above in (10.12) we found p(0) = 1 − A, and under the assumption of statistical equilibrium, p_t(i) = p_{t+h}(i), we find by successively letting i = 0, 1, . . . :

p(1) = (1 − A) · {e^A − 1} ,

p(2) = (1 − A) · {−e^A · (1 + A) + e^{2A}} ,
[Figure: the state at epoch t+h is obtained from the state i at epoch t by at most one departure and the Poisson arrivals during (t, t+h).]
Figure 10.3: Illustration of Fry’s equations of state for the queueing system M/D/1.
and in general:

p(i) = (1 − A) · Σ_{j=1}^{i} (−1)^{i−j} · e^{jA} · { (jA)^{i−j}/(i−j)! + (jA)^{i−j−1}/(i−j−1)! } ,   i = 2, 3, . . .   (10.15)
The last term, corresponding to j = i, always equals e^{iA}, as (−1)! ≡ ∞. In principle p(0) could also be obtained by requiring that all state probabilities add to one, but this is not necessary here, as we already know p(0).
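A direct transcription of (10.15) is sketched below (the function name is ours; p(0) and p(1) are the separately derived values). The alternating signs make the explicit formula numerically delicate for large i, which is one reason the recursive evaluation in Sec. 10.4.4 is preferred in practice:

```python
from math import exp, factorial

def md1_state_prob(i, A):
    """M/D/1 state probabilities from the explicit formula (10.15), A < 1.
    The term containing (-1)! is dropped, following the convention in the
    text that it vanishes."""
    if i == 0:
        return 1.0 - A
    if i == 1:
        return (1.0 - A) * (exp(A) - 1.0)
    total = 0.0
    for j in range(1, i + 1):
        term = (j * A) ** (i - j) / factorial(i - j)
        if i - j >= 1:                  # the (i-j-1)! term, absent for j = i
            term += (j * A) ** (i - j - 1) / factorial(i - j - 1)
        total += (-1) ** (i - j) * exp(j * A) * term
    return (1.0 - A) * total
```

For moderate i the probabilities sum to one, which is a convenient numerical check.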
10.4.3 Mean waiting times and busy period of M/D/1
For a Poisson arrival process the probability of delay D is equal to the probability of not being in state zero (PASTA property):
D = A = 1− p(0) . (10.16)
W denotes the mean waiting time for all customers, and w denotes the mean waiting time for customers experiencing a positive waiting time. For any queueing system we have (2.28):

w = W / D .   (10.17)
W and w are easily obtained by using Pollaczek-Khintchine’s formula (10.2):
W = A · h / (2(1 − A)) ,   (10.18)

w = h / (2(1 − A)) .   (10.19)
The mean value of a busy period was obtained for M/G/1 in (10.6) and illustrated for constant service times in Fig. 10.2:

m_{T1} = h / (1 − A) .   (10.20)
The mean waiting time for delayed customers is thus half the mean busy period. It looks as if customers arrive at random during the busy period, but we know that no customers arrive during the last service time of a busy period.
The distribution of the number of customers arriving during a busy period can be shown to be given by a Borel distribution:

B(i) = ((i·A)^{i−1} / i!) · e^{−i·A} ,   i = 1, 2, . . .   (10.21)
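The Borel probabilities (10.21) are conveniently evaluated in log space, since the direct expression overflows for large i. This sketch (naming is ours) also lets one verify numerically that the mean number of customers served per busy period equals m_{T1}/h = 1/(1−A):

```python
from math import exp, lgamma, log

def borel_pmf(i, A):
    """Borel distribution (10.21): probability that i customers are served
    during an M/D/1 busy period (A < 1). Evaluated in log space, using
    lgamma(i + 1) = log(i!)."""
    return exp((i - 1) * log(i * A) - lgamma(i + 1) - i * A)

# Mean of the distribution = mean customers per busy period = 1 / (1 - A).
```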
10.4.4 Waiting time distribution: M/D/1, FCFS
The waiting time distribution can be shown to be:

P{W ≤ t} = 1 − (1 − λ) · Σ_{j=1}^{∞} ([λ(j − τ)]^{T+j} / (T + j)!) · e^{−λ(j−τ)} ,   (10.22)

where h = 1 is chosen as time unit, t = T + τ, T is an integer, and 0 ≤ τ < 1.
The graph of the waiting time distribution has an irregularity every time the waiting time exceeds an integral multiple of the constant holding time. An example is shown in Fig. 10.4.
Formula (10.22) is not suitable for numerical evaluation. It can be shown (Iversen, 1982 [44]) that the waiting time distribution can be written in a closed form, as given by Erlang in 1909:
P{W ≤ t} = (1 − λ) · Σ_{j=0}^{T} ([λ(j − t)]^j / j!) · e^{−λ(j−t)} ,   (10.23)
which is fit for numerical evaluation for small waiting times.
For larger waiting times we are usually only interested in integral values of t. It can be shown (Iversen, 1982 [44]) that for an integral value of t we have:

P{W ≤ t} = p(0) + p(1) + · · · + p(t) .   (10.24)
The state probabilities p(i) are calculated accurately by using a recursion formula based on Fry's equations of state (10.14):

p(i+1) = (1 / p(0, h)) · { p(i) − [p(0) + p(1)] · p(i, h) − Σ_{j=2}^{i} p(j) · p(i−j+1, h) } .   (10.25)
[Figure: log-scale plot of the complementary waiting time distribution P(W > t), from 0.001 to 1.0, against t from 0 to 6, for M/M/1 and M/D/1 with A = 0.5.]

Figure 10.4: The complementary waiting time distribution for all customers in the queueing systems M/M/1 and M/D/1 with ordered queue (FCFS). Time unit = mean service time. We notice that the mean waiting time for M/D/1 is only half of that for M/M/1.
For non-integral waiting times we are able to express the waiting time distribution in terms of integral waiting times. If we let h = 1, then by a binomial expansion (10.23) may be written in powers of τ, where

t = T + τ ,   T integer ,   0 ≤ τ < 1 .
We find:
P{W ≤ T + τ} = e^{λτ} · Σ_{j=0}^{T} ((−λτ)^j / j!) · P{W ≤ T − j} ,   (10.26)
where P{W ≤ T − j} is given by (10.24).
The numerical evaluation is very accurate when using (10.24), (10.25) and (10.26).
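Combining the recursion (10.25) with (10.24) and (10.26) gives a short numerical procedure. The sketch below (our own function name; h = 1, so λ = A) evaluates P{W ≤ t} for any t ≥ 0:

```python
from math import exp, factorial

def md1_wait_cdf(t, A):
    """P(W <= t) for M/D/1, FCFS with h = 1, combining the recursion
    (10.25) for the state probabilities with (10.24) and (10.26)."""
    T, tau = int(t), t - int(t)

    def pois(j):                       # p(j, h): Poisson arrivals during h
        return A ** j / factorial(j) * exp(-A)

    p = [1.0 - A, (1.0 - A) * (exp(A) - 1.0)]      # p(0), p(1)
    for i in range(1, T):                          # recursion (10.25)
        p.append((p[i] - (p[0] + p[1]) * pois(i)
                  - sum(p[j] * pois(i - j + 1) for j in range(2, i + 1)))
                 / pois(0))
    cdf_int = [sum(p[:k + 1]) for k in range(T + 1)]        # eq. (10.24)
    if tau == 0.0:
        return cdf_int[T]
    return exp(A * tau) * sum((-A * tau) ** j / factorial(j) * cdf_int[T - j]
                              for j in range(T + 1))        # eq. (10.26)
```

For instance, `md1_wait_cdf(1, 0.5)` returns (1−A)·e^A, as it must by (10.24).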
10.4.5 State probabilities: M/D/n
When setting up Fry’s equations of state (10.14) we obtain more combinations:
p_{t+h}(i) = [Σ_{j=0}^{n} p_t(j)] · p(i, h) + Σ_{j=n+1}^{n+i} p_t(j) · p(n+i−j, h) .   (10.27)
Under the assumption of statistical equilibrium (A < n) we can ignore the absolute points of time:
p(i) = [Σ_{j=0}^{n} p(j)] · p(i, h) + Σ_{j=n+1}^{n+i} p(j) · p(n+i−j, h) ,   i = 0, 1, . . .   (10.28)
The system of equations (10.28) can only be solved directly by substitution if we know the first n state probabilities p(0), p(1), . . . , p(n−1). In practice we may obtain numerical values by guessing an approximate set of values for p(0), p(1), . . . , p(n−1), then substituting these values into the recursion formula (10.28) to obtain new values. After a few iterations we obtain the exact values.
The explicit mathematical solution is obtained by means of generating functions (The Erlang book, [12] pp. 75–83).
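The substitution scheme described above can be organized as a fixed-point iteration on a truncated state vector. The following is a numerical sketch only (truncation point, starting guess, and names are our own choices), not the exact generating-function solution:

```python
from math import exp, factorial

def mdn_state_probs(n, A, imax=60, iters=5000, tol=1e-12):
    """Iterative solution of Fry's equations (10.28) for M/D/n (A < n),
    truncated at imax states and renormalized after each sweep."""
    arr = [A ** j / factorial(j) * exp(-A) for j in range(imax + 1)]  # p(j, h)
    p = [1.0 / (imax + 1)] * (imax + 1)          # arbitrary starting guess
    for _ in range(iters):
        head = sum(p[:n + 1])                    # sum_{j=0..n} p(j)
        new = []
        for i in range(imax + 1):
            s = head * arr[i]
            for j in range(n + 1, min(n + i, imax) + 1):
                s += p[j] * arr[n + i - j]
            new.append(s)
        norm = sum(new)
        new = [x / norm for x in new]
        if max(abs(a - b) for a, b in zip(new, p)) < tol:
            return new
        p = new
    return p

p2 = mdn_state_probs(2, 1.0)    # M/D/2 with A = 1 erlang
```

For n = 1 the iteration reproduces the M/D/1 values p(0) = 1 − A and p(1) = (1−A)(e^A − 1) derived earlier, which serves as a sanity check.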
10.4.6 Waiting time distribution: M/D/n, FCFS
The waiting time distribution is given by Crommelin’s distribution:
P{W ≤ t} = 1 − Σ_{i=0}^{n−1} Σ_{k=0}^{i} p(k) · Σ_{j=1}^{∞} ([A(j − τ)]^{(T+j+1)n−1−i} / ((T+j+1)n − 1 − i)!) · e^{−A(j−τ)} ,   (10.29)

where A is the offered traffic and

t = T · h + τ ,   0 ≤ τ < h .   (10.30)
Formula (10.29) can be written in a closed form in analogy with (10.23):
P{W ≤ t} = Σ_{i=0}^{n−1} Σ_{k=0}^{i} p(k) · Σ_{j=0}^{T} ([A(j − t)]^{j·n+n−1−i} / (j·n + n − 1 − i)!) · e^{−A(j−t)} .   (10.31)
For integral values of the waiting time t we have:
P{W ≤ t} = Σ_{j=0}^{n(t+1)−1} p(j) .   (10.32)
For non-integral waiting times t = T + τ, T integer, 0 ≤ τ < 1, we are able to express the waiting time distribution in terms of integral waiting times as for M/D/1:

P{W ≤ t} = P{W ≤ T + τ} = e^{λτ} · Σ_{j=0}^{k} ((−λτ)^j / j!) · Σ_{i=0}^{k−j} p(i) ,   (10.33)

where k = n(T + 1) − 1 and p(i) is the state probability (10.28).
The exact mean waiting time of all customers W is difficult to derive. An approximation was given by Molina:

W ≈ (n/(n+1)) · E_{2,n}(A) · (h/(n − A)) · (1 − (A/n)^{n+1}) / (1 − (A/n)^n) .   (10.34)
For any queueing system with an infinite queue we have (2.28):

w = W / D ,

where for all values of n:

D = 1 − Σ_{j=0}^{n−1} p(j) .
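Molina's approximation (10.34) requires Erlang's delay formula E_{2,n}(A). A hedged sketch of both is given below (function names are ours; the Erlang C value is obtained from Erlang B via the standard conversion):

```python
from math import factorial

def erlang_c(n, A):
    """Erlang's delay formula E_{2,n}(A) for n servers and offered traffic A."""
    B = (A ** n / factorial(n)) / sum(A ** j / factorial(j)
                                      for j in range(n + 1))
    return n * B / (n - A * (1.0 - B))       # Erlang B -> Erlang C conversion

def molina_mean_wait(n, A, h=1.0):
    """Molina's approximation (10.34), transcribed as printed, to the mean
    waiting time W of M/D/n."""
    r = A / n
    return (n / (n + 1.0)) * erlang_c(n, A) * h / (n - A) \
        * (1.0 - r ** (n + 1)) / (1.0 - r ** n)

# For n = 1 the Erlang C factor reduces to E_{2,1}(A) = A.
```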
10.4.7 Erlang-k arrival process: Ek/D/r
Let us consider a queueing system with $n = r\cdot k$ servers ($r, k$ integers), general arrival process GI, constant service time, and ordered (FCFS) queueing discipline. Customers arriving during idle periods choose servers in cyclic order
$$1, 2, \ldots, n-1, n, 1, 2, \ldots$$
Then a certain server will serve just every $n$'th customer, because due to the constant service time the customers depart from the servers in the same order as they arrive at them. No customer can overtake another customer.
A group of $r$ servers made up from the servers
$$x,\; x+k,\; x+2\cdot k,\; \ldots,\; x+(r-1)\cdot k, \qquad 0 < x \le k, \tag{10.35}$$
will serve just every $k$'th customer. If we consider the servers (10.35) as a single group, then they are equivalent to the queueing system GI$^{k*}$/D/$r$, where the arrival process GI$^{k*}$ is a convolution of the inter-arrival time distribution with itself $k$ times.
The same goes for the $k-1$ other systems. The traffic processes of these $k$ systems are mutually correlated, but if we only consider one system at a time, then it is a GI$^{k*}$/D/$r$, FCFS queueing system.
The assumption about cyclic hunting of the servers is not necessary within the individual systems (10.35). State probabilities and mean waiting times are independent of the queueing discipline, which is of importance for the waiting time distribution only.
If we let the arrival process GI be a Poisson process, then GI$^{k*}$ becomes an Erlang-$k$ arrival process. We thus find that the following systems are equivalent with respect to the waiting time distribution:
M/D/r·k, FCFS ≡ Ek/D/r, FCFS .
Ek/D/r may therefore be dealt with by tables for M/D/n.
Example 10.4.1: Regular arrival processes
In general we know that for a given traffic per server the mean waiting time decreases when the number of servers increases (economy of scale, convexity). For the same reason the mean waiting time decreases when the arrival process becomes more regular. This is seen directly from the above decomposition, where the arrival process for E$_k$/D/$r$ becomes more regular for increasing $k$ ($r$ constant). For $A = 0.9$ erlang per server ($L$ = mean queue length) we find:

E$_4$/E$_1$/2: $L = 4.5174$,
E$_4$/E$_2$/2: $L = 2.6607$,
E$_4$/E$_3$/2: $L = 2.0493$,
E$_4$/D/2: $L = 0.8100$. □
10.4.8 Finite queue system: M/D/1/k
In real systems we always have a finite queue. In computer systems the size of the storage is finite, and in ATM systems we have finite buffers. The same goes for waiting positions in FMS (Flexible Manufacturing Systems).

As mentioned in Sec. 10.3.4, the state probabilities $p_k(i)$ of the finite buffer system are obtained from the state probabilities $p(i)$ of the infinite buffer system by using (10.9) & (10.10). Integral waiting times are obtained from the state probabilities, and non-integral waiting times from integral waiting times as shown above (Sec. 10.4.4).

For the infinite buffer system the state probabilities only exist when the offered traffic is less than the capacity ($A < n$). For a finite buffer system the state probabilities also exist for $A > n$, but then we cannot obtain them by the above-mentioned method.

For M/D/1/k the finite buffer state probabilities $p_k(i)$ can be obtained for any offered traffic in the following way. In a system with one server and $k-1$ queueing positions we have $k+1$ states $(0, 1, \ldots, k)$. Fry's balance equations apply for the state probabilities $p_k(i)$, $i = 0, 1, \ldots, k-2$,
yielding $k-1$ linear equations between the states $p_k(0), p_k(1), \ldots, p_k(k-1)$. But it is not possible to write down simple time-independent equations for states $k-1$ and $k$. However, the first $k-1$ equations (10.14) together with the normalization requirement
$$\sum_{j=0}^{k} p_k(j) = 1 \tag{10.36}$$
and the fact that the offered traffic equals the carried traffic plus the rejected traffic (PASTA property):
$$A = \{1 - p_k(0)\} + A\cdot p_k(k) \tag{10.37}$$
results in $k+1$ independent linear equations, which are easy to solve numerically. The two approaches of course yield the same result. The first method is only valid for $A < 1$, whereas the second is valid for any offered traffic.
Example 10.4.2: Leaky Bucket
Leaky Bucket is a mechanism for control of the cell (packet) arrival process from a user (source) in an ATM system. The mechanism corresponds to a queueing system with constant service time (cell size) and a finite buffer. If the arrival process is a Poisson process, then we have an M/D/1/k system. The size of the leak corresponds to the long-term average acceptable arrival intensity, whereas the size of the bucket describes the excess (burst) allowed. The mechanism operates as a virtual queueing system, where the cells either are accepted immediately or are rejected according to the value of a counter, which is the integral value of the load function (Fig. 10.1). In a contract between the user and the network an agreement is made on the size of the leak and the size of the bucket. On this basis the network is able to guarantee a certain grade-of-service. □
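The virtual-queue view of the Leaky Bucket can be sketched in a few lines. This is an illustration only (the function and parameter names `leak_interval` and `bucket_size` are ours): a counter is increased by one cell service time per accepted cell and drains continuously at rate 1; a cell arriving when the counter exceeds the bucket size is rejected.

```python
def leaky_bucket(arrival_times, leak_interval, bucket_size):
    """Leaky bucket as a virtual queue: the counter plays the role of the
    load function, draining at rate 1 and increased by one cell service
    time (the leak interval) per accepted cell; cells that find the
    counter above the bucket size are rejected."""
    counter, last_t, accepted = 0.0, 0.0, []
    for t in arrival_times:
        counter = max(0.0, counter - (t - last_t))   # drain since last cell
        last_t = t
        if counter <= bucket_size:
            accepted.append(t)
            counter += leak_interval
        # else: cell rejected (non-conforming burst)
    return accepted

# a burst of four cells: the fourth exceeds the bucket and is dropped
print(leaky_bucket([0.0, 0.1, 0.2, 0.3], leak_interval=1.0, bucket_size=2.0))
# → [0.0, 0.1, 0.2]
```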
10.5 Single server queueing system: GI/G/1
In Sec. 10.3 we showed that the mean waiting time for all customers in the queueing system M/G/1 is given by Pollaczek-Khintchine's formula:
$$W = \frac{A\cdot s}{2\,(1-A)}\cdot \varepsilon, \tag{10.38}$$
where ε is the form factor of the holding time distribution.
We have earlier analyzed the following cases:
M/M/1 (Sec. 9.2.4), $\varepsilon = 2$:
$$W = \frac{A\cdot s}{1-A}, \qquad \text{Erlang 1917.} \tag{10.39}$$
M/D/1 (Sec. 10.4.3), $\varepsilon = 1$:
$$W = \frac{A\cdot s}{2\,(1-A)}, \qquad \text{Erlang 1909.} \tag{10.40}$$
This shows that the more regular the holding time distribution, the smaller the mean waiting time. (For loss systems with limited accessibility it is the other way around: the bigger the form factor, the smaller the congestion.)
In systems with non-Poisson arrivals, moments of higher order will also influence the mean waiting time.
10.5.1 General results
We have till now assumed that the arrival process is a Poisson process. For other arrival processes it is seldom possible to find an exact expression for the mean waiting time, except in the case where the holding times are exponentially distributed. In general we may require that either the arrival process or the service process is Markovian. Till now there is no general accurate formula for e.g. M/G/n.
For GI/G/1 it is possible to give theoretical upper limits for the mean waiting time. Denoting the variance of the inter-arrival times by $v_a$ and the variance of the holding time distribution by $v_d$, Kingman's inequality (1961) gives an upper limit for the mean waiting time:
$$\text{GI/G/1:} \qquad W \le \frac{A\cdot s}{2\,(1-A)}\cdot \frac{v_a + v_d}{s^2}. \tag{10.41}$$
This formula shows that it is the stochastic variations that result in waiting times.
Formula (10.41) gives the upper theoretical boundary. A realistic estimate of the actual mean waiting time is obtained by Marchal's approximation (Marchal, 1976 [86]):
$$W \approx \frac{A\cdot s}{2\,(1-A)}\cdot \frac{v_a + v_d}{s^2}\cdot \frac{s^2 + v_d}{a^2 + v_d}, \tag{10.42}$$
where $a$ is the mean inter-arrival time ($A = s/a$). The approximation is a scaling of Kingman's inequality so that it agrees with Pollaczek-Khintchine's formula for the case M/G/1.
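Both formulas are directly computable. A small sketch (function names ours) that also checks the M/G/1 consistency just mentioned: with Poisson arrivals ($v_a = a^2$) and exponential service ($v_d = s^2$), Marchal's approximation must reproduce (10.39).

```python
def kingman_bound(A, s, va, vd):
    """Kingman's upper bound (10.41) on the GI/G/1 mean waiting time."""
    return A * s / (2 * (1 - A)) * (va + vd) / s**2

def marchal(A, s, va, vd):
    """Marchal's approximation (10.42); a = s/A is the mean inter-arrival time."""
    a = s / A
    return kingman_bound(A, s, va, vd) * (s**2 + vd) / (a**2 + vd)

# sanity check: Poisson arrivals (va = a^2), exponential service (vd = s^2)
# must reproduce Pollaczek-Khintchine / (10.39): W = A*s/(1-A)
A, s = 0.5, 1.0
print(marchal(A, s, (s / A)**2, s**2))   # → 1.0
```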
10.5.2 State probabilities: GI/M/1
As an example of a non-Poisson arrival process we shall analyse the queueing system GI/M/1, where the distribution of the inter-arrival times is a general distribution given by the density function $f(t)$. Service times are exponentially distributed with rate $\mu$.
If the system is considered at an arbitrary point of time, then the state probabilities will not be described by a Markov process, because the probability of an arrival will depend on the time interval since the last arrival. The PASTA property is not valid.
However, if the system is considered immediately before (or after) an arrival epoch, then there will be independence in the traffic process, since the inter-arrival times are stochastically independent and the holding times are exponentially distributed. The arrival epochs are equilibrium points (regeneration points, Sec. 3.2.2), and we consider the so-called embedded Markov chain.
The probability that we immediately before an arrival epoch observe the system in state $j$ is denoted by $\pi(j)$. In statistical equilibrium it can be shown that we have the following result (D.G. Kendall, 1953 [70]):
$$\pi(i) = (1-\alpha)\,\alpha^i, \qquad i = 0, 1, 2, \ldots \tag{10.43}$$
where $\alpha$ is the positive real root satisfying the equation:
$$\alpha = \int_0^{\infty} e^{-\mu(1-\alpha)t}\, f(t)\; dt. \tag{10.44}$$
The steady state probabilities can be obtained by considering two successive arrival epochs $t_1$ and $t_2$ (similar to Fry's state equations, Sec. 10.4.5).
As the departure process is a Poisson process with the constant intensity $\mu$ when there are customers in the system, the probability $p(j)$ that $j$ customers complete service between two arrival epochs can be expressed by the number of events in a Poisson process during a stochastic interval (the inter-arrival time). We can set up the following state equations:
$$\begin{aligned}
\pi_{t_2}(0) &= \sum_{j=0}^{\infty} \pi_{t_1}(j)\cdot\left\{1-\sum_{i=0}^{j} p(i)\right\},\\
\pi_{t_2}(1) &= \sum_{j=0}^{\infty} \pi_{t_1}(j)\cdot p(j),\\
&\;\;\vdots\\
\pi_{t_2}(i) &= \sum_{j=i-1}^{\infty} \pi_{t_1}(j)\cdot p(j-i+1).
\end{aligned} \tag{10.45}$$
The normalization condition is as usual:
$$\sum_{i=0}^{\infty} \pi_{t_1}(i) = \sum_{j=0}^{\infty} \pi_{t_2}(j) = 1. \tag{10.46}$$
It can be shown that the above-mentioned geometric distribution is the only solution to this system of equations (Kendall, 1953 [70]).
In principle, the queueing system GI/M/n can be solved in the same way. The state probability $p(j)$ becomes more complicated, since the departure rate depends on the number of busy channels.
Notice that $\pi(i)$ is not the probability of finding the system in state $i$ at an arbitrary point of time (time average), but the probability of finding the system in state $i$ immediately before an arrival (call average).
10.5.3 Characteristics of GI/M/1
The probability of immediate service becomes:
$$p_{\text{immediate}} = \pi(0) = 1 - \alpha. \tag{10.47}$$
The corresponding probability of being delayed then becomes:
$$D = p_{\text{delay}} = \alpha. \tag{10.48}$$
The average number of busy servers at a random point of time (time average) is equal to the carried traffic (= the offered traffic $A < 1$).
The average number of waiting customers, immediately before the arrival of a customer, is obtained via the state probabilities:
$$L_1 = \sum_{i=1}^{\infty} (1-\alpha)\,\alpha^i\,(i-1) = \frac{\alpha^2}{1-\alpha}. \tag{10.49}$$
The average number of customers in the system before an arrival epoch is:
$$L_2 = \sum_{i=0}^{\infty} (1-\alpha)\,\alpha^i\cdot i = \frac{\alpha}{1-\alpha}. \tag{10.50}$$
The average waiting time for all customers then becomes:
$$W = \frac{1}{\mu}\cdot\frac{\alpha}{1-\alpha}. \tag{10.51}$$
The average queue length taken over the whole time axis (the virtual queue length) therefore becomes (Little's theorem):
$$L = A\cdot\frac{\alpha}{1-\alpha}. \tag{10.52}$$
The mean waiting time for customers who experience a positive waiting time becomes:
$$w = \frac{W}{D} = \frac{1}{\mu}\cdot\frac{1}{1-\alpha}. \tag{10.53}$$
Example 10.5.1: Mean waiting times GI/M/1
For M/M/1 we find $\alpha = \alpha_m = A$. For D/M/1, $\alpha = \alpha_d$ is obtained from the equation:
$$\alpha_d = e^{-(1-\alpha_d)/A},$$
where $\alpha_d$ must be within $(0, 1)$. It can be shown that $0 < \alpha_d < \alpha_m < 1$. Thus the queueing system D/M/1 will always have a smaller mean waiting time than M/M/1.
For $A = 0.5$ erlang we find the following mean waiting times for all customers (10.51) and for delayed customers (10.53), where the mean holding time is used as the time unit ($\mu = 1$):

M/M/1: $\alpha = 0.5$, $W = 1$, $w = 2$.
D/M/1: $\alpha = 0.2032$, $W = 0.2550$, $w = 1.2550$.

The mean waiting time is thus far from proportional with the form factor of the distribution of the inter-arrival time. □
10.5.4 Waiting time distribution: GI/M/1, FCFS
When a customer arrives at the queueing system, the number of customers in the system is geometrically distributed, and under the assumption that he gets a positive waiting time, the customer has to wait a geometrically distributed number of exponential phases. This results in an exponentially distributed waiting time with the parameter given in (10.53), when the queueing discipline is FCFS (Sec. 9.4 and Fig. 2.12).
10.6 Priority queueing systems: M/G/1
The time period a customer is waiting usually means an inconvenience or expense to the customer. By different strategies for organizing the queue, the waiting times can be distributed among the customers according to our preferences.
10.6.1 Combination of several classes of customers
The customers are divided into $N$ classes (traffic streams). Customers of class $i$ are assumed to arrive according to a Poisson process with intensity $\lambda_i$ [customers per time unit], and the mean service time is $s_i$ [time units]. The offered traffic is $A_i = \lambda_i\cdot s_i$. The second moment of the service time distribution is denoted by $m_{2i}$.
Instead of considering the individual arrival processes, we may consider the total arrival process, which also is a Poisson arrival process, with intensity:
$$\lambda = \sum_{i=1}^{N} \lambda_i. \tag{10.54}$$
The resulting service time distribution then becomes a weighted sum of the service time distributions of the individual classes (Sec. 2.3.2: combination in parallel). The total mean service time becomes (2.59):
$$s = \sum_{i=1}^{N} \frac{\lambda_i}{\lambda}\cdot s_i, \tag{10.55}$$
and the total second moment is (2.58):
$$m_2 = \sum_{i=1}^{N} \frac{\lambda_i}{\lambda}\cdot m_{2i}. \tag{10.56}$$
The total offered traffic becomes:
$$A = \sum_{i=1}^{N} A_i = \sum_{i=1}^{N} \lambda_i\cdot s_i = \lambda\, s. \tag{10.57}$$
The remaining mean service time at a random point of time becomes (10.4):
$$V_{1,N} = \frac{1}{2}\cdot\lambda\cdot m_2 \tag{10.58}$$
$$= \frac{1}{2}\cdot A\cdot\frac{m_2}{s} = \frac{1}{2}\cdot A\cdot\frac{\sum_{i=1}^{N}\frac{\lambda_i}{\lambda}\cdot m_{2i}}{\sum_{i=1}^{N}\frac{\lambda_i}{\lambda}\cdot s_i} = \frac{1}{2}\cdot A\cdot\frac{\sum_{i=1}^{N}\lambda_i\cdot m_{2i}}{\sum_{i=1}^{N} A_i},$$
$$V_{1,N} = \sum_{i=1}^{N}\frac{1}{2}\cdot\lambda_i\cdot m_{2i}, \tag{10.59}$$
$$V_{1,N} = \sum_{i=1}^{N} V_i, \tag{10.60}$$
where index $(1, N)$ indicates that we include all streams from 1 to $N$.
10.6.2 Kleinrock’s conservation law
We now consider a system with several classes of customers. We assume that the queueing discipline is independent of the service time. This excludes the preemptive-resume queueing discipline, as the probability of preemption increases with the service time. The waiting time is composed of a contribution $V$ from the remaining service time of a customer being served, if any, and a contribution from customers waiting in the queue. The mean waiting time becomes:
$$W = V_{1,N} + \sum_{i=1}^{N} L_i\cdot s_i.$$
$L_i$ is the average queue length for customers of type $i$. By applying Little's law we get:
$$W = V_{1,N} + \sum_{i=1}^{N} \lambda_i\cdot W_i\cdot s_i = V_{1,N} + \sum_{i=1}^{N} A_i\cdot W_i. \tag{10.61}$$
We may also combine all customer classes into one and apply Pollaczek-Khintchine's formula to get the same mean waiting time (10.5):
$$W = V_{1,N} + A\cdot W. \tag{10.62}$$
Under these general assumptions we get Kleinrock's conservation law (Kleinrock, 1964 [73]):

Theorem 10.2 Kleinrock's conservation law:
$$\sum_{i=1}^{N} A_i\cdot W_i = A\cdot W = \frac{A\cdot V_{1,N}}{1-A} = \text{constant.} \tag{10.63}$$
The average waiting time over all classes, weighted by the traffic (load) of each class, is independent of the queueing discipline.
For the total traffic process we have Pollaczek-Khintchine's formula. We may thus give a small proportion of the traffic a very low mean waiting time without increasing the average waiting time of the remaining customers very much. By various strategies we may allocate waiting times to individual customers according to our preferences.
10.6.3 Non-preemptive queueing discipline
In the following we look at M/G/1 priority queueing systems, where customers are divided into $N$ priority classes so that a customer with priority $p$ has higher priority than customers with priority $p+1$. In a non-preemptive system a service in progress is not interrupted.
The customers in class $p$ are assumed to have the mean service time $s_p$ and the arrival intensity $\lambda_p$. In Sec. 10.6.1 we derived the parameters of the total traffic process.
The total average waiting time $W_p$ of a class $p$ customer is made up of the following three contributions:
a) The residual service time V1,N for the customer under service.
b) The waiting time due to the customers with priority $p$ or higher, which already are in the queues (Little's theorem):
$$\sum_{i=1}^{p} s_i\cdot(\lambda_i\cdot W_i).$$
c) The waiting time due to customers with higher priority, which overtake the customer we consider while it is waiting:
$$\sum_{i=1}^{p-1} s_i\cdot L_i = \sum_{i=1}^{p-1} s_i\cdot\lambda_i\cdot W_p.$$
In total we get:
$$W_p = V_{1,N} + \sum_{i=1}^{p} s_i\cdot\lambda_i\cdot W_i + \sum_{i=1}^{p-1} s_i\cdot\lambda_i\cdot W_p. \tag{10.64}$$
For customers of class one, highest priority, we get under the assumption of FCFS:
$$W_1 = V_{1,N} + L_1\cdot s_1 = V_{1,N} + A_1\cdot W_1, \tag{10.65}$$
$$W_1 = \frac{V_{1,N}}{1-A_1}. \tag{10.66}$$
$V_{1,N}$ is the residual service time of the customer being served when the customer we consider arrives (10.59):
$$V = \sum_{i=1}^{N}\frac{\lambda_i}{2}\cdot m_{2i}, \tag{10.67}$$
where $m_{2i}$ is the second moment of the service time distribution of the $i$'th class.
For class two customers we find (10.64):
$$W_2 = V_{1,N} + L_1\cdot s_1 + L_2\cdot s_2 + s_1\cdot\lambda_1\cdot W_2.$$
Inserting $W_1$ (10.65), we get:
$$W_2 = W_1 + A_2\cdot W_2 + A_1\cdot W_2,$$
$$W_2 = \frac{W_1}{1-A_1-A_2}, \tag{10.68}$$
$$W_2 = \frac{V_{1,N}}{(1-A_1)\,\{1-(A_1+A_2)\}}. \tag{10.69}$$
In general we find (Cobham, 1954 [15]):
$$W_p = \frac{V_{1,N}}{(1-A_{0,p-1})\,(1-A_{0,p})}, \tag{10.70}$$
where:
$$A_{0,p} = \sum_{i=0}^{p} A_i, \qquad A_0 = 0. \tag{10.71}$$
The structure of formula (10.70) can be interpreted directly. No matter which class a customer belongs to, it must wait until the service in progress is completed ($V_{1,N}$). Furthermore, the waiting time is due to already arrived customers with at least the same priority ($A_{0,p}$), and to customers with higher priority arriving during the waiting time ($A_{0,p-1}$).
Example 10.6.1: SPC system
We consider a computer which serves two types of customers. The first type has a constant service time of 0.1 second, and the arrival intensity is 1 customer/second. The other type has an exponentially distributed service time with mean value 1.6 seconds, and the arrival intensity is 0.5 customer/second.
The loads from the two types of customers are then $A_1 = 0.1$ erlang and $A_2 = 0.8$ erlang, respectively. From (10.67) we find:
$$V = \frac{1}{2}\cdot(0.1)^2 + \frac{0.5}{2}\cdot 2\cdot(1.6)^2 = 1.2850\ \text{s}.$$

Without any priority the mean waiting time becomes, by using Pollaczek-Khintchine's formula (10.2):
$$W = \frac{1.2850}{1-(0.8+0.1)} = 12.85\ \text{s}.$$
By non-preemptive priority we find:

Type one highest priority:
$$W_1 = \frac{1.285}{1-0.1} = 1.43\ \text{s},$$
$$W_2 = \frac{W_1}{1-(A_1+A_2)} = 14.28\ \text{s}.$$
Type two highest priority:
$$W_2 = 6.43\ \text{s},$$
$$W_1 = 64.25\ \text{s}.$$
This shows that we can upgrade type one customers almost without influencing type two customers. However, the inverse is not the case. The constant in the conservation law (10.63) is the same without priority (Pollaczek-Khintchine's formula) as with non-preemptive priority:
$$0.9\cdot 12.85 = 0.1\cdot 1.43 + 0.8\cdot 14.28 = 0.8\cdot 6.43 + 0.1\cdot 64.25 = 11.57. \qquad \square$$
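Cobham's formula (10.70) is easy to evaluate mechanically. The sketch below (function names ours) reproduces the example above and the conservation-law constant:

```python
def residual_service(lams, m2s):
    """V in (10.67): mean residual service time seen by an arrival."""
    return sum(lam * m2 / 2.0 for lam, m2 in zip(lams, m2s))

def cobham(lams, svcs, m2s):
    """Mean waiting times (10.70) for non-preemptive M/G/1 priority
    classes, listed highest priority first."""
    V = residual_service(lams, m2s)
    W, cum = [], 0.0            # cum accumulates A_{0,p-1}
    for lam, s in zip(lams, svcs):
        W.append(V / ((1.0 - cum) * (1.0 - cum - lam * s)))
        cum += lam * s
    return W

# Example 10.6.1: constant 0.1 s jobs (rate 1/s) and exponential 1.6 s jobs
# (rate 0.5/s); second moments 0.1^2 and 2 * 1.6^2
lams, svcs, m2s = [1.0, 0.5], [0.1, 1.6], [0.1**2, 2 * 1.6**2]
W1, W2 = cobham(lams, svcs, m2s)
print(round(W1, 2), round(W2, 2))   # → 1.43 14.28
print(0.1 * W1 + 0.8 * W2)          # conservation constant A*V/(1-A) ≈ 11.57
```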
10.6.4 SJF-queueing discipline: M/G/1
By the SJF queueing discipline, the shorter the service time of a customer, the higher the priority. The SJF discipline results in the lowest possible total waiting time. By introducing an infinite number of priority classes
$$(0, \Delta t),\; (\Delta t, 2\Delta t),\; (2\Delta t, 3\Delta t),\; \ldots$$
we obtain from formula (10.70) that a customer with service time $t$ has the mean waiting time $W_t$ (Phipps 1956):
$$W_t = \frac{V_{0,\infty}}{(1-A_{0,t})^2}, \tag{10.72}$$
where $A_{0,t}$ is the load from the customers with service time less than or equal to $t$. When $\Delta t$ is small, $A_{0,t} \approx A_{0,t+\Delta t}$.
If these different priority classes have different costs per time unit when they wait, so that class $j$ customers have the mean service time $s_j$ and pay $c_j$ per time unit when they wait, then the optimal strategy (minimum cost) is to assign priorities $1, 2, \ldots$ according to increasing ratio $s_j/c_j$.
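The ordering rule can be stated in two lines of code. This is a sketch only; the class tuples in the usage line are hypothetical values, not taken from the book:

```python
def optimal_priority_order(classes):
    """Order classes by increasing s_j / c_j; classes is a list of
    (mean_service_time, waiting_cost_per_time_unit) tuples.
    Returns class indices, highest priority first."""
    return sorted(range(len(classes)),
                  key=lambda j: classes[j][0] / classes[j][1])

# three hypothetical classes: cheap long jobs, cheap short jobs,
# and expensive medium jobs -- the expensive ones go first
print(optimal_priority_order([(2.0, 1.0), (0.5, 1.0), (1.0, 4.0)]))  # → [2, 1, 0]
```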
Example 10.6.2: M/M/1 with SJF queueing discipline
We consider exponentially distributed holding times with mean value $1/\mu$, which is chosen as the time unit (M/M/1). Even though there are few very long service times, they contribute significantly to the total traffic (Fig. 2.3).
The contribution to the total traffic $A$ from the customers with service time $\le t$ is obtained from (2.31) multiplied by $A = \lambda/\mu$:
$$A_{0,t} = A\left\{1 - e^{-\mu t}\,(\mu t + 1)\right\}.$$
Inserting this in (10.72) we find $W_t$ as illustrated in Fig. 10.5, where the FCFS strategy (same mean waiting time as LCFS and SIRO) is shown for comparison as a function of the actual holding time.
[Plot of Fig. 10.5: mean waiting time $W_t$ (ordinate, 0–100) versus actual service time $t$ (abscissa, 0–14) for the SJF and FCFS disciplines, $A = 0.9$.]
Figure 10.5: The mean waiting time $W_t$ as a function of the actual service time $t$ in an M/M/1 system for the SJF and FCFS disciplines, respectively. The offered traffic is 0.9 erlang and the mean service time is chosen as time unit. Notice that for SJF the minimum average waiting time is 0.9 time units, because an eventual job being served must first be finished. The maximum mean waiting time is 90 time units. In comparison with FCFS, by using SJF 93.6% of the jobs get a shorter mean waiting time. This corresponds to jobs with a service time less than 2.747 mean service times (time units). The offered traffic may be greater than one erlang, but then only the shorter jobs get a finite waiting time.
The mean waiting time for all customers is less for SJF than for FCFS, but this is not obvious from the figure. The mean waiting time for SJF becomes:
$$W_{SJF} = \int_0^{\infty} W_t\cdot f(t)\; dt = \int_0^{\infty} \frac{V_{0,\infty}}{(1-A_{0,t})^2}\cdot f(t)\; dt = \int_0^{\infty} \frac{A\cdot e^{-\mu t}\; dt}{\left\{1-A\left(1-e^{-\mu t}\,(\mu t+1)\right)\right\}^2}\,,$$
which is not elementary to calculate. □
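The integral can nevertheless be evaluated numerically. A sketch using the trapezoidal rule (with the mean service time as time unit, $\mu = 1$, so that the FCFS mean waiting time is $A/(1-A)$; function name and discretization are ours):

```python
import math

def w_sjf(A, n=100000, t_max=40.0):
    """Trapezoidal evaluation of the SJF mean-waiting-time integral
    above, with mu = 1 (time unit = mean service time)."""
    def f(t):
        a0t = A * (1.0 - math.exp(-t) * (t + 1.0))   # A_{0,t}
        return A * math.exp(-t) / (1.0 - a0t) ** 2
    h = t_max / n
    total = 0.5 * (f(0.0) + f(t_max)) + sum(f(i * h) for i in range(1, n))
    return total * h

A = 0.9
print(A / (1.0 - A), w_sjf(A))   # FCFS mean wait is 9 time units; SJF is smaller
```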
10.6.5 M/M/n with non-preemptive priority
We may generalize the above to Erlang's classical waiting time system M/M/n with non-preemptive queueing disciplines, when all classes of customers have the same exponentially distributed service time with mean value $s = \mu^{-1}$. Denoting the arrival intensity for class $i$ by $\lambda_i$, we have the mean waiting time $W_p$ for class $p$:
$$W_p = V_{1,N} + \sum_{i=1}^{p}\frac{s}{n}\cdot L_i + W_p\cdot\sum_{i=1}^{p-1}\frac{s}{n}\cdot\lambda_i,$$
$$W_p = E_{2,n}(A)\cdot\frac{s}{n} + \sum_{i=1}^{p}\frac{s\,\lambda_i}{n}\cdot W_i + W_p\cdot\sum_{i=1}^{p-1}\frac{s}{n}\cdot\lambda_i.$$
$A$ is the total offered traffic of all classes. The probability of waiting, $E_{2,n}(A)$, is given by Erlang's C-formula, and when all servers are busy, customers are completed with the mean inter-departure time $s/n$. For the highest priority class, $p = 1$, we find:
$$W_1 = E_{2,n}(A)\cdot\frac{s}{n} + \frac{1}{n}\cdot A_1\cdot W_1,$$
$$W_1 = E_{2,n}(A)\cdot\frac{s}{n-A_1}. \tag{10.73}$$
For $p = 2$ we find in a similar way:
$$W_2 = E_{2,n}(A)\cdot\frac{s}{n} + \frac{1}{n}\cdot A_1\cdot W_1 + \frac{1}{n}\cdot A_2\cdot W_2 + W_2\cdot\frac{s}{n}\cdot\lambda_1 = W_1 + \frac{1}{n}\cdot A_2\cdot W_2 + \frac{1}{n}\cdot A_1\cdot W_2,$$
$$W_2 = \frac{n\cdot s\cdot E_{2,n}(A)}{(n-A_1)\,\{n-(A_1+A_2)\}}. \tag{10.74}$$
In general we find (Cobham, 1954 [15]):
$$W_p = \frac{n\cdot s\cdot E_{2,n}(A)}{(n-A_{0,p-1})\,(n-A_{0,p})}. \tag{10.75}$$
10.6.6 Preemptive-resume queueing discipline
We now assume that a customer being served is interrupted by the arrival of a customer with higher priority. Later on, the service continues from where it was interrupted. This situation is typical for computer systems. For a customer with priority $p$, the customers with lower priority do not exist. The mean waiting time $W_p$ for a customer in class $p$ consists of two contributions.
a) Waiting time due to customers with higher or same priority, who are already in the queueing system. This is the waiting time experienced by a customer in a system without priority where only the first $p$ classes exist:
$$\frac{V_{1,p}}{1-A_{0,p}}, \qquad\text{where}\qquad V_{1,p} = \sum_{i=1}^{p}\frac{\lambda_i}{2}\cdot m_{2i} \tag{10.76}$$
is the expected remaining service time due to customers with higher or same priority, and $A_{0,p}$ is given by (10.71).
b) Waiting time due to the customers with higher priority who arrive during the waiting time or service time and interrupt the customer considered:
$$(W_p + s_p)\cdot\sum_{i=1}^{p-1} s_i\cdot\lambda_i = (W_p + s_p)\cdot A_{0,p-1}.$$
We thus get:
$$W_p = \frac{V_{1,p}}{1-A_{0,p}} + (W_p + s_p)\cdot A_{0,p-1}.$$
This can be rewritten as follows:
$$W_p\cdot(1-A_{0,p-1}) = \frac{V_{1,p}}{1-A_{0,p}} + s_p\cdot A_{0,p-1},$$
resulting in:
$$W_p = \frac{V_{1,p}}{(1-A_{0,p-1})\,(1-A_{0,p})} + \frac{A_{0,p-1}}{1-A_{0,p-1}}\cdot s_p. \tag{10.77}$$
For the highest priority customers we get Pollaczek-Khintchine's formula for this class alone, as they are not disturbed by lower priorities ($V_{1,1} = V_1$):
$$W_1 = \frac{V_1}{1-A_1}. \tag{10.78}$$
The total response time becomes:
$$T_p = W_p + s_p.$$
In a similar way as in Sec. 10.6.4 we may write down the formula for the average waiting time for the SJF queueing discipline with preemptive resume.
Example 10.6.3: SPC system (Example 10.6.1 continued)
We now assume the computer system is working with the preemptive-resume discipline and find:

Type one highest priority:
$$W_1 = \frac{\frac{1}{2}\cdot(0.1)^2}{1-0.1} = 0.0056\ \text{s},$$
$$W_2 = \frac{1.2850}{(1-0.1)\,(1-0.9)} + \frac{0.1}{1-0.1}\cdot 1.6 = 14.46\ \text{s}.$$

Type two highest priority:
$$W_2 = \frac{\frac{1}{2}\cdot 0.5\cdot 2\cdot(1.6)^2}{1-0.8} + 0 = 6.40\ \text{s},$$
$$W_1 = \frac{1.2850}{(1-0.8)\,(1-0.9)} + \frac{0.8}{1-0.8}\cdot 0.1 = 64.65\ \text{s}.$$
This shows that by upgrading type one to the highest priority, we can give these customers a very short waiting time without disturbing type two customers, but the inverse is not the case. The conservation law is only valid for preemptive queueing systems if the preempted service times are exponentially distributed. In the case of a general service time distribution (G), a job may be preempted several times, and therefore the remaining service time will not be given by $V$. □
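Formula (10.77) in code form (a sketch, function name ours), reproducing both orderings of the example above:

```python
def preemptive_resume(lams, svcs, m2s):
    """Mean waiting times (10.77) for M/G/1 with preemptive-resume
    priorities, classes listed highest priority first."""
    W, V, cum = [], 0.0, 0.0            # V = V_{1,p}, cum = A_{0,p-1}
    for lam, s, m2 in zip(lams, svcs, m2s):
        V += lam * m2 / 2.0             # extend V_{1,p} with class p
        A0p = cum + lam * s             # A_{0,p}
        W.append(V / ((1.0 - cum) * (1.0 - A0p)) + cum / (1.0 - cum) * s)
        cum = A0p
    return W

# Example 10.6.3: constant 0.1 s jobs (rate 1/s), exponential 1.6 s jobs
# (rate 0.5/s); second moments 0.1^2 and 2 * 1.6^2
lams, svcs, m2s = [1.0, 0.5], [0.1, 1.6], [0.1**2, 2 * 1.6**2]
print(preemptive_resume(lams, svcs, m2s))                    # ≈ [0.0056, 14.46]
print(preemptive_resume(lams[::-1], svcs[::-1], m2s[::-1]))  # ≈ [6.40, 64.65]
```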
10.6.7 M/M/n with preemptive-resume priority
For M/M/n the case of preemptive resume is more difficult to deal with. All customers must have the same mean service time. Mean waiting times can be obtained by first considering class one alone (9.15), then considering class one and two together, which yields the waiting time for class two, etc. The conservation law is valid when all customers have the same exponentially distributed service time.
10.7 Fair Queueing: Round Robin, Processor-Sharing
The Round Robin (RR) queueing model (Fig. 10.6) is a model for a time-sharing computer system, where we want a fast response time for short jobs. This queueing discipline is also called fair queueing, because the available resources are distributed equally among the jobs (customers) in the system.
[Diagram of Fig. 10.6: new jobs enter a queue feeding the CPU with time slice $\Delta s$; a served job either leaves the system (completed jobs) or returns to the queue (non-completed jobs); the two branches are labelled $p$ and $1-p$.]
Figure 10.6: Round robin queueing system. A task is allocated a time slice $\Delta s$ (at most) every time it is served. If the task is not finished during this time slice, it is returned to a FCFS queue, where it waits on equal terms with new tasks. If we let $\Delta s$ decrease to zero, we obtain the PS (Processor Sharing) queueing discipline.
New jobs are placed in a FCFS queue, where they wait until they obtain service, limited to one time slice (slot) $\Delta s$, which is the same for all jobs. If a job is not completed within a time slice, the service is interrupted, and the job is placed at the end of the FCFS queue. This continues until the required total service time is obtained.
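The mechanism can be sketched as a small deterministic simulation. This is an illustration only; the function name and the tie-breaking rule — arrivals during a slice join the queue before the preempted job — are our choices, not the book's:

```python
from collections import deque

def round_robin(jobs, slice_):
    """Completion times for jobs given as (arrival_time, service_demand)
    under Round Robin with time slice slice_ on a single CPU."""
    order = sorted(enumerate(jobs), key=lambda x: x[1][0])
    remaining = {idx: d for idx, (a, d) in order}
    queue, done = deque(), {}
    t, i = 0.0, 0
    while len(done) < len(order):
        while i < len(order) and order[i][1][0] <= t:   # admit arrivals
            queue.append(order[i][0]); i += 1
        if not queue:
            t = order[i][1][0]                          # idle until next arrival
            continue
        idx = queue.popleft()
        run = min(slice_, remaining[idx])
        t += run
        remaining[idx] -= run
        while i < len(order) and order[i][1][0] <= t:   # arrivals during slice
            queue.append(order[i][0]); i += 1
        if remaining[idx] > 1e-12:
            queue.append(idx)                           # back to the FCFS queue
        else:
            done[idx] = t
    return [done[k] for k in range(len(order))]

# a short (0.5) and a long (2.0) job arriving together, slice 0.25:
# the short job overtakes most of the long one
print(round_robin([(0.0, 0.5), (0.0, 2.0)], slice_=0.25))   # → [0.75, 2.5]
```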
We assume that the queue is unlimited, and that new jobs arrive according to a Poisson process (arrival rate $\lambda$). The service time distribution can be a general distribution with mean value $s$.
The size of the time slice can vary. If it becomes infinite, all jobs will be completed the first time, and we have an M/G/1 queueing system with FCFS discipline. If we let the time slice
decrease to zero, then we get the PS = Processor-Sharing model, which has a number of important analytical properties.
The Processor-Sharing model can be interpreted as a queueing system where all jobs are served continuously by the server (time sharing). If there are $x$ jobs in the system, each of them obtains the fraction $1/x$ of the capacity of the computer. So there is no real queue, as all jobs are being served all the time, possibly at a reduced rate.
In the next chapter we deal with processor-sharing systems in more detail. The state transition diagrams are identical for the classical M/M/1 system and for the M/M/1-PS system, and thus the performance measures based on state probabilities are identical for the two systems. When the offered traffic $A = \lambda\cdot s$ is less than one, we show that the steady state probabilities are given by (9.30):
$$p(i) = (1-A)\cdot A^i, \qquad i = 0, 1, \ldots, \tag{10.79}$$
i.e. a geometric distribution with mean value $A/(1-A)$. The mean sojourn time (average response time = time in system) for jobs with duration $t$ becomes:
$$R_t = \frac{t}{1-A}. \tag{10.80}$$
If this job were alone in the system, its holding time would be $t$. Even if there is no queue, we may talk about an average virtual delay for jobs with duration $t$:
$$W_t = R_t - t = \frac{A}{1-A}\cdot t. \tag{10.81}$$
The corresponding mean values for a random job (mean service time $s$) become:
$$R = \frac{s}{1-A}, \tag{10.82}$$
$$W = \frac{A}{1-A}\cdot s. \tag{10.83}$$
This shows that we obtain the same mean values as for M/M/1 (Sec. 9.2.4). But the actual mean waiting time becomes proportional to the duration of the job, which is often a desirable property; we do not assume any advance knowledge about the duration of the job. For a random job, the mean waiting time is proportional to the mean service time. The proportionality should not be understood to mean that two jobs of the same duration have the same waiting time; it is only valid on the average. In comparison with the results we obtained earlier for M/G/1 (Pollaczek-Khintchine's formula (10.2)) these results may surprise our intuition.
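As a quick numeric illustration (a sketch of ours, not from the book; the function names are hypothetical), the mean-value formulas above can be evaluated directly, showing that the virtual delay grows in proportion to the job duration:

```python
# Sketch: evaluating the M/G/1-PS mean-value formulas (10.80)-(10.81).
# Function names are illustrative, not from the book.

def ps_sojourn(t, A):
    """Mean sojourn time R_t = t / (1 - A) for a job of duration t, cf. (10.80)."""
    if not 0.0 <= A < 1.0:
        raise ValueError("offered traffic must satisfy 0 <= A < 1")
    return t / (1.0 - A)

def ps_virtual_delay(t, A):
    """Mean virtual delay W_t = R_t - t = A/(1 - A) * t, cf. (10.81)."""
    return ps_sojourn(t, A) - t

if __name__ == "__main__":
    A = 0.8                          # example offered traffic
    for t in (0.5, 1.0, 2.0):
        # The ratio W_t / t is constant: A / (1 - A) = 4 for A = 0.8
        print(t, ps_sojourn(t, A), ps_virtual_delay(t, A))
```

Doubling the job duration doubles both the sojourn time and the virtual delay, while the ratio W_t/t stays fixed at A/(1 − A).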
A very useful property of the Processor-Sharing model is that the departure process is a Poisson process like the arrival process, i.e. we have a reversible system. The Processor-Sharing model is very useful for analyzing time-sharing systems and for modeling queueing networks (Chap. 12). In Chap. ?? we study reversible systems in more detail.
Chapter 11
Multi-service queueing systems
In this chapter we consider queueing systems with more than one type of customers. This is analogous to Chap. 7, where we considered loss systems with more types of customers and noticed that the product form was maintained between streams, so that the convolution algorithm could be applied. We are only interested in reversible systems, where the departure process is of the same type as the arrival process, in our case Poisson processes. Then we may combine several queueing systems into a network of queueing systems. In queueing terminology, customers of the same type (class, service, stream) belong to a specific chain, and a queueing system is a node in a queueing network, which will be dealt with in Chap. 12. In this chapter customers in some way share the available capacity, and therefore they are served all the time, but they may obtain less capacity than requested, resulting in an increased service time. The sojourn time is not split up into separate waiting time and service time as in previous chapters. Thus in this chapter and the next chapter on queueing networks we use the definitions:
Waiting time W is defined as the total sojourn time, including the service time.
Queue length L is defined as the total number of customers (served & waiting).
As an example we may think of the time required to transfer a file on the Internet. If the available bandwidth is at least equal to the bandwidth requested, then the mean service time s_j for a customer of type j is defined as the mean transfer (sojourn) time. If the available bandwidth is less than the bandwidth requested, then the mean transfer time W̄_j will be bigger than s_j, and the increase

W_j = W̄_j − s_j ,   (11.1)

is defined as the mean virtual waiting time. We thus introduce the virtual waiting time as the increase in service time due to limited capacity. In a similar way we define the mean virtual queue length as

L_j = L̄_j − A_j ,   (11.2)
where Aj is the offered traffic of type j.
The systems considered in this chapter are reversible, but do not have product form. In Sec. 11.1 we consider single-server systems with multiple services. The derivations are very simple and worked out in detail for two services, and then generalized to more services. In Sec. 11.2 we consider systems with more servers and multiple services. As in Sec. 7.3.3 we choose a Basic Bandwidth Unit (BBU) and split the available bandwidth into n BBUs. The BBU is a common name for a channel, a slot, a server, etc. The smaller the basic bandwidth unit, i.e. the finer the granularity, the more accurately we may model the traffic of different services, but the bigger the state space becomes. Finally, in Sec. 11.3 we consider waiting-time systems with more servers and multiple multi-rate services. In service-integrated systems the bandwidth requested depends on the type of service. The approach is new and very simple. It allows for very general results, including all classical Markovian loss and delay models, and it is applicable to digital broadband systems, for example Internet traffic.
(Diagram omitted: a single server offered two Poisson streams with arrival rates λ1, λ2, service rates µ1, µ2, and offered traffic A1 = λ1/µ1, A2 = λ2/µ2.)

Figure 11.1: A Σ^2_{j=1} Mj/Mj/1 queueing system with two classes of customers.
11.1 Reversible multi-chain single-server systems
In Fig. 11.1 we consider a single-server queueing system with N = 2 streams of customers, i.e. two chains. Customers belonging to chain j arrive at the node according to a Poisson arrival process with intensity λj (j = 1, 2). State [x1, x2] is defined as a state with x1 chain-1 customers and x2 chain-2 customers. By the notation Σ^N_{j=1} Mj/Mj/1 we indicate that we have N different PCT-1 arrival processes (chains) with individual values of arrival rates and mean service times. In the following we use index i for the state space and index j for the service (traffic stream).

If the number of servers were infinite, then we would get the state transition diagram shown in Fig. 11.2, and the state probabilities would be given by (7.13). However, the capacity is limited to one server, so somehow we have to reduce the service rates in all states where more than one server is requested.
11.1.1 Reduction factors for single-server system
So far we have only one server (n = 1), which is shared by all customers. In state (x1, x2) we reduce the service rate of chain-1 customers by a factor g1(x1, x2), so that the customer
(Diagram omitted: states (x1, x2) with arrival transitions λ1, λ2 and departure transitions x1 µ1, x2 µ2.)

Figure 11.2: State transition diagram for the system in Fig. 11.1 with two classes (chains) of customers and an infinite number of servers (cf. Chap. 7).
(Diagram omitted: as Fig. 11.2, but with departure transitions g1(x1, x2) · x1 µ1 and g2(x1, x2) · x2 µ2.)

Figure 11.3: State transition diagram for the system in Fig. 11.1 with two types (chains) of customers and a single server. In state (x1, x2) the requested service rate xj · µj for type j is reduced by a factor gj(x1, x2) for chain j (j = 1, 2) as compared with Fig. 11.2. As, for example, g2(x1 − 1, x2) and g2(x1, x2) are different, the system does not have product form.
298 CHAPTER 11. MULTI-SERVICE QUEUEING SYSTEMS
does not get x1 servers, but only x1 · g1(x1, x2) servers, which in general will be a non-integer number of servers. Thus the service rate is reduced from x1 · µ1 to g1(x1, x2) · x1 · µ1. In a similar way, the service rate of chain-2 customers is reduced by a factor g2(x1, x2). The result is shown in Fig. 11.3. The aim is to obtain a reversible multi-dimensional system. For x1 + x2 ≤ n the system is similar to the models in Chap. 7. For x1 + x2 ≥ n we construct a reversible system using all n servers. The reduction factors gj(x1, x2) can be specified for various parts of the state transition diagram as follows.
1. Non-feasible states: x1 < 0 and/or x2 < 0 :
gj(x1, x2) = 0 , j = 1, 2. (11.3)
The reduction factors are undefined for these states, which have probability zero. By choosing the value zero, the recursion formulæ derived below (11.9, 11.10) are correctly initiated.
2. States with demand less than capacity: xj ≥ 0, j = 1, 2 and 0 < x1 + x2 ≤ 1 :
gj(x1, x2) = 1 ,   j = 1, 2.   (11.4)

Every call gets the capacity required, and there is no reduction of service rates.
3. States with only one service:
x2 = 0 and x1 ≥ 1 : g1(x1, 0) = 1/x1 , x1 ≥ 1 , (11.5)
x1 = 0 and x2 ≥ 1 : g2(0, x2) = 1/x2 , x2 ≥ 1 . (11.6)
Along the axes we have a classical M/M/1 system with only one type of customers, and we assume the customers share the capacity equally, as they all are identical. The state transition diagram is as for M/M/1–PS (PS = Processor Sharing).
4. States with demand bigger than capacity: xj > 0, j = 1, 2 and x1 + x2 > 1.
These are states where both types of customers together require more servers than are available. If possible, we want to choose gj(x1, x2) so that:
• Flow balance: The state transition diagram is constructed to be reversible. We consider four states, including (x1, x2) and the neighboring states (Fig. 11.3):
(x1 − 1, x2 − 1), (x1, x2 − 1), (x1, x2), (x1 − 1, x2) .
By applying the Kolmogorov cycle requirement for reversibility (Sec. 7.2), we get, after canceling out the arrival and service rates (Fig. 11.3):
g2(x1, x2) · g1(x1, x2 − 1) = g1(x1, x2) · g2(x1 − 1, x2) . (11.7)
• Normalization: All capacity is used. This requirement implies for n = 1 server:
x1 · g1(x1, x2) + x2 · g2(x1, x2) = 1 , x1 + x2 ≥ 1 , (11.8)
In state [x1, x2] we would like to use x1 + x2 servers, but this is reduced to one server by the reduction factors (11.8).
We have two independent equations (11.7) and (11.8) with two unknown reduction factors. Assume that we know the reduction factors g1(x1, x2 − 1) and g2(x1 − 1, x2); then we are able to find a unique solution for the reduction factors g1(x1, x2) and g2(x1, x2) (Fig. 11.3). Solving the equations, we get:
g1(x1, x2) = g1(x1, x2 − 1) / (x1 · g1(x1, x2 − 1) + x2 · g2(x1 − 1, x2))
           = 1 / (x1 + x2 · g2(x1 − 1, x2)/g1(x1, x2 − 1)) ,   (11.9)

g2(x1, x2) = g2(x1 − 1, x2) / (x1 · g1(x1, x2 − 1) + x2 · g2(x1 − 1, x2))
           = 1 / (x1 · g1(x1, x2 − 1)/g2(x1 − 1, x2) + x2) .   (11.10)
From the initial values specified above (11.3–11.6), we may by these recursion formulæ calculate all reduction factors. From g1(1, 0) and g2(0, 1) we calculate g1(1, 1) and g2(1, 1). Then we may calculate g1(2, 1) and g2(2, 1), and in this way we horizontally calculate all reduction factors g1(x1, 1) and g2(x1, 1). From these we may then calculate all reduction factors g1(x1, 2) and g2(x1, 2), and so on. Alternatively, we may use the recursion vertically or diagonally. We notice that the reduction factors are independent of the traffic parameters.
Using the known initial values we find a simple unique solution:

gj(x1, x2) = 1 / (x1 + x2) ,   x1 + x2 ≥ 1 ,   j = 1, 2 .   (11.11)
Thus the two chains (services) are reduced by the same factor, and all customers share the capacity equally. The reversible state transition diagram is shown in Fig. 11.4.
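The recursion is easy to program. The following sketch (our own, not from the book; the memoised function `g` is a hypothetical helper) computes the reduction factors from the initial values (11.3)–(11.6) via the recursion (11.9)–(11.10), and confirms numerically that they agree with the closed form (11.11):

```python
# Sketch: reduction factors for n = 1 server via the recursion (11.9)-(11.10),
# checked against the closed form g_j(x1, x2) = 1/(x1 + x2), (11.11).
from functools import lru_cache

@lru_cache(maxsize=None)
def g(j, x1, x2):
    """Reduction factor g_j(x1, x2) for chain j in {1, 2}."""
    if x1 < 0 or x2 < 0:              # non-feasible states, (11.3)
        return 0.0
    if x1 + x2 <= 1:                  # demand does not exceed capacity, (11.4)
        return 1.0
    if x2 == 0:                       # only chain-1 customers present, (11.5)
        return 1.0 / x1
    if x1 == 0:                       # only chain-2 customers present, (11.6)
        return 1.0 / x2
    denom = x1 * g(1, x1, x2 - 1) + x2 * g(2, x1 - 1, x2)
    prev = g(1, x1, x2 - 1) if j == 1 else g(2, x1 - 1, x2)
    return prev / denom               # recursion (11.9) / (11.10)

# Verify the closed form (11.11) on interior states:
for x1 in range(1, 8):
    for x2 in range(1, 8):
        for j in (1, 2):
            assert abs(g(j, x1, x2) - 1.0 / (x1 + x2)) < 1e-12
```

As the text notes, the factors are independent of the traffic parameters: no arrival or service rates appear in the computation.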
It is easy to extend the above derivation of reduction factors to a system with N traffic streams. The state of the system is given by:
x = (x1, x2, . . . , xj−1, xj, xj+1, . . . , xN) . (11.12)
where xj denotes the number of channels occupied by stream j, which for single-rate traffic is equal to the number of connections. For states with Σ^N_{j=1} xj > n, xj ≥ 0, we get for n = 1 server a simple unique expression for the reduction factors, which is a generalization of (11.11):

gj(x) = 1 / Σ^N_{j=1} xj ,   j = 1, 2, . . . , N .   (11.13)
Thus all customers share the single server equally. In the following section we show that this unique state transition diagram can be interpreted as corresponding to various queueing strategies.
(Diagram omitted: as Fig. 11.3 with gj(x1, x2) = 1/(x1 + x2), i.e. departure transitions x1 µ1/(x1 + x2) and x2 µ2/(x1 + x2).)

Figure 11.4: State transition diagram for a multi-dimensional single-server system which is reversible. The system does not have product form.
11.1.2 Single-server Processor Sharing (PS) system
The above result corresponds to a Processor Sharing (PS, Sec. 10.7) system. All (x1 + x2) customers share the server equally, and the capacity of the system is constant (one server). The total service rate µ_{x1,x2} in state [x1, x2] becomes (Fig. 11.4):

µ_{x1,x2} = (x1 · µ1)/(x1 + x2) + (x2 · µ2)/(x1 + x2) = (x1 · µ1 + x2 · µ2)/(x1 + x2) .   (11.14)
The total service rate is state-dependent when the classes of customers have different service rates. The number of customers served per time unit depends on the mix of customers currently being served. For a system with N traffic streams the total service intensity in state x is:

µ_x = (Σ^N_{j=1} xj · µj) / (Σ^N_{j=1} xj) .   (11.15)
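As a small illustration (a sketch of ours, not the book's code), the state-dependent rate above is simply a customer-weighted average of the per-stream service rates:

```python
# Sketch: state-dependent total service rate (11.15) for a PS single server.
# x[j] is the number of stream-j customers, mu[j] the stream-j service rate.

def total_service_rate(x, mu):
    customers = sum(x)
    if customers == 0:
        return 0.0                    # empty system: no service in progress
    return sum(xj * mj for xj, mj in zip(x, mu)) / customers

# Example: two stream-1 customers (mu1 = 1.0) and one stream-2 customer
# (mu2 = 4.0) give (2*1.0 + 1*4.0)/3 = 2.0 departures per time unit.
```

With equal service rates the state dependence disappears, which is consistent with the one-dimensional M/M/1–PS model of Sec. 10.7.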
This model is reversible and valid for arbitrary individual service time distributions, and the system is insensitive to the service time distributions. This property is called the "magic property" of processor sharing and was originally dealt with by Kleinrock (1964 [73]). In Sec. 10.7 we had only one type of customers and a one-dimensional state transition diagram. Now we have N types of customers, and to define the state of the system in a unique way we need an N-dimensional state transition diagram.
11.1. REVERSIBLE MULTI-CHAIN SINGLE-SERVER SYSTEMS 301
Theorem 11.1 The Σj Mj/Gj/1–PS single-server system with processor sharing (PS) is reversible and insensitive to the service time distributions, and each class may have an individual mean service time.
11.1.3 Non-sharing single-server system
Let us assume that the server is occupied by one customer at a time, i.e. there is no sharing of the capacity. Then, for Poisson arrival processes and classical queueing systems with disciplines such as FCFS, LCFS, and SIRO, the customer being served in state x, i.e. the next customer departing, will be a random one of the x customers in the system.
From the state transition diagram for two services (Fig. 11.4) we see that the customer being served is of type one, respectively type two, with the following probabilities:
p_{type-1 served} = (x1 µ1/(x1 + x2)) / (x1 µ1/(x1 + x2) + x2 µ2/(x1 + x2)) = x1 µ1 / (x1 µ1 + x2 µ2) ,   (11.16)

p_{type-2 served} = (x2 µ2/(x1 + x2)) / (x1 µ1/(x1 + x2) + x2 µ2/(x1 + x2)) = x2 µ2 / (x1 µ1 + x2 µ2) .   (11.17)
We see that this is only a random one of the x1 + x2 customers when µ1 = µ2. Thus the two classes must have the same mean service time for the state transition diagram to describe an M/M/1 non-sharing system. In all other cases (µ1 ≠ µ2), the customer being served will not be a random one among the (x1 + x2) customers in the system. It is also obvious that the system is only reversible when the service times are exponentially distributed, as the inter-departure time during saturation periods will be equal to the service time distribution. This interpretation corresponds to a classical M/M/1 system with total arrival rate λ = λ1 + λ2 and mean service time µ⁻¹ = µ1⁻¹ = µ2⁻¹.
By superposition of Poisson processes it is obvious that this is also valid for N traffic streams. We thus have:

Theorem 11.2 The non-sharing Σj Mj/M/1 system (FCFS, LCFS, SIRO) is only reversible if all customers have the same exponentially distributed service time with the same mean service time.
11.1.4 Single-server LCFS-PR system
The state transition diagram in Fig. 11.4 can also be interpreted as the state transition diagram of a Σj Mj/Gj/1–LCFS–PR (preemptive resume, non-sharing) system. It is obvious that this system is reversible, because the process follows exactly the same path in the state transition diagram away from state zero due to arriving customers as back to state zero due to departing customers. Thus we always have local balance. The latest arriving customer in state (x1, x2) belongs with probability xj/(x1 + x2) to class j (j = 1, 2). This is valid for any number N of services.
Theorem 11.3 The Σj Mj/Gj/1–LCFS-PR single-server system with LCFS-PR (preemptive resume) is reversible and insensitive to the service time distributions, and the services may have individual mean service times.
11.1.5 Summary for reversible single-server systems
The multi-dimensional state transition diagram for single-server systems with N services can be interpreted in the same way as for two services. In conclusion, for a single-server queueing system with N classes of customers to be reversible, the state transition diagram must be as shown in Fig. 11.4 in N dimensions. For this diagram we have the following interpretations:
• Σ^N_{j=1} Mj/Gj/1–PS,

• Σ^N_{j=1} Mj/M/1 non-sharing with the same exponential service time for all customers, or

• Σ^N_{j=1} Mj/Gj/1–LCFS–PR (non-sharing).
These systems are also called symmetric queueing systems. Reversibility implies that the departure processes of all classes are identical with the arrival processes. In principle we may introduce new interpretations. Due to reversibility, the departure process of each chain will be of the same type as the arrival process, i.e. a Poisson process. This is of course also valid for a system with one type of customers, as we may split the Poisson arrival process up into more Poisson arrival processes and thus get a reversible multi-dimensional system.
11.1.6 State probabilities for the multi-service single-server system
All three single-server systems mentioned above are interpretations of the same state transition diagram and thus have the same state probabilities and mean performance measures. Part of the state transition diagram for two services is given in Fig. 11.4. The diagram is reversible, since the flow clockwise equals the flow counter-clockwise. Hence, there is local balance. All state probabilities can be expressed by state zero. For two services we find:
p(x1, x2) = p(0, 0) · (A1^x1 / x1!) · (A2^x2 / x2!) · (x1 + x2)!   (11.18)
In comparison with the multi-dimensional Erlang–B formula (7.10) we now have the additional factor (x1 + x2)!. The product form between classes is lost, because the state probability cannot be written as the product of state probabilities of two independent systems:
p(x1, x2) 6= p1(x1) · p2(x2) .
This absence of product form will later complicate the evaluation of queueing networks, as the state space of a node becomes very large and cannot be aggregated. We find p(0, 0) by normalization:

Σ_{x1=0}^{∞} Σ_{x2=0}^{∞} p(x1, x2) = 1 .
Using the binomial expansion we find the aggregated state probabilities:

p(x1 + x2 = x) = p(0, 0) · (A1 + A2)^x    (11.19)
             = (1 − A) · A^x ,            (11.20)

where A = A1 + A2. The state probability p(0, 0) = 1 − A is obtained explicitly without need of normalization. This is identical with the state probabilities of M/M/1 with offered traffic A = A1 + A2 (9.30).
If there are N different traffic streams, the state probabilities become:

p(x1, x2, . . . , xN) = p(0) · (A1^x1 / x1!) · (A2^x2 / x2!) · · · (AN^xN / xN!) · (x1 + x2 + . . . + xN)!    (11.21)

p(x) = p(0) · { Π_{j=1}^{N} Aj^xj } · ( Σ_{j=1}^{N} xj )! / Π_{j=1}^{N} xj! ,    (11.22)
where p(x) = p(x1, x2, . . . , xN). This can be expressed by the polynomial distribution (2.94), using the multinomial coefficient:

p(x) = p(0) · { Π_{j=1}^{N} Aj^xj } · ( x1 + x2 + · · · + xN  over  x1, x2, . . . , xN ) .    (11.23)
For an unlimited number of queueing positions the global state probabilities of the total number of customers become:

p(x) = p{x1 + x2 + · · · + xN = x} .

By the polynomial expansion we observe that the state probabilities are identical with the state probabilities of the M/M/1 system:

p(x) = p(0) · (A1 + A2 + · · · + AN)^x    (11.24)
     = (1 − A) · A^x ,                   (11.25)

where A = A1 + A2 + · · · + AN.
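As a quick numerical check, the following Python sketch (with two hypothetical offered traffics A1 and A2) verifies that the detailed state probabilities (11.18), normalized by p(0, 0) = 1 − A, aggregate by the binomial theorem to the geometric M/M/1 distribution (11.25):

```python
from math import factorial

# Hypothetical parameters: two Poisson streams offered to one PS server.
A1, A2 = 0.3, 0.4          # offered traffic per stream (erlang)
A = A1 + A2                # total offered traffic; must be < 1

def p_detailed(x1, x2):
    """Detailed state probability (11.18) with p(0,0) = 1 - A."""
    return (1 - A) * (A1**x1 / factorial(x1)) * (A2**x2 / factorial(x2)) \
           * factorial(x1 + x2)

# Aggregate over all states with x1 + x2 = x and compare with (11.25).
for x in range(10):
    agg = sum(p_detailed(x1, x - x1) for x1 in range(x + 1))
    geo = (1 - A) * A**x
    assert abs(agg - geo) < 1e-12
```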
304 CHAPTER 11. MULTI-SERVICE QUEUEING SYSTEMS
11.1.7 Generalized algorithm for state probabilities
To evaluate the global state probabilities we may use the following trivial algorithm for N traffic classes. As p(0) = 1 − A is known explicitly, the recursion yields the absolute state probabilities directly:
p(x) =
  0 ,                  x < 0 ,
  1 − A ,              x = 0 ,
  Σ_{j=1}^{N} pj(x) ,  x = 1, 2, . . . ,    (11.26)

where

pj(x) =
  0 ,               x < 1 ,
  Aj · p(x − 1) ,   x = 1, 2, . . .    (11.27)
In this case the normalization of the global states is very simple, as we have p(0) = 1 − A.
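The algorithm (11.26)–(11.27) is straightforward to program. A minimal Python sketch, assuming hypothetical offered traffics and truncating the infinite state space at X, is:

```python
# Sketch of the recursion (11.26)-(11.27); traffic values are hypothetical.
A = [0.2, 0.3, 0.1]                # A_j per stream; sum(A) must be < 1
A_tot = sum(A)
X = 20                             # truncation of the infinite state space

p = [0.0] * (X + 1)                # global state probabilities p(x)
pj = [[0.0] * (X + 1) for _ in A]  # per-stream contributions p_j(x)

p[0] = 1 - A_tot                   # (11.26): p(0) = 1 - A
for x in range(1, X + 1):
    for j, Aj in enumerate(A):
        pj[j][x] = Aj * p[x - 1]                 # (11.27)
    p[x] = sum(pj[j][x] for j in range(len(A)))  # (11.26)

# The global distribution is geometric: p(x) = (1 - A) A^x.
assert abs(p[5] - (1 - A_tot) * A_tot**5) < 1e-12
```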
11.1.8 Performance measures
The mean queue length L, which includes all customers in the system (partly being served, partly waiting), becomes as for M/M/1. This is a geometric distribution (11.25) with the mean value:

L = A / (1 − A) ,
where the total offered traffic is A = A1 + A2 + · · · + AN. In state x the average number of class-j calls is x · pj(x). The mean queue length for stream j (including all customers) becomes:

Lj = Σ_{x=0}^{∞} x · pj(x) = (Aj / A) · L ,   or   Lj / Aj = L / A ,   where   L = Σ_{j=1}^{N} Lj .    (11.28)
The mean sojourn time for type j customers becomes by Little's law:

Wj = Lj / λj = sj · (L / A) ,   or   Wj / sj = L / A .    (11.29)
The mean queue length L includes both waiting and served customers. As the carried traffic equals the offered traffic, the mean number of waiting customers is L − A, and for stream j it is Lj − Aj; similarly, the mean waiting times are W − s and Wj − sj. Subtracting one on both sides of (11.28) and (11.29) we get:

(Lj − Aj) / Aj = (Wj − sj) / sj = (L − A) / A = (W − s) / s = constant.    (11.30)
These differences correspond to the usual definition of queue length and waiting time in non-sharing queueing systems. For a given stream the mean waiting time is proportional to the mean service time of that stream. This is the most important property of processor sharing.
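As a small worked example (with hypothetical traffic values), the following sketch evaluates (11.28)–(11.29) and confirms the proportionality (11.30): the mean sojourn time of each stream is proportional to its mean service time.

```python
# Numeric illustration of (11.28)-(11.30); all parameter values hypothetical.
A1, A2 = 0.25, 0.35               # offered traffic per stream (erlang)
s1, s2 = 1.0, 4.0                 # mean service times s_j
lam1, lam2 = A1 / s1, A2 / s2     # arrival rates, since A_j = lam_j * s_j

A = A1 + A2
L = A / (1 - A)                   # total mean queue length (M/M/1 form)
L1, L2 = A1 / A * L, A2 / A * L   # (11.28)
W1, W2 = L1 / lam1, L2 / lam2     # Little's law (11.29)

# Mean sojourn time is proportional to mean service time:
assert abs(W1 / s1 - L / A) < 1e-12
assert abs(W2 / s2 - L / A) < 1e-12
```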
11.2 Reversible multi-chain & server systems
We now consider a system with n servers and an infinite queue. Each customer requests only one server (BBU, channel) to be served. The state of the system is defined by
x = (x1, x2, . . . , xj, . . . , xN) ,
where xj is the number of type j customers in the system. Customers of type j arrive according to a Poisson process with intensity λj, and the service time is exponentially distributed with intensity µj (mean value 1/µj) (j = 1, 2, . . . , N). If the number of servers were infinite, we would get the state transition diagram shown in Fig. 11.2. However, the capacity is limited to n servers, so we have to reduce the service rates in all states requiring more than n servers (overload). In the following we deal with the general case of N services. The principles are the same as for the above single-server system.
11.2.1 Reduction factors for multi-server systems
The service rate in state x = (x1, x2, . . . , xj, . . . , xN) is for type j customers reduced by a factor gj(x). The reduction factors gj(x) are chosen so that we maintain reversibility and utilize all the capacity when needed. They can be specified for the various parts of the state transition diagram as follows.
1. Non-feasible states: xj < 0 for at least one value j ∈ {1, 2, . . . , N}:

gj(x) = 0 , j = 1, 2, . . . , N .    (11.31)

The reduction factors are undefined for these states, which have probability zero. By choosing the value zero, the recursion formula (11.35) derived below is initiated in a correct way.
2. States with demand less than or equal to capacity: xj ≥ 0 ∀ j and 0 ≤ Σ_{j=1}^{N} xj ≤ n:

gj(x) = 1 , j = 1, 2, . . . , N .    (11.32)

Every call gets the capacity required, and there is no reduction of the requested service rate.
3. States with only one type of customers: xi = 0 ∀ i ≠ j and xj ≥ n:

gj(x) = n / xj .    (11.33)

Along the axes we have a classical M/M/n system with only one type of service, and we assume that the calls share the capacity equally, as they are all identical.
4. States with more types of customers, in total requiring more than n channels: xj ≥ 0 ∀ j and x = Σ_{j=1}^{N} xj > n:

• Flow balance: the state transition diagram is required to be reversible. We consider four neighboring states forming a square below state (x1, . . . , xj, . . . , xk, . . . , xN), keeping the number of connections constant except for services j and k (Fig. 11.3):

(x1, . . . , xj − 1, . . . , xk, . . . , xN)          (x1, . . . , xj, . . . , xk, . . . , xN)
(x1, . . . , xj − 1, . . . , xk − 1, . . . , xN)      (x1, . . . , xj, . . . , xk − 1, . . . , xN)

By applying the Kolmogorov cycle requirement for reversibility (Sec. 7.2) to any pair of services we get after reduction (Fig. 11.3):

A necessary and sufficient condition for reversibility (Kingman, 1969) is that all two-dimensional flow paths are in equilibrium. In total we may choose j and k in

( N over 2 ) = N (N − 1) / 2

ways, and for each pair we have a balance equation. We assume that we know the reduction factors for the states x − 1j below state x, where

x − 1j = (x1, x2, . . . , xj−1, xj − 1, xj+1, . . . , xN) .

To find the N reduction factors in state x = (x1, x2, . . . , xN) we need N independent equations. We may choose the Kolmogorov cycles for the two-dimensional planes {1, j} (j = 2, 3, . . . , N), and this gives us N − 1 independent equations. We get the following flow balance equations for j = 1, 2, . . . , N:

g1(x) · gj(x − 11) = gj(x) · g1(x − 1j) ,
or

gj(x) = g1(x) · g1,j(x) ,    (11.34)

where

g1,j(x) = gj(x − 11) / g1(x − 1j) .

We notice that g1,1(x) = 1.
• Normalization: We obtain one more equation by requiring that the total capacity n is used:

n = Σ_{j=1}^{N} xj · gj(x) = Σ_{j=1}^{N} xj · g1(x) · g1,j(x) .

From this we get g1(x), and from (11.34) we then find all the other reduction factors in state x:

g1(x) = n / Σ_{j=1}^{N} xj · g1,j(x) ,
gj(x) = g1(x) · g1,j(x) ,   j = 2, 3, . . . , N .    (11.35)
We know the reduction factors for all states x up to and including global state n, i.e. states where x = Σ_{i=1}^{N} xi ≤ n. We also know all reduction factors for states where only one type is present. We can then recursively calculate all other reduction factors. Knowing the reduction factors, we find the relative state probabilities and finally, by global normalization, the detailed state probabilities.
For two traffic streams and a single server we of course get the reduction factors given in (11.9) and (11.10). As seen above, the reduction factors are independent of the traffic processes, and the approach includes Engset traffic, Pascal traffic, and any state-dependent Poisson arrival process.
Using the above initialization values it can easily be shown that we get the following unique solution:

gj(x) =
  1 ,      0 ≤ x ≤ n ,
  n / x ,  x ≥ n ,    (11.36)

where x = Σ_{j=1}^{N} xj , xj ≥ 0 .
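The recursion (11.34)–(11.35) can be checked numerically. The following Python sketch, for a hypothetical two-stream system, computes the reduction factors recursively with exact rational arithmetic and confirms the unique solution (11.36) in the interior of the state space:

```python
from fractions import Fraction

n = 3                                  # number of servers (hypothetical)
X = 8                                  # explore states with x1 + x2 <= X
g = {}                                 # g[(x1, x2)] = (g1, g2)

for x in range(X + 1):
    for x1 in range(x + 1):
        x2 = x - x1
        if x <= n:
            g[(x1, x2)] = (Fraction(1), Fraction(1))       # (11.32)
        elif x2 == 0:
            g[(x1, x2)] = (Fraction(n, x1), Fraction(0))   # (11.33)
        elif x1 == 0:
            g[(x1, x2)] = (Fraction(0), Fraction(n, x2))
        else:
            # (11.34): g_{1,2}(x) = g2(x - 1_1) / g1(x - 1_2)
            g12 = g[(x1 - 1, x2)][1] / g[(x1, x2 - 1)][0]
            g1 = Fraction(n) / (x1 + x2 * g12)             # (11.35)
            g[(x1, x2)] = (g1, g1 * g12)

# The unique solution (11.36): g_j(x) = n / (x1 + x2) above saturation.
for (x1, x2), (g1, g2) in g.items():
    if x1 + x2 > n and x1 > 0 and x2 > 0:
        assert g1 == g2 == Fraction(n, x1 + x2)
```

Exact rationals avoid the rounding noise that a floating-point version of the same recursion would accumulate.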
Thus during overload all customers are reduced by the same factor, and the customers share the capacity equally. In Fig. 11.5 we consider a multi-server queueing system with N = 2 traffic streams (chains). We notice that the diagram is reversible. In the following sections we show that this unique state transition diagram may be interpreted as corresponding to different strategies.
11.2.2 Generalized processor sharing (GPS) system
The state transition diagram in Fig. 11.5 can be interpreted as follows. In states (x1, x2) below saturation (x1 + x2 ≤ n) every user occupies one server. Above saturation all users share the available capacity equally. The state transition diagram in Fig. 11.5 is reversible. It is insensitive to the service time distribution, and each service may have an individual mean service time. This model is called the GPS (Generalized Processor Sharing) model. In a state x1 + x2 > n, traffic stream one requests a total service rate x1 · µ1, and traffic stream two requests a service rate x2 · µ2, but the service rates of both streams are reduced by the same factor n/(x1 + x2).
Theorem 11.4 The Σ_{j=1}^{N} Mj/Gj/n–GPS multi-server system with generalized processor sharing (GPS) is reversible and insensitive to the service time distributions, and each class may have an individual mean service time.
11.2.3 Non-sharing multi-chain & server system
We consider M/M/n non-sharing systems. A customer being served always occupies exactly one server; a customer is either waiting or being served. To maintain reversibility for x1 + x2 > n we have to require that all services have the same mean service time 1/µj = 1/µ, which furthermore must be exponentially distributed. Otherwise, the next departing customer would not be a random one among the customers in the system (Fig. 11.5). The proof is the same as for the single-server case in Sec. 11.1.3. This corresponds to an M/M/n system with total arrival rate λ = Σj λj and service rate µ. The state probabilities are given by (9.2) and (9.4), and the state transition diagram is reversible. The system M/M/∞ may be considered a special case of M/M/n, and this has already been dealt with in connection with the classical waiting systems (Chap. 9).
Theorem 11.5 The Σ_{j=1}^{N} Mj/M/n system (FCFS, LCFS, SIRO) is only reversible if all customers have the same mean service time, and this service time must be exponentially distributed.
11.2.4 Symmetric queueing systems
For multiple servers the non-sharing system Σ_{j=1}^{N} Mj/Gj/n–LCFS–PR will in general not be reversible, because the last arriving customer may not be the first to finish service, as several servers work in parallel. If all streams have the same mean holding time, this system is included in Theorem 11.5. Otherwise, it is only reversible for single-server systems (Sec. 11.1.4).
In conclusion, multi-server queueing systems with several classes of customers will only be reversible when the system is one of the following queueing systems:

• Σ_{j=1}^{N} Mj/Gj/n–GPS, which includes Σ_{j=1}^{N} Mj/Gj/1–PS,

• Σ_{j=1}^{N} Mj/M/n non-sharing with the same exponential service time for all customers, which includes the single-server system,

• Σ_{j=1}^{N} Mj/Gj/1–LCFS–PR, which is only valid for single-server systems.

These systems are all reversible, and they are also called symmetric queueing systems. Reversibility implies that the departure processes of all classes are Poisson processes like the arrival processes. For the classical non-sharing M/M/n system we have a reversible system which is not insensitive.
11.2.5 State probabilities
For a node with N services and n servers we exploit local balance and get the following detailed state probabilities:

p(x1, x2, . . . , xN) / p(0, 0, . . . , 0) =

  (A1^x1 / x1!) · (A2^x2 / x2!) · · · (AN^xN / xN!) ,    Σ_{j=1}^{N} xj ≤ n ,

  (A1^x1 / x1!) · (A2^x2 / x2!) · · · (AN^xN / xN!) · (x1 + x2 + . . . + xN)! / ( n! · n^{(x1 + x2 + ... + xN) − n} ) ,    Σ_{j=1}^{N} xj ≥ n .    (11.37)
State probability p(0, 0, . . . , 0) is obtained by normalization. For n = 1 we of course get (11.22). We let:

p(x) = Σ_{Σ xi = x} p(x1, x2, . . . , xj, . . . , xN) .
By the multinomial theorem (2.96) we get (9.2):

p(x) / p(0) =
  A^x / x! ,               0 ≤ x ≤ n ,
  A^x / (n! · n^{x−n}) ,   x ≥ n .    (11.38)
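As a numerical illustration (with hypothetical parameters), the following sketch verifies that the detailed state probabilities (11.37) aggregate to the global M/M/n probabilities (11.38):

```python
from math import factorial

n = 4                       # servers (hypothetical)
A1, A2 = 1.0, 1.5           # offered traffic per class; A = A1 + A2 < n
A = A1 + A2

def q_detailed(x1, x2):
    """Relative state probability (11.37), with q(0, 0) = 1."""
    q = (A1**x1 / factorial(x1)) * (A2**x2 / factorial(x2))
    x = x1 + x2
    if x > n:               # above saturation: extra factor x!/(n! n^(x-n))
        q *= factorial(x) / (factorial(n) * n**(x - n))
    return q

def q_global(x):
    """Relative global probability (11.38) of x customers, as in M/M/n."""
    if x <= n:
        return A**x / factorial(x)
    return A**x / (factorial(n) * n**(x - n))

for x in range(12):
    agg = sum(q_detailed(x1, x - x1) for x1 in range(x + 1))
    assert abs(agg - q_global(x)) < 1e-9
```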
Figure 11.5: State transition diagram for a reversible multi-dimensional Σj Mj/Mj/n system. The detailed states shown correspond to global states below and above global state n.
11.2.6 Generalized algorithm for state probabilities
We now consider a system with n servers and N traffic streams. The global relative state probabilities are obtained by the recursion:
q(x) =
  0 ,                  x < 0 ,
  1 ,                  x = 0 ,
  Σ_{j=1}^{N} qj(x) ,  x = 1, 2, . . . , ∞ ,    (11.39)

where

qj(x) =
  (Aj / x) · q(x − 1) ,  x ≤ n ,
  (Aj / n) · q(x − 1) ,  x > n .    (11.40)
Here qj(x) is the contribution of stream j to global state x:

qj(x) = Σ_{Σ xi = x} (xj / x) · q(x1, x2, . . . , xj, . . . , xN) .    (11.41)

The normalization constant is obtained from:

Q = Σ_{i=0}^{∞} q(i) = Σ_{i=0}^{∞} Σ_{j=1}^{N} qj(i) .    (11.42)
By normalizing all relative state probabilities qj(x) and q(x) by Q we get the true state probabilities pj(x) and p(x). To get a numerically robust algorithm, the normalization should be carried out in each step (each increase of x) as described in Sec. 4.4.1. The algorithm is a modification of the generalized algorithm in Sec. 7.6.2 for single-slot Poisson traffic in loss systems.
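A minimal Python sketch of the recursion (11.39)–(11.40) is given below. For brevity the normalization is carried out once at the end rather than in each step, which is adequate for small examples; for large systems the per-step normalization of Sec. 4.4.1 should be preferred. The traffic values are hypothetical:

```python
n = 3
A = [0.5, 0.8, 0.2]       # hypothetical offered traffic per stream; sum < n
A_tot = sum(A)
X = 40                    # truncation of the infinite queue

q = [0.0] * (X + 1)
qj = [[0.0] * (X + 1) for _ in A]
q[0] = 1.0                                     # (11.39)
for x in range(1, X + 1):
    for j, Aj in enumerate(A):
        qj[j][x] = Aj / min(x, n) * q[x - 1]   # (11.40)
    q[x] = sum(qj[j][x] for j in range(len(A)))

Q = sum(q)                                     # (11.42), truncated tail
p = [v / Q for v in q]

# Above state n the distribution decays geometrically with ratio A/n:
assert abs(p[n + 5] / p[n + 4] - A_tot / n) < 1e-12
```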
11.2.7 Performance measures
These are derived in the same way as for the single-server system in Sec. 11.1.8. The total mean queue length L, which includes all customers in the system (partly being served, partly waiting), becomes as for M/M/n:

L = Σ_{x=0}^{∞} x · p(x) .
In state x the average number of class-j calls is x · pj(x). The mean queue length for stream j (including all customers) becomes:

Lj = Σ_{x=0}^{∞} x · pj(x) = (Aj / A) · L ,   or   Lj / Aj = L / A ,   where   L = Σ_{j=1}^{N} Lj .    (11.43)
The mean sojourn time for type j customers becomes by Little's law:

Wj = Lj / λj = sj · (L / A) ,   or   Wj / sj = L / A .    (11.44)
The mean queue length L includes both waiting and served customers. As the carried traffic equals the offered traffic, the mean number of waiting customers is L − A, and for stream j it is Lj − Aj; similarly, the mean waiting times are W − s and Wj − sj. Subtracting one on both sides of (11.43) and (11.44) we get:

(Lj − Aj) / Aj = (Wj − sj) / sj = (L − A) / A = (W − s) / s = constant.    (11.45)
These differences correspond to the usual definition of queue length and waiting time in non-sharing queueing systems. For a given stream the mean waiting time is proportional to the mean service time of that stream. This is the most important property of processor sharing.
11.3 Reversible multi-rate & chain & server systems
We now consider a queueing system with n servers which is offered N multi-rate traffic streams. Traffic stream j has constant arrival rate λj, service rate µj (mean service time 1/µj), and requires dj simultaneous channels for full service. If the demand is greater than the capacity, then the service rates are reduced by state-dependent reduction factors. Whereas the systems with single-rate traffic considered above were simple, this system is harder to evaluate because the reduction factors become much more complex.
11.3.1 Reduction factors
These are derived in a similar way as for single-slot traffic (Sec. 11.1.1 and 11.2.1). The service rate in state x = (x1, x2, . . . , xj, . . . , xN) is for type j customers reduced by a factor gj(x). The reduction factors gj(x) are chosen so that we maintain reversibility and utilize all the capacity when needed. They can be specified for the various parts of the state transition diagram as follows.
1. Non-feasible states: xj < 0 for at least one value j ∈ {1, 2, . . . , N}:

gj(x) = 0 , j = 1, 2, . . . , N .    (11.46)

The reduction factors are undefined for these states, which have probability zero. By choosing the value zero, the recursion formula derived below is initiated in a correct way.
2. States with demand less than or equal to capacity: xj ≥ 0 ∀ j and 0 ≤ Σ_{j=1}^{N} xj · dj ≤ n:

gj(x) = 1 , j = 1, 2, . . . , N .    (11.47)

Every call gets the capacity required, and there is no reduction of the requested service rate.
3. States with only one type of customers: xi = 0 ∀ i ≠ j and xj ≥ n:

gj(x) = n / xj , j = 1, 2, . . . , N .    (11.48)

Along the axes we have the classical M/M/n system with only one service, and we assume that the calls share the capacity equally, as they are all identical.
4. States with more types of customers, in total requiring more than n channels: xj ≥ 0 ∀ j and x = Σ_{j=1}^{N} xj · dj > n:

• Flow balance: the state transition diagram is required to be reversible. If we consider four neighboring states forming a square below state (. . . , xj, . . . , xk, . . .), keeping all dimensions except j and k constant (Fig. 11.6):

(x1, . . . , xj − dj, . . . , xk, . . . , xN)          (x1, . . . , xj, . . . , xk, . . . , xN)
(x1, . . . , xj − dj, . . . , xk − dk, . . . , xN)      (x1, . . . , xj, . . . , xk − dk, . . . , xN)

then by applying the Kolmogorov cycle requirement for reversibility (Sec. 7.2) to any pair of services we get after reduction (Fig. 11.6):

A necessary and sufficient condition for reversibility (Kingman, 1969) is that all two-dimensional flow paths are in equilibrium. In total we may choose

( N over 2 ) = N (N − 1) / 2

different cycles and thus different balance equations.

We assume that we know the reduction factors for the states x − dj below state x. To find the N reduction factors in state x = (x1, x2, . . . , xN) we need N independent equations. Thus we may choose the Kolmogorov cycles for the two-dimensional planes {1, j} (j = 2, 3, . . . , N), which yields N − 1 independent equations. We furthermore have the normalization equation requiring that the total capacity used is n. We get the following flow balance equations for j = 1, 2, . . . , N:

g1(x) · gj(x − d1) = gj(x) · g1(x − dj) ,

or

gj(x) = g1(x) · g1,j(x) ,    (11.49)

where

g1,j(x) = gj(x − d1) / g1(x − dj) .

We notice that g1,1(x) = 1.
• Normalization: The capacity normalization equation is:

n = Σ_{j=1}^{N} xj · gj(x) = Σ_{j=1}^{N} xj · g1(x) · g1,j(x) .

From this we get g1(x), and from (11.49) we then find all the other reduction factors in state x:

g1(x) = n / Σ_{j=1}^{N} xj · g1,j(x) ,
gj(x) = g1(x) · g1,j(x) ,   j = 2, 3, . . . , N .
As we know all reduction factors up to global state n, where x = Σ_{i=1}^{N} xi · di ≤ n, and all reduction factors for states where only one service is active, we can recursively calculate all reduction factors. This is equivalent to calculating the relative state probabilities and then, by global normalization, the detailed state probabilities.
For two traffic streams and single-slot traffic we get the reduction factors given in (11.9) and (11.10). As mentioned above, the reduction factors are independent of the traffic processes, and the approach includes Engset traffic, Pascal traffic, and any state-dependent Poisson arrival process. We also notice that the reduction factors are independent of the traffic parameters. In Fig. 11.6 we consider a multi-server queueing system with N = 2 types of customers (chains).
Figure 11.6: State transition diagram for a system with two types (chains) of customers with multi-rate traffic and n servers. In state (x1, x2) the requested service rate xj · µj for type j is reduced by a factor gj(x1, x2). As, for example, g2(x1 − 1, x2) and g2(x1, x2) will be different, the system does not have product form.
11.3.2 Generalized algorithm for state probabilities
We now consider a multi-rate system with n servers and N traffic streams. The initialization values of qj(x) are qj(x) = 0 for x < dj. This is a simple general recursion formula covering all classical Markovian queueing models.
Recursion
316 CHAPTER 11. MULTI-SERVICE QUEUEING SYSTEMS

For states x we find the following, where we may replace q by p. If we know all (relative) probabilities of states 0, 1, . . . , x − 1, then we find the relative state probabilities for state x from:
\[ q_{j,y}(x) = \frac{1}{x} \cdot (d_j A_j) \cdot q(x-d_j) \]

\[ q_j(x) = \frac{1}{\min\{x,n\}} \left\{ \frac{d_j}{x} \cdot \lambda_j \cdot q(x-d_j) + \sum_{i=1}^{N} \frac{x-d_i}{x} \cdot \lambda_i \cdot q_j(x-d_i) \right\}, \qquad \lambda_j = d_j A_j \]

\[ q_{j,l}(x) = q_j(x) - q_{j,y}(x) \]

\[ q_y(x) = \sum_{j=1}^{N} q_{j,y}(x) \, , \qquad q_l(x) = \sum_{j=1}^{N} q_{j,l}(x) \]

\[ q(x) = \sum_{j=1}^{N} q_j(x) = q_y(x) + q_l(x) \]
The new absolute state probabilities are obtained by normalizing all previous state probabilities 0, 1, . . . , (x − 1) and the new relative state probabilities of state x by dividing by 1 + q(x).
Initialization poses no problem: for x ≤ n all customers are being served, so we may let qj(x) = qj,y(x); qj,y(x) is initialized as in the original system.
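As a sketch, the recursion above can be implemented directly. The following Python fragment is an illustration only (the function and variable names are chosen for this example, and we choose µj = 1 so that λj = dj · Aj as in the text); it computes the total relative state probabilities q(x) and normalizes once at the end, which for these relative values is equivalent to normalizing in each step.

```python
def relative_state_probabilities(n, x_max, d, A):
    """Sketch of the generalized recursion for a multi-rate queueing
    system with n servers and N streams.  Stream j demands d[j] slots
    and offers the traffic A[j]; choosing mu_j = 1 gives the arrival
    rate lambda_j = d[j] * A[j] used in the text."""
    N = len(d)
    lam = [d[j] * A[j] for j in range(N)]
    q = [0.0] * (x_max + 1)                       # total relative probabilities
    qj = [[0.0] * (x_max + 1) for _ in range(N)]  # type-j share of each state
    q[0] = 1.0
    for x in range(1, x_max + 1):
        for j in range(N):
            s = 0.0
            if x - d[j] >= 0:        # a new type-j call brings d[j] slots
                s += (d[j] / x) * lam[j] * q[x - d[j]]
            for i in range(N):       # type-j slots carried along by any arrival
                if x - d[i] >= 0:
                    s += ((x - d[i]) / x) * lam[i] * qj[j][x - d[i]]
            qj[j][x] = s / min(x, n)
        q[x] = sum(qj[j][x] for j in range(N))
    norm = sum(q)                    # normalize once at the end
    return [v / norm for v in q]
```

For a single stream with d1 = 1 the recursion reduces to that of M/M/n, which gives a simple way to check the sketch against Erlang's delay system.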
Proof of recursion
The above equation for qj(x) now becomes very simple. A draft proof is as follows. We rewrite the formula as:
\[ q_j(x) \cdot \min\{x, n\} = \frac{d_j}{x} \cdot (d_j A_j) \cdot q(x-d_j) + \sum_{i=1}^{N} \frac{x-d_i}{x} \cdot (d_i A_i) \cdot q_j(x-d_i) \]
Left-hand side: this is the flow out of state x due to departures (the service rate is min{x, n}).
11.3. REVERSIBLE MULTI-RATE & CHAIN & SERVER SYSTEMS 317

Right-hand side, first term: this is the new contribution to qj(x) due to the arrival of a new type-j call. With our definitions the arrival rate is dj Aj (choosing µj = 1). A new type-j call adds dj slots, so the ratio of type-j slots in state x is dj/x. Type-j slots already present when a call arrives are accounted for by the second term.
Right-hand side, second term: the type-j slots already present in state x − di are given by qj(x − di). If a type-i call arrives, these x − di slots are transferred to the new state x, so the ratio of type-j slots in state x becomes (x − di)/x.
Performance measures

Carried traffic in state x for type j:

\[ y_j(x) = x \cdot q_{j,y}(x) \]

Total carried traffic for type j:

\[ y_j = \sum_{i=0}^{n+k} y_j(i) \]

Queue length in state x for type j:

\[ l_j(x) = x \cdot q_{j,l}(x) \]

Total queue length for type j:

\[ l_j = \sum_{i=0}^{n+k} l_j(i) \]

When the system is in state x, the mean number of channels serving type-j calls is:

\[ n_{j,y}(x) = \frac{p_{j,y}(x)}{p(x)} \cdot x \]

and the mean queue length of type-j calls, measured in [channels], is:

\[ n_{j,l}(x) = \frac{p_{j,l}(x)}{p(x)} \cdot x \]

Of course we have:

\[ \sum_{j=1}^{N} \left\{ n_{j,y}(x) + n_{j,l}(x) \right\} = x \]
Figure 11.7: State transition diagram for a reversible multi-dimensional system with n servers and a finite number of sources.
11.4 Finite source models
From the state transition diagram it is obvious that the above results can be generalized to services with a finite number of sources, as the reduction factors only depend on the bandwidth demand.
In Fig. 11.7 we show the state transition diagram for a system with two finite-source traffic streams.
In Chap. 12 we consider closed queueing networks, where the nodes are the queueing models described in this chapter. We include a finite number of users in each chain by truncating the Poisson case, not by using the finite source case as in Fig. 11.7.
For Engset traffic, where stream j has Sj sources and the arrival rate of an idle type-j source is γj, we get a good approximation if we replace Aj by (assuming µj = 1):

\[ \bigl( S_j - [\, n_{j,y}(x-d_j) + n_{j,l}(x-d_j) \,] \bigr) \cdot \gamma_j \, , \]

i.e. the number of idle sources times the arrival rate per idle source.
It will be investigated whether this is exact.
11.4. FINITE SOURCE MODELS 319
The initialization values are pj(x) = 0 for x < 1. We should normalize the state probabilities in each step. Remaining points to wrap up: we may truncate the state space (detailed states, or some convolution with truncation?), performance measures, and finite buffers.
Chapter 12
Queueing networks
Many systems behave in such a way that a job receives service from several successive nodes: once it has obtained service at one node, it goes on to the next node. The total service demand is composed of the service demands at several nodes. Hence, the system is a network of queues, a queueing network, where each individual queue is called a node. Examples of queueing networks are telecommunication systems, computer systems, packet switching networks, and Flexible Manufacturing Systems (FMS). The terms job, customer, source, and message are used synonymously.
In queueing networks we define the queue length in a node as the total number of jobs in the node, including delayed and served jobs. In the same way we define the waiting time as the total sojourn time, including both delay and service time. This is because the nodes in general operate as generalized processor-sharing nodes, not as classical non-sharing queueing systems (cf. Chap. 11).
The aim of this chapter is to introduce the basic theory of queueing networks, illustrated by applications. Usually, the theory is considered rather complicated, which is mainly due to the large number of parameters. In this chapter we shall give a simple introduction to general analytical queueing network models based on product forms. We also describe the convolution algorithm and the MVA algorithm, illustrating the theory with examples.
The theory of queueing networks is similar to the theory of multi-dimensional loss systems (Chap. 7): there we considered multi-dimensional loss systems, whereas in this chapter we are looking at networks of queueing systems.
12.1 Introduction to queueing networks
Queueing networks are classified as closed and open queueing networks. In closed queueing networks the number of customers is fixed, whereas in open queueing networks the number of customers varies. Erlang's classical waiting system, M/M/n, is an example of an open queueing network with one node, whereas Palm's machine/repair model with S terminals is a closed network with two nodes. If there is more than one type of customers, a network can be a mixed open and closed network. Since the departure process from one node is the arrival process at another node, we shall pay special attention to the departure process, in particular to when it can be modeled as a Poisson process. This was analyzed in Chap. 11, and we review the results in the section on symmetric queueing systems (Sec. 12.2).
The state of a queueing network is defined as the simultaneous distribution of the number of customers in each node. If K denotes the total number of nodes, then the state is described by a vector (x1, x2, . . . , xK), where xk is the number of customers in node k (k = 1, 2, . . . , K). Frequently, the state space is very large, and it is difficult to calculate the state probabilities by solving the node balance equations. If every node is a reversible (symmetric) queueing system, as for example in a Jackson network (Sec. 12.3), then we have product form. The state probabilities of networks with product form can be aggregated, and detailed performance measures can be obtained by using the convolution algorithm (Sec. 12.5.1) or the MVA algorithm (Sec. 12.5.2).
Jackson networks can be generalized to BCMP networks (Sec. 12.6), where there are N types of customers. Customers of one specific type all belong to a so-called chain. Fig. 12.1 illustrates an example of a queueing network with four chains. When the number of chains increases, the state space increases correspondingly, and only systems with a small number of chains or jobs can be calculated exactly. In the case of a multi-chain network, the state of each node becomes multi-dimensional (Chap. 11). Within a node we do not have product form between the chains, but the product form between nodes is maintained, and the convolution algorithm (Sec. 12.5.1) and the MVA algorithm (Sec. 12.5.2) are applicable. A number of approximate algorithms for large networks have been published in the literature.
12.2 Symmetric (reversible) queueing systems
In order to analyze queueing networks, it is important to know when the departure process of a queueing system is a Poisson process. The multi-service reversible queueing models dealt with in Chap. 11 all have this property, and their state probabilities are all given by the state probabilities of M/M/n, with special cases for n = 1 and n = ∞. We summarize the state probabilities obtained in Chap. 9 for a single node:
Figure 12.1: An example of a queueing network with four open chains.
1. M/M/n. This is Burke's theorem (Burke, 1956 [13]), which states that the departure process of an M/M/n system is a Poisson process. The state probabilities are given by (9.2) or (11.38):

\[ p(x) = \begin{cases} p(0) \cdot \dfrac{A^x}{x!} \, , & 0 \le x \le n \, , \\[2mm] p(0) \cdot \dfrac{A^x}{n! \cdot n^{x-n}} \, , & x \ge n \, , \end{cases} \qquad (12.1) \]

where A = λ/µ, and p(0) is given by (9.4).
2. IS = M/G/∞. IS is an abbreviation for Infinite Server, and this corresponds to the Poisson case (Sec. 4.2). From Sec. 3.6 we know that a random translation of the events of a Poisson process results in a new Poisson process. The state probabilities are given by the Poisson distribution (4.6):

\[ p(x) = p(0) \cdot \frac{A^x}{x!} \, , \qquad x = 0, 1, 2, \ldots \, , \qquad (12.3) \]

where p(0) = e^{−A}.
3. M/G/1-PS. This is a single-server queueing system with a general service time distribution and processor sharing. The state probabilities are the same as for M/M/1 (10.79) (n = 1 in (12.1)):

\[ p(x) = p(0) \cdot A^x \, , \qquad x = 0, 1, 2, \ldots \, , \qquad (12.4) \]

where p(0) = 1 − A.
4. M/G/n-GPS. This multi-server queueing system has the same state probabilities as M/M/n above (12.1).
5. M/G/1-LCFS-PR (PR = Preemptive Resume). This system also has the same state probabilities as M/M/1 (12.4), with p(0) = 1 − A.
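For reference, these relative state probabilities can be tabulated with a small helper. The sketch below is an illustration only (the function name and its 'discipline' argument are invented for this example); it returns unnormalized probabilities with q(0) = 1, the form needed later by the convolution algorithm in Sec. 12.5.1.

```python
from math import factorial

def relative_probs(discipline, A, x_max, n=1):
    """Relative state probabilities q(0), ..., q(x_max) with q(0) = 1 for
    the symmetric queueing systems: 'MMn' (also M/G/n-GPS), 'IS'
    (M/G/inf), and 'PS' (M/G/1-PS, same as M/G/1-LCFS-PR and M/M/1)."""
    q = []
    for x in range(x_max + 1):
        if discipline == 'IS':
            q.append(A**x / factorial(x))        # Poisson case (12.3)
        elif discipline == 'PS':
            q.append(A**x)                       # geometric case (12.4)
        elif discipline == 'MMn':
            if x <= n:
                q.append(A**x / factorial(x))    # Erlang part of (12.1)
            else:
                q.append(A**x / (factorial(n) * n**(x - n)))
        else:
            raise ValueError(discipline)
    return q
```

With n = 1 the 'MMn' case coincides with 'PS', in agreement with (12.4).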
Above we have expressed all state probabilities in terms of state zero, as we later only need the relative state probabilities. Only these four queueing disciplines are easy to deal with in the theory of queueing networks. Note, however, that for example for Erlang's loss system the departure process will also be a Poisson process if we include the blocked customers.
The above-mentioned reversible queueing systems are also called symmetric queueing systems, as they are symmetric in time: both the arrival process and the departure process are Poisson processes, and the systems are reversible (Kelly, 1979 [68]). The process is called reversible because it looks the same when we reverse time (just as a reversible movie looks the same whether we play it forward or backward). Apart from M/M/n, these symmetric queueing systems have the common feature that a customer is served immediately upon arrival.
Example 12.2.1: M/M/1 departure process
At first it may seem illogical that the departure process of M/M/1 with arrival rate λ and service rate µ is a Poisson process with rate λ. During busy periods (probability A = λ/µ) the departure process is a Poisson process with rate µ. When the system becomes idle (probability 1 − A) the inter-departure time becomes an inhomogeneous Erlang-2 distribution with rate λ in the first phase and rate µ in the second. In a phase diagram we may take the time intervals in reverse order, so that it looks like Fig. 2.11; from the decomposition principle of Cox distributions the result then becomes obvious. A similar decomposition can be worked out for M/M/n. □
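Burke's theorem can also be checked by simulation. The sketch below (plain Python; parameters and names chosen for this illustration) simulates an M/M/1 queue via the recursion Dk = max(tk, Dk−1) + Sk and estimates the mean and squared coefficient of variation of the inter-departure times, which for a Poisson process with rate λ should be 1/λ and 1, respectively.

```python
import random

def mm1_departure_times(lam, mu, num_jobs, seed=1):
    """Simulate an M/M/1 FCFS queue via D_k = max(t_k, D_{k-1}) + S_k
    and return the departure epochs."""
    rng = random.Random(seed)
    t_arrival = 0.0
    prev_departure = 0.0
    departures = []
    for _ in range(num_jobs):
        t_arrival += rng.expovariate(lam)        # Poisson arrivals
        start = max(t_arrival, prev_departure)   # wait for the server
        prev_departure = start + rng.expovariate(mu)
        departures.append(prev_departure)
    return departures

dep = mm1_departure_times(lam=1.0, mu=2.0, num_jobs=200_000)
gaps = [b - a for a, b in zip(dep, dep[1:])]
mean = sum(gaps) / len(gaps)
var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
# For lambda = 1 both the mean inter-departure time and the squared
# coefficient of variation var / mean**2 should be close to 1.
```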
12.3 Open networks: single chain
In 1957, J.R. Jackson, who was working with production planning and manufacturing systems, published a paper with a theorem now called Jackson's theorem (1957 [52]). He showed that a queueing network of M/M/n nodes has product form. Given Burke's theorem (1956 [13]), Jackson's result is obvious. Historically, the first paper on queueing systems in series was by another Jackson, R.R.P. Jackson (1954 [51]).
Theorem 12.1 Jackson's theorem: Consider an open queueing network with K nodes satisfying the following conditions:
• Structure: each node is an M/M/n queueing system. Node k has nk servers, and the average service time is 1/µk.
• Traffic: jobs arrive from outside the system to node k according to a Poisson process with intensity λk. Customers may also arrive to node k from other nodes.
• Strategy: a job which has just finished its service at node j is immediately transferred to node k with probability pjk, or leaves the network with probability

\[ 1 - \sum_{k=1}^{K} p_{jk} \, . \]

A customer may visit the same node several times if pkk > 0.
Flow balance equations: the total average arrival intensity Λk to node k is obtained by solving the flow balance equations:

\[ \Lambda_k = \lambda_k + \sum_{j=1}^{K} \Lambda_j \cdot p_{jk} \, . \qquad (12.5) \]
Let p(x1, x2, . . . , xK) denote the state probabilities under the assumption of statistical equilibrium, i.e. the probability that there are xk customers at node k. Furthermore, we assume:
\[ \frac{\Lambda_k}{\mu_k} = A_k < n_k \, . \qquad (12.6) \]
Then the state probabilities are given in product form:

\[ p(x_1, x_2, \ldots, x_K) = \prod_{k=1}^{K} p_k(x_k) \, , \qquad (12.7) \]

where, for node k, pk(xk) are the state probabilities of Erlang's M/M/n queueing system with arrival rate Λk and service rate µk.
The offered traffic Λk/µk to node k must be less than the capacity nk of the node for statistical equilibrium to exist (12.6). The key point of Jackson's theorem is that each node can be considered independently of all other nodes, and that the state probabilities are as for Erlang's delay system (Sec. 12.2). This simplifies the calculation of the state space probabilities significantly. Jackson proved the theorem in 1957 by showing that the solution satisfies the node balance equations under the assumption of statistical equilibrium. Jackson's first model thus only deals with open queueing networks.
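As a sketch of how the theorem is applied, the fragment below solves the flow balance equations (12.5) by fixed-point iteration (plain Python; the function and variable names are invented for this illustration). Each node k can then be evaluated as an independent M/M/nk system with arrival rate Λk.

```python
def solve_flow_balance(external, routing, tol=1e-12):
    """Solve Lambda_k = lambda_k + sum_j Lambda_j * p_jk by fixed-point
    iteration; routing[j][k] is the transfer probability p_jk.  The
    iteration converges for open networks where every job eventually
    leaves the system."""
    K = len(external)
    Lam = list(external)
    while True:
        new = [external[k] + sum(Lam[j] * routing[j][k] for j in range(K))
               for k in range(K)]
        if max(abs(a - b) for a, b in zip(new, Lam)) < tol:
            return new
        Lam = new
```

For the feedback situation of Example 12.3.2 with λ = 1, p12 = 1 and p21 = 0.5, this yields Λ1 = Λ2 = 2.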
In Jackson's second model (Jackson, 1963 [53]) the arrival intensity from outside,

\[ \lambda = \sum_{j=1}^{K} \lambda_j \, , \qquad (12.8) \]
may depend on the current number of customers in the network. Furthermore, µk can depend on the number of customers at node k. In this way, we can model queueing networks which are either closed, open, or mixed. In all three cases, the state probabilities have product form. The model by Gordon & Newell (1967 [36]), which is often cited in the literature, can be treated as a special case of Jackson's second model.

Figure 12.2: An open queueing network consisting of two M/M/1 systems in series.
Example 12.3.1: Two M/M/1 nodes in series
Fig. 12.2 shows an open queueing network of two M/M/1 nodes in series. The corresponding state transition diagram is given in Fig. 12.3. Clearly, the state transition diagram is not reversible: between two neighbouring states there is only flow in one direction (cf. Sec. 7.2). If we solve the balance equations to obtain the state probabilities, we find that the solution can be written in product form:

\[ p(x_1, x_2) = p_1(x_1) \cdot p_2(x_2) = \left\{ (1-A_1) \cdot A_1^{x_1} \right\} \cdot \left\{ (1-A_2) \cdot A_2^{x_2} \right\} , \]

where A1 = λ/µ1 and A2 = λ/µ2. Here p1(x1) are the state probabilities of an M/M/1 system with offered traffic A1, and p2(x2) are the state probabilities of an M/M/1 system with offered traffic A2. The state probabilities of Fig. 12.3 are identical to those of Fig. 12.4, which has local balance and product form. Thus it is possible to find a system which is reversible and has the same state probabilities as the non-reversible system. There is regional but not local balance in Fig. 12.3: if we consider a square of four states, then to the outside world there will be balance, but internally there will be circulation via the diagonal state shifts. □
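The product form in this example can be checked numerically by verifying the global balance equation for an interior state of the diagram in Fig. 12.3. A minimal Python check, with rates chosen arbitrarily for this illustration:

```python
lam, mu1, mu2 = 1.0, 2.0, 4.0          # arbitrary rates with A1, A2 < 1
A1, A2 = lam / mu1, lam / mu2

def p(x1, x2):
    # product-form solution for the two M/M/1 nodes in series
    if x1 < 0 or x2 < 0:
        return 0.0
    return (1 - A1) * A1**x1 * (1 - A2) * A2**x2

x1, x2 = 3, 2                          # an interior state
outflow = p(x1, x2) * (lam + mu1 + mu2)
inflow = (p(x1 - 1, x2) * lam          # external arrival to node 1
          + p(x1 + 1, x2 - 1) * mu1   # departure node 1 -> arrival node 2
          + p(x1, x2 + 1) * mu2)      # departure from node 2
```

The outflow p(x1, x2)(λ + µ1 + µ2) equals the inflow from the three neighbouring states, confirming that the product form satisfies the balance equations.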
In queueing networks customers will often loop, so that a customer may visit the same node several times. If we have a queueing network with looping customers, where the nodes are M/M/n systems, then the arrival processes to the individual nodes are no longer Poisson processes. Nevertheless, we may calculate the state probabilities as if the individual nodes were independent M/M/n systems. This is explained in the following example.
Example 12.3.2: Networks with feedback
Feedback is introduced in Example 12.3.1 by letting a customer, which has just ended its service at node 2, return to node 1 with probability p21 (Fig. 12.2). With probability 1 − p21 the customer leaves the system. The flow balance equations (12.5) give the total arrival intensity to each node, and p21 must be chosen such that both Λ1/µ1 and Λ2/µ2 are less than one. Letting λ → 0 and p21 → 1, we notice that the total arrival process to node 1 is not a Poisson process: only rarely will a new job arrive, but once it has entered the system it will circulate very fast many times. The number of times it loops back is geometrically distributed, and the inter-arrival time is the sum of the two service times. That is, when there are one or more customers in the system, the arrival rate to each node will be relatively high, whereas the rate will be very low when there are no customers in the system. The arrival process is bursty.
The situation is similar to the decomposition of an exponential distribution into a weighted sum of Erlang-k distributions with geometric weight factors (Sec. 2.3.3). Instead of considering a single exponential inter-arrival distribution, we can decompose it into infinitely many phases (Fig. 2.12) and consider each phase as an arrival. Hence, the arrival process has been transformed from a Poisson process into a process with bursty arrivals. The total service time will be exponentially distributed with rate µ1 · (1 − p21), respectively µ2 · (1 − p21), but the total service time is split up into phases which are interleaved by waiting times and service at the other node. □
Figure 12.3: State transition diagram for the open queueing network shown in Fig. 12.2. Thediagram is non–reversible.
12.3.1 Kleinrock’s independence assumption
Above we assumed that a job samples a new service time with rate µi when it arrives at node i, independent of its service times at other nodes. If we consider a real-life data network, the packets will have the same length (for example in bytes), and therefore the same service time on all links and nodes of equal speed. The theory of queueing networks has to assume that a job samples a new service time in every node; this is a necessary assumption for the product form. This assumption was first investigated by Kleinrock (1964 [73]). Many analyses show that it turns out to be a good approximation to real systems.
Figure 12.4: State transition diagram for two independent M/M/1–queueing systems withidentical arrival intensity, but individual mean service times. The diagram is reversible.
12.4 Open networks: multiple chains
Dealing with open systems is easy. First we solve the flow balance equations (12.5) individually for each chain and obtain the arrival intensity Λj,k of chain j to node k. The state probabilities for a node are then given by (11.37). We still have product form between the nodes, i.e. the nodes are independent, and we can easily calculate any state probability explicitly.
12.5 Closed networks: single chain
Dealing with closed queueing networks is much more complicated. We are interested in the state probabilities defined by p(x1, x2, . . . , xk, . . . , xK), where xk is the number of customers in node k (1 ≤ k ≤ K). With a fixed number of jobs we do not know the true arrival rates to the nodes. If we choose (or know) the arrival rate to a single node, then by solving the flow balance equations we find the relative arrival rates to all other nodes, and thus the relative traffic offered to each node. To find the true normalized arrival rates and traffic, we have to find the normalization constant for the whole network, which means that we have to add all state probabilities.
12.5.1 Convolution algorithm
The number of states increases rapidly when the number of nodes and/or customers increases. In general, it is only possible to deal with small systems. The complexity is similar to that of multi-dimensional loss systems (Chapter 7).
We will now show how the convolution algorithm can be applied to closed queueing networks. The algorithm corresponds to the convolution algorithm for loss systems (Chapter 7). We consider a queueing network with K nodes and a single chain with S jobs. We assume that the queueing system in each node is symmetric (Sec. 12.2). The algorithm has three steps:
• Step 1: Flow balance equations
Let the arrival intensity to an arbitrarily chosen reference node k be equal to some value Λk. By solving the flow balance equations (12.5) for the closed network, we obtain the relative arrival rates Λk (1 ≤ k ≤ K) to all nodes, and thus the relative offered traffic values αk = Λk/µk. Often we choose the arrival intensity of the reference node so that the offered traffic to this node becomes one.
• Step 2: State probabilities
Consider each node as if it were isolated and offered the random (PCT-I) traffic αk (1 ≤ k ≤ K). Depending on the actual symmetric queueing system at node k, we find the relative state probabilities qk(xk) at node k. The state space is limited by the total number of customers S: 0 ≤ xk ≤ S.
• Step 3: Convolution
Convolve the state probabilities of the nodes recursively. For example, for the first two nodes we have:

\[ q_{12} = q_1 * q_2 \, , \qquad (12.9) \]

where

\[ q_{12}(x) = \sum_{i=0}^{x} q_1(i) \cdot q_2(x-i) \, , \qquad x = 0, 1, \ldots, S \, . \]

By convolution we reduce the number of nodes to two: the node we are interested in, and all other nodes aggregated into one node. When all nodes except node k have been convolved, we have the final convolution:

\[ q_{1,2,\ldots,K} = q_{1,2,\ldots,k-1,k+1,\ldots,K} * q_k \, . \qquad (12.10) \]

During the last convolution we convolve two nodes, namely node k and the aggregated node consisting of all other nodes, and we obtain the detailed performance measures of node k. By changing the order of convolution of the nodes we can obtain the performance measures of all other nodes. Since the total number of customers is fixed (S), only state q1,2,...,K(S) exists in the total aggregated system, and therefore this macro-state must have probability one. We can then normalize all micro-state probabilities.
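The three steps above can be sketched in a few lines of Python (names invented for this illustration; each node is represented by its vector of relative state probabilities q(0), . . . , q(S) with q(0) = 1, as produced in Step 2):

```python
def closed_network_norm(node_probs, S):
    """Convolve the relative state probabilities of all nodes (each a
    list q(0), ..., q(S) with q(0) = 1) and return q_{1,2,...,K}(S),
    the normalization constant for a closed network with S jobs."""
    def convolve(q1, q2):
        # truncated convolution (12.9), since at most S jobs exist
        return [sum(q1[i] * q2[x - i] for i in range(x + 1))
                for x in range(S + 1)]
    q = node_probs[0]
    for qk in node_probs[1:]:
        q = convolve(q, qk)
    return q[S]
```

For two identical single-server (PS) nodes with relative offered traffic α = 1 and S = 2, each node has q = (1, 1, 1); the convolution gives q12(2) = 3, so each of the three micro-states (x1, x2) with x1 + x2 = 2 has probability 1/3.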
[Figure 12.5 sketch: S terminals (1, 2, . . . , S), each with thinking rate µ1, feeding a single-server CPU queue with service rate µ2; Node 1 = terminals, Node 2 = CPU.]
Figure 12.5: The machine/repair model as a closed queueing network with two nodes. The terminals correspond to one IS node, because the tasks always find an idle terminal, whereas the CPU corresponds to an M/M/1 node.
Example 12.5.1: Palm’s machine/repair modelWe consider the machine/repair model of Palm introduced in Sec. 9.6 as a closed queueing network(Fig. 12.5). There are S jobs and terminals one server (computer). The mean thinking time isµ−1
1 and the mean service time at the CPU is µ−12 . In queueing network terminology there are two
nodes: node one is the terminals, i.e. an M/G/∞ (actually it is an M/G/S system, but since thenumber of customers is limited to S it corresponds to an M/G/∞ system), and node two is theCPU, i.e. an M/M/1 system with service intensity µ2. We choose the relative arrival rate to nodeone equal to Γ1 and find Γ2 = Γ1 = ΓThe relative load at node 1 and node 2 are
α1 = Λ/µ1 and α2 = Λ/µ2 ,
respectively. We consider each node in isolation and obtain the state probabilities of each node,q1(i) and q2(j), as if the arrival processes are Poisson processes. By convolving q1(x1) and q2(x2)we get q12(x), (0 ≤ x ≤ S), as shown in Table 12.1.
The last term with S customers, the unnormalized probability q12(S), is made up of the terms:

q12(S) = Σ_{i=0}^{S} q1(i) · q2(S − i)

       = 1 · α2^S + α1 · α2^(S−1) + (α1²/2!) · α2^(S−2) + · · · + (α1^i/i!) · α2^(S−i) + · · · + (α1^S/S!) · 1 .
State    Node 1       Node 2     Queueing network
x        q1(x1)       q2(x2)     q12 = q1 ∗ q2

0        1            1          1
1        α1           α2         α1 + α2
2        α1²/2!       α2²        α2² + α1 · α2 + α1²/2!
...      ...          ...        ...
x        α1^x/x!      α2^x       ...
...      ...          ...        ...
S        α1^S/S!      α2^S       q12(S)
Table 12.1: The convolution algorithm applied to Palm's machine/repair model. Node 1 is an IS-system, and node 2 is an M/M/1-system (Example 12.5.1).
We know that this total has probability one, and from the individual contributions we identify the state probabilities of the two nodes. A simple rearrangement yields:

q12(S) = α2^S · { 1 + ϱ + ϱ²/2! + · · · + ϱ^S/S! } ,

where

ϱ = α1/α2 = µ2/µ1 .
The probability that all terminals are thinking is identified as the last term q1(S) · q2(0) (S customers at node 1, zero customers at node 2) normalized by the sum q12(S):

p{x1 = S, x2 = 0} = (ϱ^S/S!) / { 1 + ϱ + ϱ²/2! + ϱ³/3! + · · · + ϱ^S/S! } = E1,S(ϱ) ,

which is Erlang's B-formula. Thus the result agrees with the result obtained in Sec. 9.6. We notice that Λ appears with the same power in all terms of q12(S) and thus corresponds to a constant which disappears when we normalize. 2
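The identification above is easy to check numerically; the following sketch builds the two nodes of Fig. 12.5 from µ1 and µ2 (values chosen for illustration) and compares the normalized last term with a direct evaluation of Erlang's B-formula:

```python
from math import factorial

def erlang_b_direct(rho, S):
    """Erlang's B-formula E_{1,S}(rho) evaluated directly."""
    terms = [rho**i / factorial(i) for i in range(S + 1)]
    return terms[S] / sum(terms)

def palm_blocking(S, mu1, mu2):
    """P(all S terminals thinking) from the convolution of the two nodes.
    Node 1 (terminals) is an IS node, node 2 (CPU) an M/M/1 node; Lambda = 1."""
    a1, a2 = 1.0 / mu1, 1.0 / mu2                       # relative loads
    q1 = [a1**i / factorial(i) for i in range(S + 1)]   # IS node
    q2 = [a2**i for i in range(S + 1)]                  # M/M/1 node
    q12_S = sum(q1[i] * q2[S - i] for i in range(S + 1))
    return q1[S] * q2[0] / q12_S
```

For example, with µ1 = 0.5 and µ2 = 1 we have ϱ = µ2/µ1 = 2, and palm_blocking(3, 0.5, 1.0) agrees with erlang_b_direct(2.0, 3).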
Example 12.5.2: Central server system
In 1971 J. P. Buzen introduced the central server model illustrated in Fig. 12.6 to model a multi-programmed computer system with one CPU and a number of input/output channels (peripheral units). The degree of multi-programming S describes the number of jobs processed simultaneously. The number of peripheral units is denoted K − 1, as shown in Fig. 12.6, which also shows the transition probabilities.

Typically a job requires service hundreds of times, either by the central unit or by one of the peripheral units. We assume that when a job is finished it is immediately replaced by a new job, so S is constant. The service times are all exponentially distributed with intensities µi (i = 1, . . . , K).
[Figure 12.6 sketch: new tasks enter the CPU (rate µ1); a completed CPU service feeds back to the CPU with probability p11 or moves to I/O channel k with probability p1k (channel rates µ2, µ3, . . . , µK); S tasks circulate in the system.]
Figure 12.6: Central server queueing system consisting of one central server (CPU) and (K−1)I/O–channels. A fixed number of tasks S are circulating in the system.
Buzen drew up a scheme to evaluate this system. The scheme is a special case of the convolution algorithm. Let us illustrate it by a case with S = 4 customers, K = 3 nodes, and:

µ1 = 1/28 , µ2 = 1/40 , µ3 = 1/280 ,

p11 = 0.1 , p12 = 0.7 , p13 = 0.2 .
The relative loads become:
α1 = 1 , α2 = 1 , α3 = 2 .
If we apply the convolution algorithm we obtain the results shown in Table 12.2. The term q123(4) is made up of:

q123(4) = 1 · 16 + 2 · 8 + 3 · 4 + 4 · 2 + 5 · 1 = 57 .
State Node 1 Node 2 Node 1*2 Node 3 Queueing network
i q1(i) q2(i) q12 = q1 ∗ q2 q3 q123 = (q1 ∗ q2) ∗ q3
0 1 1 1 1 1
1 1 1 2 2 4
2 1 1 3 4 11
3 1 1 4 8 26
4 1 1 5 16 57
Table 12.2: The convolution algorithm applied to the central server system.
Node 3 serves customers in all states except those contributing the term q3(0) · q12(4) = 1 · 5 = 5 to q123(4). The utilization of node 3 is therefore a3 = 52/57. Based on the relative loads we now obtain the exact loads:

a1 = 26/57 , a2 = 26/57 , a3 = 52/57 .
The average number of customers at node 3 is:

L3 = { 1 · (4 · 2) + 2 · (3 · 4) + 3 · (2 · 8) + 4 · (1 · 16) } / 57 = 144/57 .
By changing the order of convolution we get the average queue lengths L1 and L2 and end up with:

L1 = 42/57 , L2 = 42/57 , L3 = 144/57 .
The sum of all average queue lengths is of course equal to the number of customers S. Notice that in queueing networks we define the queue length as the total number of customers in the node, including customers being served. From the utilizations and mean service times we find the average number of customers finishing service per time unit at each node:
λ1 = (26/57) · (1/28) , λ2 = (26/57) · (1/40) , λ3 = (52/57) · (1/280) .
Applying Little’s result we finally obtain the mean sojourn time Wk = Lk/λk:
W1 = 45.23 , W2 = 64.62 , W3 = 775.38 .
2
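The table and the performance measures of this example can be reproduced with a short script (a sketch using the relative loads α = (1, 1, 2) found above):

```python
S = 4
alpha = [1, 1, 2]                                    # relative loads of the 3 nodes

def convolve(qa, qb):
    """Pairwise convolution truncated at S customers."""
    return [sum(qa[i] * qb[x - i] for i in range(x + 1)) for x in range(S + 1)]

# M/M/1 nodes: relative state probabilities q_k(i) = alpha_k^i
q = [[a**i for i in range(S + 1)] for a in alpha]

q12 = convolve(q[0], q[1])                           # [1, 2, 3, 4, 5]
q123 = convolve(q12, q[2])                           # q123[4] = 57

a3 = 1 - q[2][0] * q12[S] / q123[S]                  # utilization of node 3: 52/57
L3 = sum(j * q[2][j] * q12[S - j] for j in range(S + 1)) / q123[S]   # 144/57
```

Changing the order of the lists in the convolution gives the measures of the other nodes in the same way.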
12.5.2 MVA–algorithm
The Mean Value Algorithm (MVA) is an algorithm for calculating performance measures of queueing networks where all nodes are single-server systems. It combines in an elegant way two main results in queueing theory: the arrival theorem (5.29) and Little's law (3.20). The algorithm was first published by Lavenberg & Reiser (1980 [80]).
We consider a queueing network with K nodes and S customers (all belonging to a single chain). We choose a value of the relative arrival rate at some node, for example λ1 = 1 at node 1. From the flow balance equations we find the relative arrival rates to all other nodes. The relative load of node k is αk = λk · sk (k = 1, 2, . . . , K). The algorithm is recursive in the number of customers: a network with S + 1 customers is evaluated from a network with S customers.
Assume that the average number of customers at node k is Lk(S), where S is the total number of customers in the network. Obviously:

Σ_{k=1}^{K} Lk(S) = S . (12.11)
The algorithm is recursive in two steps:
Step 1: Arrival theorem
Increase the number of customers from S to (S + 1). According to the arrival theorem, the (S + 1)th customer will see the system as a system with S customers in statistical equilibrium. Hence, the average sojourn time (waiting time + service time) at node k is:

• For M/M/1, M/G/1–PS, and M/G/1–LCFS–PR:

Wk(S + 1) = { Lk(S) + 1 } · sk .

• For M/G/∞:

Wk(S + 1) = sk ,

where sk is the average service time at node k. As we only calculate mean waiting times, we may assume FCFS queueing discipline.
Step 2: Little’s theoremWe apply Little’s law (L = λ ·W ), which is valid for all systems in statistical equilibrium.For node k we have:
Lk(S + 1) = c · λk ·Wk(S + 1) ,
where λk is the relative arrival rate to node k. The normalizing constant c is obtained fromthe total number of customers::
K∑
k=1
Lk(S + 1) = S + 1 . (12.12)
By these two steps we have performed the recursion from S to (S + 1) customers. For S = 1there will be no waiting time in the system and Wk(1) equals the average service time sk.
Nodes with a limited number of servers (n > 1) can only be dealt with approximately by theMVA–algorithm, but are easy to deal with by the convolution algorithm.
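The two-step recursion translates directly into code; the sketch below handles single-server nodes and, optionally, M/G/∞ nodes (argument names are illustrative):

```python
def mva(lam, s, S, is_nodes=()):
    """MVA recursion: lam = relative arrival rates, s = mean service times,
    S = number of customers, is_nodes = indices of M/G/oo (IS) nodes."""
    K = len(lam)
    L = [0.0] * K
    for n in range(1, S + 1):
        # Step 1 (arrival theorem): sojourn time seen by the n'th customer
        W = [s[k] if k in is_nodes else (L[k] + 1.0) * s[k] for k in range(K)]
        # Step 2 (Little's law): normalize so the queue lengths sum to n
        c = n / sum(lam[k] * W[k] for k in range(K))
        L = [c * lam[k] * W[k] for k in range(K)]
    return W, L
```

Applied to the central server data of Example 12.5.2 (λ = (1, 0.7, 0.2), s = (28, 40, 280), S = 4), the recursion reproduces the queue lengths and sojourn times found by convolution.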
Example 12.5.3: Central server modelWe apply the MVA–algorithm to the central server model (Example 12.5.2). The relative arrivalrates are:
λ1 = 1 , λ2 = 0.7 , λ3 = 0.2 .
        Node 1                        Node 2                          Node 3

S = 1   W1(1) = 28                    W2(1) = 40                      W3(1) = 280
        L1(1) = c · 1 · 28            L2(1) = c · 0.7 · 40            L3(1) = c · 0.2 · 280
        L1(1) = 0.25                  L2(1) = 0.25                    L3(1) = 0.50

S = 2   W1(2) = 1.25 · 28             W2(2) = 1.25 · 40               W3(2) = 1.50 · 280
        L1(2) = c · 1 · 1.25 · 28     L2(2) = c · 0.7 · 1.25 · 40     L3(2) = c · 0.2 · 1.50 · 280
        L1(2) = 0.4545                L2(2) = 0.4545                  L3(2) = 1.0909

S = 3   W1(3) = 1.4545 · 28           W2(3) = 1.4545 · 40             W3(3) = 2.0909 · 280
        L1(3) = c · 1 · 1.4545 · 28   L2(3) = c · 0.7 · 1.4545 · 40   L3(3) = c · 0.2 · 2.0909 · 280
        L1(3) = 0.6154                L2(3) = 0.6154                  L3(3) = 1.7692

S = 4   W1(4) = 1.6154 · 28           W2(4) = 1.6154 · 40             W3(4) = 2.7692 · 280
        L1(4) = c · 1 · 1.6154 · 28   L2(4) = c · 0.7 · 1.6154 · 40   L3(4) = c · 0.2 · 2.7692 · 280
        L1(4) = 0.7368                L2(4) = 0.7368                  L3(4) = 2.5263
Naturally, the result is identical to the one obtained with the convolution algorithm. The sojourn times at each node (in the original time unit) are:

W1(4) = 1.6154 · 28 = 45.23 ,
W2(4) = 1.6154 · 40 = 64.62 ,
W3(4) = 2.7692 · 280 = 775.38 .
2
Example 12.5.4: MVA-algorithm applied to the machine/repair model
We consider the machine/repair model with S sources, terminal thinking time A, and CPU service time equal to one time unit. As mentioned in Sec. 9.6.2 this is equivalent to Erlang's loss system with S servers and offered traffic A. It is also a closed queueing network with two nodes and S customers in one chain. If we apply the MVA-algorithm to this system, we get the recursion formula for the Erlang-B formula (4.29). The relative arrival rates are identical, as a customer alternately visits nodes 1 and 2: λ1 = λ2 = 1.
        Node 1                                    Node 2

S = 1   W1(1) = A                                 W2(1) = 1
        L1(1) = c · 1 · A                         L2(1) = c · 1 · 1
        L1(1) = A/(1 + A)                         L2(1) = 1/(1 + A)

S = 2   W1(2) = A                                 W2(2) = 1 + 1/(1 + A)
        L1(2) = c · 1 · A                         L2(2) = c · 1 · { 1 + 1/(1 + A) }
        L1(2) = A · (1 + A)/(1 + A + A²/2!)       L2(2) = 2 − A · (1 + A)/(1 + A + A²/2!)

. . .

S = x   W1(x) = A                                 W2(x) = 1 + L2(x − 1)
        L1(x) = c · A                             L2(x) = c · { 1 + L2(x − 1) }
        L1(x) = A · { 1 − Ex(A) }                 L2(x) = x − A · { 1 − Ex(A) }
We know that the queue length at the terminals (node 1) is equal to the carried traffic in the equivalent Erlang-B system and that all other customers stay at the CPU (node 2); this gives the general expressions for S = x. From L1(x) = c · A = A · { 1 − Ex(A) } we identify the normalization constant c = 1 − Ex(A), and for the (x + 1)th customer we get:

L1(x + 1) + L2(x + 1) = c · A + c · { 1 + L2(x) }

x + 1 = c · { A + 1 + x − A · (1 − Ex) }

Because we know c = 1 − Ex+1, this yields:

Ex+1 = A · Ex / (x + 1 + A · Ex) ,

which is just the recursion formula for the Erlang-B formula. 2
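The recursion derived above is numerically stable and easy to program; a minimal sketch:

```python
def erlang_b(A, S):
    """Erlang B via the recursion E_{x+1} = A*E_x / (x + 1 + A*E_x), E_0 = 1."""
    E = 1.0
    for x in range(S):
        E = A * E / (x + 1 + A * E)
    return E
```

For instance, erlang_b(2.0, 3) returns 4/19 ≈ 0.2105, matching the direct evaluation of the B-formula.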
12.6 BCMP multi-chain queueing networks
In 1975 the second model of Jackson was further generalised by Baskett, Chandy, Muntz and Palacios (1975 [4]). They showed that queueing networks with more than one type of customers also have product form, provided that:
a) Each node is a symmetric (reversible) queueing system (cf. Sec. 12.2: Poisson arrivalprocess ⇒ Poisson departure process).
b) The customers are classified into N chains. Each chain is characterized by its own meanservice time sj and transition probabilities pjik. A restriction applies if the queueingdiscipline at a node is a non-sharing M/M/n queueing system (including M/M/1): theaverage service time must be identical for all chains in a node.
BCMP–networks can be evaluated with the multi-dimensional convolution algorithm and themultidimensional MVA algorithm.
Mixed queueing networks (open & closed) are evaluated by first calculating the traffic load in each node from the open chains. This traffic must be carried for the network to enter statistical equilibrium. The capacities of the nodes are reduced by this traffic, and the closed queueing network is evaluated with the reduced capacities. The main problem is thus to evaluate closed networks. For this we have several algorithms, among which the most important are the convolution algorithm and the MVA (Mean Value Algorithm) algorithm.
12.6.1 Convolution algorithm
The algorithm is essentially the same as in the single chain case:
• Step 1: Flow balance equations
Consider each chain as if it were alone in the network. Find the relative load at each node by solving the flow balance equations (12.5). At an arbitrary reference node we assume the arrival rate is equal to one; for each chain we may choose a different reference node. For chain j at node k the relative arrival intensity λ_k^j is obtained from (we use the upper index to denote the chain):

λ_k^j = Σ_{i=1}^{K} p_{ik}^j · λ_i^j , j = 1, . . . , N , (12.13)

where:

K = number of nodes,
N = number of chains,
p_{ik}^j = the probability that a customer of chain j moves from node i to node k.

We choose an arbitrary node as reference node, e.g. node 1, i.e. λ_1^j = 1. The relative load at node k due to customers of chain j is then:

α_k^j = λ_k^j · s_k^j ,

where s_k^j is the mean service time at node k for customers of chain j. Notice that j is an index, not a power.
• Step 2: State probabilities
Based on the relative loads found in Step 1, we obtain the multi-dimensional state probabilities for each node (Sec. 11.2.5). Each node is considered in isolation, and we truncate the state space according to the number of customers in each chain. For example, for node k (1 ≤ k ≤ K):

pk = pk(x1, x2, . . . , xN) , 0 ≤ xj ≤ Sj , j = 1, 2, . . . , N ,

where Sj is the number of customers in chain j.
• Step 3: Convolution
In order to find the state probabilities of the total network, the state probabilities of the nodes are convolved together as in the single-chain case. The only difference is that the convolution is multi-dimensional. When we perform the last convolution we obtain the performance measures of the last node. Again, by changing the order of the nodes, we can obtain the performance measures of all nodes.
The total number of states increases rapidly. For example, if chain j has Sj customers, then the total number of states in each node becomes:

Π_{j=1}^{N} (Sj + 1) . (12.14)

The number of ways the customers can be distributed in a queueing network with K nodes and N chains with Sj customers in chain j is:

C = Π_{j=1}^{N} C(Sj, kj) , (12.15)

where kj (1 ≤ kj ≤ K) is the number of nodes visited by chain j, and:

C(Sj, kj) = (Sj + kj − 1 choose kj − 1) = (Sj + kj − 1 choose Sj) . (12.16)
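Both counts are simple products and can be evaluated as follows (a sketch; the chain populations Sj and visited-node counts kj used below are illustrative):

```python
from math import comb, prod

def states_per_node(S):
    """(12.14): number of states in each node for chain populations S = [S_1..S_N]."""
    return prod(s + 1 for s in S)

def distributions(S, k):
    """(12.15)-(12.16): number of ways to distribute the customers,
    k[j] = number of nodes visited by chain j."""
    return prod(comb(Sj + kj - 1, Sj) for Sj, kj in zip(S, k))
```

For two chains with (S1, S2) = (2, 3), each visiting both of K = 2 nodes, there are 12 states per node and 3 · 4 = 12 distributions of the customers.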
The algorithm is best illustrated with an example.
Example 12.6.1: Palm’s machine-repair model with two types of customersAs seen in Example 12.5.1, this system can be modelled as a queueing network with two nodes.Node 1 corresponds to the terminals (machines) while node 2 is the CPU (repair man). Node 2is a single server system whereas node 1 is modeled as an Infinite Server IS-system. The numberof customers in the chains are (S1 = 2, S2 = 3) and the mean service time in node k is sjk. Therelative load of chain 1 is denoted by α1 in node 1 and by α2 in node 2. Similarly, the load af chain2 is denoted by β1, respectively β2. Applying the convolution algorithm yields:
• Step 1.
Chain 1: S1 = 2 customers.
Relative loads: α1 = λ^1 · s_1^1 , α2 = λ^1 · s_2^1 .

Chain 2: S2 = 3 customers.
Relative loads: β1 = λ^2 · s_1^2 , β2 = λ^2 · s_2^2 .
• Step 2.
For node 1 (IS) the relative state probabilities are (cf. 7.10):

q1(0, 0) = 1              q1(0, 2) = β1²/2
q1(1, 0) = α1             q1(1, 2) = α1 · β1²/2
q1(2, 0) = α1²/2          q1(2, 2) = α1² · β1²/4
q1(0, 1) = β1             q1(0, 3) = β1³/6
q1(1, 1) = α1 · β1        q1(1, 3) = α1 · β1³/6
q1(2, 1) = α1² · β1/2     q1(2, 3) = α1² · β1³/12
For node 2 (single server) we get (cf. 11.22):

q2(0, 0) = 1              q2(0, 2) = β2²
q2(1, 0) = α2             q2(1, 2) = 3 · α2 · β2²
q2(2, 0) = α2²            q2(2, 2) = 6 · α2² · β2²
q2(0, 1) = β2             q2(0, 3) = β2³
q2(1, 1) = 2 · α2 · β2    q2(1, 3) = 4 · α2 · β2³
q2(2, 1) = 3 · α2² · β2   q2(2, 3) = 10 · α2² · β2³
• Step 3.
Next we convolve the two nodes. We know that the total number of customers is (2, 3), i.e.
we are only interested in state (2, 3):
q12(2, 3) = q1(0, 0) · q2(2, 3) + q1(1, 0) · q2(1, 3)
+ q1(2, 0) · q2(0, 3) + q1(0, 1) · q2(2, 2)
+ q1(1, 1) · q2(1, 2) + q1(2, 1) · q2(0, 2)
+ q1(0, 2) · q2(2, 1) + q1(1, 2) · q2(1, 1)
+ q1(2, 2) · q2(0, 1) + q1(0, 3) · q2(2, 0)
+ q1(1, 3) · q2(1, 0) + q1(2, 3) · q2(0, 0)
Using the actual values yields:

q12(2, 3) = 1 · 10 · α2² · β2³ + α1 · 4 · α2 · β2³
          + (α1²/2) · β2³ + β1 · 6 · α2² · β2²
          + α1 · β1 · 3 · α2 · β2² + (α1² · β1/2) · β2²
          + (β1²/2) · 3 · α2² · β2 + (α1 · β1²/2) · 2 · α2 · β2
          + (α1² · β1²/4) · β2 + (β1³/6) · α2²
          + (α1 · β1³/6) · α2 + (α1² · β1³/12) · 1 .
Note that α1 and α2 together (chain 1) always appear in the second power, whereas β1 and β2 (chain 2) appear in the third power, corresponding to the number of customers in each chain. Because of this, only the relative loads matter, and the absolute probabilities are obtained by normalization, dividing all terms by q12(2, 3). The detailed state probabilities are now easy to obtain. Only in the state contributing the term (α1² · β1³)/12 is the CPU (repair man) idle. If the two types of customers are identical, the model simplifies to Palm's machine/repair model with 5 terminals. In this case we have:

E1,5 = { (α1² · β1³)/12 } / q12(2, 3) .
Choosing α1 = β1 = α and α2 = β2 = 1 yields:

(α⁵/12) / q12(2, 3)
= (α⁵/12) / { 10 + 4α + α²/2 + 6α + 3α² + α³/2 + 3α²/2 + α³ + α⁴/4 + α³/6 + α⁴/6 + α⁵/12 }

= (α⁵/5!) / { 1 + α + α²/2! + α³/3! + α⁴/4! + α⁵/5! } ,
i.e. the Erlang–B formula as expected. 2
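The reduction to Erlang's B-formula can be verified numerically; the sketch below rebuilds q12(2, 3) from the Step 2 state probabilities and checks the identical-customer case (the value of α is arbitrary):

```python
from math import comb, factorial

def q1(i, j, a1, b1):
    """IS node (terminals), two chains: product of Poisson-type terms."""
    return a1**i / factorial(i) * b1**j / factorial(j)

def q2(i, j, a2, b2):
    """Single-server node (CPU), two chains: multinomial-weighted geometric terms."""
    return comb(i + j, i) * a2**i * b2**j

def q12_23(a1, b1, a2, b2):
    """Unnormalized probability of the total state (2, 3)."""
    return sum(q1(i, j, a1, b1) * q2(2 - i, 3 - j, a2, b2)
               for i in range(3) for j in range(4))

alpha = 1.7                                   # arbitrary identical load
p_idle = q1(2, 3, alpha, alpha) / q12_23(alpha, alpha, 1.0, 1.0)
```

Here p_idle is the probability that the repair man is idle, which equals the Erlang-B blocking E_{1,5}(α) with 5 servers.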
12.7 Other algorithms for queueing networks
The MVA-algorithm is also applicable to queueing networks with several chains when the nodes are single-server systems. During the last decade several algorithms have been published; an overview can be found in (Conway & Georganas, 1989 [16]). In general, exact algorithms are not applicable to large networks. Therefore, many approximate algorithms have been developed to deal with queueing networks of realistic size.
12.8 Complexity
Queueing networks have the same complexity as circuit-switched networks with direct routing (Sec. 8.5 and Tab. 8.2). The state space of the network shown in Tab. 12.3 has the following number of states for every node (cf. 12.14):

Π_{j=1}^{N} (Sj + 1) . (12.17)
The worst case is when every chain consists of one customer only. Then the number of states becomes 2^S, where S is the number of customers.
                  Node                      Population
Chain     1      2      · · ·     K         size

1        α11    α21     · · ·    αK1        S1
2        α12    α22     · · ·    αK2        S2
. . .    . . .  . . .   . . .    . . .      . . .
N        α1N    α2N     · · ·    αKN        SN

Table 12.3: The parameters of a queueing network with N chains, K nodes and Σ_i Si customers. The parameter αjk denotes the load from customers of chain j at node k (cf. Tab. 8.2).
12.9 Optimal capacity allocation
We now consider a data transmission system with K nodes, which are independent single-server queueing systems M/M/1 (Erlang's delay system with one server). The arrival process at node k is a Poisson process with intensity λk messages (customers) per time unit, and the message size is exponentially distributed with mean value 1/µk [bits]. The capacity of node k is ϕk [bits per time unit]. The mean service time becomes:

sk = (1/µk)/ϕk = 1/(µk · ϕk) .
So the mean service rate is µk · ϕk and the mean sojourn time is given by (9.34):

m1,k = 1/(µk · ϕk − λk) .
We introduce the following linear restriction on the total capacity:

F = Σ_{k=1}^{K} ϕk . (12.18)
For every allocation of capacity which satisfies (12.18), we have the following mean sojourn time over all messages (call average):

m1 = Σ_{k=1}^{K} (λk/λ) · 1/(µk · ϕk − λk) , (12.19)

where:

λ = Σ_{k=1}^{K} λk . (12.20)
By applying (10.55) we get the total mean service time:

1/µ = Σ_{k=1}^{K} (λk/λ) · (1/µk) . (12.21)

The total offered traffic is then:

A = λ/(µ · F) . (12.22)
Kleinrock’s law for optimal capacity allocation (Kleinrock, 1964 [73]) reads:
Theorem 12.2 Kleinrock's square root law: The optimal allocation of capacity which minimizes m1 (and thus the total number of messages in all nodes) is:

ϕk = λk/µk + F · (1 − A) · √(λk/µk) / Σ_{i=1}^{K} √(λi/µi) , (12.23)

under the condition that:

F > Σ_{k=1}^{K} λk/µk . (12.24)
Proof: This can be shown by introducing the Lagrange multiplier ϑ and considering:

G = m1 − ϑ · ( Σ_{k=1}^{K} ϕk − F ) . (12.25)

The minimum of G is obtained by choosing ϕk as given in (12.23).
With this optimal allocation we find the mean sojourn time:

m1 = { Σ_{k=1}^{K} √(λk/µk) }² / { λ · F · (1 − A) } . (12.26)
This optimal allocation corresponds to first allocating every node the necessary minimum capacity λk/µk. The remaining capacity (cf. 12.21):

F − Σ_{k=1}^{K} λk/µk = F · (1 − A) (12.27)

is allocated among the nodes proportionally to the square root of the average flow λk/µk.
If all messages have the same mean value (µk = µ), then we may consider different costs inthe nodes under the restriction that a fixed amount is available (Kleinrock, 1964 [73]).
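The square-root allocation is easy to evaluate numerically; a sketch (the traffic values in the usage example are chosen for illustration):

```python
from math import sqrt

def kleinrock_allocation(lam, mu, F):
    """Optimal capacities phi_k according to (12.23); requires condition (12.24)."""
    base = [l / m for l, m in zip(lam, mu)]      # minimum capacities lam_k/mu_k
    spare = F - sum(base)                        # = F * (1 - A)
    if spare <= 0:
        raise ValueError("condition (12.24) violated")
    roots = [sqrt(b) for b in base]
    total = sum(roots)
    return [b + spare * r / total for b, r in zip(base, roots)]

def mean_sojourn(lam, mu, phi):
    """Call-average mean sojourn time (12.19)."""
    lam_tot = sum(lam)
    return sum((l / lam_tot) / (m * p - l) for l, m, p in zip(lam, mu, phi))
```

With λ = (10, 5, 2), µ = (1, 1, 1) and F = 30, the spare capacity F · (1 − A) = 13 is split in proportion to √10 : √5 : √2, and the resulting m1 agrees with (12.26).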
Chapter 13
Traffic measurements
Traffic measurements are carried out in order to obtain quantitative information about the load on a system so that the system can be dimensioned. By traffic measurements we understand any kind of collection of data on the traffic loading a system. The system considered may be a physical system, for instance a computer, a telephone system, or the central laboratory of a hospital. It may also be a fictitious system. The collection of data in a computer simulation model corresponds to a traffic measurement. Billing of telephone calls also corresponds to a traffic measurement, where the measuring unit used is an amount of money.
The extent and type of measurements and the parameters (traffic characteristics) measured must in each case be chosen according to the demands, and in such a way that a minimum of technical and administrative effort results in a maximum of information and benefit. Due to the nature of traffic, a measurement during a limited time interval corresponds to a registration of a certain realization of the traffic process. A measurement is thus a sample of one or more random variables. By repeating the measurement we usually obtain a different value, and in general we are only able to state that the unknown parameter (the population parameter, for example the mean value of the carried traffic) with a certain probability lies within a certain interval, the confidence interval. The full information is the distribution function of the parameter. For practical purposes it is in general sufficient to know the mean value and the variance, i.e. the distribution itself is of minor importance.
In this chapter we shall focus upon the statistical foundation for estimating the reliability of a measurement, and only to a limited extent consider the technical background. As mentioned above, the theory is also applicable to stochastic computer simulation models.
13.1 Measuring principles and methods
The technical possibilities for measuring are decisive for what is measured and how the measurements are carried out. The first program-controlled measuring equipment was developed at the Technical University of Denmark and described in (Andersen & Hansen & Iversen, 1971 [2]). Any traffic measurement upon a traffic process, which is discrete in state and continuous in time, can in principle be implemented by combining two fundamental operations:
1. Number of events: this may for example be the number of errors, number of call attempts, number of errors in a program, number of jobs to a computing center, etc. (cf. number representation, Sec. 3.1.1).
2. Time intervals: examples are conversation times, execution times of jobs in a computer,waiting times, etc. (cf. interval representation, Sec. 3.1.2).
By combining these two operations we may obtain any characteristic of a traffic process. The most important characteristic is the (carried) traffic volume, i.e. the summation of all (number) holding times (interval) within a given measuring period.
From a functional point of view, all traffic measuring methods can be divided into the following two classes:
1. Continuous measuring methods.
2. Discrete measuring methods.
13.1.1 Continuous measurements
In this case the measuring point is active, and it activates the measuring equipment at the instant of an event. Even if the measuring method is continuous, the result may be discrete.
Example 13.1.1: Measuring equipment: continuous time
Examples of equipment operating according to the continuous principle are:
(a) Electro-mechanical counters which are increased by one at the instant of an event.
(b) Recording x–y plotters connected to a point which is active during a connection.
(c) Ampere-hour meters, which integrate the power consumption during a measuring period. When applied for traffic volume measurements in old electro-mechanical exchanges, every trunk is connected through a resistor of 9.6 kΩ, which during occupation is connected between –48 volts and ground and thus consumes 5 mA.
(d) Water meters which measure the water consumption of a household.
□
13.1.2 Discrete measurements
In this case the measuring point is passive, and the measuring equipment must itself test (poll) whether there have been changes at the measuring points (normally binary, on-off). This method is called the scanning method, and the scanning is usually done at regular instants (constant, i.e. deterministic, time intervals). All events which have taken place between two consecutive scanning instants are, from a time point of view, referred to the latter scanning instant and considered as taking place at this instant.
Example 13.1.2: Measuring equipment: discrete time
Examples of equipment operating according to the discrete time principle are:
(a) Call charging according to the Karlsson principle, where charging pulses are issued at regular time instants (their distance depends upon the cost per time unit) to the meter of the subscriber who has initiated the call. Each unit (step) corresponds to a certain amount of money. If we measure the duration of a call by its cost, then we observe a discrete distribution (0, 1, 2, . . . units). The method is named after S.A. Karlsson from Finland (Karlsson, 1937 [65]). In comparison with most other methods it requires a minimum of administration.
(b) The carried traffic on a trunk group of an electro-mechanical exchange is in practice measured according to the scanning principle. During one hour we observe the number of busy trunks 100 times (every 36 seconds) and add these numbers on a mechanical counter, which thus indicates the average carried traffic with two decimals. By also counting the number of calls we can estimate the average holding time.
(c) The scanning principle is particularly appropriate for implementation in digital systems. For example, the processor-controlled equipment developed at DTU, the Technical University of Denmark, in 1969 was able to test 1024 measuring points (e.g. relays in an electro-mechanical exchange, trunks or channels) within 5 milliseconds. The states of each measuring point (idle/busy or off/on) at the two latest scannings are stored in the computer memory, and by comparing the readings we are able to detect changes of state. A change of state 0 → 1 corresponds to the start of an occupation, and 1 → 0 corresponds to the termination of an occupation (last-look principle). The scannings are controlled by a clock. Therefore we may monitor every channel during time and measure time intervals, and thus observe time distributions. Whereas the classical equipment (erlang-meters) mentioned above observes the traffic process in the state space (vertical, number representation), the program-controlled equipment observes the traffic process in time space (horizontal, interval representation), in discrete time. The amount of information is almost independent of the scanning interval, as only state changes are stored (the time of a scanning is measured in an integral number of scanning intervals).
□
Measuring methods have had a decisive influence upon the way of thinking and the way of formulating and analyzing the statistical problems. The classical equipment operating in state space has implied that the statistical analyses have been based upon state probabilities, i.e. basically birth and death processes. From a mathematical point of view these models have been rather complex (vertical measurements).
The following derivations are in comparison very elementary and even more general, and they are inspired by the operation in time space of the program-controlled equipment (Iversen, 1976 [41]) (horizontal measurements).
13.2 Theory of sampling
Let us assume we have a sample of n IID (Independent and Identically Distributed) observations X1, X2, . . . , Xn of a random variable with unknown finite mean value m1 and finite variance σ² (population parameters).
The mean value and variance of the sample are defined as follows:
$$ \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i \qquad (13.1) $$

$$ s^2 = \frac{1}{n-1} \left\{ \sum_{i=1}^{n} X_i^2 - n \cdot \bar{X}^2 \right\} \qquad (13.2) $$
Both X̄ and s² are functions of random variables and therefore themselves random variables, defined by a distribution we call the sample distribution. X̄ is a central (unbiased) estimator of the unknown population mean value m1, i.e.:

$$ E\{\bar{X}\} = m_1 \qquad (13.3) $$
Furthermore, s²/n is a central estimator of the unknown variance of the sample mean X̄, i.e.:

$$ \sigma_{\bar{X}}^2 = \frac{s^2}{n} \qquad (13.4) $$
We describe the accuracy of an estimate of a sample parameter by means of a confidence interval, which with a given probability specifies how the estimate is placed relative to the unknown theoretical value. In our case the confidence interval of the mean value becomes:

$$ \bar{X} \pm t_{n-1,\,1-\alpha/2} \cdot \sqrt{\frac{s^2}{n}} \qquad (13.5) $$
where t_{n−1, 1−α/2} is the upper (1 − α/2) percentile of the Student's t-distribution with n − 1 degrees of freedom. The probability that the confidence interval includes the unknown theoretical mean value is equal to (1 − α), which is called the level of confidence. Some values of the Student's t-distribution are given in Table 13.1. When n becomes large, the Student's t-distribution converges to the Normal distribution, and we may use the percentiles of this distribution. The assumption of independence is fulfilled for measurements taken on different days, but not, for example, for successive measurements by the scanning method within a limited time interval, because the number of busy channels at a given instant will be correlated with the number of busy circuits at the previous and the next scanning.

Figure 13.1: Observation of a traffic process by a continuous measuring method and by the scanning method with regular scanning intervals. By the scanning method it is sufficient to observe the changes of state.

       n     α = 10%    α = 5%    α = 1%
       1      6.314     12.706    63.657
       2      2.920      4.303     9.925
       5      2.015      2.571     4.032
      10      1.812      2.228     3.169
      20      1.725      2.086     2.845
      40      1.684      2.021     2.704
      ∞       1.645      1.960     2.576

Table 13.1: Percentiles of the Student's t-distribution with n degrees of freedom. A specific value of α corresponds to a probability mass α/2 in each tail of the Student's t-distribution. When n is large, we may use the percentiles of the Normal distribution.

In the following sections we calculate the mean value and the variance of traffic measurements during, for example, one hour. This aggregated value for a given day may then be used as a single observation in the formulæ above, where the number of observations typically will be the number of days we measure.
Example 13.2.1: Confidence interval for call congestion
On a trunk group of 30 trunks (channels) we observe the outcome of 500 call attempts. This measurement is repeated 11 times, and we find the following call congestion values (in percent):

9.2, 3.6, 3.6, 2.0, 7.4, 2.2, 5.2, 5.4, 3.4, 2.0, 1.4

The total sum of the observations is 45.4, and the total sum of the squares of the observations is 247.88. We find (13.1) X̄ = 4.1273 % and (13.2) s² = 6.0502 (%)². At the 95 % level the confidence interval becomes, using the t-values in Table 13.1: (2.47–5.78). Note that the observations were obtained by simulating a PCT-I traffic of 25 erlang offered to 30 channels. According to the Erlang B-formula the theoretical blocking probability is 5.2603 %. This value is within the confidence interval. If we want to reduce the confidence interval by a factor of 10, then we have to make 100 times as many observations (cf. formula 13.5), i.e. 50,000 per measurement (sub-run). We carry out this simulation and observe a call congestion equal to 5.245 % and a confidence interval (5.093–5.398). □
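The numbers in this example are easy to reproduce with (13.1), (13.2), and (13.5); a minimal sketch, with the t-percentile taken from Table 13.1:

```python
import math

# Call congestion observations (%) from Example 13.2.1
obs = [9.2, 3.6, 3.6, 2.0, 7.4, 2.2, 5.2, 5.4, 3.4, 2.0, 1.4]
n = len(obs)

xbar = sum(obs) / n                                      # (13.1)
s2 = (sum(x * x for x in obs) - n * xbar ** 2) / (n - 1)  # (13.2)

t_10 = 2.228   # t-percentile, 10 degrees of freedom, alpha = 5% (Table 13.1)
half = t_10 * math.sqrt(s2 / n)                          # half-width in (13.5)
print(f"mean = {xbar:.4f} %, s2 = {s2:.4f}, "
      f"CI = ({xbar - half:.2f}, {xbar + half:.2f})")
# → mean = 4.1273 %, s2 = 6.0502, CI = (2.47, 5.78)
```

The output reproduces the values quoted in the example.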
13.3 Continuous measurements in an unlimited period
Measuring of time intervals by continuous measuring methods with no truncation of the measuring period is easy to deal with by the theory of sampling described in Sec. 13.2 above.
Figure 13.2: When analyzing traffic measurements we distinguish between two cases: (a) measurements in an unlimited time period, where all calls initiated during the measuring period contribute with their total duration; (b) measurements in a limited measuring period, where all calls contribute with the portion of their holding times which is located inside the measuring period. In the figure the sections of the holding times contributing to the measurements are shown with full lines.
For a traffic volume or a traffic intensity we can apply the formulæ (2.82) and (2.84) for a stochastic sum. They are quite general, the only restriction being stochastic independence between X and N. In practice this means that the system must be without congestion. In general we will have a few percent congestion and may still, as a worst case, assume independence. By far the most important case is a Poisson arrival process with intensity λ. We then get a stochastic sum (Sec. 2.3.3). For the Poisson arrival process we have, when we consider a time interval T:

$$ m_{1,n} = \sigma_n^2 = \lambda \cdot T $$
and therefore we find:

$$ m_{1,s} = \lambda T \cdot m_{1,t} $$

$$ \sigma_s^2 = \lambda T \left( m_{1,t}^2 + \sigma_t^2 \right) = \lambda T \cdot m_{2,t} = \lambda T \cdot m_{1,t}^2 \cdot \varepsilon_t \,, \qquad (13.6) $$

where m_{2,t} is the second (non-central) moment of the holding time distribution, and ε_t is Palm's form factor of the same distribution:

$$ \varepsilon_t = \frac{m_{2,t}}{m_{1,t}^2} = 1 + \frac{\sigma_t^2}{m_{1,t}^2} \qquad (13.7) $$
The distribution of the stochastic sum S_T will in this case be a compound Poisson distribution (Feller, 1950 [32]).
The formulæ correspond to a traffic volume (e.g. erlang-hours). For most applications, such as dimensioning, we are interested in the average number of occupied channels, i.e. the traffic intensity (rate) = traffic per time unit. Choosing the mean holding time as the time unit (m_{1,t} = 1, λ = A), we get:

$$ m_{1,i} = A \qquad (13.8) $$

$$ \sigma_i^2 = \frac{A}{T} \cdot \varepsilon_t \qquad (13.9) $$
These formulæ are thus valid for arbitrary holding time distributions. The formulæ (13.8) and (13.9) were originally derived by C. Palm (1941 [91]). In (Rabe, 1949 [99]) the formulæ for the special cases εt = 1 (constant holding time) and εt = 2 (exponentially distributed holding times) were published.
The above formulæ are valid for all calls arriving inside the interval T when we measure the total duration of all holding times, regardless of how long the calls stay (Fig. 13.2 a).
Example 13.3.1: Accuracy of a measurement
We notice that we always obtain the correct mean value of the traffic intensity (13.8). The variance, however, is proportional to the form factor εt. For some common holding time distributions we get the following variance of the measured traffic intensity:

    Constant:                  σ_i² = A/T ,
    Exponential distribution:  σ_i² = (A/T) · 2 ,
    Observed (Fig. 2.5):       σ_i² = (A/T) · 3.83 .
Observing telephone traffic, we often find that εt is significantly larger than the value 2 (exponential distribution), which is presumed valid in many classical teletraffic models (Fig. 2.5). Therefore, the accuracy of a measurement is lower than given in many tables. This is, however, compensated by the assumption that the systems are non-blocking: in a system with blocking the variance becomes smaller, due to negative correlation between holding times and number of calls. □
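Formulæ (13.8) and (13.9) are easy to check by simulation. The sketch below is a quick Monte Carlo check of my own, not from the book: PCT-I traffic with A = 10 erlang is measured continuously over T = 20 mean holding times in an unlimited period, where (13.9) with εt = 2 predicts a variance of 2A/T = 1.0.

```python
import random

random.seed(1)
A, T = 10.0, 20.0   # traffic intensity (erlang), period in mean holding times

def measured_intensity():
    """One continuous measurement in an unlimited period: every call arriving
    in (0, T) contributes its full holding time (exponential, mean 1)."""
    t, volume = random.expovariate(A), 0.0
    while t < T:                          # Poisson arrival process, rate A
        volume += random.expovariate(1.0)  # holding time of this call
        t += random.expovariate(A)         # next arrival
    return volume / T

reps = 10000
samples = [measured_intensity() for _ in range(reps)]
mean = sum(samples) / reps
var = sum((x - mean) ** 2 for x in samples) / (reps - 1)
print(mean, var)   # (13.8) predicts 10, (13.9) predicts (A/T)*2 = 1.0
```

With this sample size the empirical mean and variance land close to the theoretical values 10 and 1.0.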
Example 13.3.2: Relative accuracy of a measurement
The relative accuracy of a measurement is given by the ratio:

$$ S = \frac{\sigma_i}{m_{1,i}} = \left( \frac{\varepsilon_t}{A \, T} \right)^{1/2} = \text{coefficient of variation}. $$

From this we notice that if εt = 4, then we have to measure twice as long a period to obtain the same reliability of a measurement as for exponentially distributed holding times. □
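Solving S = (εt/(A·T))^{1/2} for T gives the measuring period required for a target relative accuracy; a small illustration (the numerical values are my own, chosen for the example):

```python
def required_period(form_factor, A, S):
    """Measuring period T (in mean holding times) that yields relative
    accuracy S for traffic intensity A, from S = sqrt(form_factor/(A*T))."""
    return form_factor / (A * S * S)

# Target: 5 % relative accuracy on A = 10 erlang
print(required_period(2.0, 10.0, 0.05))   # exponential holding times: T = 80
print(required_period(4.0, 10.0, 0.05))   # form factor 4: twice as long, T = 160
```

Doubling the form factor doubles the required period, as stated in the example above.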
For a given time period we notice that the accuracy of the traffic intensity when measuring a small trunk group is much larger than when measuring a large trunk group, because the accuracy depends only on the traffic intensity A. When dimensioning a small trunk group, an error in the estimation of the traffic of 10 % has much less influence than the same percentage error on a large trunk group (Sec. 4.8.1). Therefore we measure the same time period on all trunk groups. In Fig. 13.5 the relative accuracy for a continuous measurement is given by the straight line h = 0.
13.4 Scanning method in an unlimited time period
In this section we only consider regular (constant) scanning intervals. The scanning principle is, for example, applied to traffic measurements, call charging, numerical simulations, and processor control. By the scanning method we observe a discrete time distribution for the holding time, which in real time usually is continuous.

In practice we usually choose a constant distance h between scanning instants, and we find the following relation between the observed time interval and the real time interval (Fig. 13.3):
    Observed time    Real time
        0 h          0 h – 1 h
        1 h          0 h – 2 h
        2 h          1 h – 3 h
        3 h          2 h – 4 h
        . . .          . . .
Figure 13.3: By the scanning method a continuous time interval is transformed into a discrete time interval. The transformation is not unique (cf. Sec. 13.4).
We notice that there is overlap between the continuous time intervals, so that the discrete distribution cannot be obtained by a simple integration of the continuous time interval over a fixed interval of length h. If the real holding times have a distribution function F(t), then it can be shown that we will observe the following discrete distribution (Iversen, 1976 [41]):
$$ p(0) = \frac{1}{h} \int_0^h F(t) \, \mathrm{d}t \qquad (13.10) $$

$$ p(k) = \frac{1}{h} \int_0^h \left\{ F(t + kh) - F(t + (k-1)h) \right\} \mathrm{d}t \,, \quad k = 1, 2, \ldots \qquad (13.11) $$
Interpretation: The arrival time of the call is assumed to be independent of the scanning process. Therefore, the density function of the time interval from the call arrival instant to the first scanning instant is uniform and equal to 1/h (Sec. 3.6.3). The probability of observing zero scanning instants during the call holding time is denoted by p(0), and it is equal to the probability that the call terminates before the next scanning instant. For a fixed value t of the interval to the first scanning instant, this probability is F(t), weighted with the density 1/h; to obtain the total probability we integrate over all possible values t (0 ≤ t < h) and get (13.10). In a similar way we derive p(k) (13.11).
By partial integration it can be shown that for any distribution function F(t) we will always observe the correct mean value:

$$ h \cdot \sum_{k=0}^{\infty} k \cdot p(k) = \int_0^{\infty} t \, \mathrm{d}F(t) \,. \qquad (13.12) $$
When using Karlsson charging we will therefore always in the long run charge the correctamount.
For exponentially distributed holding times, F(t) = 1 − e^{−µt}, we will observe a discrete distribution, Westerberg's distribution (Iversen, 1976 [41]):

$$ p(0) = 1 - \frac{1}{\mu h} \left( 1 - e^{-\mu h} \right) , \qquad (13.13) $$

$$ p(k) = \frac{1}{\mu h} \left( 1 - e^{-\mu h} \right)^2 \cdot e^{-(k-1)\mu h} \,, \quad k = 1, 2, \ldots \qquad (13.14) $$
This distribution can be shown to have the following mean value and form factor:

$$ m_1 = \frac{1}{\mu h} \,, \qquad (13.15) $$

$$ \varepsilon = \mu h \cdot \frac{e^{\mu h} + 1}{e^{\mu h} - 1} \;\geq\; 2 \,. \qquad (13.16) $$
The form factor ε equals one plus the square of the relative accuracy of the measurement. For a continuous measurement the form factor is 2. The contribution ε − 2 is thus due to the influence of the measuring principle.
The form factor is a measure of the accuracy of the measurements. Fig. 13.4 shows how the form factor of the observed holding time for exponentially distributed holding times depends on the length of the scanning interval (13.16). By continuous measurements we get an ordinary sample. By the scanning method we get a sample of a sample, so that there is uncertainty both because of the measuring method and because of the limited sample size.
Fig. 3.2 shows an example of the Westerberg distribution. It is in particular the zero class which deviates from what we would expect from a continuous exponential distribution. If we insert the form factor in the expression (13.9) for σ_i², then by choosing the mean holding time as time unit, m_{1,t} = 1/µ = 1, we get the following estimates of the traffic intensity when using the scanning method:

$$ m_{1,i} = A \,, $$

$$ \sigma_i^2 = \frac{A}{T} \left\{ h \cdot \frac{e^{h} + 1}{e^{h} - 1} \right\} . \qquad (13.17) $$
By the continuous measuring method the variance is 2A/T. We also obtain this result from (13.17) by letting h → 0.
Fig. 13.5 shows the relative accuracy of the measured traffic volume, both for a continuous measurement (13.8) & (13.9) and for the scanning method (13.17). Formula (13.17) was derived by (Palm, 1941 [91]), but became known only when it was "re-discovered" by W.S. Hayward Jr. (1952 [38]).
Example 13.4.1: Billing principles
Various principles are applied for charging (billing) of calls. In addition, the charging rate is usually varied during the 24 hours of the day to influence the habits of the subscribers. Among the principles we may mention:
(a) Fixed amount per call. This principle is often applied in manual systems for local calls (flat rate).
(b) Karlsson charging. This corresponds to the measuring principle dealt with in this section, because the holding time is placed at random relative to the regular charging pulses. This principle has been applied in Denmark in the crossbar exchanges.
(c) Modified Karlsson charging. We may for instance add an extra pulse at the start of the call. In digital systems in Denmark there is a fixed fee per call in addition to a fee proportional to the duration of the call.
(d) The start of the holding time is synchronized with the scanning process. This is for example applied for operator-handled calls and in coin-box telephones.
□
13.5 Numerical example
For a specific measurement we calculate m_{1,i} and σ_i². The deviation of the observed traffic intensity from the theoretically correct value is approximately Normally distributed. Therefore, the unknown theoretical mean value will, with probability 95 %, be within the calculated confidence interval (cf. Sec. 13.2):

$$ m_{1,i} \pm 1.96 \cdot \sigma_i \qquad (13.18) $$
The variance σ_i² is thus decisive for the accuracy of a measurement. To study which factors are of major importance, we make numerical calculations for some examples. All formulæ may easily be evaluated on a pocket calculator.
Both examples assume PCT-I traffic (i.e. Poisson arrival process and exponentially distributed holding times), a traffic intensity of 10 erlang, and a mean holding time of 180 seconds, which is chosen as the time unit.
Example a: This corresponds to a classical traffic measurement:

    Measuring period  = 3600 sec = 20 time units = T
    Scanning interval =   36 sec = 0.2 time units = h = 1/λs
    (100 observations)
Example b: In this case we only scan once per mean holding time:

    Measuring period  = 720 sec = 4 time units = T
    Scanning interval = 180 sec = 1 time unit = h = 1/λs
    (4 observations)
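The unlimited-period variances in Table 13.2 can be reproduced directly from (13.9) and (13.17); a minimal sketch for the two examples:

```python
import math

A = 10.0                        # traffic intensity (erlang)

def var_continuous(A, T):
    """(13.9) with form factor 2 (exponential holding times), unlimited period."""
    return 2.0 * A / T

def var_scanning(A, T, h):
    """(13.17): regular scanning with interval h, unlimited period.
    T and h are measured in mean holding times."""
    return (A / T) * h * (math.exp(h) + 1.0) / (math.exp(h) - 1.0)

for name, T, h in [("example a", 20.0, 0.2), ("example b", 4.0, 1.0)]:
    print(name, var_continuous(A, T), round(var_scanning(A, T, h), 4))
# → example a: 1.0 and 1.0033;  example b: 5.0 and 5.4099  (cf. Table 13.2)
```

The computed values agree with the "Unlimited" rows for the continuous and scanning methods in Table 13.2.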
From Table 13.2 we can draw some general conclusions:
Figure 13.4: Form factor for exponentially distributed holding times observed with Erlang-k distributed scanning intervals in an unlimited measuring period. The case k = ∞ corresponds to regular (constant) scan intervals, which transform the exponential distribution into Westerberg's distribution. The case k = 1 corresponds to exponentially distributed scan intervals (cf. the roulette simulation method). The case h = 0 corresponds to a continuous measurement. We notice that with regular scan intervals we lose almost no information if the scan interval is smaller than the mean holding time (chosen as time unit).
Figure 13.5: Using a double-logarithmic scale we obtain a linear relationship between the relative accuracy of the traffic intensity A and the measured traffic volume A · T when measuring in an unlimited time period. A scan interval h = 0 corresponds to a continuous measurement, and h > 0 corresponds to the scanning method. The influence of a limited measuring period is shown by the dotted line for the case A = 1 erlang and a continuous measurement taking account of the limited measuring interval. T is measured in mean holding times.
• With the scanning method we lose very little information compared to a continuous measurement as long as the scanning interval is smaller than the mean holding time (cf. Fig. 13.4). A continuous measurement can be considered an optimal reference for any discrete method.

• Exploiting knowledge about a limited measuring period yields additional information for a short measurement (T < 5), whereas we obtain little additional information for T > 10. (There is correlation in the traffic process, and the first part of a measuring period yields more information than later parts.)

• With the roulette method we of course lose more information than with the scanning method (Iversen 1976 [41], 1977 [42]).
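The loss of information discussed above can be illustrated with a small simulation. The sketch below is not from the book: it assumes PCT-I traffic (Poisson arrivals, exponentially distributed holding times) carried without blocking, and all function names are ours. With a scan interval below one mean holding time, the switch-count (scanning) estimate stays close to the continuous one:

```python
import random

random.seed(1)

def simulate_calls(arrival_rate, mean_holding, t_end):
    """Generate (start, finish) intervals for Poisson arrivals with
    exponential holding times (PCT-I traffic, no blocking)."""
    calls, t = [], 0.0
    while True:
        t += random.expovariate(arrival_rate)
        if t >= t_end:
            return calls
        calls.append((t, t + random.expovariate(1.0 / mean_holding)))

def continuous_estimate(calls, t_end):
    """Traffic intensity from the exact carried traffic volume (erlang)."""
    volume = sum(min(finish, t_end) - start for start, finish in calls)
    return volume / t_end

def scanning_estimate(calls, t_end, h):
    """Traffic intensity from scans every h time units: at each scan
    instant the number of occupied servers is counted (switch counts)."""
    scans = [k * h for k in range(1, int(t_end / h) + 1)]
    busy = sum(1 for t in scans for start, finish in calls if start <= t < finish)
    return busy / len(scans)

A, T = 1.0, 1000.0                # offered traffic 1 erlang; T in mean holding times
calls = simulate_calls(A, 1.0, T)
print(continuous_estimate(calls, T))      # close to A
print(scanning_estimate(calls, T, 0.5))   # h < mean holding time: nearly as good
```

Making h several mean holding times long degrades the scanning estimate visibly, which is the effect quantified in Fig. 13.4 and Fig. 13.5.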
All the above-mentioned factors have far less influence than the fact that real holding times often deviate from the exponential distribution. In practice we often observe a form factor of about 4–6.

                          Example a               Example b
                        σᵢ²       σᵢ            σᵢ²       σᵢ

  Continuous Method
    Unlimited (13.8)    1.0000   1.0000         5.0000   2.2361
    Limited             0.9500   0.9747         3.7729   1.9424

  Scanning Method
    Unlimited (13.17)   1.0033   1.0016         5.4099   2.3259
    Limited             0.9535   0.9765         4.2801   2.0688

  Roulette Method
    Unlimited           1.1000   1.0488         7.5000   2.7386
    Limited             1.0500   1.0247         6.2729   2.5046

Table 13.2: Numerical comparison of various measuring principles in different time intervals.
The conclusion to be drawn from the above examples is that for practical applications it is more relevant to apply the elementary formula (13.8) with a correct form factor than to take account of the measuring method and the measuring period.

The above theory is exact when we consider charging of calls and measurement of time intervals. For stochastic computer simulations the traffic process is usually stationary, and the theory can be applied to estimate the reliability of the results. However, the results are approximate, as the theoretical assumptions about congestion-free systems are seldom of interest in practice.

In real-life measurements on working systems we have traffic variations during the day, technical errors, measuring errors, etc. Some of these factors compensate for each other, so the results we have derived give a good estimate of the reliability, and they are a good basis for comparing different measurements and measuring principles.
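In practice the reliability of a measured traffic intensity is often summarised by a confidence interval computed from repeated, non-overlapping measuring periods. The sketch below is illustrative only: the per-period intensities are hypothetical numbers, and the normal approximation with the 95% quantile 1.96 is our assumption, not a formula from this chapter.

```python
import math

def confidence_interval(estimates, quantile=1.96):
    """Two-sided confidence interval for the mean traffic intensity,
    treating the per-period estimates as independent and roughly normal
    (an assumption; the periods must not overlap)."""
    n = len(estimates)
    mean = sum(estimates) / n
    var = sum((x - mean) ** 2 for x in estimates) / (n - 1)  # unbiased sample variance
    half_width = quantile * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Hypothetical traffic intensities (erlang) from six busy-hour measurements:
periods = [0.93, 1.08, 0.99, 1.04, 0.95, 1.02]
low, high = confidence_interval(periods)
print(f"A = {sum(periods) / len(periods):.3f} erlang, 95% CI ({low:.3f}, {high:.3f})")
```

The same computation applies to simulation output, provided the sub-runs are long enough to be treated as approximately independent.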
Bibliography
[1] Abate, J. & Whitt, W. (1997): Limits and approximations for the M/G/1 LIFO waiting–time distribution. Operations Research Letters, Vol. 20 (1997) : 5, 199–206.

[2] Andersen, B. & Hansen, N.H. & Iversen, V.B. (1971): Use of minicomputer for telephone traffic measurements. Teleteknik (Engl. ed.) Vol. 15 (1971) : 2, 33–46.

[3] Ash, G.R. (1998): Dynamic routing in telecommunications networks. McGraw-Hill 1998. 746 pp.

[4] Baskett, F. & Chandy, K.M. & Muntz, R.R. & Palacios, F.G. (1975): Open, closed and mixed networks of queues with different classes of customers. Journal of the ACM, April 1975, pp. 248–260. (BCMP queueing networks).

[5] Bear, D. (1988): Principles of telecommunication traffic engineering. Revised 3rd Edition. Peter Peregrinus Ltd, Stevenage 1988. 250 pp.

[6] Bech, N.I. (1954): A method of computing the loss in alternative trunking and grading systems. The Copenhagen Telephone Company, May 1955. 14 pp. Translated from Danish: Metode til beregning af spærring i alternativ trunking- og graderingssystemer. Teleteknik, Vol. 5 (1954) : 4, pp. 435–448.

[7] Bolotin, V.A. (1994): Telephone circuit holding time distributions. ITC 14, 14th International Teletraffic Congress, Antibes Juan-les-Pins, France, June 6–10, 1994. Proceedings pp. 125–134. Elsevier 1994.

[8] Bretschneider, G. (1956): Die Berechnung von Leitungsgruppen für überfließenden Verkehr. Nachrichtentechnische Zeitschrift, NTZ, Vol. 9 (1956) : 11, 533–540.

[9] Bretschneider, G. (1973): Extension of the equivalent random method to smooth traffics. ITC–7, Seventh International Teletraffic Congress, Stockholm, June 1973. Proceedings, paper 411. 9 pp.

[10] Brockmeyer, E. (1957): A Survey of Traffic–Measuring Methods in the Copenhagen Exchanges. Teleteknik (Engl. ed.) 1957 : 1, pp. 92–105.

[11] Brockmeyer, E. (1954): The simple overflow problem in the theory of telephone traffic. Teleteknik 1954, pp. 361–374. In Danish. English translation by the Copenhagen Telephone Company, April 1955. 15 pp.

[12] Brockmeyer, E. & Halstrøm, H.L. & Jensen, Arne (1948): The life and works of A.K. Erlang. Transactions of the Danish Academy of Technical Sciences, 1948, No. 2, 277 pp. Copenhagen 1948.

[13] Burke, P.J. (1956): The output of a queueing system. Operations Research, Vol. 4 (1956), 699–704.
[14] Christensen, P.V. (1914): The number of selectors in automatic telephone systems. The Post Office Electrical Engineers Journal, Vol. 7 (1914), 271–281.

[15] Cobham, A. (1954): Priority assignment in waiting line problems. Operations Research, Vol. 2 (1954), 70–76.

[16] Conway, A.E. & Georganas, N.D. (1989): Queueing networks – exact computational algorithms: A unified theory based on decomposition and aggregation. The MIT Press 1989. 234 pp.

[17] Cooper, R.B. (1972): Introduction to queueing theory. New York 1972. 277 pp.

[18] Cox, D.R. (1955): A use of complex probabilities in the theory of stochastic processes. Proc. Camb. Phil. Soc., Vol. 51 (1955), pp. 313–319.

[19] Cox, D.R. & Miller, H.D. (1965): The theory of stochastic processes. Methuen & Co., London 1965. 398 pp.

[20] Cox, D.R. & Isham, V. (1980): Point processes. Chapman and Hall, 1980. 188 pp.

[21] Crommelin, C.D. (1932): Delay probability formulae when the holding times are constant. Post Office Electrical Engineers Journal, Vol. 25 (1932), pp. 41–50.

[22] Crommelin, C.D. (1934): Delay probability formulae. Post Office Electrical Engineers Journal, Vol. 26 (1934), pp. 266–274.

[23] Delbrouck, L.E.N. (1983): On the steady–state distribution in a service facility carrying mixtures of traffic with different peakedness factor and capacity requirements. IEEE Transactions on Communications, Vol. COM–31 (1983) : 11, 1209–1211.

[24] Dickmeiss, A. & Larsen, M. (1993): Spærringsberegninger i telenet (Blocking calculations in telecommunication networks, in Danish). Master’s thesis. Institut for Telekommunikation, Danmarks Tekniske Højskole, 1993. 141 pp.

[25] Eilon, S. (1969): A simpler proof of L = λW. Operations Research, Vol. 17 (1969), pp. 915–917.

[26] Elldin, A. & Lind, G. (1964): Elementary telephone traffic theory. Chapter 4. L.M. Ericsson AB, Stockholm 1964. 46 pp.

[27] Engset, T.O. (1915): Om beregning av vælgere i et automatisk telefonsystem, en undersøkelse angaaende punkter i grundlaget for sandsynlighetsteoriens anvendelse paa bestemmelse av de automatiske centralinretningers omfang. Kristiania (Oslo) 1915. 128 pp. English version: On the calculation of switches in an automatic telephone system. Telektronikk, Vol. 94 (1998) : 2, 99–142.

[28] Engset, T.O. (1918): Die Wahrscheinlichkeitsrechnung zur Bestimmung der Wählerzahl in automatischen Fernsprechämtern. Elektrotechnische Zeitschrift, 1918, Heft 31. Translated to English in Telektronikk (Norwegian), June 1991, 4 pp.
[29] Erlang, A.K. (1909): The Theory of Probabilities and Telephone Conversations. Nyt Matematisk Tidsskrift, B, Vol. 20, pp. 33–40 (in Danish). English translation in: The Life and Works of A.K. Erlang, E. Brockmeyer, H.L. Halstrøm and Arne Jensen, pp. 131–137. Copenhagen 1948.

[30] Esteves, J.S. & Craveirinha, J. & Cardoso, D. (1995): Computing Erlang–B Function Derivatives in the Number of Servers. Communications in Statistics – Stochastic Models, Vol. 11 (1995) : 2, 311–331.

[31] Farmer, R.F. & Kaufman, I. (1978): On the Numerical Evaluation of Some Basic Traffic Formulae. Networks, Vol. 8 (1978), 153–186.

[32] Feller, W. (1950): An introduction to probability theory and its applications. Vol. 1, New York 1950. 461 pp.

[33] Fortet, R. & Grandjean, Ch. (1964): Congestion in a loss system when some calls want several devices simultaneously. Electrical Communications, Vol. 39 (1964) : 4, 513–526. Paper presented at ITC–4, Fourth International Teletraffic Congress, London, England, 15–21 July 1964.

[34] Fredericks, A.A. (1980): Congestion in blocking systems – a simple approximation technique. The Bell System Technical Journal, Vol. 59 (1980) : 6, 805–827.

[35] Fry, T.C. (1928): Probability and its Engineering Uses. New York 1928. 470 pp.

[36] Gordon, W.J. & Newell, G.F. (1967): Closed queueing systems with exponential servers. Operations Research, Vol. 15 (1967), pp. 254–265.

[37] Grillo, D. & Skoog, R.A. & Chia, S. & Leung, K.K. (1998): Teletraffic engineering for mobile personal communications in ITU–T work: the need to match theory to practice. IEEE Personal Communications, Vol. 5 (1998) : 6, 38–58.

[38] Hayward, W.S. Jr. (1952): The reliability of telephone traffic load measurements by switch counts. The Bell System Technical Journal, Vol. 31 (1952) : 2, 357–377.

[39] ITU-T (1993): Traffic intensity unit. ITU–T Recommendation B.18. 1993. 1 p.

[40] Iversen, V.B. (1973): Analysis of real teletraffic processes based on computerized measurements. Ericsson Technics, No. 1, 1973, pp. 1–64. “Holbæk measurements”.

[41] Iversen, V.B. (1976): On the accuracy in measurements of time intervals and traffic intensities with application to teletraffic and simulation. Ph.D. thesis. IMSOR, Technical University of Denmark 1976. 202 pp.

[42] Iversen, V.B. (1976): On general point processes in teletraffic theory with applications to measurements and simulation. ITC-8, Eighth International Teletraffic Congress, paper 312/1–8. Melbourne 1976. Published in Teleteknik (Engl. ed.) 1977 : 2, pp. 59–70.

[43] Iversen, V.B. (1980): The A–formula. Teleteknik (English ed.), Vol. 23 (1980) : 2, 64–79.
[44] Iversen, V.B. (1982): Exact calculation of waiting time distributions in queueing systems with constant holding times. NTS-4, Fourth Nordic Teletraffic Seminar, Helsinki 1982. 31 pp.

[45] Iversen, V.B. (1987): The exact evaluation of multi–service loss systems with access control. Teleteknik, English ed., Vol. 31 (1987) : 2, 56–61. NTS–7, Seventh Nordic Teletraffic Seminar, Lund, Sweden, August 25–27, 1987. 22 pp.

[46] Iversen, V.B. & Nielsen, B.F. (1985): Some properties of Coxian distributions with applications. Proceedings of the International Conference on Modelling Techniques and Tools for Performance Analysis, pp. 61–66. 5–7 June 1985, Valbonne, France. North–Holland Publ. Co. 1985. 365 pp. (Editor N. Abu El Ata).

[47] Iversen, V.B. & Stepanov, S.N. (1997): The usage of convolution algorithm with truncation for estimation of individual blocking probabilities in circuit-switched telecommunication networks. Proceedings of the 15th International Teletraffic Congress, ITC 15, Washington, DC, USA, 22–27 June 1997, pp. 1327–1336.

[48] Iversen, V.B. & Sanders, B. (2001): Engset formulæ with continuous parameters – theory and applications. AEU, International Journal of Electronics and Communications, Vol. 55 (2001) : 1, 3–9.

[49] Iversen, V.B. (2005): Algorithm for evaluating multi-rate loss systems. COM Department, Technical University of Denmark. December 2005. 27 pp. Submitted for publication.

[50] Iversen, V.B. (2007): Reversible fair scheduling: the teletraffic theory revisited. Proceedings from 20th International Teletraffic Congress, ITC20, Ottawa, Canada, June 17–21, 2007. Springer Lecture Notes in Computer Science, Vol. LNCS 4516 (2007), pp. 1135–1148.

[51] Jackson, R.R.P. (1954): Queueing systems with phase type service. Operational Research Quarterly, Vol. 5 (1954), 109–120.

[52] Jackson, J.R. (1957): Networks of waiting lines. Operations Research, Vol. 5 (1957), pp. 518–521.

[53] Jackson, J.R. (1963): Jobshop–like queueing systems. Management Science, Vol. 10 (1963), No. 1, pp. 131–142.

[54] Jagerman, D.L. (1984): Methods in Traffic Calculations. AT&T Bell Laboratories Technical Journal, Vol. 63 (1984) : 7, 1283–1310.

[55] Jagers, A.A. & van Doorn, E.A. (1986): On the Continued Erlang Loss Function. Operations Research Letters, Vol. 5 (1986) : 1, 43–46.

[56] Jensen, Arne (1948): An elucidation of A.K. Erlang’s statistical works through the theory of stochastic processes. Published in “The Erlangbook”: E. Brockmeyer, H.L. Halstrøm and A. Jensen: The life and works of A.K. Erlang. København 1948, pp. 23–100.
[57] Jensen, Arne (1948): Truncated multidimensional distributions. Pages 58–70 in “The Life and Works of A.K. Erlang”. Ref. Brockmeyer et al., 1948 [56].

[58] Jensen, Arne (1950): Moe’s Principle – An econometric investigation intended as an aid in dimensioning and managing telephone plant. Theory and Tables. Copenhagen 1950. 165 pp.

[59] Jerkins, J.L. & Neidhardt, A.L. & Wang, J.L. & Erramilli, A. (1999): Operations measurement for engineering support of high-speed networks with self-similar traffic. ITC 16, 16th International Teletraffic Congress, Edinburgh, June 7–11, 1999. Proceedings pp. 895–906. Elsevier 1999.

[60] Johannsen, Fr. (1908): “Busy”. Copenhagen 1908. 4 pp.

[61] Johansen, K. & Johansen, J. & Rasmussen, C. (1991): The broadband multiplexer, “TransMux 1001”. Teleteknik, English ed., Vol. 34 (1991) : 1, 57–65.

[62] Joys, L.A. (1967): Variations of the Erlang, Engset and Jacobæus formulæ. ITC–5, Fifth International Teletraffic Congress, New York, USA, 1967, pp. 107–111. Also published in: Teleteknik (English edition), Vol. 11 (1967) : 1, 42–48.

[63] Joys, L.A. (1968): Engsets formler for sannsynlighetstetthet og dens rekursionsformler (Engset’s formulæ for probability density and its recursive formulæ, in Norwegian). Telektronikk 1968, No. 1–2, pp. 54–63.

[64] Joys, L.A. (1971): Comments on the Engset and Erlang formulae for telephone traffic losses. Thesis. Report TF No. 25/71, Research Establishment, The Norwegian Telecommunications Administration, 1971. 127 pp.

[65] Karlsson, S.A. (1937): Tekniska anordningar för samtalsdebitering enligt tid (Technical arrangement for charging calls according to time, in Swedish). Helsingfors Telefonförening, Tekniska Meddelanden 1937, No. 2, pp. 32–48.

[66] Kaufman, J.S. (1981): Blocking in a shared resource environment. IEEE Transactions on Communications, Vol. COM–29 (1981) : 10, 1474–1481.

[67] Keilson, J. (1966): The ergodic queue length distribution for queueing systems with finite capacity. Journal of Royal Statistical Society, Series B, Vol. 28 (1966), 190–201.

[68] Kelly, F.P. (1979): Reversibility and stochastic networks. John Wiley & Sons, 1979. 230 pp.

[69] Kendall, D.G. (1951): Some problems in the theory of queues. Journal of Royal Statistical Society, Series B, Vol. 13 (1951) : 2, 151–173.

[70] Kendall, D.G. (1953): Stochastic processes occurring in the theory of queues and their analysis by the method of the imbedded Markov chain. Ann. Math. Stat., Vol. 24 (1953), 338–354.
[71] Khintchine, A.Y. (1955): Mathematical methods in the theory of queueing. London 1960. 124 pp. (Original in Russian, 1955).

[72] Kingman, J.F.C. (1969): Markov population processes. J. Appl. Prob., Vol. 6 (1969), 1–18.

[73] Kleinrock, L. (1964): Communication nets: Stochastic message flow and delay. McGraw–Hill 1964. Reprinted by Dover Publications 1972. 209 pp.

[74] Kleinrock, L. (1975): Queueing systems. Vol. I: Theory. New York 1975. 417 pp.

[75] Kleinrock, L. (1976): Queueing systems. Vol. II: Computer applications. New York 1976. 549 pp.

[76] Kosten, L. (1937): Über Sperrungswahrscheinlichkeiten bei Staffelschaltungen. Elek. Nachr. Techn., Vol. 14 (1937), 5–12.

[77] Kruithof, J. (1937): Telefoonverkeersrekening. De Ingenieur, Vol. 52 (1937), E15–E25.

[78] Kuczura, A. (1973): The interrupted Poisson process as an overflow process. The Bell System Technical Journal, Vol. 52 (1973) : 3, pp. 437–448.

[79] Kuczura, A. (1977): A method of moments for the analysis of a switched communication network’s performance. IEEE Transactions on Communications, Vol. COM–25 (1977) : 2, 185–193.

[80] Lavenberg, S.S. & Reiser, M. (1980): Mean–value analysis of closed multichain queueing networks. Journal of the Association for Computing Machinery, Vol. 27 (1980) : 2, 313–322.

[81] Levy-Soussan, G. (1968): Numerical Evaluation of the Erlang Function through a Continued-Fraction Algorithm. Electrical Communication, Vol. 43 (1968) : 2, 163–168.

[82] Lind, G. (1976): Studies on the probability of a called subscriber being busy. ITC–8, Eighth International Teletraffic Congress, Melbourne, November 1976. Paper 631. 8 pp.

[83] Listov–Saabye, H. & Iversen, V.B. (1989): ATMOS: a PC–based tool for evaluating multi–service telephone systems. IMSOR, Technical University of Denmark 1989. 75 pp. (In Danish).

[84] Little, J.D.C. (1961): A proof for the queueing formula L = λW. Operations Research, Vol. 9 (1961), 383–387.

[85] Maral, G. (1995): VSAT networks. John Wiley & Sons, 1995. 282 pp.

[86] Marchal, W.G. (1976): An approximate formula for waiting time in single server queues. AIIE Transactions, December 1976, 473–474.

[87] Mejlbro, L. (1994): Approximations for the Erlang Loss Function. Technical University of Denmark 1994. 32 pp. NTS–14, Copenhagen 18–20 August 1998. Proceedings pp. 90–102. Department of Telecommunication, Technical University of Denmark.
[88] Messerli, E.J. (1972): Proof of a Convexity Property of the Erlang B Formula. The Bell System Technical Journal, Vol. 51 (1972), 951–953.

[89] Molina, E.C. (1922): The Theory of Probability Applied to Telephone Trunking Problems. The Bell System Technical Journal, Vol. 1 (1922) : 2, 69–81.

[90] Molina, E.C. (1927): Application of the Theory of Probability to Telephone Trunking Problems. The Bell System Technical Journal, Vol. 6 (1927), 461–494.

[91] Palm, C. (1941): Mätnoggrannhet vid bestämning av trafikmängd enligt genomsökningsförfarandet (Accuracy of measurements in determining traffic volumes by the scanning method). Tekn. Medd. K. Telegr. Styr., 1941, No. 7–9, pp. 97–115.

[92] Palm, C. (1943): Intensitätsschwankungen im Fernsprechverkehr. Ericsson Technics, No. 44, 1943, 189 pp. English translation by Chr. Jacobæus: Intensity Variations in Telephone Traffic. North–Holland Publ. Co. 1987.

[93] Palm, C. (1947): The assignment of workers in servicing automatic machines. Journal of Industrial Engineering, Vol. 9 (1958), 28–42. First published in Swedish in 1947.

[94] Palm, C. (1947): Table of the Erlang loss formula. Telefonaktiebolaget L M Ericsson, Stockholm 1947. 23 pp.

[95] Palm, C. (1957): Some propositions regarding flat and steep distribution functions. TELE (English edition), No. 1, 1957, pp. 3–17.

[96] Panken, F.J.M. & van Doorn, E.A. (1993): Blocking probabilities in a loss system with arrivals in geometrically distributed batches and heterogeneous service requirements. IEEE/ACM Trans. on Networking, Vol. 1 (1993) : 6, 664–667.

[97] Postigo–Boix, M. & García–Haro, J. & Aguilar–Igartua, M. (2001): IMA (Inverse Multiplexing of ATM) – technical foundations, application and performance analysis. Computer Networks, Vol. 35 (2001), 165–183.

[98] Press, W.H. & Teukolsky, S.A. & Vetterling, W.T. & Flannery, B.P. (1995): Numerical recipes in C: the art of scientific computing. 2nd edition. Cambridge University Press, 1995. 994 pp.

[99] Rabe, F.W. (1949): Variations of telephone traffic. Electrical Communications, Vol. 26 (1949), 243–248.

[100] Rapp, Y. (1964): Planning of junction network in a multi–exchange area. Ericsson Technics 1964, pp. 77–130.

[101] Rapp, Y. (1965): Planning of junction network in a multi–exchange area. Ericsson Technics 1965, No. 2, pp. 187–240.

[102] Riordan, J. (1956): Derivation of moments of overflow traffic. Appendix 1 (pp. 507–514) in (Wilkinson, 1956 [119]).
[103] Roberts, J.W. (1981): A service system with heterogeneous user requirements – applications to multi–service telecommunication systems. In: Performance of data communication systems and their applications. G. Pujolle (editor), North–Holland Publ. Co. 1981, pp. 423–431.

[104] Roberts, J.W. (2001): Traffic theory and the Internet. IEEE Communications Magazine, Vol. 39 (2001) : 1, 94–99.

[105] Ross, K.W. & Tsang, D. (1990): Teletraffic engineering for product–form circuit–switched networks. Adv. Appl. Prob., Vol. 22 (1990), 657–675.

[106] Ross, K.W. & Tsang, D. (1990): Algorithms to determine exact blocking probabilities for multirate tree networks. IEEE Transactions on Communications, Vol. 38 (1990) : 8, 1266–1271.

[107] Rönnblom, N. (1958): Traffic loss of a circuit group consisting of both–way circuits which is accessible for the internal and external traffic of a subscriber group. TELE (English edition), 1959 : 2, 79–92.

[108] Sanders, B. & Haemers, W.H. & Wilcke, R. (1983): Simple approximate techniques for congestion functions for smooth and peaked traffic. ITC–10, Tenth International Teletraffic Congress, Montreal, June 1983. Paper 4.4b–1. 7 pp.

[109] Stepanov, S.S. (1989): Optimization of numerical estimation of characteristics of multiflow models with repeated calls. Problems of Information Transmission, Vol. 25 (1989) : 2, 67–78.

[110] Störmer, H. (1963): Asymptotische Näherungen für die Erlangsche Verlustformel. AEÜ, Archiv der Elektrischen Übertragung, Vol. 17 (1963) : 10, 476–478.

[111] Sutton, D.J. (1980): The application of reversible Markov population processes to teletraffic. A.T.R. Vol. 13 (1980) : 2, 3–8.

[112] Szybicki, E. (1967): Numerical Methods in the Use of Computers for Telephone Traffic Theory Applications. Ericsson Technics 1967, pp. 439–475.

[113] Techguide (2001): Inverse Multiplexing – scalable bandwidth solutions for the WAN. Techguide (The Technology Guide Series), 2001. 46 pp. <www.techguide.com>

[114] Vaulot, E. & Chaveau, J. (1949): Extension de la formule d’Erlang au cas où le trafic est fonction du nombre d’abonnés occupés. Annales de Télécommunications, Vol. 4 (1949), 319–324.

[115] Veirø, B. (2002): Proposed Grade of Service chapter for handbook. ITU–T Study Group 2, WP 3/2. September 2001. 5 pp.

[116] Villen, M. (2002): Overview of ITU Recommendations on traffic engineering. ITU–T Study Group 2, COM 2-KS 48/2-E. May 2002. 21 pp.
[117] Wallström, B. (1964): A distribution model for telephone traffic with varying call intensity, including overflow traffic. Ericsson Technics, 1964, No. 2, pp. 183–202.

[118] Wallström, B. (1966): Congestion studies in telephone systems with overflow facilities. Ericsson Technics, No. 3, 1966, pp. 187–351.

[119] Wilkinson, R.I. (1956): Theories for toll traffic engineering in the U.S.A. The Bell System Technical Journal, Vol. 35 (1956), 421–514.
Author index
Abate, J., 270, 361
Aguilar–Igartua, M., 179, 367
Andersen, B., 346, 361
Ash, G.R., 361
Baskett, F., 336, 361
Bear, D., 220, 361
Bech, N.I., 171, 361
Bolotin, V.A., 361
Bretschneider, G., 171, 174, 361
Brockmeyer, E., 123, 167, 171, 271, 361
Burke, P.J., 323, 324, 361
Buzen, J.P., 331
Cardoso, D., 122, 363
Chandy, K.M., 336, 361
Chaveau, J., 368
Chia, S., 363
Christensen, P.V., 362
Cobham, A., 287, 362
Conway, A.E., 341, 362
Cooper, R.B., 362
Cox, D.R., 66, 362
Craveirinha, J., 122, 363
Crommelin, C.D., 271, 362
Delbrouck, L.E.N., 216, 362
Dickmeiss, A., 362
Eilon, S., 82, 362
Elldin, A., 362
Engset, T.O., 124, 142, 362
Erlang, A.K., 21, 72, 108, 363
Erramilli, A., 365
Esteves, J.S., 122, 363
Farmer, R.F., 123, 363
Feller, W., 63, 245, 352, 363
Flannery, B.P., 367
Fortet, R., 212, 363
Fredericks, A.A., 177, 363
Fry, T.C., 84, 124, 271, 272, 363
García–Haro, J., 179, 367
Georganas, N.D., 341, 362
Gordon, W.J., 326, 363
Grandjean, Ch., 212, 363
Grillo, D., 363
Haemers, W.H., 181, 368
Halstrøm, H.L., 361
Hansen, N.H., 346, 361
Hayward, W.S. Jr., 177, 355, 363
Isham, V., 362
ITU-T, 363
Iversen, V.B., 23–26, 68, 73, 136, 154, 201, 204, 216, 274, 346, 348, 354, 355, 358, 361, 363, 364, 366
Jackson, J.R., 324, 325, 364
Jackson, R.R.P., 364
Jagerman, D.L., 123, 364
Jagers, A.A., 122, 364
Jensen, Arne, 84, 110, 126, 190, 194, 225, 226, 235, 239, 240, 361, 364, 365
Jerkins, J.L., 365
Johannsen, F., 32, 365
Johansen, J., 179, 365
Johansen, K., 179, 365
Joys, L.A., 140, 365
Karlsson, S.A., 347, 365
Kaufman, I., 123, 363
Kaufman, J.S., 212, 365
Keilson, J., 270, 365
Kelly, F.P., 324, 365
Kendall, D.G., 262, 281, 282, 365
Khintchine, A.Y., 75, 272, 366
Kingman, J.F.C., 191, 366
Kleinrock, L., 285, 327, 342, 343, 366
Kosten, L., 169, 366
Kruithof, J., 366
Kuczura, A., 96, 182, 184, 366
Larsen, M., 362
Lavenberg, S.S., 334, 366
Leung, K.K., 363
Lind, G., 362, 366
Listov-Saabye, H., 204, 366
Little, J.D.C., 366
Levy-Soussan, G., 124, 366
Maral, G., 14, 366
Marchal, W.G., 280, 366
Mejlbro, L., 124, 366
Messerli, E.J., 122, 367
Miller, H.D., 362
Moe, K., 126
Molina, E.C., 124, 367
Muntz, R.R., 336, 361
Neidhardt, A.L., 365
Newell, G.F., 326, 363
Nielsen, B.F., 68, 364
Palacios, F.G., 336, 361
Palm, C., 43, 59, 72, 93, 117, 245, 352, 355, 367
Panken, F.J.M., 367
Postigo–Boix, M., 179, 367
Press, W.H., 367
Ronnblom, N., 195, 368
Rabe, F.W., 352, 367
Raikov, D.A., 95
Rapp, Y., 124, 174, 367
Rasmussen, C., 179, 365
Reiser, M., 334, 366
Riordan, J., 170, 367
Roberts, J.W., 212, 368
Ross, K.W., 216, 368
Samuelson, P.A., 126
Sanders, B., 136, 181, 235, 364, 368
Skoog, R.A., 363
Stormer, H., 124, 368
Stepanov, S.N., 115, 204, 364, 368
Sutton, D.J., 191, 368
Szybicki, E., 123, 124, 368
Techguide, 179, 368
Teukolsky, S.A., 367
Tsang, D., 216, 368
van Doorn, E.A., 122, 364, 367
Vaulot, E., 368
Veirø, B., 35, 368
Vetterling, W.T., 367
Villen, M., 368
Wallstrom, B., 151, 171, 369
Wang, J.L., 365
Whitt, W., 270, 361
Wilcke, R., 181, 368
Wilkinson, R.I., 171, 369
Index
A-subscriber, 7
accessibility
  full, 101
    delay system, 229
    Engset, 133
    Erlang-B, 101
  restricted, 162
ad-hoc network, 94
Aloha protocol, 90, 107
alternative routing, 162, 223
arrival process
  generalised, 182
arrival theorem, 143, 334
assignment
  demand, 15
  fixed, 15
ATMOS-tool, 204
availability, 101
B-ISDN, 8
B-subscriber, 7
balance
  detailed, 192
  global, 188
  local, 192
balance equations, 105
balking, 265
Basic Bandwidth Unit, 195, 296
batch Poisson process, 157
batch-blocking, 158
BBU, 195, 201, 296
BCC, 102
BCH, 124
BCMP queueing networks, 336, 361
Berkeley’s method, 181
billing, 355
Binomial distribution, 92, 135
  traffic characteristics, 139
  truncated, 142
binomial moment, 44
Binomial process, 91, 92
Binomial theorem, 53
Binomial-case, 135
blocked calls cleared, 102
Blocked Calls Held, BCH, 124
blocking, 175
blocking concept, 26
BPP-traffic, 135, 193, 194
Brockmeyer’s system, 169, 171
Burke’s theorem, 323
bursty traffic, 171
Busy, 32
busy hour, 23, 24
  time consistent, 24
Buzen’s algorithm, 331
CAC
  moving window, 122
call duration, 30
call intensity, 21
capacity allocation, 341
carried traffic, 20, 109
carrier frequency system, 13
CCS, 22
cdf, 42
central moment, 44
central server system, 331, 332
chain
  queueing network, 322, 337
channel allocation, 9
charging, 347
circuit-switching, 14
circulation time, 246
class limitation, 193
client-server, 245
code receiver, 7
code transmitter, 7
coefficient of variation, 44, 353
complementary distribution function, 42
compound distribution, 58
  Poisson distribution, 352
concentration, 26
conditional probability, 46
confidence interval, 356
congestion
  call, 27, 108, 203
  time, 27, 108, 202
  traffic, 27, 109, 204
  virtual, 27
connection-less, 14, 15
connection-oriented, 14
conservation law, 285
control channel, 9
control path, 6
convolution, 54, 56
convolution algorithm
  loss systems, 200
  multiple chains, 337
  single chain, 329
cord, 7
Cox distribution, 66
Cox–2 arrival process, 184
CSMA, 16
cumulants, 44
cut equations, 104
cyclic search, 8
D/M/1, 283
data signalling speed, 22
de-convolution, 204
death rate, 47
decomposition, 68
decomposition theorem, 95
DECT, 11
Delbrouck’s algorithm, 216
density function, 42
dimensioning, 126
  fixed blocking, 126
  improvement principle, 127
direct route, 162
distribution function, 42
drop tail, 270
Ek/D/r, 277
EART, 171
EBHC, 22
EERT–method, 174
effective bandwidth, 195
Engset distribution, 141
Engset’s formula
  recursion, 147
Engset-case, 135
equilibrium points, 269
equivalent system, 173
erlang, 20
Erlang B-formula
  inverse, 123
Erlang fix-point method, 217
Erlang’s B-formula, 107, 108
  convexity, 122
  hyper-exponential service, 189
  multi-dimensional, 187
  recursion, 116
Erlang’s C-formula, 232
Erlang’s delay system, 229
  state transition diagram, 230
Erlang’s extended B-formula, 119
Erlang’s ideal grading, 163
Erlang’s interconnection formula, 164
Erlang-B formula
  multi-dimensional, 190
Erlang-book, 363
Erlang-case, 134
Erlang-k distribution, 56, 92
ERM = ERT–Method, 171
ERT–method, 171
exponential distribution, 42, 87, 92
  in parallel, 59
  decomposition, 68
  in series, 55
  minimum of k, 53
factorial moment, 44
fair queueing, 293
Feller-Jensen’s identity, 84
flat distribution, 59
flat rate, 356
flow-balance equation, 325
forced disconnection, 28
form factor, 45
Fortet & Grandjean algorithm, 212
forward recurrence time, 50
fractile, 45
Fredericks & Hayward’s method, 177
gamma distribution, 71
gamma function, 46
  incomplete, 119
geometric distribution, 92
GI/G/1, 279
GI/M/1, 280
  FCFS, 283
GoS, 126
Grade-of-Service, 126
GSM, 11
hand-over, 10
hazard function, 47
HCS, 176
heavy-tailed distribution, 72, 154
hierarchical cellular system, 176
HOL, 264
hub, 15
human-factors, 32
hunting
  cyclic, 102
  ordered, 102
  random, 102
  sequential, 102
hyper-exponential distribution, 60
hypo–exponential, 55
hypo-exponential distribution, 55
IDC, 78
IDI, 78
IID, 78
IMA, 179
improvement function, 110, 238
improvement principle, 127
improvement value, 130
independence assumption, 327
index of dispersion
  counts, 78
  intervals, 78
insensitivity, 121
Integrated Services Digital Network, 8
intensity, 92
inter-active system, 246
interrupted Poisson process, 96, 182
interval representation, 77, 84, 346
inverse multiplexing, 179
IPP, 96, 98, 182
Iridium, 11
IS = Infinite Server, 323
ISDN, 8
iterative studies, 3
ITU-T, 228
Jackson net, 324
jockeying, 265
Karlsson charging, 347, 354, 356
Kaufman & Roberts’ algorithm, 212
Kingman’s inequality, 280
Kleinrock’s square root law, 342
Kolmogorov’s criteria, 192
Kosten’s system, 169
Kruithof’s double factor method, 218
lack of memory, 47
Lagrange multiplier, 226, 239, 343
LAN, 16
last-look principle, 347
leaky bucket, 279
life-time, 41
Lindley equations, 267
line-switching, 14
Little’s theorem, 82
load function, 266
local exchange, 13
log-normal distribution, 72
loss system, 27
M/D/1/k, 278
M/D/n, 271, 276
M/G/∞, 323
M/G/1, 267
M/G/1-LCFS-PR, 324
M/G/1-PS, 323
M/G/1/k, 270
M/G/n-GPS, 323
M/M/1, 243, 301
M/M/n, 229, 308, 323
M/M/n, FCFS, 240
M/M/n/S/S, 245
machine repair model, 229
macro–cell, 176
man-machine, 2
Marchal’s approximation, 280
Markov property, 41
Markovian property, 47
mean value, 44
mean waiting time, 237
measuring methods, 346
  continuous, 346, 350
  discrete, 346
  horizontal, 348
  vertical, 347
measuring period
  unlimited, 350, 353
median, 45
mesh network, 13, 15
message-switching, 15
micro–cell, 176
microprocessor, 6
mobile communication, 9
modeling, 2
Moe’s principle, 126, 224, 239, 365
  delay systems, 239
  loss systems, 127
multi-dimensional
  Erlang-B, 187
  loss system, 193
multi-rate traffic, 195, 208
multinomial coefficient, 68
multinomial distribution, 67
multinomial theorem, 68
multiplexing
  frequency, 13
  pulse-code, 13
  time, 13
MVA-algorithm
  single chain, 322, 334
negative Binomial case, 135
negative Binomial distribution, 92
network management, 228
Newton-Raphson iteration, 123
Newton-Raphson’s method, 174
node equations, 103
non-central moment, 43
non-preemptive, 264
notation
  distributions, 71
  Kendall’s, 262
number representation, 76, 84, 346
O’Dell grading, 162
offered traffic, 21
  definition, 102, 134
on/off source, 136
ordinarity, 81
overflow theory, 161
packet switching, 15
paging, 11
Palm’s form factor, 45
Palm’s identity, 43
Palm’s machine-repair model, 246
  optimising, 254
Palm’s theorem, 93
Palm-Wallstrom-case, 135
paradox, 242
parcel blocking, 175
Pareto distribution, 72, 154
partial blocking, 158
Pascal distribution, 92
Pascal-case, 135
PASTA property, 109, 188
PASTA–property, 93
PCM-system, 13
PCT-I, 102, 134
PCT-II, 135, 136
pdf, 42
peakedness, 106, 110, 171
percentile, 45
persistence, 32
point process, 76
  independence, 80
  simple, 76, 81
  stationary, 80
Poisson distribution, 88, 92, 103
  calculation, 116
  truncated, 107, 108
Poisson process, 75, 92
Poisson-case, 134
polynomial distribution, 67, 303
polynomial trial, 67
potential traffic, 22
preemptive, 264
preferential traffic, 33
primary route, 162
Processor-Sharing, 293
product form, 188, 325
protocol, 8
PS, 294
pseudo random traffic, 136
Pure Chance Traffic
  Type I, 102, 134
  Type II, 135
QoS, 126
Quality-of-Service, 126
quantile, 45
queueing networks, 321
Raikov’s theorem, 95
random traffic, 134
random variable, 41
  in parallel, 58
  in series, 54
  j’th largest, 53
Rapp’s approximation, 174
reduced load method, 217
regeneration points, 269
regenerative process, 269
register, 6, 7
rejected traffic, 21
relative accuracy, 353
reneging, 265
renewal process, 78
residual life-time, 46
response time, 245
reversible process, 191, 193, 324
ring network, 13
roaming, 10
roulette simulation, 359
Round Robin, 293
RR, 293
sampling theory, 348
Sanders’ method, 181
scanning method, 347, 353
secondary route, 162
service protection, 162
service ratio, 255
service time, 30
simplicity, 81
SJF, 288
SLA, 37
slot, 90
SM, 21
smooth traffic, 141, 171
sojourn time, 245
space divided system, 6
SPC-system, 7
sporadic source, 136
square root law, 342
standard deviation, 44
star network, 13
state transition diagram
  general procedure, 114
statistical equilibrium, 104
statistical multiplexing, 26
STD, 101
steep distributions, 55
stochastic process, 5
store-and-forward, 15
strategy, 3
structure, 3
subscriber-behaviour, 32
superposition theorem, 93
survival distribution function, 42
symmetric queueing systems, 302, 309, 324
table
  Erlang’s B-formula, 117
telecommunication network, 12
telephone system
INDEX 377
conventional, 5software controlled, 7
teletraffic theoryterminology, 3traffic concepts, 18
time distributions, 41time division, 6time-out, 28, 265traffic channels, 9traffic concentration, 26traffic intensity, 20, 351traffic matrix, 217traffic measurements, 345traffic splitting, 178traffic unit, 20traffic variations, 23traffic volume, 21, 351transit exchange, 13transit network, 13triangle optimization, 227
user perceived QoS, 27utilization, 22, 127
variate, 72VBR, 195virtual circuit protection, 193virtual queue length, 235virtual waiting time, 266voice path, 6VSAT, 14
waiting time distribution, 49FCFS, 240
Weibull distribution, 48, 71Westerberg’s distribution, 355Wilkinson’s equivalence method, 171wired logic, 3wireless communication, 9work conserving, 266
378 INDEX
INDEX 379
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Exercise 9.27 (Exam 2010)
Classical models
We consider Erlang's loss system with n = 4 channels. The arrival rate is λ = 4 calls per time-unit, and the service rate is µ = 2 calls per time-unit. In the following we assume statistical equilibrium.
1. Find the offered traffic.
2. Find the state probabilities of the system.
3. Find the time congestion by using the recursion formula for Erlang-B. The individual steps of the recursion should be visible in the answer.
4. Find the distribution of the number of blocked calls during a busy period where all channels are busy, and find its mean value.
We now consider Erlang's delay system with the same parameters as above.
5. Find the probability of delay, the mean waiting time for all calls, and the mean waiting time for delayed calls.
We then consider Palm's machine repair model with S = 4 terminals and n = 1 computer. The thinking time is exponentially distributed with rate γ = 1 [time-units−1], and the service time is exponentially distributed with rate µ = 2 [time-units−1].
6. Find the utilization of the computer and the mean waiting time at the computer.
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Solution to exercise 9.27 Exam 2010
CLASSICAL SYSTEMS
Question 1:
By definition the offered traffic is the average number of call attempts per mean service time (1.2):
A = λ/µ = 4/2 = 2 [erlang] .
Question 2:
The state transition diagram is shown below. We include the transition due to a blocked call (which does not change the state) because we use it in Question 4.
  States:      0 -- 1 -- 2 -- 3 -- 4
  Arrivals:    rate 4 on every forward transition i → i+1
  Departures:  rates 2, 4, 6, 8 on the backward transitions i → i−1
  State 4:     additional self-loop with rate 4 (blocked call attempts)
The relative state probabilities q(i) are obtained by cut equations, and the true state probabilities are obtained by normalization:

  q(0) = 1        p(0) = 3/21
  q(1) = 2        p(1) = 6/21
  q(2) = 2        p(2) = 6/21
  q(3) = 4/3      p(3) = 4/21
  q(4) = 2/3      p(4) = 2/21
  Total = 7       Total = 1
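The normalization can be checked numerically. A small sketch (Python with exact fractions; not part of the original solution) evaluating the truncated Poisson distribution q(i) = A^i/i!:

```python
from fractions import Fraction
from math import factorial

A, n = 2, 4  # offered traffic and number of channels

# Relative state probabilities q(i) = A^i / i! for Erlang's loss system
q = [Fraction(A**i, factorial(i)) for i in range(n + 1)]
p = [x / sum(q) for x in q]  # normalize to the true state probabilities
```

With these parameters `p` reproduces the table above, e.g. p(4) = 2/21.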
INDEX 381
Question 3:
We use the recursion formula (4.29) for evaluating Erlang’s B-formula (Erlang’s first formula):
E1,x(A) = (A · E1,x−1(A)) / (x + A · E1,x−1(A)) ,   E1,0(A) = 1 ,   x = 1, 2, . . .

Inserting A = 2 erlang we find:

E1,0(2) = 1 ,
E1,1(2) = (2 · 1)/(1 + 2 · 1) = 2/3 ,
E1,2(2) = (2 · 2/3)/(2 + 2 · 2/3) = 2/5 ,
E1,3(2) = (2 · 2/5)/(3 + 2 · 2/5) = 4/19 ,
E1,4(2) = (2 · 4/19)/(4 + 2 · 4/19) = 2/21 ,
which is in agreement with the state probability p(4) found in Question 2.
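The recursion is easy to program. A minimal sketch (Python, illustration only; the function name is my own):

```python
def erlang_b(A: float, n: int) -> float:
    """Erlang's B-formula E_{1,n}(A) via the numerically stable recursion (4.29)."""
    E = 1.0  # E_{1,0}(A) = 1
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

print(erlang_b(2, 4))  # 2/21 ≈ 0.0952
```

The recursion avoids the overflow-prone factorials of the explicit formula, which is why it is preferred for large n.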
Question 4:
Let us denote the arrival rate and the service rate in state 4 by λ(4) and µ(4), respectively. Given that we are in state 4, the probability that the next event is a call attempt (which will be blocked) is:

p = λ(4)/(λ(4) + µ(4)) = 4/(4 + 8) = 1/3 .
The probability that the next event is a departure, which terminates the busy period, is:

1 − p = µ(4)/(λ(4) + µ(4)) = 8/(4 + 8) = 2/3 .
As there is no memory in the process, the probability that i call attempts are blocked during a busy period follows a geometric distribution:

p(i) = (1 − 1/3) · (1/3)^i ,   i = 0, 1, 2, . . .
As the distribution starts at the value zero (see Table 3.1), the mean value becomes:

m1 = 1/(1 − 1/3) − 1 = 1/2 .
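The mean of this zero-based geometric distribution can be verified numerically (a sketch, not part of the original solution):

```python
p_block = 1 / 3  # probability that the next event in state 4 is a (blocked) call

# m1 = sum_i i * (1 - p) * p^i = p / (1 - p); the sum is truncated once the
# terms are numerically negligible
m1 = sum(i * (1 - p_block) * p_block**i for i in range(200))
```

The closed form p/(1 − p) = (1/3)/(2/3) = 1/2 matches the recursion-free result above.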
Question 5:
This is Erlang's delay system with the same parameters n = 4 and A = 2 as above. We have a simple relationship between Erlang's B-formula and Erlang's C-formula (9.9):

E2,n(A) = n · E1,n(A) / (n − A · (1 − E1,n(A))) = (4 · 2/21) / (4 − 2 · (1 − 2/21)) = 4/23 = 0.1739 .
From (9.15) and (9.17), respectively, we get (s = 1/µ is the mean holding time):

Wn(A) = E2,n(A) · s/(n − A) = (4/23) · (1/2)/(4 − 2) ,   W4(2) = 1/23 [time-units] ,

wn(A) = s/(n − A) = (1/2)/(4 − 2) ,   w4(2) = 1/4 [time-units] .
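The C-formula and both waiting times follow directly from the B-formula. A sketch with the exam's parameters (the function names are my own):

```python
def erlang_b(A: float, n: int) -> float:
    """E_{1,n}(A) by the recursion (4.29)."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def erlang_c(A: float, n: int) -> float:
    """E_{2,n}(A) from E_{1,n}(A) via the relationship (9.9)."""
    E1 = erlang_b(A, n)
    return n * E1 / (n - A * (1 - E1))

A, n, s = 2, 4, 0.5          # offered traffic, channels, mean holding time
E2 = erlang_c(A, n)          # probability of delay, 4/23
W = E2 * s / (n - A)         # mean waiting time over all calls, 1/23
w = s / (n - A)              # mean waiting time for delayed calls, 1/4
```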
Question 6:
Palm's machine/repair model has the same state transition diagram as Erlang's loss system. We observe that the service ratio µ/γ = 2 and the number of terminals S = 4 are the same parameters as above; we have only changed the time scale so that the mean service time of the computer is 1/2. The computer is working except when all terminals (channels) are busy (9.37):

y = 1 − E1,S(µ/γ) = 1 − 2/21 = 19/21 .
The mean waiting time is equal to the mean response time (9.46) minus the mean service time:

mw = mr − ms = S/(1 − E1,S(µ/γ)) · ms − mt − ms = 4/(1 − 2/21) · (1/2) − 1 − 1/2 ,

mw = 27/38 [time-units] .
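Using the Erlang-B analogy, the utilization and the mean waiting time can be computed as follows (a sketch; variable names are my own):

```python
def erlang_b(A: float, n: int) -> float:
    # Erlang-B recursion, as in Question 3
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

S, mt, ms = 4, 1.0, 0.5           # terminals, mean thinking time, mean service time
E = erlang_b(mt / ms, S)          # service ratio mu/gamma = 2, so E = 2/21
y = 1 - E                         # computer utilization, 19/21
mw = S / (1 - E) * ms - mt - ms   # mean waiting time, 27/38
```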
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Exercise 6.15 (Exam 2010)
E2/M/n loss system
We consider a loss system (BCC = blocked calls cleared) with n channels. The holding times are exponentially distributed with mean value 1/µ. The inter-arrival times are Erlang-2 distributed with arrival rate 2λ in each phase.
1. Find the offered traffic.
The state of the system is defined by (i, j), where i is the number of customers in the system and j is the phase of the arrival process: phase one = a, phase two = b. The structure of the state transition diagram is as follows:
[State transition diagram: two rows of states (0, a), (1, a), . . . , (n−1, a), (n, a) and (0, b), (1, b), . . . , (n−1, b), (n, b) connected by arrows; the transition rates are to be inserted in Question 2.]
2. Complete the state transition diagram by inserting the transition rates.
3. Find the time congestion E and the traffic congestion C expressed by the state probabilities p(i, j).
4. Find the state probabilities π(i, j) observed by a call just before entering the system,and find the call congestion B.
5. Find the numerical values of the state probabilities when n = 2 channels, µ = 1 [time-unit−1], and λ = 1 [time-unit−1]. Start by using p(2, b) = 4/58, and use the node balance equations for states [2b], [2a], [1b], [1a] etc.
6. Find numerical values of time congestion E, call congestion B, and traffic congestion C.
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Solution to exercise 6.15 (Exam 2010): E2/M/n loss system
Question 1:
By definition the offered traffic is the average number of call attempts per mean service time (1.2). The mean inter-arrival time is:

1/(2λ) + 1/(2λ) = 1/λ ,

so that the average arrival rate becomes λ calls per time unit. The offered traffic then becomes:

A = λ/µ .
Question 2:
[State transition diagram with rates inserted: departures take (i, a) → (i−1, a) and (i, b) → (i−1, b) with rates µ, 2µ, . . . , nµ; phase transitions (i, a) → (i, b) have rate 2λ; arrivals (i, b) → (i+1, a) have rate 2λ; in state (n, b) the blocked attempt moves the process to (n, a) with rate 2λ.]
Question 3:
The time congestion E is by definition the proportion of time all channels are busy, i.e. the proportion of time during which call attempts would be blocked:

E = p(n, a) + p(n, b) .
The traffic congestion C is by definition the proportion of the offered traffic A which is blocked. The carried traffic is:

Y = Σ i · (p(i, a) + p(i, b)) ,   summing over i = 0, 1, . . . , n .

Thus we get:

C = (A − Y)/A .
Question 4:
Call attempts are only generated when the arrival process is in phase b, so a call attempt generated in state [i, b] sees this state just before entering. The proportion of call attempts observing state [i, b] becomes:

π(i) = 2λ · p(i, b) / (2λ · p(0, b) + 2λ · p(1, b) + . . . + 2λ · p(n, b)) = p(i, b) / Σj p(j, b) .

The call congestion is by definition the proportion of call attempts which are blocked. Only call attempts generated in state [n, b] are blocked:

B = π(n) = p(n, b) / Σj p(j, b) .
Question 5:
For n = 2 channels, λ = µ = 1 [time-units−1] we get the following state-transition diagram.
[State transition diagram for n = 2: departure rates 1 and 2 in both rows; every phase transition (i, a) → (i, b), every arrival transition (i, b) → (i+1, a), and the blocked-call transition (2, b) → (2, a) has rate 2λ = 2.]
Letting p(2, b) = 1 we have, under the assumption of statistical equilibrium, the following flow balance equations for the nodes: the flow out of a state must equal the flow into that state. Below we always put the flow out on the left-hand side.
We choose p(2, b) = 1 and get the following flow balance equations:

State [2, b]:  (2 + 2) · p(2, b) = 2 · p(2, a)                 hence  p(2, a) = 2
State [2, a]:  (2 + 2) · p(2, a) = 2 · p(2, b) + 2 · p(1, b)   hence  p(1, b) = 3
State [1, b]:  (1 + 2) · p(1, b) = 2 · p(2, b) + 2 · p(1, a)   hence  p(1, a) = 7/2
State [1, a]:  (1 + 2) · p(1, a) = 2 · p(2, a) + 2 · p(0, b)   hence  p(0, b) = 13/4
State [0, b]:  2 · p(0, b) = 1 · p(1, b) + 2 · p(0, a)         hence  p(0, a) = 7/4
We thus have the following relative state probabilities q. The true state probabilities p are obtained by normalization:

  q(2, b) = 1        p(2, b) = 4/58
  q(2, a) = 2        p(2, a) = 8/58
  q(1, b) = 3        p(1, b) = 12/58
  q(1, a) = 7/2      p(1, a) = 14/58
  q(0, b) = 13/4     p(0, b) = 13/58
  q(0, a) = 7/4      p(0, a) = 7/58
  Total = 29/2       Total = 1
This agrees with the given value of p(2, b). We notice that the a states and the b states each sum to one half, as expected.
Only by starting with state (n, b) are we able to find the relative state probabilities explicitly (cf. a system with an IPP arrival process, Example 6.7.1). We cannot truncate the state probabilities of this system to a system with fewer channels and obtain the new state probabilities by re-normalizing: the relative state probabilities change values, and we have to recalculate all state probabilities from scratch. The system is not reversible.
Question 6:
From these numerical values we find A = λ/µ = 1 erlang and the congestion values:

E = p(2, a) + p(2, b) = 12/58 ,

B = p(2, b) / (p(0, b) + p(1, b) + p(2, b)) = (4/58) / (29/58) = 8/58 ,

Y = 1 · (p(1, a) + p(1, b)) + 2 · (p(2, a) + p(2, b)) = 50/58 ,

C = (A − Y)/A = 8/58 .
We thus notice that B = C, which is always the case when we have a renewal arrival process and exponentially distributed service times.
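The back-substitution and the congestion values can be verified with exact rational arithmetic. A sketch (Python, not part of the original solution):

```python
from fractions import Fraction as F

# Relative state probabilities from the node balance equations, with q(2,b) = 1
q = {'2b': F(1)}
q['2a'] = 4 * q['2b'] / 2                      # node [2,b]
q['1b'] = (4 * q['2a'] - 2 * q['2b']) / 2      # node [2,a]
q['1a'] = (3 * q['1b'] - 2 * q['2b']) / 2      # node [1,b]
q['0b'] = (3 * q['1a'] - 2 * q['2a']) / 2      # node [1,a]
q['0a'] = (2 * q['0b'] - 1 * q['1b']) / 2      # node [0,b]

total = sum(q.values())                        # 29/2
p = {k: v / total for k, v in q.items()}       # p['2b'] == 4/58

E = p['2a'] + p['2b']                          # time congestion, 12/58
B = p['2b'] / (p['0b'] + p['1b'] + p['2b'])    # call congestion, 8/58
Y = (p['1a'] + p['1b']) + 2 * (p['2a'] + p['2b'])  # carried traffic, 50/58
C = (1 - Y) / 1                                # traffic congestion (A = 1), 8/58
```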
2010-05-19
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Exercise 10.27 (Exam 2010)
Priority queueing system
We consider a single-server system M/G/1 with Poisson arrival processes. Two types of customers arrive at the system:
• Type-1: Arrival rate λ1 = 0.1 [time-units−1], constant service time s1 = 1 [time-unit].
• Type-2: Arrival rate λ2 = 0.2 [time-units−1], Erlang-2 distributed service time with mean value s2 = 2 [time-units].
1. Find the total offered traffic and the total mean service time.
2. Find the mean waiting time for all customers without priority, using Pollaczek-Khintchine's formula.
We now assume that type-1 customers have non-preemptive priority over type-2 customers.
3. Find the mean waiting time for each class.
4. Check that Kleinrock’s conservation law is fulfilled when comparing no priority withnon-preemptive discipline.
We now introduce a type-3 class of customers (best-effort traffic) which can be preempted by both type-1 and type-2, so that it does not influence the service of the first two classes:
• Type-3: Arrival rate λ3 = 0.2 [time-units−1], exponentially distributed service time with mean value s3 = 2 [time-units].
5. Find the mean waiting time for type-3 customers.
We consider only the first two classes and introduce processor sharing (PS) without priority for serving these two classes. The total offered traffic and the total mean service time were obtained in Question 1.
6. Find the mean waiting time for all customers and the mean waiting time for each of the two classes.
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Solution to exercise 10.27 Exam 2010
PRIORITY QUEUEING SYSTEM
Question 1:
By definition the offered traffic is the average number of calls per mean service time (1.2):

A1 = λ1 · s1 = 0.1 · 1 = 0.1 [erlang]
A2 = λ2 · s2 = 0.2 · 2 = 0.4 [erlang]
At = A1 + A2 = 0.5 [erlang]

The total arrival rate is λt = λ1 + λ2 = 0.3 [time-units−1]. From the total offered traffic we then get the mean service time for all calls:

st = At/λt = 0.5/0.3 = 5/3 [time-units] .
We could of course also find the mean service time for all customers by weighting the mean values (combination in parallel, Sec. 2.3.2):

st = λ1/(λ1 + λ2) · s1 + λ2/(λ1 + λ2) · s2 = (0.1/0.3) · 1 + (0.2/0.3) · 2 = 5/3   q.e.d.
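The two ways of obtaining st can be written out directly (a trivial numerical check, not part of the exam; variable names are my own):

```python
lam1, s1 = 0.1, 1.0   # type-1 arrival rate and mean service time
lam2, s2 = 0.2, 2.0   # type-2 arrival rate and mean service time

A_t = lam1 * s1 + lam2 * s2                    # total offered traffic, 0.5 erlang
s_def = A_t / (lam1 + lam2)                    # via the definition, 5/3
s_mix = (lam1 * s1 + lam2 * s2) / (lam1 + lam2)  # via parallel weighting, 5/3
```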
Question 2:
We use Pollaczek-Khintchine's formula (10.3), where V is given by (10.4) for the total traffic process, or by (10.59) and (10.60) when combining several classes of customers.

A constant service time has second moment equal to the mean value squared, whereas Erlang-2 has the second moment given by (2.16). As the mean value of the Erlang-2 distribution is 2, each phase has rate µ = 1 [time-units−1]. We get for the two types:

m2,1 = s1² = 1 [time-units²] ,
m2,2 = 2 · (2 + 1)/1² = 6 [time-units²] .
Thus we get:

V1,2 = V1 + V2 = (λ1/2) · m2,1 + (λ2/2) · m2,2 = (0.1/2) · 1 + (0.2/2) · 6 = 0.05 + 0.6 ,

V1,2 = 0.65 [time-units] .

Finally, using Pollaczek-Khintchine's formula (10.3) we have:

W = V1,2/(1 − At) = 0.65/(1 − 0.5) ,

W = 1.30 [time-units] .
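The computation of V1,2 and W can be sketched as follows (variable names are my own):

```python
lam1, lam2 = 0.1, 0.2
m2_1 = 1.0**2              # constant service time: second moment s1^2
m2_2 = 2 * (2 + 1) / 1**2  # Erlang-2 with mean 2: k(k+1)/mu^2, k = 2, mu = 1
A_t = 0.5

V12 = lam1 / 2 * m2_1 + lam2 / 2 * m2_2  # mean residual workload, 0.65
W = V12 / (1 - A_t)                      # Pollaczek-Khintchine, 1.30
```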
Question 3:
For the non-preemptive queueing discipline we find the mean waiting time for type-1 by (10.66), and the mean waiting time for type-2 by (10.68):

W1 = V1,2/(1 − A1) = 0.65/(1 − 0.1) ,   W1 = 13/18 [time-units] ,

W2 = V1,2/((1 − A1)(1 − (A1 + A2))) = W1/(1 − At) = (13/18)/(1 − 0.5) ,   W2 = 13/9 [time-units] .
Question 4:
For the non-priority system we have from Question 2:

At · W = 0.5 · 13/10 = 13/20 [time-units] .

For non-preemptive priorities we get from Question 3:

Σ_{i=1}^{2} Ai · Wi = 0.1 · 13/18 + 0.4 · 13/9 = 13/20 [time-units] .
Thus we see that the Conservation Law (10.63) is fulfilled: the average waiting time over all classes, weighted by the traffic (load) of each class, is independent of the queueing discipline.
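The conservation check can be sketched numerically (variable names are my own):

```python
V, A1, A2 = 0.65, 0.1, 0.4
At = A1 + A2

W = V / (1 - At)                 # no priority (FCFS), 1.30
W1 = V / (1 - A1)                # non-preemptive, type-1, 13/18
W2 = V / ((1 - A1) * (1 - At))   # non-preemptive, type-2, 13/9

lhs = At * W                     # load-weighted waiting time without priorities
rhs = A1 * W1 + A2 * W2          # load-weighted waiting time with priorities
```

Both sides equal 13/20, independent of the work-conserving discipline.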
Question 5:
The internal discipline between type-1 and type-2 has no influence upon the service of type-3. The second moment of the exponential distribution is given by (2.15). We have:

V3 = (λ3/2) · m2,3 = (0.2/2) · 2/(1/2)² = 0.8 ,

V1,3 = V1 + V2 + V3 = 0.05 + 0.6 + 0.8 = 1.45 [time-units] .
As type-3 is preempted by both type-1 and type-2, we find from (10.77):

Wp = V1,p / ((1 − A0,p−1)(1 − A0,p)) + A0,p−1/(1 − A0,p−1) · sp ,

W3 = V1,3 / ((1 − (A1 + A2))(1 − (A1 + A2 + A3))) + (A1 + A2)/(1 − (A1 + A2)) · s3
   = 1.45/(0.5 · 0.1) + (0.5/0.5) · 2 = 29 + 2 ,

W3 = 31 [time-units] .
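The type-3 waiting time can be checked numerically (a sketch; note that 1.45/0.05 = 29):

```python
lam3, s3 = 0.2, 2.0
m2_3 = 2 * s3**2               # exponential: m2 = 2/mu^2 = 2 * s^2 = 8
V3 = lam3 / 2 * m2_3           # 0.8
V13 = 0.05 + 0.6 + V3          # 1.45, contributions of all three types

A12 = 0.5                      # A1 + A2
A123 = A12 + lam3 * s3         # A1 + A2 + A3 = 0.9

W3 = V13 / ((1 - A12) * (1 - A123)) + A12 / (1 - A12) * s3  # 29 + 2 = 31
```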
Question 6:
M/G/1-PS has the same state probabilities as M/M/1 (10.79), cf. Sec. 12.2:

p(i) = (1 − At) · At^i ,   i = 0, 1, 2, . . .

where the total offered traffic At and the total mean service time st were obtained in Question 1. The mean waiting time for all jobs becomes (9.32):

W = At/(1 − At) · st = (0.5/0.5) · (5/3) ,

W = 5/3 [time-units] .
As the mean waiting time in a PS system is proportional to the job duration, a type-2 job on average has twice the waiting time of a type-1 job: W2 = 2 · W1. So we split the total waiting time according to the number of jobs:

W = 5/3 = λ1/(λ1 + λ2) · W1 + λ2/(λ1 + λ2) · W2 = (1/3) · W1 + (2/3) · (2 · W1) = (5/3) · W1 ,

W1 = 1 [time-units] ,
W2 = 2 [time-units] .
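The split of the PS waiting time can be sketched as follows (variable names are my own):

```python
lam1, lam2 = 0.1, 0.2
At, st = 0.5, 5 / 3

W = At / (1 - At) * st          # mean waiting time over all jobs, 5/3
f1 = lam1 / (lam1 + lam2)       # fraction of type-1 jobs, 1/3
f2 = lam2 / (lam1 + lam2)       # fraction of type-2 jobs, 2/3

# W = f1*W1 + f2*W2 with W2 = 2*W1 (waiting proportional to job size in PS)
W1 = W / (f1 + 2 * f2)
W2 = 2 * W1
```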
2010-05-18
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Exercise 1.1
TRAFFIC PROCESS
Below we show a traffic process and the carried traffic when the number of channels is sufficient (n ≥ 8) (page 2). For n = 4 we also show the carried traffic and the blocked traffic on page 3. The performance measures time, call, and traffic congestion are given in the tables on page 4. (The column with Erlang-B values is obtained from a table or a computer program and dealt with in Chap. 4.) Note that we only include what happens within the observation period of 40 time units. Within this period 32 calls arrive (the first 3 arrive before the period). For the holding times we include parts of the first 3 calls (which are not counted) and exclude parts of the last 3 calls (which are counted). On average the two contributions balance each other.
We now assume that the number of channels is n = 6.
1. Draw the carried traffic upon the upper grid and the lost traffic upon the lower grid on page 2.

2. Fill out the missing information in Tables 1 and 2 on page 4.
Updated 2010-02-05
[Figure (page 2): traffic process and carried traffic for n ≥ 8; grids with time axis 0–40 time units, for drawing the carried traffic (upper grid) and the lost traffic (lower grid) for n = 6.]
[Figure (page 3): carried traffic and blocked traffic for n = 4; grids with time axis 0–40 time units.]
            n ≥ 8            n = 6 (to fill)            n = 4
           Offered        Carried     Rejected     Carried     Rejected
   i     t_p   i·t_p    t_p  i·t_p  t_p  i·t_p   t_p   i·t_p  t_p  i·t_p
   0      0      0       –     –     –     –      1      0     20     0
   1      3      3       –     –     –     –      4      4      6     6
   2      5     10       –     –     –     –     10     20      7    14
   3      8     24       –     –     –     –      9     27      3     9
   4     10     40       –     –     –     –     16     64      4    16
   5      6     30       –     –     –     –      0      0      0     0
   6      4     24       –     –     –     –      0      0      0     0
   7      3     21       –     –     –     –      0      0      0     0
   8      1      8       –     –     –     –      0      0      0     0
   Σ     40    160       –     –     –     –     40    115     40    45

   Σ i·t_p / Σ t_p :   160/40 = 4.0              115/40 = 2.875   45/40 = 1.125

Table 13.3: For n = 8 there is no blocking and the carried traffic (index c) equals the offered traffic (index o). For n = 6, respectively n = 4, some calls are rejected (index r).
                      “MEASURED”                                           Erlang–B
   n   Rejected call numbers   # rejected   B             E              C                E
   8   none                    0            0             1/40 = 0.03    0                0.03
   6   (to fill)               .            .             .              .                0.12
   4   9, 10, 11, 12,          8            8/32 = 0.25   16/40 = 0.40   1.125/4.0
       25, 26, 27, 35                                                    = 0.28           0.31

Table 13.4: Comparison of call congestion B, time congestion E, and traffic congestion C.
Solution to exercise 2.2
[Figure: solution grid, time axis 0–40 time units.]
Updated: 2008-02-14
Exercise 1.2
OFFERED TRAFFIC
1. We consider an Internet cafe. Customers arrive at random. On average, 20 customers arrive per hour. The average time using a terminal is 15 minutes.
Quest. 1.1: Find the offered traffic measured in speech minutes during one hour.
Quest. 1.2: Find the offered traffic measured in erlangs.
2. We consider a cell in a cellular system. There are two arrival processes.
– Hand-over calls arrive with rate 3 calls per minute, and the mean holding time is 90 seconds.

– New calls arrive with rate 240 calls per hour, and the mean holding time is 2 minutes.
Quest. 2.1: Find the offered traffic for each traffic stream and the total offered traffic.
3. To a computer system three types of tasks arrive:
a) inter-active tasks,
b) test tasks, and
c) production tasks.
All tasks arrive according to a Poisson process, and the service times are constant.
For type a) 15 tasks arrive per minute, and the service time is 1 second.
For type b) 3 tasks arrive per minute, and the service time is 5 seconds.
For type c) 12 tasks arrive per hour, and the service time is 2 minutes.
Quest. 3.1: Find the offered traffic for each type and the total offered traffic.
4. The arrival process to a system occurs according to a Poisson process with rate λ = 2 calls per time unit. Every call occupies two channels during the whole occupation time, which is exponentially distributed with mean value s = 3 time units.
Quest. 4.1: Find the offered traffic in calls (connections).
Quest. 4.2: Find the offered traffic in channels.
5. We consider traffic to a digital exchange offering ISDN calls (1 channel per call) and ISDN–2 calls (2 channels per call):
– ISDN calls: Per hour 900 calls arrive and the mean holding time is 2 minutes.
– ISDN–2 calls: Per minute 2 calls arrive and the mean holding time is 150 seconds.
Quest. 5.1: Find the offered traffic (measured in channels) for each type and the total offered traffic.
6. A digital 2.048 Mbps (Mbps = megabits per second) link is on average offered 128 packets per second. A packet contains on average 1500 bytes (1 byte = 8 bits).
Quest. 6.1: Find the utilization ρ of the link.
2009–02–05
Solution to Exercise 1.2:
OFFERED TRAFFIC
Question 1:
During one hour the number of speech minutes [SM] is:
20 · 15 minutes = 300 [SM]

This is a traffic volume and corresponds to 300/60 = 5 [Eh] (erlang-hours).

Using minutes as time unit, the offered traffic in erlang (1.2) becomes:

A = (20/60) · 15 = 5 [erlang]

in agreement with the traffic volume per hour being 5 [Eh].
Question 2: Using the time unit [minutes] we get:

Hand-over traffic: Ah−o = 3 · (90/60) = 4.5 [erlang]

New-call traffic: Anew = (240/60) · 2 = 8 [erlang]

Thus the total offered traffic becomes:

A = 4.5 + 8 = 12.5 [erlang]

We get, of course, the same result using for example hours or seconds as time unit.
Question 3:
Using minutes as time unit we get:

Type a: Aa = 15 · (1/60) = 0.25 [erlang]

Type b: Ab = 3 · (5/60) = 0.25 [erlang]

Type c: Ac = (12/60) · 2 = 0.4 [erlang]

Total: At = 0.9 [erlang]
So the utilization of the system is ρ = 0.9.
Question 4:
The offered traffic in calls becomes, using the same time unit:

Acalls = 2 · 3 = 6 [erlang (calls)]

As every call uses 2 channels, the offered traffic in channels becomes:

Achannels = 6 · 2 = 12 [erlang (channels)]
Question 5:
We want to find the offered traffic in the unit [channels]. Using the time unit [minutes] we find:

AISDN = (900/60) · 2 = 30 [erlang]

AISDN−2 = 2 · (150/60) · 2 = 10 [erlang]

Atotal = 30 + 10 = 40 [erlang]
Question 6:
Per second the offered traffic in bits per second becomes:

A = 128 [packet/second] · 1500 [byte/packet] · 8 [bit/byte] = 1,536,000 bits per second.

Thus the utilization becomes:

ρ = 1,536,000 / 2,048,000 = 0.75
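The calculations of this exercise can be condensed into a few lines; a small sketch, with all numbers taken from the exercise text:

```python
# Offered traffic A = (arrival rate) x (mean holding time), same time unit.
def offered_traffic(rate, mean_holding):
    return rate * mean_holding

A_cafe     = offered_traffic(20 / 60, 15)   # Q1: Internet cafe, per-minute rate
A_handover = offered_traffic(3, 90 / 60)    # Q2: hand-over calls
A_new      = offered_traffic(240 / 60, 2)   # Q2: new calls
A_channels = 2 * offered_traffic(2, 3)      # Q4: two channels per call
utilization = 128 * 1500 * 8 / 2_048_000    # Q6: offered bit rate / link rate
```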
2008.02.14
Exercise 2.1
FLAT DISTRIBUTION
We consider the following hyper-exponential distribution:
F(t) = (1/10) · (1 − e^{−t/7}) + (9/10) · (1 − e^{−3t}) , t ≥ 0 .
1. Derive the mean value and the variance of the distribution.
2. Find the remaining life time distribution as a function of the actual age x.
3. Find the mean value m1,r(x) of the remaining life time as a function of the actual age x. Draw a graph of m1,r(x) as a function of x. Find the upper limit of m1,r(x), and give an explanation of this value.
4. Show that the median of the distribution is 0.2672, and calculate the traffic load from the shortest half of all jobs. (The median of a distribution function is the value for which the distribution function takes the value 0.5. Half the observations will be larger and the other half shorter than this value.) The following integrals are given:

∫ x · e^{ax} dx = e^{ax}/a² · (ax − 1) ,

∫_0^t x · e^{ax} dx = e^{at}/a² · (at − 1) + 1/a² .
5. Find the distribution function for the remaining life time from a random point of time, and find the mean value of this distribution.
Solution to Exercise 2.1:
The hyper-exponential distribution given is a weighted sum of two exponential distributions with mean values 7 and 1/3, respectively.
[Phase diagram: parallel combination of two exponential branches, one chosen with probability 1/10 and rate 1/7, the other with probability 9/10 and rate 3.]
Question 1:
Expressions for mean value, second moment, variance, and form factor are given in (2.67) and (2.68):

m1 = (1/10) · 7 + (9/10) · (1/3) = 1 ,

m2 = 2 · ((1/10) · 49 + (9/10) · (1/9)) = 10 ,

σ² = m2 − m1² = 9 ,

ε = m2/m1² = 10 ,
where the numerator is the second (non-central) moment and the denominator is the square of the mean value.
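The moment calculations can be cross-checked with a small sketch; the weights and branch means are taken from the exercise:

```python
def hyperexp_moments(weights, means):
    # First two non-central moments of a hyper-exponential distribution;
    # the k-th moment of an exponential with mean m is k! * m**k.
    m1 = sum(p * m for p, m in zip(weights, means))
    m2 = sum(p * 2 * m ** 2 for p, m in zip(weights, means))
    return m1, m2, m2 - m1 ** 2, m2 / m1 ** 2

m1, m2, var, form_factor = hyperexp_moments([0.1, 0.9], [7.0, 1.0 / 3.0])
```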
Question 2:
The distribution function of the remaining life-time t, conditioned on an actual age x, is given by (2.18), and the corresponding density function is given by (2.19):

F(x + t | x) = (F(x + t) − F(x)) / (1 − F(x)) , t, x ≥ 0 ,

f(x + t | x) = f(x + t) / (1 − F(x)) , t, x ≥ 0 .
The distribution function is given in the text of the exercise, and we find the density function:

f(t) = (1/10) · (1/7 · e^{−t/7}) + (9/10) · (3 · e^{−3t}) ,

f(x + t | x) = [ (1/10 · e^{−x/7}) · (1/7 · e^{−t/7}) + (9/10 · e^{−3x}) · (3 · e^{−3t}) ] / [ 1/10 · e^{−x/7} + 9/10 · e^{−3x} ]

= 1/(k1 + k2) · [ (k1/7) · e^{−t/7} + 3 k2 · e^{−3t} ] ,

where

k1 = (1/10) · e^{−x/7} ,

k2 = (9/10) · e^{−3x} .
Here k1 is the probability that we choose the upper branch times the probability that its duration is longer than x. Similarly, k2 is the probability that we choose the lower branch times the probability that its duration is longer than x.
The distribution function of the remaining life-time is also a hyper-exponential distribution, composed of the same two exponential distributions as above, but with weight factors which depend on x.
F(x + t | x) can be written in a similar way:

F(x + t | x) = 1/(k1 + k2) · [ k1 · (1 − e^{−t/7}) + k2 · (1 − e^{−3t}) ] .
Question 3:
The mean value of the remaining life time for a given x is obtained by exploiting that we know the mean value of a hyper-exponential distribution (2.67):

m1,r(x) = k1/(k1 + k2) · 7 + k2/(k1 + k2) · (1/3) .

For x = 0 we get the same result as in Question 1. For increasing x, k1/(k1 + k2) converges to 1: there is an increasing probability that the observation is from the exponential distribution which has the mean value 7:

lim_{x→∞} m1,r(x) = 7
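A small sketch of m1,r(x), using k1 and k2 as defined above, confirms both the value at x = 0 and the limit 7:

```python
import math

def mean_residual_life(x):
    # Branch weights, conditioned on the actual age x.
    k1 = 0.1 * math.exp(-x / 7)
    k2 = 0.9 * math.exp(-3 * x)
    return (k1 * 7 + k2 * (1.0 / 3.0)) / (k1 + k2)

at_zero = mean_residual_life(0.0)    # equals the unconditional mean m1 = 1
at_large = mean_residual_life(50.0)  # approaches the upper limit 7
```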
[Graph: m1,r(x) as a function of the age x, increasing from 1 at x = 0 towards the asymptote 7.]
Question 4:
The median of the distribution is numerically found to be me = 0.2672, as the distribution function at this value equals 1/2. This means that half of all observations are less than the median.

The traffic load from the shortest half of all holding times is obtained from (2.30) for x = 0.2672 and the mean value m1 = 1:

ρx = (∫_0^x t · f(t) dt) / m1

= ∫_0^x t · ( (1/10) · (1/7) · e^{−t/7} + (9/10) · 3 · e^{−3t} ) dt

= 1 − (7/10) · e^{−x/7} · (x/7 + 1) − (3/10) · e^{−3x} · (3x + 1) ,
The following integrals are given:

∫ x · e^{ax} dx = e^{ax}/a² · (ax − 1) ,

∫_0^t x · e^{ax} dx = e^{at}/a² · (at − 1) + 1/a² .
For x = 0.2672 we get:

ρx = 0.0580 .

Thus the shortest 50 % of all jobs contribute only 5.8 % of the load.
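The median and the load fraction can be reproduced numerically; a sketch using simple bisection on F(t) and the closed form for ρx derived above:

```python
import math

def F(t):  # hyper-exponential distribution function from the exercise
    return 0.1 * (1 - math.exp(-t / 7)) + 0.9 * (1 - math.exp(-3 * t))

def load_fraction(x):  # closed form for rho_x (m1 = 1)
    return (1 - 0.7 * math.exp(-x / 7) * (x / 7 + 1)
              - 0.3 * math.exp(-3 * x) * (3 * x + 1))

lo, hi = 0.0, 10.0
for _ in range(100):          # bisection: F is continuous and increasing
    mid = (lo + hi) / 2
    if F(mid) < 0.5:
        lo = mid
    else:
        hi = mid
median = (lo + hi) / 2        # about 0.2672
rho = load_fraction(median)   # about 0.0580
```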
Question 5:
The density function for the remaining life time from a random point of time is given by (2.32):

v(t) = (1 − F(t))/m1 = (1/10) · e^{−t/7} + (9/10) · e^{−3t} .
The distribution function then becomes:
V(t) = ∫_0^t v(u) du ,

V(t) = (7/10) · (1 − e^{−t/7}) + (3/10) · (1 − e^{−3t}) .
That is, we again get a hyper-exponential distribution.
In comparison with the original distribution we get a weighting of the same two exponential distributions, but the weighting factors are proportional to the contributions to the mean value from the two phases in the original distribution (see Question 1). The mean value becomes (2.34):

m1,v = (m1/2) · ε = (1/2) · 10 = 5 ,

which is also obtained from the above hyper-exponential distribution:

m1,v = (7/10) · 7 + (3/10) · (1/3) = 5 .
2010-02-15
Exercise 2.4 (exam 1988)
COX LIFE-TIME DISTRIBUTION
We consider the following Cox-2 distribution, which has the same rate λ in both phases:
[Phase diagram: phase 1 with rate λ; with probability p the call continues to phase 2 (also rate λ), with probability 1 − p it leaves after phase 1.]
1. Show that the distribution function is given by:

F(t) = 1 − e^{−λt} − p · λt · e^{−λt} , t ≥ 0 ,

and find the density function.
2. Find the non-central moments mi of the distribution.
3. Find the distribution of the remaining life-time at a random point of time.
4. Find the death rate as a function of the actual age.
2009-02-8
Solution to exercise 2.4: (exam 1988)
Question 1:
The phase diagram is easily transformed into the following diagram, which is a combination in parallel of an exponential distribution and an Erlang-2 distribution:

[Phase diagram: branch taken with probability 1 − p: one exponential phase with rate λ; branch taken with probability p: two exponential phases in series, each with rate λ.]

The distribution function F(t) becomes (2.61):

F(t) = (1 − p) · F1(t) + p · F2(t) ,

where F1(t) is the distribution function of an exponential distribution and F2(t) is the distribution function of an Erlang-2 distribution. F1(t) is given by (2.3):

F1(t) = 1 − e^{−λt} .
The distribution F2(t) can be obtained in the following ways:
1. F2(t) is obtained from (2.46), where the density function of an Erlang-k distribution is given by:

fk(t) · dt = (λt)^{k−1}/(k − 1)! · e^{−λt} · λ · dt , λ > 0 , t ≥ 0 .

For k = 2 we get the following result:

f2(t) = λ² · t · e^{−λt} ,

F2(t) = ∫_0^t f2(u) du = ∫_0^t λ² · u · e^{−λu} du = [ −(1 + λu) · e^{−λu} ]_0^t ,

F2(t) = 1 − e^{−λt} − λt · e^{−λt} .
2. F2(t) is obtained from (2.47):

F2(t) = Σ_{j=k}^{∞} (λt)^j/j! · e^{−λt} = 1 − e^{−λt} · Σ_{j=0}^{k−1} (λt)^j/j! ,

as Σ_{j=0}^{∞} (λt)^j/j! = e^{λt}.

For k = 2 we have:

F2(t) = 1 − e^{−λt} − λt · e^{−λt} .
Thus F(t) is given by:

F(t) = (1 − p) · (1 − e^{−λt}) + p · (1 − e^{−λt} − λt · e^{−λt})

= 1 − e^{−λt} − p · λt · e^{−λt} .

The density function is obtained in a similar way, or by differentiating the distribution function:

f(t) = (1 − p) · λ · e^{−λt} + p · λ · (λt) · e^{−λt}

= λ · e^{−λt} − pλ · e^{−λt} + pλ · (λt) · e^{−λt} , t ≥ 0 .
Question 2:
By exploiting the theory for parallel/serial combination of random variables we get the (non-central) moments mi (Sec. 2.3):

mi = (1 − p) · mi(exponential) + p · mi(Erlang-2)

= (1 − p) · i!/λ^i + p · (i + 1)!/λ^i ,

mi = i!/λ^i · (1 + p · i) ,

m1 = (1 + p)/λ ,

m2 = (2 + 4p)/λ² .
The moments may of course be obtained from (2.5), but this is not the intention of the question. It is sufficient to calculate the first two moments.
Question 3:
We want to find the distribution of the remaining life-time at a random point of time (either the density function or the distribution function). The density function is obtained from (2.32):

v(t) = (1 − F(t))/m1 = (e^{−λt} + p · λt · e^{−λt}) / ((1 + p)/λ) ,

v(t) = λ/(1 + p) · (e^{−λt} + p · λt · e^{−λt}) .
(This is a sufficient answer).
The mean value of this becomes (2.33):

m1,v = (m1/2) · ε = (m1/2) · m2/m1² = m2/(2 m1) = (1/2) · λ/(1 + p) · (2 + 4p)/λ² ,

m1,v = (1 + 2p) / (λ (1 + p)) .
The distribution function is obtained as follows:

V(t) = ∫_0^t v(u) du = 1/(1 + p) · [ −e^{−λt} − p · e^{−λt} · (λt + 1) + 1 + p ] ,

V(t) = 1 − e^{−λt} − p/(1 + p) · λt · e^{−λt} .
The various expressions are seen to be in agreement with the special cases p = 0 (exponential distribution) and p = 1 (Erlang-2 distribution).
Question 4:
The death-rate as a function of the actual age becomes (2.21):

µ(t) = f(t) / (1 − F(t)) ,

µ(t) = (λ · e^{−λt} − pλ · e^{−λt} + pλ · (λt) · e^{−λt}) / (e^{−λt} + pλt · e^{−λt}) ,

µ(t) = (λ − pλ + pλ · (λt)) / (1 + pλt) .

As a control, we again have:

p = 0 : µ(t) = λ (exponential distribution) ,

p = 1 : µ(t) = λ²t/(1 + λt) (Erlang-2 distribution) .
2009-02-14
Exercise 3.1
POISSON PROCESS: SUPERPOSITION THEOREM
[Figure: Local station 1 (rate λ1) and Local station 2 (rate λ2) both connected to a transit station; total arrival rate λ.]
A transit station receives calls from two local stations. The arrival processes from the two local stations are independent, and both are Poisson processes with constant arrival rates λ1 and λ2, respectively.

Show that the total arrival process to the transit station is a Poisson process with intensity λ = λ1 + λ2 by considering the:
1. number representation,
2. interval representation.
3. intensities in short time intervals.
Solution to Exercise 3.1:
Superposition theorem: Superposition of several Poisson processes results in a Poisson process.
Question 1: Number representation.
For a Poisson process, the number of events within a fixed time interval is Poisson distributed (3.36).
In a fixed time interval t we have Nt,1 calls from local exchange 1 and Nt,2 calls from local exchange 2. Both Nt,1 and Nt,2 are Poisson distributed random variables.

We shall show that the total number of calls Nt = Nt,1 + Nt,2 is a random variable which is also Poisson distributed. This we will prove in two ways.
(1) Convolution
p{Nt = n} = Σ_{i=0}^{n} p{Nt,1 = i} · p{Nt,2 = n − i}

= Σ_{i=0}^{n} (λ1 t)^i/i! · e^{−λ1 t} · (λ2 t)^{n−i}/(n − i)! · e^{−λ2 t}

= e^{−(λ1+λ2)t} · (1/n!) · Σ_{i=0}^{n} n!/(i! (n − i)!) · (λ1 t)^i · (λ2 t)^{n−i}

= e^{−(λ1+λ2)t} · (1/n!) · Σ_{i=0}^{n} C(n, i) · (λ1 t)^i · (λ2 t)^{n−i} .

Using the binomial expansion:

(a + b)^n = Σ_{i=0}^{n} C(n, i) · a^i · b^{n−i} ,

we get the Poisson distribution:

p{Nt = n} = [(λ1 + λ2) t]^n / n! · e^{−(λ1+λ2)t} .
The parameter of this Poisson distribution is the sum of the two local exchange parameters, and this is what we should show.
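The convolution identity can be checked term by term; a small sketch with assumed example values λ1 t = 1.3 and λ2 t = 2.1:

```python
import math

def pois(mu, n):
    # Poisson point probability: mu^n / n! * e^{-mu}
    return mu ** n / math.factorial(n) * math.exp(-mu)

l1t, l2t = 1.3, 2.1  # assumed example values of lambda1*t and lambda2*t
max_err = 0.0
for n in range(15):
    # Convolution of the two Poisson pmfs at n ...
    conv = sum(pois(l1t, i) * pois(l2t, n - i) for i in range(n + 1))
    # ... equals the Poisson pmf with parameter (lambda1 + lambda2)*t.
    max_err = max(max_err, abs(conv - pois(l1t + l2t, n)))
```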
(2) Probability generating functions (pgf) (not covered in the textbook)
When we deal with discrete distributions it is easier to use probability generating functions.
The probability generating function of the Poisson distribution is:

f(s) = Σ_{i=0}^{∞} p{N = i} · s^i = e^{−λt} · Σ_{i=0}^{∞} (λts)^i/i! = e^{λt(s−1)} .

There is a one-to-one relationship between a distribution and its probability generating function. The probability generating function of the sum of two independent random variables is the product of the probability generating functions of the two random variables. We see that the sum of two Poisson distributions with parameters λ1 t and λ2 t becomes a Poisson distribution with parameter (λ1 + λ2) t:

e^{λ1 t (s−1)} · e^{λ2 t (s−1)} = e^{(λ1+λ2) t (s−1)} .
Question 2: Interval representation
In a Poisson process, the interval from any point of time to the next event is exponentially distributed. The two sub-processes are independent, and the next call in the total process appears when the first call appears at either of the two local exchanges. Therefore, we shall show that the smallest of two exponentially distributed random variables is also an exponentially distributed random variable. This has already been done in Sec. 2.2.7, formula (2.41). The distribution function of the smallest becomes (2.39):

F(t) = 1 − Π_{i=1}^{2} [1 − Fi(t)]

= 1 − [1 − (1 − e^{−λ1 t})] · [1 − (1 − e^{−λ2 t})] ,

F(t) = 1 − e^{−(λ1+λ2) t} q.e.d.
Question 3: By intensity arguments.
A Poisson process is also characterized by the formulæ (3.7)–(3.9), where the factor of proportionality λ is constant:

P(i ≥ 2, ∆t) = P{N_{t+∆t} − N_t ≥ 2} = o(∆t) ,

P(i = 1, ∆t) = P{N_{t+∆t} − N_t = 1} = λ∆t + o(∆t) ,

P(i = 0, ∆t) = P{N_{t+∆t} − N_t = 0} = 1 − λ∆t + o(∆t) .
If we consider a small interval ∆t, then the probability of no events in the total process within this interval is equal to the product of the probabilities of no events in each of the two independent sub-processes within this interval:

P(i = 0, ∆t) = P1(i = 0, ∆t) · P2(i = 0, ∆t)

= (1 − λ1∆t + o(∆t)) · (1 − λ2∆t + o(∆t))

= 1 − (λ1 + λ2)∆t + o(∆t) .

The probability of getting just one event in the total process becomes, in a similar way:

P(i = 1, ∆t) = P1(i = 0, ∆t) · P2(i = 1, ∆t) + P1(i = 1, ∆t) · P2(i = 0, ∆t)

= (1 − λ1∆t + o(∆t)) · (λ2∆t + o(∆t)) + (λ1∆t + o(∆t)) · (1 − λ2∆t + o(∆t))

= (λ1 + λ2)∆t + o(∆t) .
More than one event can take place in several ways, but the total probability is in all cases equal to o(∆t), and the sum of products is also o(∆t):
P (i ≥ 2,∆t) = o(∆t) .
We thus see that the total process corresponds to a Poisson process with intensity:
λs = λ1 + λ2 .
Remark 1: Mathematically we have that o(∆t) + o(∆t) = o(∆t), and we can ignore terms of higher order, o((∆t)^i), i ≥ 2.
Remark 2: Physically, it is obvious that the superposition theorem (Exercise 3.1) and the splitting theorem (Exercise 3.2) must be valid. As an example, the number of particles registered by a Geiger–Müller counter is Poisson distributed. The space angle covered by the counter depends on the distance to the radioactive material, but does not influence the type of distribution.
Processes influenced by a large number of independent factors converge to Poisson processes.
2009-02-18
Exercise 3.2
POISSON PROCESS: SPLITTING THEOREM
[Figure: a transit station (rate λ) routing calls to Local station 1 with probability p and to Local station 2 with probability 1 − p.]

Calls arrive from a transit station according to a Poisson process with constant rate λ and are routed to one of two local stations.
A random call chooses, independently of all other calls, local station 1 with probability p and local station 2 with probability q = 1 − p.

Show that the two sub-processes are both Poisson processes:
1. By number representation. (It is known that the number of calls in a fixed time interval in the original process is Poisson distributed.)

2. By interval representation. (It is known that the inter-arrival time distribution in the original process is an exponential distribution.)
3. By considering intensities within short time intervals ∆t.

4. Which type of arrival process do we get to local station 1 if every second call is routed to this station?
Solution to Exercise 3.2
Splitting theorem (special case of Raikov’s theorem): A random splitting of a Poisson processwill result in sub-processes, which are also Poisson processes (Sec. 3.6.2).
Question 1: Number representation
During a time interval t the probability of observing just i events in the original process is given by the Poisson distribution:

P{Nt = i} = (λt)^i/i! · e^{−λt} , λ > 0 , t ≥ 0 , i = 0, 1, . . .

Each individual event has, independently of all other events, the probability p of choosing direction 1. For a fixed value of i, we therefore have a binomially distributed number n of events in direction 1:

B{n | i} = C(i, n) · p^n · (1 − p)^{i−n} , n ≤ i .
The probability of getting exactly n events in direction 1 is then obtained by summation over all feasible values of i ≥ n:

P{Nt,1 = n} = Σ_{i=n}^{∞} B{n | i} · P{Nt = i}

= Σ_{i=n}^{∞} C(i, n) · p^n · (1 − p)^{i−n} · (λt)^i/i! · e^{−λt}

= Σ_{i=n}^{∞} i!/(n! (i − n)!) · p^n · (1 − p)^{i−n} · (λt)^i/i! · e^{−λt}

= (pλt)^n/n! · e^{−λt} · Σ_{i=n}^{∞} [λt (1 − p)]^{i−n}/(i − n)!

= (pλt)^n/n! · e^{−pλt} ,

which is the Poisson distribution with parameter pλt. The calculations are of course similar for direction 2.
Question 2: Interval representation
Let us imagine we are at local exchange 1 and wait for a call from the transit station. We have the following possible courses of events:

• The first call is in direction 1. We then wait a time interval which is exponentially distributed. The probability of this outcome is p = p(1).

• The first call is in direction 2, but the second call is in direction 1. We then wait an Erlang-2 distributed time interval (the sum of two exponentially distributed time intervals). The probability of this outcome is (1 − p) · p = p(2).

...

• The k'th call is in direction 1, given that the first k − 1 calls were all in direction 2. We wait an Erlang-k distributed time interval. The probability of this outcome is (1 − p)^{k−1} · p = p(k). (Here p(k) is a geometric distribution.)
...
The Erlang-k distribution has the density function (3.34):

gk(t) = (λt)^{k−1}/(k − 1)! · λ · e^{−λt} .

Therefore, the density function of the time until the first event becomes:

f(t) = Σ_{i=1}^{∞} gi(t) · p(i)

= Σ_{i=1}^{∞} (λt)^{i−1}/(i − 1)! · λ · e^{−λt} · (1 − p)^{i−1} · p

= p · λ · e^{−λt} · Σ_{i=1}^{∞} [λt (1 − p)]^{i−1}/(i − 1)!

= (pλ) · e^{−(pλ)t} ,
which is just the density function of an exponential distribution with intensity pλ. The arrival process to local exchange 1 is thus a Poisson process with intensity pλ.
The interval representation thus shows that a weighted sum of Erlang-k distributions becomes an exponential distribution if the Erlang-k distribution has a weighting factor equal to the k'th term of a geometric series, and the summation is over all k ≥ 1 (Sec. 2.3.3).
Extra: The same result can be obtained by using Laplace transforms. The Erlang-k distribution has the Laplace transform:

ϕ(s) = (λ/(λ + s))^k .

The waiting time until the first event in direction 1 then becomes (parallel combination of stochastic variables):

ϕ1(s) = Σ_{i=1}^{∞} (λ/(λ + s))^i · (1 − p)^{i−1} · p .

This is an infinite summation where the terms make up a geometric series. The sum becomes:

ϕ1(s) = p · λ/(s + λ) · 1/(1 − (1 − p) · λ/(s + λ)) = pλ/(s + pλ) ,

which is just the Laplace transform of an exponential distribution with parameter pλ.
The proof is of course carried through for direction 2 in a similar way.
Question 3: Intensity considerations

The probability of getting an event in the main process within a short time interval of duration ∆t is:

λ∆t + o(∆t) .

If the main process has an event, the probability that this event belongs to sub-process 1 is p. The unconditional probability of getting one event in sub-process 1 within a short time interval of duration ∆t then becomes:

pλ · ∆t + o(∆t) .

The probability of getting more than one event in sub-process 1 within ∆t is p · o(∆t) = o(∆t). Thus the probability of no events in sub-process 1 becomes:

1 − pλ · ∆t + o(∆t) .
Question 4:

If just every second call is routed to direction 1, then the inter-arrival time between events in both directions becomes Erlang-2 distributed. Thus we do not have a Poisson process.
Remark:
From a physical point of view it is obvious that the superposition and splitting theorems (Exercises 3.1 and 3.2) must be valid. The number of particles counted by e.g. a Geiger–Müller counter is Poisson distributed. The space angle covered by the counter depends on its distance to the radioactive source, but has no influence upon the type of distribution of the number of events.
The cosmic background radiation also follows a Poisson process, and the splitting theorem shows that we should simply deduct this from the total observed value.
Processes which are caused by many independent factors will converge to a Poisson process.
2009-02-16
Exercise 4.1
ERLANG’s B–FORMULA
At a shopping center there is a gambling hall. People passing by decide at random, and independently of each other, to enter and play; but if all gambling machines are occupied, they pass on (alternative formulation: an Internet cafe).

During opening hours, on average 40 people per hour enter to play. People choose the first idle machine from the entrance and play on average 6 minutes (exponentially distributed). A gambling machine makes on average an income of 100 øre per minute while it is used. The total expenses for rent of rooms and maintenance are 20 kr. per hour per machine, independent of whether it is used or not. Coins of a value of 100 øre are used.
(In the following Erlang’s B-formula may be calculated using the recursion formula, tablesor computers).
1. Calculate the offered traffic.
2. What is the net income when the number of machines is 4?
3. Is it profitable to have more or fewer machines than 4? What is the optimal number of machines?
In the following we assume the number of machines is 4.
4. How many coins may the owner expect to collect from each machine after 12 hours of opening?

5. How long does it on average take before the last customer leaves after closing time, when there are 1, 2, 3 or 4 people playing at closing time?
6. What is the proportion of time when just one machine (a random one) is idle?
7. What is the proportion of time the machine farthest away from the entrance is idle?
Solution to Exercise 4.1
The Erlang B–formula is derived in the textbook in Sec. 4.3.
Question 1:
The offered traffic is equal to the average number of calls per mean holding time (1.2):

A = λ/µ .

Four potential customers arrive per six minutes (λ = 40 per hour, s = 6 minutes):

A = 4 erlang .
Question 2:
Using a table of Erlang's B-formula (see the table in the collection of exercises) we find the carried traffic:

Y = A · [1 − E1,n(A)] = 4 · [1 − E1,4(4)] = 4 · [1 − 0.3107] ,

Y = 2.757 erlang .

The gross income is 60 kr. per erlang-hour. Therefore, the net income becomes:

R = (2.757 × 60 − 4 × 20) kr/hour ,

R = 85.42 kr/hour .
Question 3:
We evaluate the net income for different numbers of machines (see the table in the collection of exercises, or calculate the Erlang-B formula using the recursion formula (4.29)):
   Number of machines   Carried traffic   Income per hour [kr.]
          n                    Y           Gross       Net
          1                  0.8000        48.00      28.00
          2                  1.5385        92.31      52.31
          3                  2.1972       131.83      71.83
          4                  2.7573       165.42      85.42
          5                  3.2037       192.22      92.22
          6                  3.5313       211.88      91.88
From the table we see that the optimal number of machines becomes 5.
When adding one machine, the income should increase by at least 20 kr./hour. This corresponds to the carried traffic increasing by at least 1/3 erlang per added machine (Moe's principle). The optimal number of machines can thus be obtained directly from a table of the improvement function.
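Erlang's B-formula and the income table can be reproduced with the recursion (4.29); a small sketch, with the income of 60 kr. per erlang-hour and the cost of 20 kr. per machine-hour taken from the exercise:

```python
def erlang_b(A, n):
    # Recursion (4.29): E_0 = 1, E_i = A*E_{i-1} / (i + A*E_{i-1}).
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

A = 4.0  # offered traffic in erlang

def net_income(n):
    carried = A * (1 - erlang_b(A, n))   # carried traffic Y
    return carried * 60.0 - n * 20.0     # gross income minus machine cost

optimal_n = max(range(1, 9), key=net_income)  # 5 machines
```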
Question 4:
The carried traffic on the individual machines, when using sequential hunting, becomes (4.16):

ai = F1,i−1(A) = A · [E1,i−1(A) − E1,i(A)] .

For a period of 12 opening hours we get:

a1 = 0.8000 erlang ∼ 576 coins ,

a2 = 0.7385 erlang ∼ 532 coins ,

a3 = 0.6587 erlang ∼ 474 coins ,

a4 = 0.5601 erlang ∼ 403 coins .
Question 5:
After closing time the arrival rate is λ = 0. For the departure process we exploit that the exponential distribution has no memory:

a) If only one customer is present at closing time, then the time until this customer departs (independent of how long the customer has already played) is exponentially distributed with mean value 1/µ = 6 minutes:

w1 = 6 minutes .
b) If two customers are present at closing time, then the first one departs according to an exponential distribution with mean value 1/(2µ) = 3 minutes. Subsequently, the last one departs as mentioned under a). The total mean value thus becomes:

w2 = 9 minutes .

c) If three customers are present, the first one departs according to an exponential distribution with mean value 1/(3µ) = 2 minutes. Afterwards the sequence is as described under b), and the total mean value becomes:

w3 = 11 minutes .

d) In a similar way we derive the average time for the case with four customers present at closing time:

w4 = 12.5 minutes .
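The cases a) to d) all follow the same pattern: with j customers left, the time to the next departure is exponential with mean 1/(jµ), and the means add. A small sketch:

```python
def drain_time(k, mean_service=6.0):
    # Expected time until the last of k playing customers leaves:
    # sum of exponential stage means 1/(j*mu) for j = k, k-1, ..., 1,
    # with 1/mu = 6 minutes as in the exercise.
    return sum(mean_service / j for j in range(k, 0, -1))

times = [drain_time(k) for k in (1, 2, 3, 4)]  # 6, 9, 11, 12.5 minutes
```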
Question 6:
The probability of having just one idle machine is equal to the probability of having just three busy machines. From the truncated Poisson distribution (4.9) we get:

P(3) = (A³/3!) / Σ_{ν=0}^{4} A^ν/ν! = (4/A) · E1,4(A) = 0.3107 .
The traffic ai carried by machine i at a given point of time is correlated with the traffic carried by the other machines at the same time. The values given in Question 4 are mean values and cannot be used for solving this question.
Question 7:
The last machine is (of course) idle when it is not working. From Question 4 we therefore get:

P{last machine idle} = 1 − a4 = 0.4399 .
Updated 2005-02-14
Exercise 4.4 (exam 1980)
M/E2/2 - LOSS SYSTEM
We consider a loss system with two channels (servers). Call attempts arrive according to a Poisson process with intensity λ calls per time unit. The service time is Erlang-2 distributed with intensity 2µ in each of the two phases.
1. Find the offered traffic.
2. Construct a state transition diagram of the system, where a state denotes both the number of calls in the system and the phases of the calls. Apply the following states, where a and b denote the two phases.
States: 0, a, b, aa, ab, bb.
3. Find, under the assumption of statistical equilibrium, the state probabilities of the system by exploiting the fact that the truncated Poisson distribution for a given mean holding time is valid for any service time distribution (insensitivity property).
4. The blocking state “both channels occupied” is initiated from either state a or b. For both of these cases, give the (Cox) distribution of the duration of the blocking state by applying a graphical (phase diagram) representation of the Cox distribution.
5. (Advanced, excluded) Write down the Laplace transform of the distribution of the duration of the periods when both channels are busy.
6. (Advanced) Find the mean value and variance of the number of call attempts which are blocked during a period when both channels are busy.
Solution to Exercise 4.4
The service time is Erlang-2 distributed with intensity (rate) 2µ in each phase. Therefore, the mean service time becomes:

s = 1/(2µ) + 1/(2µ) = 1/µ .
Question 1:
The offered traffic is equal to the average number of call attempts per mean service time:
A = λ/µ
Question 2:
The given states define the state transition diagram in a unique way:
[State transition diagram: arrivals (rate λ) give 0 → a, a → aa, b → ab; single-phase completions (rate 2µ) give a → b, b → 0, ab → a (the phase-b server finishes) and ab → bb (the phase-a server advances); with two servers in the same phase the rate doubles: aa → ab and bb → b at rate 4µ.]
Question 3:
As the truncated Poisson distribution is valid, we have the following state probabilities:
p(0) = 1 / (1 + A + A²/2)

p(1) = p(a) + p(b) = A · p(0)

p(2) = p(aa) + p(ab) + p(bb) = (A²/2) · p(0)
430 INDEX
For node “0” we have the following node balance equation:
2µ · p(b) = λ · p(0)
p(b) = (A/2) · p(0)
and thus from the equation for p(1):
p(a) = (A/2) · p(0)
The node balance equation for state “aa” is as follows:
4µ · p(aa) = λ · p(a)
p(aa) = (A²/8) · p(0)
In a similar way we get for state “ab”:
4µ · p(aa) + λ · p(b) = (2µ+ 2µ) · p(ab)
p(ab) = (A²/4) · p(0)
and for state “bb”:
4µ · p(bb) = 2µ · p(ab)
p(bb) = (A²/8) · p(0)
We notice that the two phases are symmetric. This is not obvious from the beginning. The state “ab” is in fact composed of the two micro-states “ab” and “ba”. Therefore, p(ab) is twice as big as p(aa) and p(bb).
Summarising, we have:

p(0) = 1 / (1 + A + A²/2)

p(a) = p(b) = (A/2) / (1 + A + A²/2)

p(aa) = p(bb) = (1/2) · p(ab) = (A²/8) / (1 + A + A²/2)
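As a consistency check, the node balance equations of the six-state diagram can be verified numerically (a sketch; A = 2 and µ = 1 are arbitrary test values, not part of the exercise):

```python
A, mu = 2.0, 1.0
lam = A * mu
p0 = 1 / (1 + A + A**2 / 2)
pa = pb = (A / 2) * p0
paa = pbb = (A**2 / 8) * p0
pab = (A**2 / 4) * p0

eps = 1e-12
assert abs(sum([p0, pa, pb, paa, pab, pbb]) - 1) < eps    # normalisation
assert abs(lam * p0 - 2 * mu * pb) < eps                  # node 0
assert abs(4 * mu * paa - lam * pa) < eps                 # node aa
assert abs(4 * mu * paa + lam * pb - 4 * mu * pab) < eps  # node ab
assert abs(4 * mu * pbb - 2 * mu * pab) < eps             # node bb
assert abs((lam + 2 * mu) * pa - (lam * p0 + 2 * mu * pab)) < eps  # node a
print("all balance equations satisfied")
```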
There will be (n + 1) different (macro-)states with all n channels busy. The macro-state (i, n − i) = “i channels in phase a and (n − i) channels in phase b” is made up of C(n, i) different micro-states.
The probability of finding the system in state (i, n− i) thus becomes:
P(i, n − i) = C(n, i) / Σ_{j=0}^{n} C(n, j) · p(n) = (C(n, i)/2ⁿ) · p(n) ,

where p(n) is the truncated Poisson distribution (4.9).
In a similar way we can derive the state probabilities for all C(n + 2, 2) states.
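The binomial split of p(n) over the micro-states can be sketched as follows (Python; `math.comb` is the binomial coefficient):

```python
from fractions import Fraction
from math import comb

def phase_split(n):
    """Weights of the micro-states (i busy servers in phase a, n - i in
    phase b), given that n channels are busy: C(n, i)/2^n for i = 0..n."""
    return [Fraction(comb(n, i), 2**n) for i in range(n + 1)]

# For n = 2 the macro-state "two channels busy" splits with weights
# 1/4, 1/2, 1/4 of p(2), matching p(bb), p(ab), p(aa) above.
print(phase_split(2))
```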
Question 4:
Starting in state “aa”, the busy period passes through the phases aa → ab → bb, each exponential with rate 4µ; after phase ab the busy period ends directly with probability 1/2 (the phase-b server finishes first) and continues to bb with probability 1/2. Starting in state “ab”, the phase diagram is the subset of the first case beginning at ab.
Question 5:
The Laplace transform of the two distributions obtained in Question 4 can immediately be written down from the phase diagrams:

L_aa(s) = (1/2)·(4µ/(s + 4µ))³ + (1/2)·(4µ/(s + 4µ))²

L_ab(s) = (1/2)·(4µ/(s + 4µ))² + (1/2)·(4µ/(s + 4µ))
The number of observations per time unit when starting in state “aa” (see the state transition diagram in Question 2) is λ · p(a).
The number of observations per time unit when starting in state “ab”: λ · p(b) .
As we have p(a) = p(b), the average number of observations of the two distributions is the same, and the Laplace transform of the distribution of the duration of the periods when both channels are busy becomes:
L(s) = (1/2)·[L_aa(s) + L_ab(s)]

L(s) = (1/4)·(4µ/(s + 4µ)) + (1/2)·(4µ/(s + 4µ))² + (1/4)·(4µ/(s + 4µ))³
In fact this corresponds to a Cox distribution which is a weighted sum of the two Cox distributions in Question 4, but we can no longer identify the individual states, as they are mixed together. From the Laplace transform it is easy to write down the distribution functions for the cases we consider here.
Question 6:
The number of blocked call attempts during a busy period has the following mean value and variance (2.82) & (2.84):
m_{1,n} = λ · m_{1,x}

σ²_n = λ² · σ²_x + λ · m_{1,x}
where m_{1,x} and σ²_x are the mean value and variance of the distribution in Question 5. They can be derived by differentiating L(s). As we consider Erlang-k distributions in parallel, they can also be obtained directly:
m_{1,x} = (1/4)·(1/(4µ)) + (1/2)·(2/(4µ)) + (1/4)·(3/(4µ)) = 1/(2µ)
The (non-central) second moment of an Erlang-k distribution with phase rate λ is:

m₂ = (k² + k)/λ²
From this we get:

m₂ = (1/(4µ))² · [ (1/4)·2 + (1/2)·6 + (1/4)·12 ] = (1/(4µ))² · (13/2)
σ²_x = m₂ − m²_{1,x} = (5/2)·(1/(4µ))²
This is of course in agreement with (2.89) and (2.90), as q₁ = 1, q₂ = 3/4, q₃ = 1/4, a₀ = 1, a₁ = 3/4, a₂ = 1/3, a₃ = 0, and λ_j = 4µ for all j.
Therefore we have:
m_{1,n} = λ · 1/(2µ) ,

m_{1,n} = A/2 ,

σ²_n = λ² · (5/2)·(1/(4µ))² + λ · 1/(2µ) ,

σ²_n = (5/32)·A² + A/2 .
If the busy periods were constant time intervals, then σ²_n would only get the contribution A/2, due to the Poisson process. Because the busy periods are stochastic time intervals, we also get a contribution 5·A²/32.

If we look for the variance of the number of call attempts blocked during e.g. a busy hour, then we get an additional contribution to the variance, because the number of busy periods is itself a random variable.
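The moment calculation can be reproduced directly from the Erlang-k mixture of Question 5 (a sketch; λ = 2 and µ = 1 are test values only):

```python
def blocked_moments(lam, mu):
    """Mean and variance of the number of Poisson(lam) call attempts arriving
    during a busy period X distributed as an Erlang-k(4*mu) mixture with
    k = 1, 2, 3 taken with probabilities 1/4, 1/2, 1/4 (Question 5)."""
    weights = {1: 0.25, 2: 0.5, 3: 0.25}
    m1x = sum(w * k / (4 * mu) for k, w in weights.items())           # = 1/(2 mu)
    m2x = sum(w * (k * k + k) / (4 * mu) ** 2 for k, w in weights.items())
    var_x = m2x - m1x ** 2                                            # = (5/2)(1/(4 mu))^2
    mean_n = lam * m1x                                                # = A/2
    var_n = lam ** 2 * var_x + lam * m1x                              # = 5 A^2/32 + A/2
    return mean_n, var_n

print(blocked_moments(2.0, 1.0))  # A = 2: mean 1.0, variance 1.625
```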
Updated: 2004-02-29
Exercise 4.10 (Exam 1999)
ERLANG’S LOSS SYSTEM
We consider a loss system, which has 4 channels and is offered PCT–1 traffic. The arrival rate (intensity) is λ = 1 call per time unit, and the mean service time is µ⁻¹ = 2 time units. The system is assumed to be in statistical equilibrium.
1. Find the offered traffic, and set up the state transition diagram of the system.
2. Find the state probabilities, and find the time congestion, the call congestion, and the traffic congestion.
3. Calculate the time congestion using the recursive formula for Erlang’s B-formula. The individual steps of the recursion should appear in the answer.
4. Assume random hunting, and find the probability that two specific channels are busy (the remaining channels may be busy or idle).
5. How many channels do we need, if the system is dimensioned with an improvement value equal to 0.20? (Apply the results from Question 3).
6. Find the distribution of the number of calls which are lost during a period when all 4 channels are busy.
Solution to Exercise 4.10 (Exam 1999)
The system considered corresponds to Erlang’s loss system, which is dealt with in Chap. 4.
Question 1:
The offered traffic A is:
A = λ/µ = 1 · 2 ,
A = 2 [erlang] .
The state transition diagram becomes (cf. Fig. 4.2):
[State transition diagram: states 0–4; arrival rate λ = 1 from state i to i + 1 for i < 4; departure rate i·µ = i/2 from state i to i − 1.]
Question 2:
If we denote the relative state probabilities by q(i) and the absolute state probabilities by p(i), then we get:

q(0) = 1      p(0) = 3/21 = 0.1429
q(1) = 2      p(1) = 6/21 = 0.2857
q(2) = 2      p(2) = 6/21 = 0.2857
q(3) = 4/3    p(3) = 4/21 = 0.1905
q(4) = 2/3    p(4) = 2/21 = 0.0952
Total = 7     Total = 1.0000
We would of course obtain the same state probabilities by inserting the actual parameters into the truncated Poisson distribution (4.9).
The time congestion E becomes:
E = p(4) = 2/21 .
As the arrival process is a Poisson process, the PASTA property is valid, and time congestion, call congestion, and traffic congestion are all identical (Sec. 4.3.2):

E = B = C = 2/21 .
We may of course calculate B and C explicitly, but at the exam this would be a waste of time.
Question 3:
By applying the recursion formula for calculating Erlang’s B-formula (4.29):
E_x(A) = A·E_{x−1}(A) / (x + A·E_{x−1}(A)) ,   E_0(A) = 1 ,   x = 1, 2, . . . ,
we find, letting A = 2 [erlang]:
x = 1:  E_1(2) = (2·1)/(1 + 2·1) = 2/3 ,

x = 2:  E_2(2) = (2·(2/3))/(2 + 2·(2/3)) = 2/5 ,

x = 3:  E_3(2) = (2·(2/5))/(3 + 2·(2/5)) = 4/19 ,

x = 4:  E_4(2) = (2·(4/19))/(4 + 2·(4/19)) = 2/21 ,

x = 5:  E_5(2) = (2·(2/21))/(5 + 2·(2/21)) = 4/109 ,
where, for later use in Question 5, we also calculate the blocking probability for 5 channels. The blocking probability for 4 channels corresponds of course to the result obtained in Question 2. As a control we may also verify that the values are in agreement with the table of Erlang’s B-formula in the collection of exercises.
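The recursion can be carried out in exact rational arithmetic (a sketch using Python's `fractions`):

```python
from fractions import Fraction

def erlang_b_table(A, n):
    """E_0(A), ..., E_n(A) as exact fractions, via the recursion
    E_x = A*E_{x-1} / (x + A*E_{x-1})."""
    E = Fraction(1)
    table = [E]
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
        table.append(E)
    return table

print(erlang_b_table(Fraction(2), 5))
# [Fraction(1, 1), Fraction(2, 3), Fraction(2, 5), Fraction(4, 19),
#  Fraction(2, 21), Fraction(4, 109)]
```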
Question 4:
By elementary probability theory we may ourselves carry through the derivations behind the Palm–Jacobæus formula. If two arbitrary channels are occupied, then the probability that it is just “our” two channels will be equal to 1/6 (one over the number of different ways we may choose 2 out of 4 channels). If three channels are busy, then the probability that “our” two channels are among these will be 1/2. If all four channels are busy, then “our” two channels
will always be busy. Therefore, we get:

H(2) = (1/6)·p(2) + (1/2)·p(3) + p(4)

     = (1/6)·(6/21) + (1/2)·(4/21) + 2/21

     = 5/21 .
It was a general error at the exam to assume independence between channels and use the probability that a random channel is busy.
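Both the combinatorial sum above and the Palm–Jacobæus ratio used in the alternative solution to this question can be checked in a few lines (a sketch):

```python
from fractions import Fraction
from math import comb

n, k = 4, 2
# Truncated Poisson state probabilities for A = 2 (Question 2):
p = [Fraction(3, 21), Fraction(6, 21), Fraction(6, 21),
     Fraction(4, 21), Fraction(2, 21)]

# Combinatorial: given j busy channels, our k specific channels are all
# busy with probability C(j, k)/C(n, k) under random hunting.
H_comb = sum(Fraction(comb(j, k), comb(n, k)) * p[j] for j in range(k, n + 1))

# Palm-Jacobaeus: H(k) = E_n(A)/E_{n-k}(A), with E_4(2) = 2/21, E_2(2) = 2/5.
H_pj = Fraction(2, 21) / Fraction(2, 5)

assert H_comb == H_pj == Fraction(5, 21)
print(H_comb)  # 5/21
```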
Question 5:
The traffic lost A_ℓ, the traffic carried Y, and the additionally carried traffic F_n(A) when the number of channels is increased from n to n + 1 (which is equal to the traffic a_{n+1} carried by channel n + 1 in a system with sequential hunting) become:
n    A_ℓ      Y        F_n(A) = a_{n+1}
0    2.0000   0.0000   0.6667
1    1.3333   0.6667   0.5333
2    0.8000   1.2000   0.3789
3    0.4211   1.5789   0.2306
4    0.1905   1.8095   0.1171
5    0.0734   1.9266
As

F_{1,n−1}(A) > F_B = 0.20 ≥ F_{1,n}(A) ,

we notice that we need n = 4 channels, because we choose an integral number of channels, which is on the safe side (corresponding to an improvement value less than or equal to the dimensioning criterion). The same result may of course be obtained by using the table of the improvement function of Erlang’s B-formula given in the collection of exercises. Fig. 4.5 gives the same result for the curve A = 2, but it is more difficult to read accurately.
Question 6:
The duration of the state “all channels busy” is exponentially distributed with intensity 4·µ = 2 [calls/time unit]. New call attempts arrive according to a Poisson process with rate (intensity) λ = 1 [calls/time unit]. If we are in state “4 channels busy” (busy period), then the next event is either:
• a call attempt which is blocked or
• termination of the busy period.
From Sec. 2.2.7 (minimum of k exponentially distributed stochastic variables) we know the probability of the next event:
p(call attempt) = λ/(λ + 4µ) = 1/3 ,

p(call termination) = 4µ/(λ + 4µ) = 2/3 .
Because of the exponentially distributed time intervals the process is a Markov process without memory, and the above probabilities are thus independent of the number of calls already blocked. The probability that i call attempts are blocked during a busy period therefore becomes geometrically distributed:
p(i) = (1/3)^i · (2/3) ,   i = 0, 1, 2, . . . .
Comments:
This version of the geometric distribution begins with the value zero, and thus the mean value and variance become (cf. the text of Table 3.1):

m₁ = 1/(2/3) − 1 = 1/2 ,

σ² = (1 − 2/3)/(2/3)² = 3/4 .

On average we stay in state [4] half a time unit, and on average one call arrives per time unit. Therefore, the mean value 1/2 is correct.
Alternative solution 1: direct calculation. The distribution of the number of calls during a busy period with all four channels occupied can also be derived directly.
The busy period has the density function:

f(t) dt = nµ·e^{−nµt} dt = 2·e^{−2t} dt .
If the busy period has a duration inside the interval (t, t + dt), then the number of calls during this (constant) time interval is Poisson distributed:

p(i | t) = ((λt)^i / i!) · e^{−λt} = (t^i / i!) · e^{−t} ,   i = 0, 1, . . . .
The unconditional distribution for the number of call attempts then becomes:
p(i) = ∫₀^∞ p(i | t) · f(t) dt

     = ∫₀^∞ (t^i / i!) · e^{−t} · 2 e^{−2t} dt

     = (2/i!) · ∫₀^∞ t^i e^{−3t} dt

     = (2/i!) · (1/3^{i+1}) · ∫₀^∞ (3t)^i e^{−3t} d(3t)

     = (2/i!) · (1/3^{i+1}) · i!

     = 2/3^{i+1} ,   i = 0, 1, . . .   q.e.d.
where we exploit the definition of the gamma function.
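The equivalence of the direct integral result and the geometric form can be confirmed numerically (a sketch):

```python
# p(i) = (2/i!) * integral of t^i e^{-3t} dt = 2/3^(i+1); compare with the
# geometric form (1/3)^i * (2/3) and check normalisation and mean.
total = 0.0
mean = 0.0
for i in range(200):
    geometric = (1 / 3) ** i * (2 / 3)
    direct = 2 / 3 ** (i + 1)
    assert abs(geometric - direct) < 1e-12
    total += direct
    mean += i * direct
print(round(total, 10), round(mean, 10))  # 1.0 0.5
```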
Revised 2008-03-05
Alternative solution to Question 4:
The probability that k specific channels (“our” channels) are busy is given by the Palm–Jacobæus formula (Sec. ??). For Erlang’s loss system we use (??):

H(k) = E_n(A) / E_{n−k}(A) ,   k = 1, 2, . . . , n .
For n = 4 and k = 2 we find using the values from Question 3:
H(2) = (2/21)/(2/5) = 5/21   q.e.d.
Alternative solution 2 to Question 6: generating functions.
By applying the theory for the number of events in a Poisson process during a stochastic time interval (Sec. ??), we find the Z-transform of the distribution of the number of events during a busy period (??):
Zn(z) = ϕ(λ(1−z)) .
For the exponential distribution we have the Laplace-transform (??):
ϕ(s) = 4µ/(4µ + s) = 2/(2 + s) ,
and thus:
Z_n(z) = 2/(2 + λ·(1 − z)) = 2/(2 + 1·(1 − z)) ,
which may be written as:
Z_n(z) = (2/3) / (1 − (1 − 2/3)·z) .
This is the Z-transform of the geometric distribution derived above, which is seen as follows:

Z(z) = Σ_{i=0}^{∞} α·(1 − α)^i · z^i = α / (1 − (1 − α)·z) ,

which corresponds to the above with α = 2/3.
Exercise 4.11 (Exam 2006)
Erlang’s Loss system with ordered hunting
We consider Erlang’s loss system with n = 3 channels. The arrival rate is λ = 2 calls per time unit, and the mean holding time is µ⁻¹ = 1/2 time unit.
1. Find the offered traffic.
2. Construct the state transition diagram and find the state probabilities under the assumption of statistical equilibrium.
3. Assume sequential (ordered) hunting of idle channels and find the traffic carried by each channel (the improvement function), using the recursion formula for the Erlang-B formula.
Denote the three channels by a, b, c (order of hunting).
4. Set up a state transition diagram which keeps record of the state of each channel, where the state is defined by the busy channels as shown in the figure below.
States: 0, a, b, c, ab, ac, bc, abc.
5. Find the remaining state probabilities using the results above and the following state probabilities:
p(b) = 19/240 ,   p(c) = 6/240 ,   p(ac) = 7/240 ,   p(bc) = 5/240 .
6. Find the traffic carried by each channel expressed by the state probabilities. What is the proportion of time channel a is busy while the other channels b and c are idle?
Solution to Exercise 4.11 (Exam 2006)
Question 1:
By definition the offered traffic is the average number of calls per mean service time:
A = λ/µ = 2/2 = 1 [erlang] .
Question 2:
The state transition diagram becomes as follows:
[State transition diagram: states 0–3; arrival rate λ = 2 in states 0–2; departure rate 2i from state i.]
The relative state probabilities q(i) = p(i)/p(0) and the absolute state probabilities p(i) become as follows:

q(0) = 1      p(0) = 6/16
q(1) = 1      p(1) = 6/16
q(2) = 1/2    p(2) = 3/16
q(3) = 1/6    p(3) = 1/16
Question 3:
Applying the recursion formula for Erlang-B (4.27) and using the formula (4.14) for the carried traffic per channel in a system with ordered hunting, we get:
E_0 = 1

E_1 = A·E_0/(1 + A·E_0) = (1·1)/(1 + 1·1) = 1/2

E_2 = A·E_1/(2 + A·E_1) = (1·(1/2))/(2 + 1·(1/2)) = 1/5

E_3 = A·E_2/(3 + A·E_2) = (1·(1/5))/(3 + 1·(1/5)) = 1/16

a_1 = A·(E_0 − E_1) = 1/2 [erlang]

a_2 = A·(E_1 − E_2) = (1/2 − 1/5) = 3/10 [erlang]

a_3 = A·(E_2 − E_3) = (1/5 − 1/16) = 11/80 [erlang]
The total carried traffic becomes:

Y = a_1 + a_2 + a_3 = 225/240 = 15/16 ,

which is in agreement with:

Y = A·(1 − p(3)) = 1·(1 − 1/16) = 15/16 .
Question 4:
The state transition diagram becomes as shown in the following figure. All arrivals (arrows to the right) have the rate λ = 2 and all departures (arrows to the left) have the rate µ = 2.
[State transition diagram: states 0, a, b, c, ab, ac, bc, abc; under ordered hunting an arrival (rate λ = 2) seizes the first idle channel in the order a, b, c, and each busy channel is released at rate µ = 2.]
Question 5:
From Question 2 we have the global state probabilities p(i), which are independent of the order of hunting, and which may also be obtained by aggregating the states in Question 4 (we use the denominator 240 to get integer values). The missing state probabilities may be obtained without using flow balance equations (independent of the state transition diagram):
p(0) = 6/16 = 90/240 .

p(1) = p(a) + p(b) + p(c):
6/16 = p(a) + 19/240 + 6/240 ,   so   p(a) = 65/240 .

p(2) = p(ab) + p(ac) + p(bc):
3/16 = 45/240 = p(ab) + 7/240 + 5/240 ,   so   p(ab) = 33/240 .

p(abc) = p(3) = 1/16 = 15/240 .
We may control the result by node balance equations which must be fulfilled.
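The derivation and a sample balance check can be sketched as follows (the transition rates assume ordered hunting with λ = µ = 2, as in Question 4):

```python
from fractions import Fraction as F

# Given micro-state probabilities (Question 5) and aggregates (Question 2):
pb, pc, pac, pbc = F(19, 240), F(6, 240), F(7, 240), F(5, 240)
pa   = F(6, 16) - pb - pc      # p(1) = p(a)+p(b)+p(c)   -> p(a)  = 65/240
pab  = F(3, 16) - pac - pbc    # p(2) = p(ab)+p(ac)+p(bc)-> p(ab) = 33/240
pabc = F(1, 16)                # = 15/240

lam, mu = 2, 2
# Balance check for micro-state c: it is entered only by a departure from
# ac or bc (rate mu each), and left by an arrival (-> ac) or a departure.
assert (lam + mu) * pc == mu * (pac + pbc)

# Traffic carried by each channel (Question 6):
aa = pa + pab + pac + pabc     # 120/240 = 1/2
ab = pb + pab + pbc + pabc     # 72/240  = 3/10
ac = pc + pac + pbc + pabc     # 33/240  = 11/80
assert (aa, ab, ac) == (F(1, 2), F(3, 10), F(11, 80))
print(pa, pab, aa, ab, ac)  # 13/48 11/80 1/2 3/10 11/80
```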
Question 6:
We know that a channel carries one erlang when it is busy. Denoting the traffic carried on channel x by a_x (and leaving out the factor one erlang), we get:
a_a = p(a) + p(ab) + p(ac) + p(abc) = (65 + 33 + 7 + 15)/240 = 120/240 = 1/2 .

a_b = p(b) + p(ab) + p(bc) + p(abc) = (19 + 33 + 5 + 15)/240 = 72/240 = 3/10 .

a_c = p(c) + p(ac) + p(bc) + p(abc) = (6 + 7 + 5 + 15)/240 = 33/240 = 11/80 .
This is in agreement with the values obtained in Question 3: a_a = a₁, a_b = a₂, a_c = a₃, and also with the total carried traffic Y = a_a + a_b + a_c = 15/16.
The proportion of time channel a is busy and channels b and c are idle is:
p(a) = 65/240 = 13/48 .
Updated 2006-06-30
Exercise 4.12 (Exam 2007)
Erlang’s Loss system
We consider Erlang’s loss system with n = 4 channels. The offered traffic is 3 erlang, and the mean holding time is chosen as one time unit.
1. Construct the state transition diagram and find the state probabilities.
2. Find the blocking probability for a random call attempt by using the recursion formula for the Erlang-B formula (show the recursions).
3. Given that a call attempt has been blocked, what is the probability that the next call attempt is also blocked?
4. Find the distribution of the number of calls blocked during a period when all channels are busy.
5. Find the proportion of time the first channel is idle and the other three channels are busy, under the assumption of random hunting.
The following question was not included at the exam:
6. Find the proportion of time the first channel is idle and (at the same time) the other three channels are busy, under the assumption of ordered (sequential) hunting.
Solution to exercise 4.12 (exam 2007)
Question 1:
The state transition diagram of the system becomes as follows:
[State transition diagram: states 0–4; arrival rate λ = 3 in states 0–3; departure rate i·µ = i from state i.]
The state probabilities are:
p(0) = 8/131 ,   p(1) = 24/131 ,   p(2) = 36/131 ,   p(3) = 36/131 ,   p(4) = 27/131 .
Question 2:
Using the recursion formula (4.29)
E_x(A) = A·E_{x−1}(A) / (x + A·E_{x−1}(A)) ,   E_0(A) = 1 ,
we find for A = 3:
E_0(3) = 1 ,

E_1(3) = (3·1)/(1 + 3·1) = 3/4 ,

E_2(3) = (3·(3/4))/(2 + 3·(3/4)) = 9/17 ,

E_3(3) = (3·(9/17))/(3 + 3·(9/17)) = 9/26 ,

E_4(3) = (3·(9/26))/(4 + 3·(9/26)) = 27/131 ,

which is in agreement with Question 1.
Question 3:
If a call has been blocked, then we know that the system is in state [4]. The next call attempt will be blocked if it arrives before the next departure. As the arrival rate is λ = 3 and the departure rate in state [4] is 4µ, the probability of blocking the next call attempt becomes:
p = λ/(λ + 4µ) = 3/7 .
Question 4:
Every time a call attempt has been blocked, the system is in the same state (no memory), so the number of blocked call attempts during a busy period becomes geometrically distributed with mean value m₁ = 7/4 − 1 = 3/4:
p(i) = (λ/(λ + 4µ))^i · (4µ/(λ + 4µ)) = (3/7)^i · (4/7) ,   i = 0, 1, 2, . . . .
Question 5:
When the system is in state [3], three channels are busy and one channel is idle. With random hunting the idle channel is a random channel, so the probability that the first channel is idle becomes:
p = p(3)/4 = 9/131 .
Question 6: (Not included at the exam)
When we have ordered (sequential) hunting, the state “first channel idle and three other channels busy” can only arise by going through the state “all channels busy”, followed by the first channel becoming idle.
State [3] may arise in two ways:
a: by jumping from state 4 to state 3 with intensity 4µ · p(4).
b: by jumping from state 2 to state 3 with intensity λ · p(2).
Knowing that we are in state 3, the conditional probability of having arrived from state 4 thus becomes:

4µ·p(4) / (4µ·p(4) + λ·p(2)) .
Only every fourth time will it be channel number one that first becomes idle. As the probability of being in state 3 is p(3), we find the unconditional probability as
p(1 idle, 2–4 busy) = (1/4) · 4µ·p(4)/(4µ·p(4) + λ·p(2)) · p(3)

Inserting the probabilities obtained in Question 1 we get:

p(1 idle, 2–4 busy) = (1/4) · (4·(27/131))/(4·(27/131) + 3·(36/131)) · (36/131) = 9/262 .
Alternative method of solution:
If we split state [3] up into two states, corresponding to channel one being idle or busy, then we get the following state transition diagram:
The node equation for state (0, 3) becomes:
(λ + 3µ)·p(0, 3) = µ·p(4)

p(0, 3) = µ/(λ + 3µ) · p(4) = 1/(A + 3) · p(4)   q.e.d.
[State transition diagram with state 3 split into (0,3) = “channel one idle, channels 2–4 busy” and (1,2) = “channel one busy, two other channels busy”; state (0,3) is entered only from state 4 by a departure of channel one (rate µ) and is left at rate λ + 3µ.]
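Both methods of solution can be checked in a few lines (a sketch):

```python
from fractions import Fraction as F
from math import factorial

A, n, lam, mu = F(3), 4, 3, 1
# Truncated Poisson state probabilities for A = 3, n = 4:
q = [A ** i / factorial(i) for i in range(n + 1)]
p = [qi / sum(q) for qi in q]            # p(2) = 36/131, ..., p(4) = 27/131

# Method 1: P(arrived from state 4 | in state 3), times 1/4 for channel one:
m1 = F(1, 4) * (4 * mu * p[4]) / (4 * mu * p[4] + lam * p[2]) * p[3]

# Method 2: node balance for the split state (0, 3):
m2 = p[4] / (A + 3)

assert m1 == m2 == F(9, 262)
print(m1)  # 9/262
```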
Updated: 2009-03-04
Exercise 5.7 (exam 1997)
ENGSET’S SYSTEM WITH HYPER-EXPONENTIAL HOLDING TIMES
We consider Engset’s loss system with S = 5 sources. The holding times are hyper-exponentially distributed with the following parameters:
• With probability 1/3 the holding time is exponentially distributed with mean value 2 (intensity 1/2) (phase a).

• With probability 2/3 the holding time is exponentially distributed with mean value 1/2 (intensity 2) (phase b).
[Phase diagram of the hyper-exponential distribution: branch probabilities 1/3 and 2/3, intensities 1/2 and 2.]
1. Find the mean value m1 and the form factor ε of the holding time distribution.
An idle source generates one call attempt per time unit.
2. Find the offered traffic per idle source β, and the total offered traffic A.
The above-mentioned traffic is offered to a fully accessible group with n = 3 channels. The state of the system is given by (i, j), where i denotes the number of busy servers in phase a, and j denotes the number of busy servers in phase b. (Notice that the number of idle sources is S − i − j.)
3. Construct the state transition diagram, where we now have the states shown in the following figure.
[Two-dimensional state diagram with states (i, j): bottom row 00, 10, 20, 30; then 01, 11, 21; then 02, 12; and 03 at the top.]
4. Show that the state transition diagram is reversible.
5. Find the relative state probabilities expressed by state p(0, 0), and then find, by normalisation, the absolute state probabilities.
6. Find the aggregated state probabilities p(x), which indicate that a total of x (x = 0, 1, 2, 3) channels are busy (x = i + j), and show that we find the same state probabilities when the holding times are exponentially distributed with the same mean value (Engset’s system is insensitive to the holding time distribution).
The following questions were not part of the exam.
7. Does the system have product form?
8. Find for both traffic streams the time congestion E, the call congestion B, and the traffic congestion C from the two-dimensional state transition diagram.
Solution to exercise 5.7: (exam 1997)
Question 1:
The values asked for are obtained from (2.67), respectively (2.68):
m₁ = (1/3)·2 + (2/3)·(1/2) = 1 ,

ε = 2·[ (1/3)·(1/(1/2)²) + (2/3)·(1/2²) ] / 1² = 3 .
We may also use the formulæ for combination of stochastic variables in parallel (2.58):
m_ν = Σ_{i=1}^{l} p_i · m_{ν,i} .
The second moment of an exponential distribution with intensity µ is 2/µ² (??), and we find:

m₂ = (1/3)·(2/(1/2)²) + (2/3)·(2/2²) = 3 .
As ε = m₂/m₁², we of course get the same result as above.
Question 2:
From Sec. 5.2.2, formulæ (5.9), (5.10) and (5.11), we get:
Offered traffic per idle source: β = λ·m₁ = 1 erlang ,
Offered traffic per source: α = β/(1 + β) = 1/2 erlang ,
Total offered traffic: A = S·α = 5·(1/2) = 2.5 erlang .
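These quantities follow from a few exact operations (a sketch in rational arithmetic):

```python
from fractions import Fraction as F

# Hyper-exponential holding time: mean 2 with probability 1/3 (phase a),
# mean 1/2 with probability 2/3 (phase b).
branches = [(F(1, 3), F(2)), (F(2, 3), F(1, 2))]

m1 = sum(p * m for p, m in branches)            # first moment = 1
m2 = sum(p * 2 * m ** 2 for p, m in branches)   # E[X^2] of Exp(mean m) is 2 m^2
eps = m2 / m1 ** 2                              # form factor = 3

gamma, S = 1, 5
beta = gamma * m1                # offered traffic per idle source = 1 erlang
alpha = beta / (1 + beta)        # offered traffic per source = 1/2
A = S * alpha                    # total offered traffic = 5/2 erlang

assert (m1, m2, eps, A) == (1, 3, 3, F(5, 2))
print(m1, eps, A)  # 1 3 5/2
```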
Question 3:

[Two-dimensional state transition diagram for states (i, j): each of the S − i − j idle sources generates calls at rate 1, entering phase a with probability 1/3 and phase b with probability 2/3; phase-a departures occur at rate i/2 and phase-b departures at rate 2j.]

Question 4: (Based on the theory from Sec. 7.2)

We find that the circulation flow in all three squares is zero, and therefore the process is reversible:
Square (1,1):   (5/3)·(8/3)·(1/2)·2 = (10/3)·(4/3)·2·(1/2) ,

Square (2,1):   (4/3)·2·1·2 = (8/3)·1·2·1 ,

Square (1,2):   (4/3)·2·(1/2)·4 = (8/3)·1·4·(1/2) .
Question 5:
Relative state probabilities (left: relative to p(0, 0); right: multiplied by 27 to give integers):

j = 3:  10/27                          10
j = 2:  10/9   20/9                    30   60
j = 1:  5/3    40/9   40/9             45   120  120
j = 0:  1      10/3   40/9   80/27     27   90   120  80
        i = 0  i = 1  i = 2  i = 3

Sum of the integer table: ∑ = 702.
The absolute state probabilities become:

j = 3:  10/702
j = 2:  30/702   60/702
j = 1:  45/702   120/702  120/702
j = 0:  27/702   90/702   120/702  80/702
        i = 0    i = 1    i = 2    i = 3

i.e. p₀₀ = 27/702 = 1/26 , p₁₀ = 90/702 , p₂₀ = 120/702 , etc.
Question 6:
[State transition diagram: states 0–3; arrival rates 5, 4, 3 from states 0, 1, 2; departure rates 1, 2, 3 from states 1, 2, 3.]

For the Engset case with 5 sources, 3 channels, β = 1 and average holding time m₁ = 1 we find:

q₀ = 1 ,  q₁ = 5 ,  q₂ = 10 ,  q₃ = 10 ,

p₀ = 1/26 ,  p₁ = 5/26 ,  p₂ = 10/26 ,  p₃ = 10/26 .
From the two-dimensional system with hyper-exponential holding times we find the following global (aggregated) state probabilities:

p(0) = p₀₀ = 1/26 = 0.0385 ,

p(1) = p₀₁ + p₁₀ = 135/702 = 5/26 = 0.1923 ,

p(2) = p₂₀ + p₁₁ + p₀₂ = 270/702 = 10/26 = 0.3846 ,

p(3) = p₃₀ + p₂₁ + p₁₂ + p₀₃ = 270/702 = 10/26 = 0.3846 ,
which is the same as for the one-dimensional system. We have thus shown that Engset’s model is valid for both exponential and hyper-exponential holding times. In fact, it is valid for any holding time distribution (insensitivity).
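The aggregation can be verified mechanically (a sketch; the integer table with denominator 702 is taken from Question 5):

```python
from fractions import Fraction as F
from math import comb

# Relative two-dimensional state probabilities (integer table, denominator 702);
# key (i, j) = (busy phase-a servers, busy phase-b servers):
q2d = {(0, 0): 27, (1, 0): 90, (2, 0): 120, (3, 0): 80,
       (0, 1): 45, (1, 1): 120, (2, 1): 120,
       (0, 2): 30, (1, 2): 60,
       (0, 3): 10}
total = sum(q2d.values())                        # 702
p2d = {k: F(v, total) for k, v in q2d.items()}

# Aggregate over the number of busy channels x = i + j:
agg = [sum(v for (i, j), v in p2d.items() if i + j == x) for x in range(4)]

# One-dimensional Engset with S = 5, beta = 1: q(x) = C(5, x) * beta^x
engset = [F(comb(5, x), 26) for x in range(4)]   # 26 = 1 + 5 + 10 + 10
assert agg == engset
print(agg)
```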
Question 7:
By calculating the marginal distributions p(i, ·) and p(·, j) we notice that there is no product form. Product form requires that:

p(i, j) = p(i, ·) · p(·, j) ,

apart from a constant normalisation factor. Product form requires e.g. that the relative ratio between p(0, j) and p(1, j) is the same for all j (all rows). This is not fulfilled in our case.
We may thus have a process which is reversible (has local balance) and insensitive, without having product form.
Question 8:
Of course, we may also find the time congestion E, call congestion B, and traffic congestion C from the two-dimensional state transition diagram directly from the definitions. We find the same results as for the one-dimensional Engset case.
Updated: 2008-03-13
Exercise 5.9 (Exam 2000)
ENGSET’S LOSS SYSTEM
We consider Engset’s loss system with 3 servers, which are offered traffic from 4 homogeneous sources. An idle source generates calls with intensity γ = 1/2 [calls/time unit], and the service time is exponentially distributed with mean value µ⁻¹ = 1 [time unit].
1. Find the total offered traffic from the 4 sources.
2. Set up the state transition diagram, and find the state probabilities under the assumption of statistical equilibrium.
3. Find the time congestion, call congestion, and traffic congestion, using the results from Questions 1 and 2.
4. Find the distribution (density function) of the number of calls which are blocked during a period where all three servers are busy.
5. Derive the state probabilities of the system by convolving the state probabilities of 4 single sources and truncating the state probabilities at state 3.
Solution to Exercise 5.9 (Exam 2000)
Question 1:
When there is no blocking, a single source has the following time-dependent evolution: it alternates between Off periods (idle, mean duration λ⁻¹) and On periods (busy, mean duration µ⁻¹). The state probabilities of a single source become:
p(0) = 2/3 ,

p(1) = 1/3 .
Offered traffic per idle source (5.9):
β = γ/µ = 1/2 .
Offered traffic per source (= carried traffic in a system with no blocking) (5.10):

α = β/(1 + β) = 1/3 .
The total offered traffic (= carried traffic with no blocking) of the four sources becomes (5.11):

A = 4/3 [erlang] .
Question 2:
The resulting state transition diagram is shown in the following figure. The relative state probabilities q(i) and the absolute state probabilities p(i) become:

q(0) = 1      p(0) = 2/10
q(1) = 2      p(1) = 4/10
q(2) = 3/2    p(2) = 3/10
q(3) = 1/2    p(3) = 1/10
[State transition diagram: states 0–3; arrival rates (4 − i)·γ = 2, 3/2, 1 from states 0, 1, 2; departure rates i·µ = 1, 2, 3 from states 1, 2, 3.]
Question 3:
The time congestion E is the proportion of time all three channels are busy:
E = p(3) = 1/10 .
The call congestion B is the proportion of call attempts which are blocked. If we denote the arrival intensity (rate) in state i by γ_i, then we have:

B = γ₃·p(3) / (γ₀·p(0) + γ₁·p(1) + γ₂·p(2) + γ₃·p(3))

  = ((1/2)·(1/10)) / (2·(2/10) + (3/2)·(4/10) + 1·(3/10) + (1/2)·(1/10)) ,

B = 1/27 = 0.0370 .
The traffic congestion C is the proportion of the offered traffic which is blocked:

C = (A − Y)/A ,

where Y is the carried traffic:

Y = Σ_{i=0}^{3} i·p(i) = 1·(4/10) + 2·(3/10) + 3·(1/10) = 13/10 .

C = (4/3 − 13/10)/(4/3) ,

C = 1/40 = 0.0250 .
C may also be obtained from (5.34):

C = (S − n)/S · E = ((4 − 3)/4) · (1/10) = 1/40   q.e.d.
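All three congestion measures follow from the state probabilities (a sketch in exact arithmetic):

```python
from fractions import Fraction as F
from math import comb

S, n, beta = 4, 3, F(1, 2)     # sources, channels, offered traffic per idle source
q = [comb(S, i) * beta ** i for i in range(n + 1)]   # Engset: q(i) ~ C(S,i) beta^i
p = [qi / sum(q) for qi in q]                        # [2/10, 4/10, 3/10, 1/10]

E = p[n]                                             # time congestion
# Call congestion: arrival rate in state i is (S - i)*gamma; gamma cancels.
B = (S - n) * p[n] / sum((S - i) * p[i] for i in range(n + 1))
Y = sum(i * p[i] for i in range(n + 1))              # carried traffic 13/10
A = S * beta / (1 + beta)                            # offered traffic 4/3
C = (A - Y) / A                                      # traffic congestion

assert (E, B, C) == (F(1, 10), F(1, 27), F(1, 40))
print(E, B, C)  # 1/10 1/27 1/40
```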
Question 4:
During a busy period we are in state 3. The next call attempt is blocked if it arrives before an existing call terminates. As the arrival intensity in state 3 is 1/2 and the departure intensity is 3, we get:

p(blocking) = 1 − p = (1/2)/(3 + 1/2) = 1/7 .

In a similar way, the probability that the next call attempt is accepted becomes:

p = 6/7 .
Due to the Markov property (exponentially distributed time intervals) the process has no memory, and after a blocked call attempt the system is still in the same state. Therefore, the number of blocked call attempts during a busy period becomes geometrically distributed:

p(i) = (1 − α)^i · α = (1/7)^i · (6/7) ,   i = 0, 1, 2, . . . .
The distribution starts at i = 0 and thus has the mean value (cf. the text of Tab. 3.1):

1/p − 1 = 7/6 − 1 = 1/6 .
Question 5:
From the state probabilities of a single source, which we obtained in Question 1, we get:

state   1 source   2 sources   3 sources   4 sources
0       2/3        4/9         8/27        16/81
1       1/3        4/9         12/27       32/81
2                  1/9         6/27        24/81
3                              1/27        8/81
4                                          1/81
Note that the call congestion for 4 sources in Question 2 is equal to the time congestion with 3 sources (1/27).
We truncate the state probabilities at 3 channels and get:

State   Unnormalised   Normalised
  0        16/81       16/80 = 2/10
  1        32/81       32/80 = 4/10
  2        24/81       24/80 = 3/10
  3         8/81        8/80 = 1/10

This is of course the same as we found in Question 2.
Updated 2005-02-23
Technical University of Denmark                    Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group                      Course 34 340
Exercise 5.10 (exam 2001)
LOSS SYSTEM WITH STATE–DEPENDENT ARRIVAL INTENSITY
We consider a loss system with n = 2 channels. The state of the system i is defined as the number of busy channels. Customers arrive according to a state-dependent Poisson process with intensity

γ(i) = (3 − i)/(4 − i) · γ  [customers per time unit] ,   0 ≤ i ≤ 3 .

For all other states γ(i) = 0. We choose γ = 1 customer per time unit, and the service time is exponentially distributed with intensity μ = 1 customer per time unit.
1. Construct the state transition diagram of the system.
2. Find the state probabilities of the system under the assumption of statistical equilibrium, and give the time congestion E.

3. Find the state probabilities π(i) as they are observed by an arbitrary arriving customer, and find the call congestion B.

4. Find the offered traffic, which is defined as the traffic carried in a system without blocking, and find the traffic congestion C.

5. Assume that both channels are busy. What is the probability that the next event is a call attempt (which of course will be blocked)? Find the distribution of the number of calls which are blocked during a busy period.

6. Give the state probabilities as they are seen by a customer which just has departed from the system. We include customers which are blocked.
Solution to exercise 5.10 (exam 2001)
Question 1:
With the given arrival rates (= intensities) we get the following state transition diagram (the closed loop in state 2 is usually not included):

(State transition diagram: states 0, 1, 2; arrival rates 3/4 and 2/3 upwards, departure rates 1 and 2 downwards, and the blocked arrival stream 1/2 as a closed loop in state 2.)

We notice that γ(3) is also given, but it has no influence upon the state transition diagram because it is zero: γ(3) = 0.
Question 2:
Denoting the non-normalised state probabilities by q(i) and the normalised state probabilities by p(i) we find:

q(0) = 1      p(0) = 4/8 ,
q(1) = 3/4    p(1) = 3/8 ,
q(2) = 1/4    p(2) = 1/8 ,

Sum = 2       Sum = 1 .

The time congestion is the proportion of time all channels are busy:

E = p(2) = 1/8 .
Question 3:
The number of customers arriving when the system is in a given state i is proportional both to the state probability p(i) and to the arrival rate γ(i). Per time unit the following numbers of customers arrive in the different states:

State 0: 3/4 · p(0) = 6/16 ,
State 1: 2/3 · p(1) = 4/16 ,
State 2: 1/2 · p(2) = 1/16 ,
Total:              11/16 .

The above is the average number of calls arriving during one time unit. Thus we get the following call-average arrival state probabilities:

π(0) = 6/11 ,   π(1) = 4/11 ,   π(2) = 1/11 .

The call congestion is the proportion of all call attempts blocked:

B = π(2) = 1/11 .
Question 4:
The offered traffic is defined as the traffic carried in a system without blocking. If we have three channels, then no calls are blocked (γ(3) = 0). We get the following state transition diagram:

(State transition diagram: states 0–3; arrival rates 3/4, 2/3, 1/2 upwards and departure rates 1, 2, 3 downwards.)

Extending the results in Question 2 with one state more, we find:

q(0) = 1      p(0) = 24/49 ,
q(1) = 3/4    p(1) = 18/49 ,
q(2) = 1/4    p(2) = 6/49 ,
q(3) = 1/24   p(3) = 1/49 ,

Sum = 49/24   Sum = 1 .

The offered traffic then becomes:

A = Σ_{i=0}^{3} i · p(i) = 33/49 = 0.6735 .
The carried traffic is obtained from the state probabilities in Question 2:

Y = Σ_{i=0}^{2} i · p(i) = 5/8 = 0.6250 .

Thus the traffic congestion becomes:

C = (33/49 − 5/8) / (33/49) = 19/264 = 0.0720 .
Question 5:
In the state where both channels are occupied, the arrival rate is γ(2) = 1/2 and the departure rate is 2μ = 2. Thus the probability that the next event is an arrival becomes (2.42):

p(arrival) = γ(2) / (γ(2) + 2μ) = (1/2) / (1/2 + 2) = 1/5 .

The call attempt is blocked and the state does not change. As the process is without memory, the number of blocked calls during a busy period becomes geometrically distributed:

p(i) = (1/5)^i · (1 − 1/5) ,   i = 0, 1, . . . .

This is a geometric distribution starting in i = 0. The mean value is (text to Tab. 3.1):

m1 = 1/p − 1 = 5/4 − 1 = 1/4 .
Question 6:
We proceed in the same way as in Question 3. Per time unit, the numbers of customers departing from the different states, each observing the following state when they look back, are:

State 0: 1 · p(1) = 6/16 ,
State 1: 2 · p(2) = 4/16 ,
State 2: 1/2 · p(2) = 1/16 .

Notice that we have two types of calls departing from state 2: served calls and blocked calls. The customers who see state 2 are the blocked customers. Thus we get the following call-average departure state probabilities by normalising by the total number of calls departing per time unit (11/16):

π(0) = 6/11 ,   π(1) = 4/11 ,   π(2) = 1/11 .

These state probabilities are identical with the state probabilities observed by an arriving customer in Question 3, because the process is reversible. The customers on average see the same state when they arrive as they see when they depart.
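The equality of arrival-seen and departure-seen state distributions can be verified directly from the time averages; a sketch in exact arithmetic (variable names are ours):

```python
from fractions import Fraction as F

mu = F(1)
lam = [F(3 - i, 4 - i) for i in range(3)]     # gamma(i) = (3-i)/(4-i), i = 0, 1, 2

# time-average state probabilities from the birth-death recursion
q = [F(1)]
for i in range(2):
    q.append(q[-1] * lam[i] / ((i + 1) * mu))
total = sum(q)
p = [x / total for x in q]                     # [1/2, 3/8, 1/8]

# state seen by an arriving customer: proportional to gamma(i) * p(i)
arr = [lam[i] * p[i] for i in range(3)]
s = sum(arr)
arr = [x / s for x in arr]

# state seen (looking back) by a departing customer: served calls leave state
# i+1 behind at rate (i+1)*mu*p(i+1); blocked calls "depart" from state 2 at
# rate gamma(2)*p(2) and see state 2
dep = [1 * mu * p[1], 2 * mu * p[2], lam[2] * p[2]]
s = sum(dep)
dep = [x / s for x in dep]

assert arr == dep == [F(6, 11), F(4, 11), F(1, 11)]
```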
Updated: 2009-03-24
Exercise 5.11 (exam 2003)
Aloha model with Engset traffic
We consider an Engset model with S = 4 sources. The mean holding time is chosen as time unit (μ⁻¹ = 1). The arrival rate of an idle source is γ = 1/3. Both time intervals are exponentially distributed. The number of channels is infinite, i.e. n ≥ S. The state of the system is defined as the number of busy channels.
The above system is a model of a non-slotted Aloha system with S transmitters and exponentially distributed packet lengths.
1. Find the offered traffic A.
2. Construct the state transition diagram and find, under the assumption of statistical equilibrium, the state probabilities p(i), (i = 0, 1, . . . , 4).

3. Find the state probabilities π(i), (i = 0, 1, . . . , 4), as they are observed by an arriving customer just before arrival (call averages). (Use either the state probabilities obtained in Question 2 as starting point, or use the arrival theorem.)

4. What is the probability that a call arriving in state zero (and thus changing the state of the system into state one) will complete service before the next call arrives? This corresponds to a successful call transmission in the Aloha protocol.
5. What is the mean holding time of successfully transmitted calls?
Solution to exercise 5.11 (exam 2003)
Question 1:
The Engset case is dealt with in Chap. 5. The offered traffic becomes (5.11):

A = S · α = S · β/(1 + β) ,   where β = γ/μ = 1/3 ,

A = 4 · (1/3)/(1 + 1/3) = 1 [erlang] .

Alternative approach: If no calls are blocked, a source will alternate between being idle for three time units and busy for one time unit (mean values). In this case the carried traffic is equal to the offered traffic per source, 1/4 erlang (the source is busy 25 % of the time). As we have 4 sources, the total offered traffic becomes 1 [erlang].
Question 2:
We get the following state transition diagram (cf. Fig. 5.4):

(State transition diagram: states 0–4; arrival rates 4/3, 3/3, 2/3, 1/3 upwards and departure rates 1, 2, 3, 4 downwards.)

If we denote the non-normalised state probabilities by q(i) and the normalised state probabilities by p(i), we find:

q(0) = 1      p(0) = 81/256 ,
q(1) = 4/3    p(1) = 108/256 ,
q(2) = 2/3    p(2) = 54/256 ,
q(3) = 4/27   p(3) = 12/256 ,
q(4) = 1/81   p(4) = 1/256 ,

Sum = 256/81  Sum = 1 .
Question 3:
During one time unit the average number of customers arriving in each state is:

State 0: 4/3 · 81/256  = 27/64 ,
State 1: 3/3 · 108/256 = 27/64 ,
State 2: 2/3 · 54/256  = 9/64 ,
State 3: 1/3 · 12/256  = 1/64 ,
State 4: 0/3 · 1/256   = 0 ,
Total:                   1 .

In general, these numbers do of course not add to one; they are numbers of calls, not probabilities. If we considered two time units, they would add to two.

After normalising the number of calls arriving in each state by the total number of calls arriving per time unit, we obtain the state probabilities observed by arriving calls:

π(0) = 27/64 ,   π(1) = 27/64 ,   π(2) = 9/64 ,   π(3) = 1/64 ,   π(4) = 0 .
Alternatively we could use the arrival theorem, saying that the call averages are equal to the time averages with one source less, i.e. the state probabilities of a system with S = 3 sources and n ≥ 3 channels.

(State transition diagram: states 0–3; arrival rates 3/3, 2/3, 1/3 upwards and departure rates 1, 2, 3 downwards.)

The non-normalised state probabilities q(i) and the normalised state probabilities p(i) become:

q(0) = 1      p(0) = 27/64 ,
q(1) = 1      p(1) = 27/64 ,
q(2) = 1/3    p(2) = 9/64 ,
q(3) = 1/27   p(3) = 1/64 ,

Sum = 64/27   Sum = 1 .

This is of course the same result as obtained above.
Question 4:
A customer arriving in state zero brings the system into state one. In state one, new calls arrive with rate 3 · γ = 1, and the holding time of the considered customer terminates with rate 1.

The probability that the first event is completion of the holding time becomes (Sec. 2.2.7):

p(complete service before new arrival) = 1/(1 + 1) = 1/2 .
Question 5:
(Phase diagram: every call first spends an exponential phase with rate 2; with probability 1/2 it ends there (success), and with probability 1/2 a collision occurs, followed by a second exponential phase with rate 1.)

In Question 4 the next event took place after an exponential time interval with total rate equal to 2, i.e. successful calls terminate after an exponentially distributed time interval with rate 2 and therefore have the mean holding time:

m1,success = 1/2 [time units] .

Unsuccessful calls have two phases. First they have an exponential time interval with mean 1/2, just as the successful calls. After the collision they still have a remaining service time with mean one, as the exponential service time has no memory. So the total mean value for unsuccessful calls becomes:

m1,collision = 1/2 + 1 = 3/2 [time units] .
Thus short calls are in general more successful than longer calls, which is quite obvious.

The global mean value over successful and unsuccessful calls becomes one, which is the mean service time for all calls. The phase diagram above describes the distribution of successful and unsuccessful calls. By comparison with Fig. 2.11 we see that this distribution is equivalent to a single exponential distribution with mean value one.
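The last remark can be checked by mixing the two conditional means; a one-line sketch in exact arithmetic:

```python
from fractions import Fraction as F

p_success = F(1, 2)
m_success = F(1, 2)            # Exp(2) first phase only
m_collision = F(1, 2) + 1      # first phase plus a fresh Exp(1) residual time

# the mixture recovers the overall Exp(1) mean holding time
m_total = p_success * m_success + (1 - p_success) * m_collision
assert m_total == 1
```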
Updated: 2007-03-13
Exercise 5.12 (exam 2005)
Engset model with inhomogeneous sources
We consider a full accessible Engset loss system with n = 3 channels. The system is offered traffic from S = 4 sources. The arrival rate of an idle source is γ1 = 1/2 call attempts per time unit. The mean holding time is chosen as time unit (μ1⁻¹ = 1). All time intervals are exponentially distributed. The state of the system is defined as the number of busy channels, and every busy source occupies one channel.
1. Find the offered traffic.
2. Find the state probabilities of the system by convolving the state probabilities of the 4 individual sources, truncating at 3 channels, and normalising.
3. Find the time congestion E, the call congestion B, and the traffic congestion C.
We now add a source, different from the above sources, having both mean idle time and mean holding time equal to one time unit (γ2 = μ2 = 1). The source occupies one channel when it is busy.
4. Find, by convolving this source with the above system, the time congestion for both types of calls.

5. Find the call congestion for both types of calls by applying the arrival theorem.

6. Find the traffic congestion for both types of sources by looking at the individual terms during the convolution (use eventually a two-dimensional state transition diagram as an aid).
Solution to exercise 5.12 (exam 2005)
Question 1:
If there is no congestion, then on average a source is on for one time unit and off for two time units. So the offered traffic per source is α = 1/3 (5.10). The total offered traffic becomes (5.11):

A = S · α = 4/3 [erlang] .
Question 2:
The state probabilities of a single source are (the Binomial distribution for a single source is the Bernoulli distribution):

p(0) = 2/3 ,   p(1) = 1/3 .

Using the relative state probabilities 2 : 1, we get by convolving the 4 sources one by one:

State   S1   S2   S12   S3   S123   S4   S1234   Truncated   Normalised
  0      2    2     4    2      8    2      16        16        2/10
  1      1    1     4    1     12    1      32        32        4/10
  2      0    0     1    0      6    0      24        24        3/10
  3      0    0     0    0      1    0       8         8        1/10
  4      0    0     0    0      0    0       1         –          –
In the rightmost column we have the state probabilities. As a control we may of course use the formulæ in the textbook (5.26) or a state transition diagram:

(State transition diagram: states 0–3; arrival rates 4/2, 3/2, 2/2 upwards and departure rates 1, 2, 3 downwards.)

We see that the above state probabilities fulfil the node balance equations and add to one. Therefore, this is the unique solution.
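The convolve-truncate-normalise procedure used above generalises directly; a small sketch (the helper name `convolve` is ours):

```python
from fractions import Fraction as F

def convolve(p, q):
    # convolution of two independent state distributions
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

single = [F(2, 3), F(1, 3)]          # one Engset source, gamma1 = 1/2, mu1 = 1
dist = [F(1)]
for _ in range(4):                    # convolve the 4 sources one by one
    dist = convolve(dist, single)     # Binomial(4, 1/3): 16, 32, 24, 8, 1 (over 81)

n = 3                                 # truncate at n channels and renormalise
trunc = dist[: n + 1]
total = sum(trunc)
p = [x / total for x in trunc]

assert p == [F(2, 10), F(4, 10), F(3, 10), F(1, 10)]
```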
Question 3:
From the state probabilities we get:

E = p(3) = 1/10 = 0.1000 .

Using the arrival theorem, the call congestion is the time congestion with one source less, i.e. three sources. From the table for 3 sources (S123) we get:

B = 1/(8 + 12 + 6 + 1) = 1/27 = 0.0370 .

The carried traffic is:

Y = Σ_{i=0}^{3} i · p(i) = 0 · 2/10 + 1 · 4/10 + 2 · 3/10 + 3 · 1/10 = 13/10 .

As the offered traffic is A = 4/3 (Question 1), we get:

C = (A − Y)/A = (4/3 − 13/10)/(4/3) ,

C = 1/40 = 0.0250 .

We may of course also use the formulæ of the textbook, e.g. (5.34):

C = (S − n)/S · E = (4 − 3)/4 · 1/10 = 1/40 , q.e.d.
Question 4:
State   S1234   S5   Convolution S12345        Normalised
  0       2      1   2·1 = 2                   2/19 = 0.1053
  1       4      1   2·1 + 4·1 = 6             6/19 = 0.3158
  2       3      0   2·0 + 4·1 + 3·1 = 7       7/19 = 0.3684
  3       1      0   2·0 + 4·0 + 3·1 + 1·1 = 4 4/19 = 0.2105

We still consider a system with 3 channels. The time congestion for both types of sources (both requiring one channel) becomes:

E1 = E2 = 4/19 = 0.2105 .
Question 5:
The arrival theorem tells us that the call congestion of a source is equal to the time congestion of the same system without this source. Thus we get from Question 3:

B2 = 1/10 = 0.1000 .

The call congestion of type 1 is the time congestion of a system with 3 type-1 sources (S123) and the type-2 source (S5):

State   S123   S5   S1235   Normalised
  0       8     1      8    8/53 = 0.1509
  1      12     1     20    20/53 = 0.3774
  2       6     0     18    18/53 = 0.3396
  3       1     0      7    7/53 = 0.1321

Thus the call congestion of type-1 sources becomes:

B1 = 7/53 = 0.1321 .
Question 6:
From the convolution scheme in the table in Question 4 we see that the first term of each convolution sum (factor 2) corresponds to zero erlang carried by type-1 sources, the second term (factor 4) to one erlang, the third term (factor 3) to two erlang, and the last term (factor 1) to three erlang. Remembering that the normalisation factor is 1/19 and disregarding zero terms, we thus get the carried traffic:

Y1 = {1 · (4·1 + 4·1) + 2 · (3·1 + 3·1) + 3 · (1·1)} / 19 = 23/19 .

As the offered traffic is A1 = 4/3, we get the following traffic congestion for sources of type 1:

C1 = (4/3 − 23/19)/(4/3) ,

C1 = 7/76 = 0.0921 .
In a similar way we identify the terms contributing to traffic of type 2 (each carrying 1 erlang) as the terms on the second diagonal of the table in Question 4:

Y2 = (2·1 + 4·1 + 3·1)/19 = 9/19 .

C2 = (A2 − Y2)/A2 = (1/2 − 9/19)/(1/2) = (19 − 18)/19 ,

C2 = 1/19 = 0.0526 .
This question may also be solved by considering the two-dimensional state transition diagram of the two streams: one with the four identical sources (horizontally), and one with the additional source (vertically).

(Two-dimensional state transition diagram over states (i, j) with i = 0, . . . , 3 type-1 calls and j = 0, 1 type-2 calls, i + j ≤ 3; horizontal arrival rates (4 − i)/2 and departure rates i, vertical arrival rate 1 and departure rate j.)

As the state transition diagram is reversible (Chapter 7), we get the following relative state probabilities, with row sums and column marginals shown:

j = 1:        2   4   3        (sum  9)
j = 0:        2   4   3   1    (sum 10)
i-marginals:  4   8   6   1    (total 19)

The normalisation constant is 19. From the marginal state probabilities we get the carried traffic of each type:

Y1 = (0 · 4 + 1 · 8 + 2 · 6 + 3 · 1)/19 = 23/19 ,

Y2 = (0 · 10 + 1 · 9)/19 = 9/19 ,

which are the values obtained above.
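The two-dimensional bookkeeping can also be done by brute-force enumeration of the product states; a sketch (names are ours):

```python
from fractions import Fraction as F

# relative state probabilities of the two streams (from Questions 2 and 4)
s1 = [F(2), F(4), F(3), F(1)]        # four identical type-1 sources
s2 = [F(1), F(1)]                    # the extra source, gamma2 = mu2 = 1
n = 3

# enumerate the two-dimensional states (i, j), keeping only i + j <= n
states = {(i, j): s1[i] * s2[j]
          for i in range(len(s1)) for j in range(len(s2)) if i + j <= n}
norm = sum(states.values())          # normalisation constant, = 19

# carried traffic per stream from the (relative) two-dimensional probabilities
Y1 = sum(i * q for (i, j), q in states.items()) / norm
Y2 = sum(j * q for (i, j), q in states.items()) / norm

assert norm == 19
assert (Y1, Y2) == (F(23, 19), F(9, 19))
```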
Updated: 2006-02-27
Exercise 5.13 (Exam 2008)
Engset’s loss system
We consider Engset’s loss system with n = 4 channels. The traffic is generated by S = 6 sources. An idle source generates call attempts with intensity γ = 1 call attempt per time unit. The mean service time is μ⁻¹ = 1 time unit.
1. Find the offered traffic.
2. Construct the state transition diagram and find the state probabilities, time congestion, traffic congestion, and call congestion.

3. Find the time congestion by using the recursion formula for an increasing number of channels (show details of the recursions).

4. Given that a call attempt has been blocked, what is the probability that the next call attempt also will be blocked?

5. Find the proportion of time the first channel is idle and the other three channels are busy, under the assumption of random hunting.
Solution to exercise 5.13 (exam 2008)
Question 1:
The offered traffic is given by (5.9), (5.10), (5.11):

β = γ/μ = 1 ,   a = β/(1 + β) ,

A = S · a = S · β/(1 + β) = 6 · 1/(1 + 1) ,

A = 3 [erlang] .
Question 2:
The state transition diagram of the system becomes as follows:

(State transition diagram: states 0–4; arrival rates 6, 5, 4, 3 upwards and departure rates 1, 2, 3, 4 downwards.)

The state probabilities are:

p(0) = 1/57 ,   p(1) = 6/57 ,   p(2) = 15/57 ,   p(3) = 20/57 ,   p(4) = 15/57 .

Time congestion E:

E = p(4) = 15/57 = 5/19 = 0.2632 .
Traffic congestion C:

The carried traffic is:

Y = Σ_{i=0}^{4} i · p(i) = 0 · 1/57 + 1 · 6/57 + 2 · 15/57 + 3 · 20/57 + 4 · 15/57

  = 156/57 = 52/19 = 2.7368 ,

C = (A − Y)/A = (3 − 52/19)/3 = 5/57 = 0.0877 .

Call congestion B:

B = λ(n) · p(n) / Σ_{i=0}^{n} λ(i) · p(i)

  = 2 · p(4) / {6 · p(0) + 5 · p(1) + 4 · p(2) + 3 · p(3) + 2 · p(4)}

  = (30/57) / (6/57 + 30/57 + 60/57 + 60/57 + 30/57)

  = 5/31 = 0.1613 .

Alternatively, or as a control, we could also use (5.46) to find C from E:

C = (S − n)/S · E = (6 − 4)/6 · 5/19 = 5/57 , q.e.d.

and B can then be obtained from C by (5.49):

B = (1 + β)C/(1 + βC) = 2C/(1 + C) = 5/31 , q.e.d.
Question 3:
Using the recursion formula (5.52) for Engset’s formula for an increasing number of channels:

E_{x,S}(β) = (S − x + 1)β · E_{x−1,S}(β) / {x + (S − x + 1)β · E_{x−1,S}(β)} ,   E_{0,S}(β) = 1 ,

we get for S = 6 and β = 1, denoting E_{x,S}(β) by E_x:

E_x = (7 − x) · E_{x−1} / {x + (7 − x) · E_{x−1}} ,   E_0 = 1 ,

E_1 = 6 · 1 / (1 + 6 · 1) = 6/7 ,

E_2 = 5 · (6/7) / (2 + 5 · (6/7)) = 15/22 ,

E_3 = 4 · (15/22) / (3 + 4 · (15/22)) = 10/21 ,

E_4 = 3 · (10/21) / (4 + 3 · (10/21)) = 15/57 = 5/19 ,

in agreement with earlier results.
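The recursion (5.52) is easily programmed; a sketch using exact arithmetic (the function name `engset_E` is ours):

```python
from fractions import Fraction as F

def engset_E(n, S, beta):
    # time congestion by the recursion
    # E_x = (S-x+1)*beta*E_{x-1} / (x + (S-x+1)*beta*E_{x-1}),  E_0 = 1
    E = F(1)
    for x in range(1, n + 1):
        t = (S - x + 1) * beta * E
        E = t / (x + t)
    return E

assert engset_E(1, 6, F(1)) == F(6, 7)
assert engset_E(2, 6, F(1)) == F(15, 22)
assert engset_E(3, 6, F(1)) == F(10, 21)
assert engset_E(4, 6, F(1)) == F(5, 19)    # = 15/57, as above
```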
Question 4:
If a call has been blocked, then we know that the system is in state [4]. The next call attempt will be blocked if it arrives before the next departure. As the arrival rate in state [4] is λ(4) = 2 and the departure rate in state [4] is 4μ = 4, the probability of blocking the next call attempt becomes:

p = λ(4) / (λ(4) + 4μ) = 2/(2 + 4) = 1/3 .
Question 5:
When the system is in state [3], three channels are busy and one channel is idle. With random hunting, the idle channel is a random channel, so the probability that the first channel is the idle one becomes:

p = p(3)/4 = 5/57 .

Note: this exercise with Engset traffic is similar to exercise 4.13 from exam 2007, which had Erlang traffic.
Updated: 2008-05-27
Exercise 5.14 (Exam 2009)
Engset’s loss system and insensitivity to idle times
We consider Engset’s loss system with n = 3 channels. The traffic is generated by S = 5 sources. An idle source generates call attempts with intensity γ = 1 [call attempt/time unit]. The mean service time is μ⁻¹ = 1 [time unit].
1. Find the offered traffic.
2. Construct the state transition diagram and find the state probabilities, time congestion, traffic congestion, and call congestion.
We now want to indicate by an example that the above state probabilities are insensitive to the idle-time distribution. We assume that the idle time is Erlang-2 distributed (phase a and phase b) with the same rate 2γ in both phases, so that the mean idle time is still one [time unit] as above:

(Phase diagram: phase a (rate 2γ) followed by phase b (rate 2γ).)

We define the state of the system as (i, j, k), where i is the number of idle sources in the first phase (a), j is the number of idle sources in the second phase (b), and k is the number of busy sources (or channels). Note that i + j + k = 5, so that the state transition diagram is only two-dimensional.
3. Fill in the transition rates in the two-dimensional state transition diagram given below.
To be insensitive, it can be shown that for a given number of busy sources k, corresponding to a row in the state transition diagram, the number of idle sources in phases a and b must be Binomially distributed, so that

p(i, j | k) = C(5−k, i) · {m1,a/(m1,a + m1,b)}^i · {m1,b/(m1,a + m1,b)}^{5−k−i} ,

where C(5−k, i) denotes the binomial coefficient. Inserting the actual values (m1,a = m1,b) we get:

p(i, j, k) = C(5−k, i) · (1/2)^{5−k} · p(k) ,   k = 0, 1, 2, 3 ,
where p(k) are the state probabilities obtained above in Question 2.
4. Find these state probabilities. (Express for example all state probabilities as fractions x/832; then all values of x become integers and p(5, 0, 0) = 1/832.) Show that the state probabilities fulfil the node balance equations by considering the node balance equations for state (1, 2, 2).

5. Find an expression for the call congestion B from the two-dimensional state probabilities. (As a control we get the same numerical value as in Question 2, which indicates that the Engset model is insensitive to the idle-time distribution.)
5,0,0 4,1,0 3,2,0 2,3,0 1,4,0 0,5,0
4,0,1 3,1,1 2,2,1 1,3,1 0,4,1
3,0,2 2,1,2 1,2,2 0,3,2
2,0,3 1,1,3 0,2,3
Solution to exercise 5.14 (exam 2009)
Question 1:
From the given parameters we get:

β = γ/μ = 1 ,   a = β/(1 + β) = 1/2 ,

A = S · a = 5/2 [erlang] .
Question 2:
(State transition diagram: states 0–3; arrival rates 5, 4, 3 upwards and departure rates 1, 2, 3 downwards.)

State   q(i)   p(i)
  0       1    1/26
  1       5    5/26
  2      10    10/26
  3      10    10/26

The time congestion becomes:

E = p(3) = 10/26 = 5/13 = 0.3846 .
To find the traffic congestion C we first find the carried traffic:

Y = Σ_{i=0}^{3} i · p(i) = 1 · 5/26 + 2 · 10/26 + 3 · 10/26 = 55/26 ,

C = (A − Y)/A = (5/2 − 55/26)/(5/2) = 2/13 = 0.1538 .

It is simpler to use (5.46):

C = (S − n)/S · E = (5 − 3)/5 · 5/13 = 2/13 , q.e.d.

The call congestion becomes (5.49):

B = (1 + β) · C/(1 + β · C) = (1 + 1) · (2/13)/(1 + 2/13) = 4/15 = 0.2667 .
Question 3:
The completed state transition diagram is shown below.

We may also find the call congestion B as the time congestion of a system with one source less (arrival theorem). This system has the relative state probabilities q(i) = 1 : 4 : 6 : 4:

(State transition diagram: states 0–3; arrival rates 4, 3, 2 upwards and departure rates 1, 2, 3 downwards.)

and thus the time congestion 4/15, q.e.d.

Alternatively, in (5.45) B is expressed directly by E.
(Completed two-dimensional state transition diagram. From state (i, j, k): phase-a completions (i, j, k) → (i−1, j+1, k) with rate i · 2γ, i.e. 10γ, 8γ, 6γ, 4γ, 2γ in the row k = 0, then 8γ, 6γ, 4γ, 2γ in the row k = 1, and so on; call attempts (i, j, k) → (i, j−1, k+1) with rate j · 2γ; service completions (i, j, k) → (i+1, j, k−1) with rate k · μ = 1, 2, 3.)
Question 4:
The relative state probabilities q(i, j, k), which should all be divided by 832 to be normalised, become (rows by k, columns by j = 0, 1, . . . , 5 − k, with i = 5 − k − j):

k    q(k)   q(i, j, k)
3    320     80  160   80
2    320     40  120  120   40
1    160     10   40   60   40   10
0     32      1    5   10   10    5    1

q(j) 832    131  325  270   90   15    1    (column sums)
j             0    1    2    3    4    5
For state (1,2,2) we have the following flows:
Flow out = (4γ + 2γ + 2μ) · p(1, 2, 2) = 8 · 120/832 = 960/832 .

Flow in = 4γ · p(2, 1, 2) + 3μ · p(0, 2, 3) + 6γ · p(1, 3, 1)

        = 4 · 120/832 + 3 · 80/832 + 6 · 40/832 = 960/832 .

Thus Flow out = Flow in for this node. This flow balance is fulfilled for all nodes.
Question 5:
In state (i, j, k), j is the number of idle sources generating new call attempts, each source having the rate 2γ. From the state transition diagram we notice that a fixed value of j corresponds to a column of states. Thus the number of call attempts per time unit can be written as (remember that i is a function of j and k):

na = Σ_{j=0}^{S} Σ_{k=0}^{min(n, S−j)} j · (2γ) · p(i, j, k)

   = 1 · (2γ) · {p(4,1,0) + p(3,1,1) + p(2,1,2) + p(1,1,3)}

   + 2 · (2γ) · {p(3,2,0) + p(2,2,1) + p(1,2,2) + p(0,2,3)}

   + 3 · (2γ) · {p(2,3,0) + p(1,3,1) + p(0,3,2)}

   + 4 · (2γ) · {p(1,4,0) + p(0,4,1)}

   + 5 · (2γ) · p(0,5,0)
na = 2 · 325/832 + 4 · 270/832 + 6 · 90/832 + 8 · 15/832 + 10 · 1/832

   = 2400/832 [call attempts per time unit] .

Call attempts arriving in states (i, j, 3) are blocked. So the number of blocked call attempts per time unit becomes:

nb = Σ_{j=1}^{S−n} j · (2γ) · p(2 − j, j, 3)

   = 1 · (2γ) · p(1,1,3) + 2 · (2γ) · p(0,2,3)

   = 2 · 160/832 + 4 · 80/832 = 640/832 .

Thus the call congestion becomes:

B = nb/na = 640/2400 ,

B = 4/15 , q.e.d.
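The whole two-dimensional computation can be reproduced programmatically; a sketch in exact rational arithmetic (names are ours):

```python
from fractions import Fraction as F
from math import comb

gamma = F(1)
S, n = 5, 3
p_k = [F(1, 26), F(5, 26), F(10, 26), F(10, 26)]   # p(k) from Question 2

# p(i, j, k) = C(S-k, i) * (1/2)**(S-k) * p(k),  with i + j + k = S
p = {}
for k in range(n + 1):
    for i in range(S - k + 1):
        j = S - k - i
        p[(i, j, k)] = comb(S - k, i) * F(1, 2) ** (S - k) * p_k[k]

assert sum(p.values()) == 1
assert p[(5, 0, 0)] == F(1, 832)

# only sources in phase b generate call attempts, each at rate 2*gamma;
# attempts arriving with k = n busy channels are blocked
na = sum(j * 2 * gamma * q for (i, j, k), q in p.items())
nb = sum(j * 2 * gamma * q for (i, j, k), q in p.items() if k == n)
B = nb / na

assert (na, nb) == (F(2400, 832), F(640, 832))
assert B == F(4, 15)
```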
Updated: 2009-06-23
Exercise 6.14 (Exam 2004)
Loss system and overflow traffic
We consider a full accessible loss system with n = 3 channels. The system is offered Pascal traffic, and in state i the arrival rate is γ · (S + i). The number of sources is S = 4. The arrival rate of an idle source is γ = 1/3. The mean holding time is chosen as time unit (μ⁻¹ = 1). All time intervals are exponentially distributed. The state of the system is defined as the number of busy channels. (Note: an Erlang-B table covering n = 1, . . . , 10 (step 1) and A = 0, . . . , 10 (step 0.25) was attached.)
1. Show that the offered traffic is A = 2 [erlang] and that the peakedness is 1.5 .
2. Construct the state transition diagram and find, under the assumption of statistical equilibrium, the state probabilities p(i), i = 0, 1, . . . , 3.
3. Find the time congestion E, the call congestion B, and the traffic congestion C.
4. Calculate the traffic congestion C by using Fredericks-Hayward’s method.
5. Calculate the traffic congestion C using Sanders' method.
Assume that the above traffic is overflow traffic from an equivalent system with 4 channels which is offered 5 erlang.
6. Find the traffic congestion C using Wilkinson-Bretschneider’s ERT-method.
Number of servers n

    A      1      2      3      4      5      6      7      8      9     10
  0.25 0.2000 0.0244 0.0020 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
  0.50 0.3333 0.0769 0.0127 0.0016 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000
  0.75 0.4286 0.1385 0.0335 0.0062 0.0009 0.0001 0.0000 0.0000 0.0000 0.0000
  1.00 0.5000 0.2000 0.0625 0.0154 0.0031 0.0005 0.0001 0.0000 0.0000 0.0000
  1.25 0.5556 0.2577 0.0970 0.0294 0.0073 0.0015 0.0003 0.0000 0.0000 0.0000
  1.50 0.6000 0.3103 0.1343 0.0480 0.0142 0.0035 0.0008 0.0001 0.0000 0.0000
  1.75 0.6364 0.3577 0.1726 0.0702 0.0240 0.0069 0.0017 0.0004 0.0001 0.0000
  2.00 0.6667 0.4000 0.2105 0.0952 0.0367 0.0121 0.0034 0.0009 0.0002 0.0000
  2.25 0.6923 0.4378 0.2472 0.1221 0.0521 0.0192 0.0061 0.0017 0.0004 0.0001
  2.50 0.7143 0.4717 0.2822 0.1499 0.0697 0.0282 0.0100 0.0031 0.0009 0.0002
  2.75 0.7333 0.5021 0.3152 0.1781 0.0892 0.0393 0.0152 0.0052 0.0016 0.0004
  3.00 0.7500 0.5294 0.3462 0.2061 0.1101 0.0522 0.0219 0.0081 0.0027 0.0008
  3.25 0.7647 0.5541 0.3751 0.2336 0.1318 0.0666 0.0300 0.0120 0.0043 0.0014
  3.50 0.7778 0.5765 0.4021 0.2603 0.1541 0.0825 0.0396 0.0170 0.0066 0.0023
  3.75 0.7895 0.5968 0.4273 0.2860 0.1766 0.0994 0.0506 0.0232 0.0096 0.0036
  4.00 0.8000 0.6154 0.4507 0.3107 0.1991 0.1172 0.0627 0.0304 0.0133 0.0053
  4.25 0.8095 0.6324 0.4725 0.3343 0.2213 0.1355 0.0760 0.0388 0.0180 0.0076
  4.50 0.8182 0.6480 0.4929 0.3567 0.2430 0.1542 0.0902 0.0483 0.0236 0.0105
  4.75 0.8261 0.6624 0.5119 0.3781 0.2643 0.1730 0.1051 0.0587 0.0301 0.0141
  5.00 0.8333 0.6757 0.5297 0.3983 0.2849 0.1918 0.1205 0.0700 0.0375 0.0184
  5.25 0.8400 0.6880 0.5463 0.4176 0.3048 0.2106 0.1364 0.0821 0.0457 0.0234
  5.50 0.8462 0.6994 0.5618 0.4358 0.3241 0.2290 0.1525 0.0949 0.0548 0.0293
  5.75 0.8519 0.7101 0.5764 0.4531 0.3426 0.2472 0.1688 0.1082 0.0646 0.0358
  6.00 0.8571 0.7200 0.5902 0.4696 0.3604 0.2649 0.1851 0.1219 0.0751 0.0431
  6.25 0.8621 0.7293 0.6031 0.4851 0.3775 0.2822 0.2013 0.1359 0.0862 0.0511
  6.50 0.8667 0.7380 0.6152 0.4999 0.3939 0.2991 0.2174 0.1501 0.0978 0.0598
  6.75 0.8710 0.7462 0.6267 0.5140 0.4096 0.3155 0.2332 0.1644 0.1098 0.0690
  7.00 0.8750 0.7538 0.6375 0.5273 0.4247 0.3313 0.2489 0.1788 0.1221 0.0787
  7.25 0.8788 0.7611 0.6478 0.5400 0.4392 0.3467 0.2642 0.1932 0.1347 0.0889
  7.50 0.8824 0.7679 0.6575 0.5521 0.4530 0.3615 0.2792 0.2075 0.1474 0.0995
  7.75 0.8857 0.7744 0.6667 0.5637 0.4663 0.3759 0.2939 0.2216 0.1602 0.1105
  8.00 0.8889 0.7805 0.6755 0.5746 0.4790 0.3898 0.3082 0.2356 0.1731 0.1217
  8.25 0.8919 0.7863 0.6838 0.5851 0.4912 0.4031 0.3221 0.2493 0.1860 0.1331
  8.50 0.8947 0.7918 0.6917 0.5951 0.5029 0.4160 0.3356 0.2629 0.1989 0.1446
  8.75 0.8974 0.7970 0.6992 0.6047 0.5141 0.4285 0.3488 0.2761 0.2117 0.1563
  9.00 0.9000 0.8020 0.7064 0.6138 0.5249 0.4405 0.3616 0.2892 0.2243 0.1680
  9.25 0.9024 0.8067 0.7133 0.6226 0.5353 0.4521 0.3740 0.3019 0.2368 0.1797
  9.50 0.9048 0.8112 0.7198 0.6309 0.5452 0.4633 0.3860 0.3143 0.2491 0.1914
  9.75 0.9070 0.8155 0.7261 0.6390 0.5548 0.4741 0.3977 0.3265 0.2613 0.2030
 10.00 0.9091 0.8197 0.7321 0.6467 0.5640 0.4845 0.4090 0.3383 0.2732 0.2146

Erlang's B-formula E1,n(A)
Solution to Exercise 6.14 (Exam 2004)
This is the truncated Pascal system dealt with in Sec. 5.7.
Question 1:
The offered traffic is obtained by using the formulæ for the Engset case, letting S = −4 sources (5.68) and γ = −1/3 (5.69).
By using (5.10) and (5.11) we get:

β = γ/µ = −1/3 ,

A = S · β/(1 + β) = (−4) · (−1/3)/(1 − 1/3) ,

A = 2 [erlang] .

In a similar way we find from (5.23):

Z = 1/(1 + β) = 1/(1 − 1/3) ,

Z = 3/2 .
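The negative-parameter substitution can be checked with exact rational arithmetic; the short sketch below simply restates the formulas above in code:

```python
from fractions import Fraction

# Sketch: Pascal traffic expressed through the Engset formulas with the
# negative parameters S = -4 and beta = gamma/mu = -1/3, as in the solution.
S, beta = -4, Fraction(-1, 3)

A = S * beta / (1 + beta)        # offered traffic, (5.10)-(5.11)
Z = Fraction(1) / (1 + beta)     # peakedness, (5.23)

assert A == 2
assert Z == Fraction(3, 2)
```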
Question 2:
The state transition diagram becomes (cf. Fig. 8.6). By using the cut equations we get the following relative state probabilities:
[State transition diagram: states 0, 1, 2, 3; arrival rates 4/3, 5/3, 6/3 for the transitions 0 → 1, 1 → 2, 2 → 3; departure rates 1, 2, 3.]
q(0) = 1            p(0) = 27/113 = 0.2389
q(1) = 4/3          p(1) = 36/113 = 0.3186
q(2) = 10/9         p(2) = 30/113 = 0.2655
q(3) = 20/27        p(3) = 20/113 = 0.1770
Sum = 113/27        Sum = 1.0000
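The table above can be reproduced from the cut equations q(i+1) = q(i) · γ(S+i)/((i+1)µ) of the truncated Pascal system; a sketch in exact arithmetic:

```python
from fractions import Fraction

# Sketch: relative state probabilities of the truncated Pascal system via the
# cut equations q(i+1) = q(i) * gamma*(S+i) / ((i+1)*mu), with S = 4,
# gamma = 1/3, mu = 1 and n = 3 channels, followed by normalisation.
S, gamma, n = 4, Fraction(1, 3), 3

q = [Fraction(1)]
for i in range(n):
    q.append(q[i] * gamma * (S + i) / (i + 1))

p = [qi / sum(q) for qi in q]        # absolute probabilities

assert q == [1, Fraction(4, 3), Fraction(10, 9), Fraction(20, 27)]
assert p == [Fraction(27, 113), Fraction(36, 113),
             Fraction(30, 113), Fraction(20, 113)]
```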
Question 3:
Time congestion
The time congestion E is the proportion of time all channels are busy, and it is obtained from the state probabilities:

E = p(3) = 20/113 = 0.1770 .
Call congestion
The call congestion B is the proportion of call attempts blocked, and it is obtained as the ratio between the number of blocked call attempts per time unit and the total number of call attempts per time unit. We find:

B = γ(S + 3) · p(3) / [γ(S + 0) · p(0) + γ(S + 1) · p(1) + γ(S + 2) · p(2) + γ(S + 3) · p(3)]

  = (7/3 · 20/113) / (4/3 · 27/113 + 5/3 · 36/113 + 6/3 · 30/113 + 7/3 · 20/113) ,

B = 35/152 = 0.2303 .
Alternatively, we may use the formulæ for the Engset case, remembering that S and β are negative. For example, by using (5.45) we get:

B = (S − n) · E · (1 + β) / [S + (S − n) · E · β]

  = (−4 − 3) · (20/113) · (1 − 1/3) / [−4 + (−4 − 3) · (20/113) · (−1/3)] ,

B = 35/152 .   q.e.d.
We may also use the arrival theorem and find the call congestion as the time congestion in a system with one source less, that is with S = −5 sources.
Traffic congestion
The traffic congestion C is the ratio between the blocked traffic and the offered traffic:

C = (A − Y)/A = (2 − Y)/2 .

The carried traffic Y is obtained from the state probabilities:

Y = Σ_{i=0}^{3} i · p(i) = 0 · 27/113 + 1 · 36/113 + 2 · 30/113 + 3 · 20/113 ,

Y = 156/113 ,

C = (2 − 156/113)/2 ,

C = 35/113 = 0.3097 .
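All three congestion measures follow directly from the state probabilities; a sketch in exact arithmetic reproducing the values above:

```python
from fractions import Fraction

# Sketch: time, call and traffic congestion of the truncated Pascal system,
# using the state probabilities found in Question 2 (S = 4, gamma = 1/3,
# A = 2 erlang, n = 3 channels).
p = [Fraction(27, 113), Fraction(36, 113), Fraction(30, 113), Fraction(20, 113)]
S, gamma, A = 4, Fraction(1, 3), 2

E = p[3]                                                    # time congestion
rate = [gamma * (S + i) for i in range(4)]                  # arrival rate in state i
B = rate[3] * p[3] / sum(r * pi for r, pi in zip(rate, p))  # call congestion
Y = sum(i * pi for i, pi in enumerate(p))                   # carried traffic
C = (A - Y) / A                                             # traffic congestion

assert (E, B, C) == (Fraction(20, 113), Fraction(35, 152), Fraction(35, 113))
```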
Also in this case we may use formulæ for the Engset system, for example the simple formula (5.46):

C = (S − n)/S · E = (−4 − 3)/(−4) · 20/113 ,

C = 35/113 .   q.e.d.
Question 4
Fredericks-Hayward’s method is presented in Sec. 6.5. A traffic with mean value A = 2erlang and peakedness Z = 1.5 offered to n = 3 channels has the same blocking probabilityas Erlang’s loss system with A/Z = 4/3 erlang offered to n/Z = 2 channels. Thus the callcongestion is obtained by using Erlang’s B-formula: C = E2(1.3333). In this case we caneasily use the recursion formula (4.29) to find the numerical value:
E0 = 1 ,

E1 = (4/3 · 1)/(1 + 4/3 · 1) = 4/7 ,

E2 = (4/3 · 4/7)/(2 + 4/3 · 4/7) = 8/29 .

Thus we find:

C = 8/29 = 0.2759 .
We may also use the table and e.g. use linear interpolation between the values for A = 1.25 and A = 1.50 for n = 2 channels. Then we find C = 0.2752, which is very close to the above value.
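The Fredericks-Hayward transformation amounts to one Erlang-B evaluation with scaled parameters; a minimal sketch (here n/Z = 2 happens to be integral):

```python
# Sketch of Fredericks-Hayward: the peaked stream (A = 2, Z = 1.5, n = 3)
# is replaced by Erlang traffic A/Z = 4/3 offered to n/Z = 2 channels.
A, Z, n = 2.0, 1.5, 3

E = 1.0                               # Erlang-B recursion for A/Z on n/Z channels
for k in range(1, round(n / Z) + 1):
    E = (A / Z) * E / (k + (A / Z) * E)

assert abs(E - 8 / 29) < 1e-12        # C = 8/29 = 0.2759
```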
Question 5
Sander’s method is described in Sec. 6.6.2. The variance of the offered traffic is:
Var = mean · peakedness = A · Z
= 2 · 1.5 = 3
As the mean value (A = 2) is less than the variance (Var = 3), we add one channel with a constant traffic of 1 erlang. Then the total offered traffic is 3 erlang, and the number of channels is 4. The peakedness is now one, as the variance is still three. The lost traffic from this system is the same (approximate method) as the traffic lost from the original system. Using the table we find the lost traffic:
Alost = 3 · E4(3) = 3 · 0.2061 = 0.6183 .

As this traffic is lost from the original system, the traffic congestion becomes:

C = 0.6183/2 = 0.3092 .
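The arithmetic of Sanders' method can be sketched as follows (the Erlang-B recursion is standard; the specific numbers come from the solution above):

```python
# Sketch of Sanders' method as used above: the original stream has mean 2 and
# variance 3; adding a constant stream of (variance - mean) = 1 erlang on one
# extra channel gives mean = variance = 3 erlang on 4 channels.
mean, var, n = 2.0, 3.0, 3

A = mean + (var - mean)          # 3 erlang offered in total
ch = n + 1                       # 4 channels in total

E = 1.0                          # Erlang-B recursion E_k(A)
for k in range(1, ch + 1):
    E = A * E / (k + A * E)

A_lost = A * E                   # lost traffic of the enlarged system
C = A_lost / mean                # traffic congestion of the original stream
assert abs(A_lost - 0.6183) < 5e-4
assert abs(C - 0.3092) < 5e-4
```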
Question 6
Using the ERT-method we have a system of 4 + 3 = 7 channels which are offered 5 erlang. The traffic lost becomes:

5 · E7(5) = 5 · 0.1205 = 0.6025 .
This is the traffic lost from the (original) system, and thus the traffic congestion of the original system becomes:

C = 0.6025/2 = 0.3013 .
When we offer 5 erlang to 4 channels, the overflow traffic will have the mean value m1 = 1.992 (6.15) and the peakedness Z = 1.519 (6.16), corresponding to a variance Var = 3.038. This is very close to the values in the exercise.

The exact values of the parameters of the equivalent group for obtaining an overflow traffic with mean 2 and peakedness 1.5 are obtained by using a computer program:
Ax = 4.876 ,   nx = 3.826 .

Using these values the ERT-method yields the traffic congestion C = 0.3007.
In conclusion, we obtain the values:
BPP:                 C = 0.3097
Fredericks-Hayward:  C = 0.2759
Sanders:             C = 0.3092
ERT:                 C = 0.3007
We notice that the values are very similar except for Fredericks-Hayward's method, which usually is one of the best. But this is an extreme case with very few channels and very high blocking. We also notice that only the BPP-model works with different types of blocking probabilities, and that the traffic congestion C is the relevant measure. Historically only the time congestion E and the call congestion B have been considered, and they are quite different.
Updated: 2008-03-27
Exercise 7.1 (exam 1983)
LOSS SYSTEM WITH MULTIPLE ACCESSIBILITY
We consider a loss system with 3 identical servers, serving two different types of customers, which arrive according to Poisson processes with intensities:
type 1: λ1 [customers/time unit],
type 2: λ2 [customers/time unit].
Both types of customers have the same exponentially distributed service time distribution with mean value m = 1/µ [time units].
Customers of type 1 have full accessibility to all three servers. Customers of type 2 are blocked if more than one server is busy at the arrival time.
The state of the system is defined as the total number of customers being served.
1. Construct a one-dimensional state transition diagram for this system.
2. Find, under the assumption of statistical equilibrium, the state probabilities of the system.
3. Find, expressed by the state probabilities, the call congestion for customers of type 1 and type 2.
We now define the state of the system as (i, j), where i is the number of type 1 customers being served, and j is the number of type 2 customers being served.
4. Construct a two-dimensional state transition diagram for this system.
5. Assume that the state probabilities are known, and find the traffic carried by customers of type 2, when a total of (i + j =) 0, 1, 2, or 3 customers are being served.
Solution to exercise 7.1
Background: The model considered can be applied if we want to give two traffic streams different service levels on the same channel group, i.e. one traffic stream higher priority than another traffic stream. This principle, which is called trunk reservation, is different from the principle of class limitation, where it is the total number of calls of a given type which decides whether a new call attempt of this type can be accepted.
One example is incoming (type 1) and outgoing (type 2) traffic between a PABX (Private Automatic Branch eXchange) of a company and the public telephone exchange. Another application is cellular systems, where we want to give priority to hand-over calls over new call attempts (guard channels).
Question 1:
[State transition diagram: states 0, 1, 2, 3; arrival rates λ1 + λ2 for the transitions 0 → 1 and 1 → 2, and λ1 for 2 → 3; departure rates µ, 2µ, 3µ.]
Question 2:
The conditions for statistical equilibrium will always be fulfilled (loss system with a finite number of states), and we get, by applying the cut equations and expressing all state probabilities by p(0):

p(1) = (λ1 + λ2)/µ · p(0) ,

p(2) = (λ1 + λ2)/(2µ) · p(1) = (λ1 + λ2)²/(2 µ²) · p(0) ,

p(3) = λ1/(3µ) · p(2) = (λ1 + λ2)² · λ1/(3! · µ³) · p(0) ,
where p(0) is obtained from the normalisation condition:
p(0) + p(1) + p(2) + p(3) = 1 .
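The cut equations and the normalisation can be evaluated numerically; the rates below are illustrative placeholders (the exercise leaves λ1, λ2 and µ symbolic):

```python
# Illustrative numbers only: lambda1, lambda2, mu are NOT given in the exercise.
lam1, lam2, mu = 1.0, 0.5, 1.0

q = [1.0]                                  # relative probabilities, q(0) = 1
q.append((lam1 + lam2) / mu * q[0])        # cut between states 0 and 1
q.append((lam1 + lam2) / (2 * mu) * q[1])  # cut between states 1 and 2
q.append(lam1 / (3 * mu) * q[2])           # cut 2-3: only type 1 accepted

p = [qi / sum(q) for qi in q]              # normalisation condition

B1 = p[3]            # type 1 blocked only when all 3 servers are busy
B2 = p[2] + p[3]     # type 2 blocked when 2 or 3 servers are busy
assert abs(sum(p) - 1.0) < 1e-12
assert B2 > B1
```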
Question 3:
As the arrival process is a Poisson process, we have call congestion B = time congestion E = traffic congestion C (PASTA property):
B1 = E1 = C1 = p(3) ,
B2 = E2 = C2 = p(2) + p(3) .
Question 4:
[Two-dimensional state transition diagram: states (i, j) with i + j ≤ 3 and j ≤ 2. Type 1 arrivals (rate λ1) increase i whenever i + j < 3; type 2 arrivals (rate λ2) increase j only when i + j ≤ 1; departure rates are i · µ and j · µ.]
We notice that this state transition diagram is not reversible (cf. Sec. 7.2). We have to solve 9 linear equations with 9 unknowns to find the state probabilities.

When the mean holding time is the same for all types of calls, we can reduce the state transition diagram to one dimension. But when the mean holding times are different, we have to solve the multi-dimensional state transition diagram.
Question 5:
In state (i, j) the carried traffic of type 1 is i erlang and the carried traffic of type 2 is j erlang. Therefore, we have:

p(i + j = 0) = p(0, 0) :
  Type 1: Y01 = 0 ,
  Type 2: Y02 = 0 .

p(i + j = 1) = p(1, 0) + p(0, 1) :
  Type 1: Y11 = p(1, 0) ,
  Type 2: Y12 = p(0, 1) .

p(i + j = 2) = p(2, 0) + p(1, 1) + p(0, 2) :
  Type 1: Y21 = 2 · p(2, 0) + 1 · p(1, 1) ,
  Type 2: Y22 = 1 · p(1, 1) + 2 · p(0, 2) .

p(i + j = 3) = p(3, 0) + p(2, 1) + p(1, 2) :
  Type 1: Y31 = 3 · p(3, 0) + 2 · p(2, 1) + 1 · p(1, 2) ,
  Type 2: Y32 = 1 · p(2, 1) + 2 · p(1, 2) .
The total carried traffic thus becomes:
Type 1: Y1 = Y01 + Y11 + Y21 + Y31 ,
Type 2: Y2 = Y02 + Y12 + Y22 + Y32 .
As a control, we of course have (cf. Question 3):
Y1 = A1 · (1−B1) ,
Y2 = A2 · (1−B2) .
p(i+ j = x) corresponds to p(x) in the one-dimensional model (cf. Question 1).
Updated: 2008-03-27
Exercise 7.4 (exam 1990)
LOSS SYSTEMS WITH MUTUAL OVERFLOW
We consider a loss system with 2 servers. Call attempts arrive according to a Poisson process with intensity 20 calls per hour. The holding times are exponentially distributed with mean value 180 seconds. In the following it will be sufficient to give numerical answers.
1. Find the offered traffic.
2. Calculate, by using the recursion formula for Erlang's B-formula, the congestion of the system (time congestion equals call congestion and traffic congestion). (Show the individual steps of the recursion.)
The above system is in the following called a subsystem. We now consider a system made up of two subsystems of the above type (total arrival rate equals 40 calls per hour, in total 4 fully accessible servers).
3. Construct the one-dimensional state transition diagram for the total system, and calculate under the assumption of statistical equilibrium the state probabilities p(i) (i = 0, 1, 2, 3, 4).
We now keep record of which subsystem a call (a server) belongs to. A call attempt first looks for an idle server in its own subsystem. If both servers in this subsystem are busy, it looks for an idle server in the other subsystem. If both servers in that subsystem also are busy, the call is blocked. The state of the system is denoted by
(i, j) 0 ≤ i, j ≤ 2 ,
where i, respectively j, denotes the number of busy servers in subsystem 1, respectively 2.
4. Construct the two-dimensional state transition diagram for this system, using the following structure of states:
02  12  22
01  11  21
00  10  20
5. Calculate the state probabilities of the two-dimensional state transition diagram by exploiting the symmetry and using the aggregated state probabilities calculated in Question 3. (All states in Question 5 with a certain number of busy servers are in Question 3 aggregated into a single state.)
Solution to exercise 7.4
Question 1:
The offered traffic A is equal to the average number of call attempts per mean holding time:
A = λ/µ ,

A = 20 · (1/20) [hour/hour] ,

A = 1 erlang .
There is one call attempt per mean holding time.
Question 2:
We calculate Erlang's B-formula for n = 2 circuits and A = 1 erlang. We use the recursion formula in Chap. 4 (4.29):

En(A) = A · En−1(A) / (n + A · En−1(A)) ,

where E0(A) = 1. For A = 1 we get:

E1(1) = (1 · 1)/(1 + 1 · 1) = 1/2 ,

E2(1) = (1 · 1/2)/(2 + 1 · 1/2) = 1/5 = 0.2 .

We may control this by using Erlang's B-formula (4.10) directly:

E2(A) = (A²/2) / (1 + A + A²/2) ,

E2(1) = 1/5 .   q.e.d.
Question 3:
We consider a fully accessible loss system with n = 4 channels and A = 2 erlang. If we choose the mean holding time 1/µ = 3 minutes as time unit, we get λ = 2 and the following state transition diagram:
[State transition diagram: states 0, 1, 2, 3, 4; arrival rate 2 in every state; departure rates 1, 2, 3, 4.]
If we put the relative value of state zero equal to one, we find the relative probabilities:

qr(0) = 1 ,
qr(1) = 2 ,
qr(2) = 2 ,
qr(3) = 4/3 ,
qr(4) = 2/3 ,
Sum = 7 .

As the total sum must add to one, we get the following absolute probabilities:

p(0) = 3/21 ,
p(1) = 6/21 ,
p(2) = 6/21 ,
p(3) = 4/21 ,
p(4) = 2/21 .
This may also be obtained directly from the truncated Poisson distribution (4.9) (A = 2 erlang):

p(i) = (A^i / i!) / ( Σ_{ν=0}^{4} A^ν / ν! ) ,   i = 0, 1, . . . , 4 .
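The truncated Poisson distribution can be evaluated exactly; a sketch reproducing the probabilities above:

```python
from fractions import Fraction
from math import factorial

# Sketch: the truncated Poisson distribution (4.9) with A = 2 erlang and
# n = 4 channels reproduces p = 3/21, 6/21, 6/21, 4/21, 2/21 found above.
A, n = 2, 4
w = [Fraction(A**i, factorial(i)) for i in range(n + 1)]
p = [wi / sum(w) for wi in w]

assert p == [Fraction(3, 21), Fraction(6, 21), Fraction(6, 21),
             Fraction(4, 21), Fraction(2, 21)]
```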
Question 4:
It is understood that a call keeps the allocated circuit in the other subsystem, even though a circuit becomes idle in its own subsystem.
[Two-dimensional state transition diagram: states (i, j), 0 ≤ i, j ≤ 2, where i and j are the numbers of busy servers in subsystem 1 and 2; departure rates i and j; arrival intensities as described below.]
The departure (death) intensities are obvious. The total arrival intensity is in every state equal to 2 (in state (2, 2) all call attempts are blocked, and therefore a call attempt does not result in a state transition, and the arrow/intensity is not shown in the figure). If it is possible, the total arrival intensity is divided into 1 in each of the two directions. If there only is one direction, this will get the total intensity (rate) 2.
Extra: It is easy to generalize the model. For example, we can introduce restrictions so that a call from one group (subsystem) only is allowed to occupy a circuit in the other group if both circuits of this group are idle. The intensities (2,1) → (2,2) and (1,2) → (2,2) then become equal to one instead of two.
Question 5:
Notice that the state transition diagram is not reversible. (This is e.g. seen by looking at the "flow clockwise" and the "flow anti-clockwise" in 4 neighbouring states.) As we both in Question 3 and Question 4 have an offered traffic of 2 erlang to 4 circuits with full accessibility, it is in both cases the same system we consider. Therefore, we have from Question 3:
p(0) = 3/21 = p(0, 0) ,
p(1) = 6/21 = p(0, 1) + p(1, 0) ,
p(2) = 6/21 = p(0, 2) + p(1, 1) + p(2, 0) ,
p(3) = 4/21 = p(2, 1) + p(1, 2) ,
p(4) = 2/21 = p(2, 2) .
Furthermore, because of symmetry we must have:

p(0, 1) = p(1, 0) = 3/21 ,
p(2, 1) = p(1, 2) = 2/21 ,
p(0, 2) = p(2, 0) .

The only unknowns are thus the states where in total two channels are busy:

p(0, 2) + p(1, 1) + p(2, 0) = 2 · p(0, 2) + p(1, 1) = 6/21 .

By looking at the node balance equation for state (0, 2) we find:

4 · p(0, 2) = p(0, 1) + p(1, 2) = 5/21 ,

⇒ p(0, 2) = p(2, 0) = 5/84 ,

⇒ p(1, 1) = 7/42 = 1/6 .
In summary, we thus have:

p(0, 2) = 5/84     p(1, 2) = 8/84     p(2, 2) = 8/84
p(0, 1) = 12/84    p(1, 1) = 14/84    p(2, 1) = 8/84
p(0, 0) = 12/84    p(1, 0) = 12/84    p(2, 0) = 5/84
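The solution can be checked in exact arithmetic: the probabilities must sum to one, and aggregating over the total number of busy servers must reproduce the one-dimensional probabilities of Question 3. A sketch:

```python
from fractions import Fraction as F

# Sketch: consistency checks of the two-dimensional solution against the
# aggregated one-dimensional probabilities from Question 3.
p = {(0, 0): F(12, 84), (1, 0): F(12, 84), (2, 0): F(5, 84),
     (0, 1): F(12, 84), (1, 1): F(14, 84), (2, 1): F(8, 84),
     (0, 2): F(5, 84),  (1, 2): F(8, 84),  (2, 2): F(8, 84)}

agg = {}
for (i, j), prob in p.items():       # aggregate on total busy servers i + j
    agg[i + j] = agg.get(i + j, F(0)) + prob

assert sum(p.values()) == 1
assert agg == {0: F(3, 21), 1: F(6, 21), 2: F(6, 21), 3: F(4, 21), 4: F(2, 21)}
```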
We notice that the total sum of probabilities is one. Another control also shows that all node balance equations are fulfilled.
Updated: 2010-03-24
Exercise 7.8 (exam 1998)
MOBILE COMMUNICATION SYSTEM WITH TWO TYPES OF TRAFFIC
A mobile communication system with S = 4 subscribers has access to n = 3 channels. All calls accepted occupy one channel during an exponentially distributed time interval with mean value 1/µ = 1 time unit. The system is operated as a loss system. There are two arrival processes:
a. Outgoing calls, generated by the S = 4 subscribers (PCT-II traffic). An idle source generates γ = 1/4 call attempts per time unit.
b. Incoming calls, arriving according to a Poisson process with arrival rate λ = 0.8 call attempts per time unit (PCT-I traffic). An incoming call which is accepted occupies both an idle channel and one of the idle sources, which thus becomes busy without making a call attempt itself.
The number of busy sources thus always equals the number of busy channels.
1. Find the incoming offered traffic Ai, the outgoing offered traffic Ao, and the total offered traffic At (assume the traffic stream considered is alone).
2. Construct the one-dimensional state transition diagram for the system, when the state of the system is defined as the number of busy channels. Find the state probabilities under the assumption of statistical equilibrium.
3. Find the time congestion E, the call congestion B, and the traffic congestion C for both traffic streams. (The traffic congestion for outgoing calls is obtained from the total traffic congestion and the known traffic congestion for incoming calls.)
4. Show that the state transition diagram can be interpreted as a state transition diagram for a single PCT-II traffic stream, and find the equivalent number of sources (non-integral) and the arrival intensity per idle source.
5. We now distinguish between the two types of traffic. Construct the two-dimensional state transition diagram, where the state of the system (i, j) denotes that there are i incoming calls and j outgoing calls. Is the state transition diagram reversible?
The following question was not included at exam.
6. Find the time congestion E, the call congestion B, and the traffic congestion C for both traffic streams, when we know the state probabilities of the two-dimensional state transition diagram.
Solution to Exercise 7.8
Question 1:
It should be pointed out that the offered traffic is defined as the traffic carried when the capacity of the system is infinite and other traffic streams do not exist.
Ain = λ/µ = 0.8/1 = 0.8 [erlang] .

From the formulæ (5.7 – 5.11):

γ = 1/4 ,   β = γ/µ = 1/4 ,

a = β/(1 + β) = 1/5 ,

Aout = S · a = 4/5 = 0.8 [erlang] .

The total offered traffic becomes:

At = Ain + Aout = 0.8 + 0.8 = 1.6 [erlang] .
Question 2:
The arrival rate (intensity) in state i is:

(4 − i) · γ + λ = (4 − i) · 1/4 + 4/5 .

The departure rate (intensity) in state i is:

i · µ = i .

Therefore, we get the following state transition diagram:

[State transition diagram: states 0, 1, 2, 3; arrival rates 36/20, 31/20, 26/20 for the transitions 0 → 1, 1 → 2, 2 → 3; departure rates 1, 2, 3.]

The relative state probabilities become:

1    1.8    1.395    0.6045

which add to a total of 4.7995. The absolute state probabilities then become:

p(0) = 0.2084 = 2000/9599 ,
p(1) = 0.3750 = 3600/9599 ,
p(2) = 0.2907 = 2790/9599 ,
p(3) = 0.1260 = 1209/9599 .
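The relative probabilities follow from the cut equations with the state-dependent arrival rates above; a numeric sketch:

```python
# Sketch: state probabilities for the combined PCT-I/PCT-II stream, with
# arrival rate (4 - i)*gamma + lambda in state i and departure rate i
# (mu = 1, n = 3 channels).
S, gamma, lam, n = 4, 0.25, 0.8, 3

q = [1.0]
for i in range(n):
    q.append(q[i] * ((S - i) * gamma + lam) / (i + 1))

p = [qi / sum(q) for qi in q]        # normalisation

assert abs(sum(q) - 4.7995) < 1e-9
assert [round(pi, 4) for pi in p] == [0.2084, 0.3750, 0.2907, 0.1260]
```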
Question 3:
Due to the PASTA property, time, call and traffic congestion of the incoming traffic are equal:

Ein = Bin = Cin = p(3) = 0.1260 .

For the outgoing traffic we find:

Time congestion:

Eout = p(3) = 0.1260 .

Call congestion:

Bout = 0.25 · p(3) / (1 · p(0) + 0.75 · p(1) + 0.50 · p(2) + 0.25 · p(3)) = 0.0473 .
Traffic congestion:

We cannot find this directly, but indirectly, e.g. from the total carried traffic:

Yt = 0 · p(0) + 1 · p(1) + 2 · p(2) + 3 · p(3) = 1.3342 ,

Yout = Yt − Ain · (1 − Cin) = 1.3342 − 0.8 · (1 − 0.1260) = 0.6350 .

Then we can find the traffic congestion:

Cout = (Aout − Yout) / Aout = 0.206 .
Question 4:
For a system with PCT-II traffic with a non-integral number of sources, the departure intensity in state i is given by i · µ* and the arrival intensity by (S* − i) · γ*. A comparison with the figure in Question 2 shows us that:

µ* = 1 ,   γ* = 1/4 ,   S* = 1.8/0.25 = 7.2 .
Question 5:
[Two-dimensional state transition diagram: states (i, j), i + j ≤ 3, with i incoming and j outgoing calls. Incoming arrivals have rate λ = 0.8 in every state with i + j < 3; outgoing arrivals have rate (4 − i − j) · 1/4; departure rates are i · µ and j · µ (µ = 1).]
The system is not reversible (cf. Fig. 7.2 in the textbook). For example, the circulation flows of the cycle 00 → 10 → 11 → 01 → 00 are:

Clockwise: 1 · 1 · 0.8 · 1 = 0.8 ,
Counter-clockwise: 0.8 · 0.75 · 1 · 1 = 0.6 .

As the two flows differ, the process is not reversible.
Question 6 (extra):
The incoming traffic arrives according to a Poisson process, and due to the PASTA property we have:

Ein = Bin = Cin = p(3, 0) + p(2, 1) + p(1, 2) + p(0, 3) .
The outgoing traffic is from a finite number of sources and we have for the time congestion:
Eout = Ein .
The call congestion is obtained by looking at the total number of call attempts per time unit, nt, and the number of blocked call attempts per time unit, nb. We get:

nt = 4 · 1/4 · p(0, 0)
   + 3 · 1/4 · {p(1, 0) + p(0, 1)}
   + 2 · 1/4 · {p(2, 0) + p(1, 1) + p(0, 2)}
   + 1 · 1/4 · {p(3, 0) + p(2, 1) + p(1, 2) + p(0, 3)} ,

nb = 1 · 1/4 · {p(3, 0) + p(2, 1) + p(1, 2) + p(0, 3)} .

Thus we find Bout = nb/nt .
The traffic congestion Cout is obtained from the offered traffic Aout = 0.8 [erlang] and the carried traffic Yout:

Cout = (Aout − Yout)/Aout ,

where

Yout = 1 · {p(0, 1) + p(1, 1) + p(2, 1)} + 2 · {p(0, 2) + p(1, 2)} + 3 · p(0, 3) .
The congestion values will of course be the same as found in Question 3.
Updated: 2008-04-07
Exercise 7.10 (exam 2005)
CDMA cellular system with two service classes
We consider a lost-calls-cleared system with n = 5 channels. Two types of calls are offered to the system. They both have exponentially distributed service times with mean value 1/µ = 1 [time unit]. Both types arrive according to a Poisson process:
• Type one: the arrival rate is λ1 = 1 call attempt per time unit. Each call requires d1 = 1 channel.
• Type two: the arrival rate is λ2 = 0.5 call attempts per time unit. Each call requires d2 = 2 channels. If a call attempt does not obtain both channels, it is lost.
1. Find the offered traffic for each type expressed in number of channels.
We define the state of the system as the total number of busy channels. A type one call attempt is always accepted in states 0 and 1. In state i it is blocked with probability:

b1(i) = C(i, 2) / C(5, 2) ,   2 ≤ i ≤ 5 ,

where C(i, 2) denotes the binomial coefficient. As we usually define C(i, 2) = 0 when i < 2, this expression is valid for all states 0 ≤ i ≤ 5. Thus the probability of accepting a type one call in state i becomes:

a1(i) = 1 − b1(i) ,   0 ≤ i ≤ 5 .
A type two call attempt behaves like two single-channel calls and is only accepted if both channels are obtained. Thus the probability of accepting a type two call in state i becomes:

a2(i) = (1 − b1(i)) · (1 − b1(i+1)) ,   0 ≤ i ≤ 5 .
2. Show that the arrival rates of accepted type one calls, respectively type two calls, as a function of the state become:
λ1(0) = 1         λ2(0) = 1/2 · 1
λ1(1) = 1         λ2(1) = 1/2 · 90/100
λ1(2) = 9/10      λ2(2) = 1/2 · 63/100
λ1(3) = 7/10      λ2(3) = 1/2 · 28/100
λ1(4) = 4/10      λ2(4) = 0
λ1(5) = 0         λ2(5) = 0
We now define the two-dimensional state of the system as (i, j), i + j ≤ 5, where i (0 ≤ i ≤ 5) is the number of channels occupied by type one calls, and j (j = 0, 2, 4) is the number of channels occupied by type two calls. The above arrival rates, which are functions of the total number of busy channels, are still valid.
3. Construct the state transition diagram, using the following structure of states:
04  14
02  12  22  32
00  10  20  30  40  50
4. Show that the state transition diagram is reversible.
5. Find the state probabilities (given: p(0, 0) = 20000/78342).
6. Find the carried traffic and the traffic congestion for each traffic type.
7. Show that the call congestion for a type of call is equal to the traffic congestion for thesame type of call.
Solution to exercise 7.10 (exam 2005)
Background: In third generation (3G) cellular communication systems based on CDMA (Code Division Multiple Access), the capacity of a cell depends on the number of active users in both the own cell and the neighbouring cells (or more correctly: on the power used for transmitting the signals). Assume that the nominal capacity of a cell is n channels. Then call attempts may with a positive probability be blocked even if fewer than n channels are occupied. This is a multi-dimensional system (Chapter 7) with limited accessibility (Chapter 6).
Question 1:
As we have PCT-I traffic, we get:

A1 = (λ1/µ1) · d1 = (1/1) · 1 = 1 [erlang · channel] ,

A2 = (λ2/µ2) · d2 = (1/2) · 2 = 1 [erlang · channel] .
Question 2:
We have chosen the same blocking probabilities as for Erlang's interconnection formula (Erlang's ideal grading), where we choose k = 2 channels at random out of n = 5. If i channels are busy, then b1(i) is the probability that both channels we choose are among these i busy channels. We directly find:
b1(0) = 0         a1(0) = 1
b1(1) = 0         a1(1) = 1
b1(2) = 1/10      a1(2) = 9/10
b1(3) = 3/10      a1(3) = 7/10
b1(4) = 6/10      a1(4) = 4/10
b1(5) = 1         a1(5) = 0
Introducing

a2(i) = 1 − b2(i) = (1 − b1(i)) · (1 − b1(i+1)) ,

which means that a two-channel call has the same blocking as two successive single-channel calls, we find the following state-dependent acceptance probabilities a2(i) and blocking probabilities b2(i) for calls requiring two channels:
a2(0) = 1          b2(0) = 0
a2(1) = 90/100     b2(1) = 10/100
a2(2) = 63/100     b2(2) = 37/100
a2(3) = 28/100     b2(3) = 72/100
a2(4) = 0          b2(4) = 1
a2(5) = 0          b2(5) = 1
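Both tables can be generated mechanically from the binomial coefficients; a sketch:

```python
from math import comb
from fractions import Fraction

# Sketch: b1(i) = C(i,2)/C(5,2) and a2(i) = (1 - b1(i)) * (1 - b1(i+1)),
# with a2(5) = 0 (a1(5) = 0, so the value of the undefined b1(6) is irrelevant).
n = 5
b1 = [Fraction(comb(i, 2), comb(n, 2)) for i in range(n + 1)]
a1 = [1 - b for b in b1]
a2 = [a1[i] * a1[i + 1] for i in range(n)] + [Fraction(0)]
b2 = [1 - a for a in a2]

assert b1 == [0, 0, Fraction(1, 10), Fraction(3, 10), Fraction(6, 10), 1]
assert a2 == [1, Fraction(90, 100), Fraction(63, 100), Fraction(28, 100), 0, 0]
```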
From:

λ1(i) = (1 − b1(i)) · λ1 ,

λ2(i) = (1 − b1(i)) · (1 − b1(i+1)) · λ2 ,

we get the result shown in the text of the exercise.
Question 3:
We find the following state transition diagram:
[Two-dimensional state transition diagram over the state grid of Question 3: horizontal transitions (i, j) → (i+1, j) have rate λ1(i+j) (i.e. 1, 1, 9/10, 7/10, 4/10) and reverse rate (i+1) · µ; vertical transitions (i, j) → (i, j+2) have rate λ2(i+j) (i.e. 1/2, 9/20, 63/200, 7/50) and reverse rate (j/2 + 1) · µ.]
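Because the diagram is reversible, the state probabilities have product form: relative values can be accumulated along any path from (0, 0). A sketch (variable names ours) that also verifies the value p(0, 0) = 20000/78342 given in Question 5:

```python
from fractions import Fraction as F

# Accepted arrival rates as functions of the number of busy channels i + j:
lam1 = [F(1), F(1), F(9, 10), F(7, 10), F(4, 10), F(0)]
lam2 = [F(1, 2), F(9, 20), F(63, 200), F(7, 50), F(0), F(0)]

# Relative probabilities q(i, j): first step up in j (a state with j channels
# held by type two has j/2 calls, hence departure rate j/2), then right in i
# (departure rate i). Path independence is guaranteed by reversibility.
q = {(0, 0): F(1)}
for j in (0, 2, 4):
    if j > 0:
        q[(0, j)] = q[(0, j - 2)] * lam2[j - 2] / (j // 2)
    for i in range(1, 6 - j):
        q[(i, j)] = q[(i - 1, j)] * lam1[i - 1 + j] / i

p00 = 1 / sum(q.values())
assert p00 == F(20000, 78342)
```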
Question 4:
From the state transition diagram it is seen that the flow clockwise is equal to the flow counter-clockwise for all four squares, so the process is reversible. Introducing

1 − b2(i) = (1 − b1(i)) · (1 − b1(i+1)) ,

we have in general the situation shown in the following figure:
516 INDEX
[Figure: section of the two-dimensional state transition diagram around the states (i, j), (i+1, j), (i, j+2), and (i+1, j+2). Horizontal transitions (stream 1): arrival rate {1 − b1(i+j)}·λ1 and departure rate (i+1)·µ1. Vertical transitions (stream 2): arrival rate {1 − b2(i+j)}·λ2 and departure rate ((j+2)/2)·µ2.]
Question 5:
We notice that the system does not have product form. By choosing q(0, 0) = 20 000 we find the following relative state probabilities using local balance equations (reversibility):
          q0,•    q1,•    q2,•    q3,•   q4,•   q5,•   Total
  q•,4    1575     630                                  2205
  q•,2   10000    9000    3150     420                 22570
  q•,0   20000   20000   10000    3000    525     42   53567
  Total  31575   29630   13150    3420    525     42   78342
As the total sum is 78342, we obtain the state probabilities by dividing all terms by this constant.
Question 6:
The carried traffic is obtained from the marginal state probabilities:

Y1 = Σ_{i=0}^{5} i · p(i, •)
   = (1·29630 + 2·13150 + 3·3420 + 4·525 + 5·42) / 78342
   = 68500/78342 = 0.8744

As the offered traffic is A1 = 1 [erlang] we find the traffic congestion:

C1 = (A1 − Y1)/A1 = 9842/78342 = 0.1256 .

In a similar way we find:

Y2 = Σ_{j=0}^{5} j · p(•, j) = (2·22570 + 4·2205) / 78342 = 53960/78342 = 0.6888

C2 = (A2 − Y2)/A2 = 24382/78342 = 0.3112
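As a numerical check, the marginal sums and congestion values can be reproduced with a short script. This is only a verification sketch: the relative state values q(i, j) are those tabulated in Question 5, the normalization constant 78342 is the table total, and the offered traffics A1 = A2 = 1 erlang (in channels) are the values used in this solution.

```python
# Relative two-dimensional state values q(i, j) from Question 5.
q = {(0, 0): 20000, (1, 0): 20000, (2, 0): 10000, (3, 0): 3000,
     (4, 0): 525, (5, 0): 42,
     (0, 2): 10000, (1, 2): 9000, (2, 2): 3150, (3, 2): 420,
     (0, 4): 1575, (1, 4): 630}
total = sum(q.values())                              # 78342

# Carried traffic of each stream from the marginal distributions.
Y1 = sum(i * v for (i, j), v in q.items()) / total   # stream 1
Y2 = sum(j * v for (i, j), v in q.items()) / total   # stream 2

A1, A2 = 1.0, 1.0                                    # offered traffic [erlang]
C1 = (A1 - Y1) / A1                                  # traffic congestion
C2 = (A2 - Y2) / A2
```

Running the sketch reproduces C1 = 9842/78342 and C2 = 24382/78342.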
Question 7
To find the call congestion we have to sum over all states. For type one the number of call attempts blocked per time unit, n1, becomes, using the blocking probabilities from Question 2 (taking the states row by row):
n1/λ1 = 0·p(0,0) + 0·p(1,0) + (1/10)·p(2,0) + (3/10)·p(3,0) + (6/10)·p(4,0) + 1·p(5,0)
      + (1/10)·p(0,2) + (3/10)·p(1,2) + (6/10)·p(2,2) + 1·p(3,2)
      + (6/10)·p(0,4) + 1·p(1,4)

Inserting the state probabilities we find:

(n1/λ1) · 78342 = (1/10)·10000 + (3/10)·3000 + (6/10)·525 + 42
                + (1/10)·10000 + (3/10)·9000 + (6/10)·3150 + 420
                + (6/10)·1575 + 630

n1/λ1 = 9842/78342 .

As the number of call attempts offered per time unit is λ1 = 1 we get:

B1 = 9842/78342 = C1 , q.e.d.
Thus we get the same value as for the traffic congestion C1 obtained in Question 6.

In a similar way we find for traffic stream 2 the average number of call attempts blocked per time unit (taking the states row by row):
n2/λ2 = 0·p(0,0) + (1/10)·p(1,0) + (37/100)·p(2,0) + (72/100)·p(3,0) + 1·p(4,0) + 1·p(5,0)
      + (37/100)·p(0,2) + (72/100)·p(1,2) + 1·p(2,2) + 1·p(3,2)
      + 1·p(0,4) + 1·p(1,4)

(n2/λ2) · 78342 = (1/10)·20000 + (37/100)·10000 + (72/100)·3000 + 525 + 42
                + (37/100)·10000 + (72/100)·9000 + 3150 + 420
                + 1575 + 630

n2/λ2 = 24382/78342

Thus we get the same value as for the traffic congestion obtained in Question 6:

B2 = 24382/78342 = 0.3112 = C2 , q.e.d.
We may also find the call congestion values from the global state probabilities because the arrival process is a Poisson process.

In fact, due to the PASTA property the traffic, call, and time congestion are equal (with the proper definition of time congestion).
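These blocking-weighted sums over the two-dimensional states are also easy to check numerically. The following sketch uses the relative state values from Question 5 and the per-state blocking probabilities b1(x), b2(x) quoted above from Question 2 (both are inputs taken from this solution, not computed here):

```python
# Relative state values q(i, j) and blocking probabilities as functions
# of the global state x = i + j (values quoted from the solution above).
q = {(0, 0): 20000, (1, 0): 20000, (2, 0): 10000, (3, 0): 3000,
     (4, 0): 525, (5, 0): 42,
     (0, 2): 10000, (1, 2): 9000, (2, 2): 3150, (3, 2): 420,
     (0, 4): 1575, (1, 4): 630}
b1 = [0, 0, 1/10, 3/10, 6/10, 1]       # single-slot stream
b2 = [0, 1/10, 37/100, 72/100, 1, 1]   # two-slot stream
total = sum(q.values())                # 78342

# Call congestion = blocked attempts / offered attempts (PASTA).
B1 = sum(b1[i + j] * v for (i, j), v in q.items()) / total
B2 = sum(b2[i + j] * v for (i, j), v in q.items()) / total
```

The sketch reproduces B1 = 9842/78342 = 0.1256 and B2 = 24382/78342 = 0.3112, in agreement with the traffic congestion values of Question 6.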
Updated: 2008-04-03
Additional discussion on global state probabilities
From the above we get the following global state probabilities:

p(0) = p(0,0) = 20000/78342
p(1) = p(1,0) = 20000/78342
p(2) = p(2,0) + p(0,2) = 20000/78342
p(3) = p(3,0) + p(1,2) = 12000/78342
p(4) = p(4,0) + p(2,2) + p(0,4) = 5250/78342
p(5) = p(5,0) + p(3,2) + p(1,4) = 1092/78342
We may obtain the global state probabilities in an alternative way by:
1. Calculating global state probabilities assuming full accessibility.
2. Down-scaling the state probabilities by the blocking probabilities for single-slot traffic, as a two-slot call behaves like two single-slot calls. This is valid because the blocking probabilities only depend on the global state probabilities, and not upon the number of calls of a given type.
Without state-dependent blocking we have the state transition diagram shown in the following figure:
[Figure: two-dimensional state transition diagram for the system without blocking: states (i, j) with i = 0, …, 5 single-slot calls (horizontal direction, arrival rate 1, departure rate i) and j = 0, 2, 4 channels occupied by two-slot calls (vertical direction, arrival rate 1/2, departure rate j/2), truncated at i + j ≤ 5.]
By choosing q(0, 0) = 120 we find the relative state probabilities using local balance equations (reversibility):
          q0,•   q1,•   q2,•   q3,•  q4,•  q5,•   Total
  q•,4      15     15                                30
  q•,2      60     60     30     10                 160
  q•,0     120    120     60     20     5     1     326
  Total    195    195     90     30     5     1     426
The relative global state probabilities become:
p(0) = p(0, 0) = 120
p(1) = p(1, 0) = 120
p(2) = p(2, 0) + p(0, 2) = 120
p(3) = p(3, 0) + p(1, 2) = 80
p(4) = p(4, 0) + p(2, 2) + p(0, 4) = 50
p(5) = p(5, 0) + p(3, 2) + p(1, 4) = 26
The scaling probabilities are shown below together with the resulting state probabilities:
State   Relative      Down-scaling           Relative      Absolute
        probability   factor                 probability   probability
  0       120         1                        120         20000/78342
  1       120         1                        120         20000/78342
  2       120         1                        120         20000/78342
  3        80         9/10                      72         12000/78342
  4        50         (9/10)·(7/10)            63/2         5250/78342
  5        26         (9/10)·(7/10)·(4/10)    819/125       1092/78342
We observe that we get the same result as above.
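The down-scaling step can be verified with exact fractions. In the sketch below the full-accessibility relative values and the single-slot blocking probabilities b1(x) are the values from this solution; the factor applied to state x is the product of the passage probabilities (1 − b1(y)) accumulated on the way up:

```python
from fractions import Fraction as F

# Full-accessibility relative state values and single-slot blocking b1(x).
rel = [F(120), F(120), F(120), F(80), F(50), F(26)]
b1 = [F(0), F(0), F(1, 10), F(3, 10), F(6, 10), F(1)]

scaled = []
factor = F(1)
for x, v in enumerate(rel):
    scaled.append(v * factor)     # down-scaled relative probability
    factor *= 1 - b1[x]           # passage factor for the next state

total = sum(scaled)
p = [s / total for s in scaled]   # absolute state probabilities
```

The exact arithmetic reproduces the table: scaled values 120, 120, 120, 72, 63/2, 819/125, and e.g. p(5) = 1092/78342.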
For a Poisson arrival process we can find the call congestion from the global state probabilities:

B1 = 0·p(0) + 0·p(1) + (1/10)·p(2) + (3/10)·p(3) + (6/10)·p(4) + 1·p(5)
   = (0 + 0 + 2000 + 3600 + 3150 + 1092) / 78342 ,

B1 = 9842/78342 , q.e.d.
For type 2 we get:

B2 = 0·p(0) + (1/10)·p(1) + (37/100)·p(2) + (72/100)·p(3) + 1·p(4) + 1·p(5)
   = (0 + 2000 + 7400 + 8640 + 5250 + 1092) / 78342 ,

B2 = 24382/78342 , q.e.d.
Due to the PASTA property we of course get identical values for time, call, and traffic congestion.

For non-Poisson arrival processes (e.g. Engset and Pascal) we cannot calculate the call and traffic congestion from the global state probabilities, only the time congestion.
Updated: 2008-04-03
Solution by generalized algorithm
The exercise is from year 2005. The generalized algorithm had not been published at that time, but it is in fact the most appropriate algorithm for this problem. Let us first calculate the global state probabilities without state-dependent blocking. We of course get the same result as with the convolution algorithm above:
State   Type 1   Type 2   Total
  x     q1(x)    q2(x)    q(x)      p(x)
  0       0        0        1       120/516
  1       1        0        1       120/516
  2      1/2      1/2       1       120/516
  3      1/3      1/3      2/3       80/516
  4      1/6      1/4      5/12      50/516
  5      1/12     2/15    13/60      26/516
Total                    516/120       1
We may modify these state probabilities to take account of the state-dependent blocking as above. A more elegant solution is, however, to modify the algorithm so that formula (7.40) becomes:
p_i(x) = { (d_i/x) · (A_i/Z_i) · p(x − d_i) − ((x − d_i)/x) · ((1 − Z_i)/Z_i) · p_i(x − d_i) } · {1 − b_d(x − d_i)}

For Poisson arrival processes (Z_i = 1) the second term becomes zero and we get:

p_i(x) = (d_i/x) · A_i · p(x − d_i) · {1 − b_d(x − d_i)}
The results are shown in the following table:
State   Type 1     Type 2     Total        Type 1         Type 2         Total
  x     q1(x)      q2(x)      q(x)         p1(x)          p2(x)          p(x)
  0       0          0          1            0              0            20000/78342
  1       1          0          1          20000/78342      0            20000/78342
  2      1/2        1/2         1          10000/78342    10000/78342    20000/78342
  3      3/10       3/10       3/5          6000/78342     6000/78342    12000/78342
  4      21/200     63/400    21/80         2100/78342     3150/78342     5250/78342
  5      21/1000    84/2500  273/5000        420/78342      672/78342     1092/78342
Total                       78342/20000                                      1
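The modified recursion is easy to program. The sketch below implements the Poisson special case; the per-stream parameters (A1 = 1 connection, d1 = 1 channel; A2 = 1/2 connection, d2 = 2 channels) and the blocking probabilities are inferred from the tables in this solution, so they should be read as assumptions of this sketch:

```python
# Poisson special case of the modified recursion (7.40):
#   q_i(x) = (d_i / x) * A_i * q(x - d_i) * (1 - b_i(x - d_i))
n = 5
streams = [
    (1.0, 1, [0, 0, 1/10, 3/10, 6/10, 1]),      # (A_i, d_i, b_i(x) for x = 0..5)
    (0.5, 2, [0, 1/10, 37/100, 72/100, 1, 1]),
]

q = [0.0] * (n + 1)                        # relative global state values
qi = [[0.0] * (n + 1) for _ in streams]    # per-stream contributions
q[0] = 1.0
for x in range(1, n + 1):
    for k, (A, d, b) in enumerate(streams):
        if x >= d:
            qi[k][x] = (d / x) * A * q[x - d] * (1 - b[x - d])
    q[x] = sum(row[x] for row in qi)

total = sum(q)
p = [v / total for v in q]                 # normalized global probabilities
Y1 = sum(x * v for x, v in enumerate(qi[0])) / total  # carried traffic, stream 1
```

The recursion reproduces the table above, e.g. p(5) = 1092/78342 and the carried traffic Y1 = 68500/78342 from Question 6.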
From the table we find the carried traffic of each type:

Y_i = Σ_{x=0}^{5} x · p_i(x) , i = 1, 2 .

We find the same results as in Question 6. Due to the Poisson arrival process (PASTA property), the traffic congestion is equal to the call congestion and the time congestion. The call congestion may always be obtained from the traffic congestion by using (7.46). For systems with limited accessibility the time congestion is obtained by summing over all states the global state probability multiplied by the blocking probability in this state:

E1 = Σ_{i=0}^{5} b1(i) · p(i) ,

E2 = Σ_{i=0}^{5} b2(i) · p(i) .

For non-Poisson arrival processes, time, call, and traffic congestion will of course be different.
2008-04-07
Technical University of Denmark            Teletraffic Engineering & Network Planning
DTU Photonics, Networks group              Course 34340
Exercise 7.11 (Exam 2008)
Service-integrated system with two service classes
We consider a blocked-calls-cleared system with n = 5 channels. This system is offered two traffic streams:

• Stream one: This is PCT-I traffic with Poisson arrival rate λ = 2 call attempts per time unit. The mean service time is 1/µ1 = 0.5 [time unit]. Each call requires d1 = 2 channels. If a call attempt does not obtain both channels simultaneously, then it is blocked.

• Stream two: This is PCT-II Engset traffic offered by S = 6 sources. When a source is idle it generates γ = 1 call attempt per time unit, and the mean service time is 1/µ2 = 1 time unit. Each call requires d2 = 1 channel.
1. Find for each stream the offered traffic and peakedness expressed in
• number of connections,
• number of channels.
2. Find the one-dimensional state probabilities for each of the two traffic streams.
3. Find the global state probabilities of the system using the convolution algorithm.
4. Find the global state probabilities of the system using the generalized state-space based algorithm.

5. Find time congestion E, traffic congestion C, and call congestion B for both traffic streams.
Solution to exercise 7.11 (exam 2008)
Question 1:
Offered traffic in the unit [number of connections]:

• Stream 1: PCT-I traffic

  A1 = λ1 · s1 = 2 · (1/2) = 1 erlang [connections]
  Z1 = 1 [connections]

• Stream 2: PCT-II traffic

  A2 = S · β/(1 + β) = 6 · (1/2) = 3 erlang [connections] , where β = γ/µ2 = 1
  Z2 = 1/(1 + β) = 1/2 [connections]

  The peakedness for PCT-II traffic is for example given in (5.23).

Offered traffic in the unit [number of channels]:

• Stream 1:

  A1 = λ1 · s1 · d1 = 2 · (1/2) · 2 = 2 erlang [channels]
  Z1 = 2 [channels]

  See Example 2.3.3 or Sec. 7.5.

• Stream 2: The same as above (channels = connections).
Question 2:
State probabilities for the Poisson stream:

p(0) = 2/5 ,  p(1) = 0 ,  p(2) = 2/5 ,  p(3) = 0 ,  p(4) = 1/5 ,  p(5) = 0 .

State probabilities for the Engset stream:

p(0) = 1/63 ,  p(1) = 6/63 ,  p(2) = 15/63 ,  p(3) = 20/63 ,  p(4) = 15/63 ,  p(5) = 6/63 .
Question 3:
Convolving the two streams we get the following result:

p(0) = 2/217 ,  p(1) = 12/217 ,  p(2) = 32/217 ,  p(3) = 52/217 ,  p(4) = 61/217 ,  p(5) = 58/217 .
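The convolution and truncation can be sketched in a few lines. The relative one-dimensional distributions are those from Question 2, scaled by 5 and 63 respectively so that the arithmetic stays in integers:

```python
# Relative one-dimensional state values (Question 2, scaled by 5 and 63):
poisson = [2, 0, 2, 0, 1, 0]      # two-slot Poisson stream
engset = [1, 6, 15, 20, 15, 6]    # Engset stream
n = 5

# Convolve the two distributions and truncate at n channels.
q = [sum(poisson[i] * engset[x - i] for i in range(x + 1))
     for x in range(n + 1)]
total = sum(q)                    # normalization constant, 217
p = [v / total for v in q]
```

The sketch yields the relative values 2, 12, 32, 52, 61, 58 with total 217, matching the probabilities above.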
Question 4:
We of course get the same result as in Question 3.

State   Type 1 (Poisson)            Type 2 (Engset)                            Total
  x     q1(x) = (2/x)·q(x−2)        q2(x) = (6/x)·q(x−1) − ((x−1)/x)·q2(x−1)   q(x)
  0            0                               0                                  1
  1     (2/1)·0 = 0                 (6/1)·1 − (0/1)·0 = 6                         6
  2     (2/2)·1 = 1                 (6/2)·6 − (1/2)·6 = 15                       16
  3     (2/3)·6 = 4                 (6/3)·16 − (2/3)·15 = 22                     26
  4     (2/4)·16 = 8                (6/4)·26 − (3/4)·22 = 45/2                  61/2
  5     (2/5)·26 = 52/5             (6/5)·(61/2) − (4/5)·(45/2) = 93/5           29
Total                                                                          217/2

Normalizing by the total 217/2 gives the state probabilities of Question 3, e.g. p(5) = 29/(217/2) = 58/217.
Question 5:
Stream one is a two-slot Poisson arrival stream, for which the PASTA property is valid:

E1 = B1 = C1 = p(4) + p(5) = (61 + 58)/217 = 119/217 = 17/31 = 0.5484

Stream two is single-slot Engset traffic:

E2 = p(5) = 58/217 = 0.2673
The carried traffic can be obtained from the contributions q2(x) to the global state probabilities calculated in Question 4:

Y2 = (2/217) · (1·6 + 2·15 + 3·22 + 4·(45/2) + 5·(93/5))
   = 570/217 = 2.6267

C2 = (A2 − Y2)/A2 = (3 − 570/217)/3 = 27/217 = 0.1244

The call congestion is obtained from the traffic congestion by the general formula (5.49) = (7.46):

B2 = (1 + β)·C2 / (1 + β·C2) = 2·C2/(1 + C2) = 27/122 = 0.2213
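Numerically, the three congestion measures follow directly from the values already computed. In the sketch below the global probabilities are those from Question 3, the contributions q2(x) are those from Question 4, and β = 1 for the Engset stream, all taken as inputs from this solution:

```python
p = [2/217, 12/217, 32/217, 52/217, 61/217, 58/217]   # from Question 3

# Two-slot Poisson stream: blocked in states 4 and 5 (PASTA).
E1 = B1 = C1 = p[4] + p[5]

# Single-slot Engset stream: time congestion only in state 5.
E2 = p[5]

# Carried traffic of stream 2 from its contributions q2(x) (Question 4).
q2 = [0, 6, 15, 22, 45/2, 93/5]
Y2 = (2/217) * sum(x * v for x, v in enumerate(q2))
A2, beta = 3.0, 1.0
C2 = (A2 - Y2) / A2
B2 = (1 + beta) * C2 / (1 + beta * C2)                # formula (7.46)
```

The sketch reproduces E1 = 0.5484, E2 = 0.2673, C2 = 0.1244, and B2 = 0.2213.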
Updated: 2010-03-22
Exercise 9.19 (exam 1997)
QUEUEING SYSTEM M/M/3
We consider Erlang's classical queueing system M/M/3 having 3 servers and an unlimited number of queueing positions. Customers arrive according to a Poisson process with intensity λ = 2 customers per time unit, and the service time is exponentially distributed with intensity µ = 1 [time unit⁻¹]. We assume the queueing discipline is FCFS (FIFO). The state of the system is defined as the total number of customers in the system.
1. Find the offered traffic. Are the conditions for statistical equilibrium fulfilled?
2. Construct the state transition diagram, and find the state probabilities when the system is in statistical equilibrium.

3. Calculate the probability of waiting (Erlang's C-formula) by using the recursive formula for Erlang's B-formula. The individual steps of the recursion shall appear in the solution.

4. Find (a) the mean queue length at a random point of time, (b) the mean waiting time for all customers, and (c) the mean waiting time for customers experiencing a waiting time > 0.

5. (Advanced question) Assume that the three servers are hunted in sequential order and find the traffic carried by each of the three servers (use the numerical values from Question 3).

6. Draw a phase-transition diagram for the response time (service time + eventual waiting time), and find the mean value and form factor of this response time.
Solution to Exercise 9.19: (Exam 1997)
Question 1:
A = λ · s = λ/µ = 2 · 1 = 2 erlang.

The conditions for statistical equilibrium are fulfilled because A < n (9.3).
Question 2:
The state transition diagram becomes as follows (cf. Fig. 9.1):
[Figure: one-dimensional state transition diagram with states 0, 1, 2, 3, 4, 5, …; the arrival rate is λ = 2 in every state, and the departure rates are 1, 2, 3, 3, 3, 3, …]
State probability p(0) is obtained by using (9.4) and becomes:

p(0) = 1 / [ Σ_{ν=0}^{2} 2^ν/ν! + (2³/3!) · 3/(3 − 2) ] = 1/9

The other state probabilities are given by (9.2):

p(i) = (1/9) · 2^i/i! ,              0 ≤ i ≤ 3 ,
p(i) = (1/9) · 2^i/(3! · 3^{i−3}) ,  i ≥ 3 .

Thus:

p(0) = 1/9 ,   p(3) = 4/27 ,
p(1) = 2/9 ,   p(4) = 8/81 ,
p(2) = 2/9 ,   p(5) = 16/243 ,

etc.
Question 3:
The probability of experiencing waiting time is given by (9.9):

E_{2,n}(A) = n / (n − A·[1 − E_{1,n}(A)]) · E_{1,n}(A) .

In this case we get:

P{W > 0} = E_{2,3}(2) = 3 / (3 − 2·(1 − E_{1,3}(2))) · E_{1,3}(2) .

We obtain E_{1,3}(2) by means of the recursion formula (4.29):

E_{1,x}(A) = A·E_{1,x−1}(A) / (x + A·E_{1,x−1}(A)) ,   E_{1,0}(A) = 1 .

We get:

E_{1,0}(2) = 1 ,
E_{1,1}(2) = 2·1/(1 + 2·1) = 2/3 = 0.6667 ,
E_{1,2}(2) = 2·(2/3)/(2 + 2·(2/3)) = 2/5 = 0.4000 ,
E_{1,3}(2) = 2·(2/5)/(3 + 2·(2/5)) = 4/19 = 0.2105 .

The probability of experiencing a positive waiting time thus becomes:

P{W > 0} = E_{2,3}(2) = 3/(3 − 2·(1 − 4/19)) · (4/19) = 4/9 = 0.4444 .
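The recursion for E_{1,x}(A) and formula (9.9) translate directly into code; the following is a minimal sketch:

```python
def erlang_b(A, n):
    """Erlang's B-formula E_{1,n}(A) by the recursion (4.29)."""
    E = 1.0                          # E_{1,0}(A) = 1
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def erlang_c(A, n):
    """Erlang's C-formula E_{2,n}(A) from the B-formula, cf. (9.9)."""
    E = erlang_b(A, n)
    return n * E / (n - A * (1 - E))
```

For example, erlang_b(2, 3) returns 4/19 ≈ 0.2105 and erlang_c(2, 3) returns 4/9 ≈ 0.4444, matching the steps above.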
Question 4:
The mean queue length is given by (9.12):

L3 = E_{2,3} · A/(n − A) = (4/9) · 2/(3 − 2) = 8/9 = 0.8889 .

The mean waiting time for all customers is given by (9.15):

W3 = L3/λ = 4/9 = 0.4444 .

The mean waiting time for customers which experience a positive waiting time is obtained from (9.17):

w = W3/E_{2,3} = s/(n − A) = (4/9)/(4/9) = 1 .
Question 5:
The carried traffic per channel for sequential hunting is given in the advanced part of the textbook:

b_i = y_i + (A/n) · (1 − y_i) · E_{2,n}(A) ,

where y_i is the traffic carried by the i'th channel in a loss system with sequential hunting, i.e. the improvement value (4.13): y_i = A · [E_{1,i−1}(A) − E_{1,i}(A)]:

y1 = A · [E_{1,0}(2) − E_{1,1}(2)] = 2/3 ,
y2 = A · [E_{1,1}(2) − E_{1,2}(2)] = 8/15 ,
y3 = A · [E_{1,2}(2) − E_{1,3}(2)] = 36/95 .
Using the results obtained in Question 3 we get:

b1 = 2/3 + (2/3)·(1/3)·(4/9) = 62/81 = 0.7654 ,
b2 = 8/15 + (2/3)·(7/15)·(4/9) = 272/405 = 0.6716 ,
b3 = 36/95 + (2/3)·(59/95)·(4/9) = 1444/2565 = 76/135 = 0.5630 .

As a control we have b1 + b2 + b3 = 2 [erlang].
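The improvement values and the per-channel carried traffic can be checked numerically with the same Erlang-B recursion as in Question 3:

```python
def erlang_b(A, n):
    """Erlang's B-formula E_{1,n}(A) by the recursion (4.29)."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

A, n = 2.0, 3
Eb = [erlang_b(A, x) for x in range(n + 1)]       # E_{1,x}(A), x = 0..3
y = [A * (Eb[i - 1] - Eb[i]) for i in range(1, n + 1)]  # improvement values
E2 = n * Eb[n] / (n - A * (1 - Eb[n]))            # Erlang C = 4/9
b = [yi + (A / n) * (1 - yi) * E2 for yi in y]    # per-channel carried traffic
```

The sketch gives b1 = 0.7654, b2 = 0.6716, b3 = 0.5630, and the control sum b1 + b2 + b3 = 2 erlang.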
Question 6:
We get the following phase diagram for the response time (it is convenient to put the service time before the waiting time, but it can easily be done in the right order):

[Figure: phase diagram of the response time: an exponential service phase with intensity µ = 1 is always traversed; with probability 4/9 it is followed by an exponential waiting phase with intensity 1/w = 1, and with probability 5/9 the response time ends. Thus p0 = 1, p1 = 4/9, p2 = 0, and q1 = 1, q2 = 4/9, q3 = 0.]
Referring to the notation in Sec. 2.3.3 and in Fig. 2.10 we get the following results. Formula (2.89) yields:

m1 = Σ_{i=1}^{2} q_i/λ_i = 1/1 + (4/9)/1 = 13/9 = 1.4444 .

The variance is obtained from (2.91):

σ² = 2 Σ_{i=1}^{2} ( Σ_{j=1}^{i} 1/λ_j ) · q_i/λ_i − m1²
   = 2 · ( (1/1)·(1/1) + (1/1 + 1/1)·(4/9)/1 ) − (13/9)²
   = 34/9 − (13/9)² = 1.6914 ,

from which the form factor is obtained using (2.13):

ε = 1 + σ²/m1² = 1 + 1.6914/(13/9)² = 1.8107 .
The above procedure is general. It is simpler to consider the response time distribution as a parallel combination of an Erlang-2 distribution and an exponential distribution, all phases having intensity one.

The Erlang-2 distribution has mean value 2, and the second moment is given by (2.51), which becomes 6. This branch has the weighting factor 4/9.

The exponential distribution has mean value 1 and second moment 2. This branch has the weighting factor 5/9.

The response time then has the following mean value, second moment, and form factor:

m1 = (4/9)·2 + (5/9)·1 = 13/9 ,
m2 = (4/9)·6 + (5/9)·2 = 34/9 ,
ε = m2/m1² = 1.8107 .
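The parallel-combination computation is a one-liner per moment; the sketch below uses the waiting probability 4/9 from Question 3 as input:

```python
pw = 4/9   # probability of positive waiting time (Question 3)

# Erlang-2 branch (mean 2, second moment 6) taken with probability 4/9;
# exponential branch (mean 1, second moment 2) with probability 5/9.
m1 = pw * 2 + (1 - pw) * 1
m2 = pw * 6 + (1 - pw) * 2
form_factor = m2 / m1 ** 2
```

This reproduces m1 = 13/9, m2 = 34/9, and the form factor 1.8107.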
Updated: 2010-04-08
Exercise 9.21 (exam 2000)
TWO QUEUEING SYSTEMS IN PARALLEL
We consider a system composed of two subsystems, each made up of one server and one waiting position. Thus there may at most be 4 customers in the system. When a new customer arrives he chooses the subsystem with the fewest customers. If there is the same number of customers in both subsystems, he chooses at random between the two. If both subsystems are full, then the new customer is blocked (lost calls cleared). When a customer has chosen a subsystem, he stays in this subsystem.

Customers arrive according to a Poisson process with arrival rate λ > 0. All holding times are exponentially distributed with mean value 1/µ (µ > 0).
The state of the system can be described by the following 6 states:
(2, 2)
(1, 1) (2, 1)
(0, 0) (1, 0) (2, 0)
where the first index denotes the number of customers in the subsystem having the most customers, and the second index denotes the number of customers in the other subsystem.

1. Construct the two-dimensional state transition diagram of the system. Is the state transition diagram reversible?

2. Set up the node balance equations of the system.

In the following we consider the case λ = µ = 1 [time unit⁻¹].

3. Find the state probabilities under the assumption of statistical equilibrium. (Hint: p(0, 0) = 7/21 and p(1, 1) = 3/21.)

4. What is the proportion of customers experiencing

a. Immediate service?

b. Waiting before service?

c. Blocking?

5. When the system is in state (2,0) one server is idle, even though a customer is waiting in the queue. Which benefits would be obtained by moving the waiting customer to the idle server?
Solution to exercise 9.21 (exam 2000)
[Figure: sketch of the system: a Poisson arrival stream with rate λ is split between two subsystems, each with one server (rate µ) and one waiting position.]
Question 1:
In all states the state transition diagram has an arrival intensity λ and a departure intensity equal to the number of busy servers. The state transition diagram is not reversible, as we may, e.g., move from state (2,0) to (1,0), but not in the opposite direction.
[Figure: two-dimensional state transition diagram with states (0,0), (1,0), (2,0), (1,1), (2,1), (2,2). Arrivals (rate λ): (0,0)→(1,0), (1,0)→(1,1), (1,1)→(2,1), (2,0)→(2,1), (2,1)→(2,2). Departures: (1,0)→(0,0) rate µ, (2,0)→(1,0) rate µ, (1,1)→(1,0) rate 2µ, (2,1)→(1,1) rate µ, (2,1)→(2,0) rate µ, (2,2)→(2,1) rate 2µ.]
Question 2:
The node balance equations become:
(0, 0) : λ · p(0, 0) = µ · p(1, 0)
(1, 0) : (λ+ µ) · p(1, 0) = λ · p(0, 0) + 2µ · p(1, 1) + µ · p(2, 0)
(2, 0) : (λ+ µ) · p(2, 0) = µ · p(2, 1)
(1, 1) : (λ+ 2µ) · p(1, 1) = λ · p(1, 0) + µ · p(2, 1)
(2, 1) : (λ+ 2µ) · p(2, 1) = λ · p(2, 0) + λ · p(1, 1) + 2µ · p(2, 2)
(2, 2) : 2µ · p(2, 2) = λ · p(2, 1)
Question 3:
(0, 0): ⇒ p(1, 0) = p(0, 0) = 7/21 = 1/3

(1, 1): ⇒ p(2, 1) = 3·p(1, 1) − p(1, 0) = 3/7 − 1/3 = 2/21

(2, 0): ⇒ p(2, 0) = (1/2)·p(2, 1) = 1/21

(2, 2): ⇒ p(2, 2) = (1/2)·p(2, 1) = 1/21

The result thus becomes, arranged in the same way as the state transition diagram:

p(2, 2) = 1/21

p(1, 1) = 3/21   p(2, 1) = 2/21

p(0, 0) = 7/21   p(1, 0) = 7/21   p(2, 0) = 1/21
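With the hint values inserted, the substitutions above can be verified with exact arithmetic; the node equations used are those set up in Question 2 (λ = µ = 1):

```python
from fractions import Fraction as F

lam = mu = F(1)
p = {(0, 0): F(7, 21), (1, 1): F(3, 21)}    # hint values

p[(1, 0)] = lam / mu * p[(0, 0)]                                   # node (0,0)
p[(2, 1)] = ((lam + 2 * mu) * p[(1, 1)] - lam * p[(1, 0)]) / mu    # node (1,1)
p[(2, 0)] = mu * p[(2, 1)] / (lam + mu)                            # node (2,0)
p[(2, 2)] = lam * p[(2, 1)] / (2 * mu)                             # node (2,2)
```

The six probabilities sum to 1, confirming the solution.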
Question 4:
p{immediate} = p(0, 0) + p(1, 0) + p(2, 0) = 15/21

p{waiting} = p(1, 1) + p(2, 1) = 5/21

p{blocking} = p(2, 2) = 1/21
Question 5:
We get a system with full accessibility and the following state transition diagram:

[Figure: one-dimensional state transition diagram with states 0, 1, 2, 3, 4; arrival rate 1 in every state and departure rates 1, 2, 2, 2.]
The state probabilities for this system are:

p(0) = 8/23 = 168/483 ,
p(1) = 8/23 = 168/483 ,
p(2) = 4/23 = 84/483 ,
p(3) = 2/23 = 42/483 ,
p(4) = 1/23 = 21/483 .
In comparison with the system above we find the following changes:

p{immediate}: 15/21 = 345/483 ⇒ 336/483
p{waiting}:    5/21 = 115/483 ⇒ 126/483
p{blocked}:    1/21 =  23/483 ⇒  21/483

• Drawbacks:
  The probability of immediate service is reduced by 9/483.
  The probability of waiting is increased by 11/483.

• Advantages:
  The blocking probability is (only) reduced by 2/483.
We thus obtain a better utilization of the system. Furthermore, we reduce the mean waiting time for calls which experience waiting from w = 1 to w = 2/3. Given that a customer experiences waiting time, he arrives either in state 2 or in state 3. The probability of state 2 is twice the state probability of state 3. Two out of three waiting customers thus arrive in state 2, where the mean waiting time is 0.5, whereas one out of three waiting customers arrives in state 3, where the mean waiting time is 1.
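The comparison can be reproduced exactly. The sketch below uses the relative state values 8, 8, 4, 2, 1 of the pooled system derived above:

```python
from fractions import Fraction as F

# Pooled system: M/M/2 with two extra waiting positions, λ = µ = 1.
rel = [F(8), F(8), F(4), F(2), F(1)]
total = sum(rel)                       # 23
p = [r / total for r in rel]

immediate = p[0] + p[1]                # served at once
waiting = p[2] + p[3]                  # must wait
blocked = p[4]

# Conditional mean waiting time: 2/3 of the waiting customers arrive in
# state 2 (mean wait 1/2 with both servers busy), 1/3 in state 3 (mean wait 1).
w = F(2, 3) * F(1, 2) + F(1, 3) * F(1)
```

The exact fractions reproduce 336/483, 126/483, 21/483, and the reduced conditional waiting time w = 2/3.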
Updated: 2003-03-18
Exercise 9.22 (exam 2001)
WAITING TIME SYSTEM WITH HYSTERESIS
We consider a pure delay system where customers arrive according to a Poisson process with intensity λ = 2 customers per time unit. The service time is exponentially distributed with mean value 0.5 time units. We assume the queueing discipline is FCFS = FIFO.
1. Find the offered traffic.
The customers are served by one or two servers. There is always at least one server available. The other server opens (starts serving customers) when the queue length becomes two. When there are no more customers in the queue, the server which first becomes idle is closed. The queue length is defined as the number of waiting customers and does not include customers being served. We define the state of the system as (i, j), where i is the total number of customers in the system and j is the number of open servers. Therefore, we obtain the state transition diagram with the states and structure shown in the following figure.
[Figure: structure of the state transition diagram with states (0,1), (1,1), (2,1), … in one row and (2,2), (3,2), (4,2), … in the other.]
2. Construct the state transition diagram of the system.
3. Find the state probabilities under the assumption of statistical equilibrium. (Hint: p(0, 1) = 2/7, p(2, 2) = 1/14.)
4. Find the average queue length using the state probabilities.
5. Give the waiting time distribution of a customer who arrives in a state (i, 2) , i ≥ 3.
6. Give the waiting time distribution in the form of a phase diagram for a customer which arrives in state (1, 1) and brings the system into state (2, 1), and find the mean waiting time of the customer. (Hints: The next event takes place after a known time interval and is either a departure or an arrival. In the first case the waiting time terminates. In the last case the second server starts operating, and as the queue discipline is FCFS = FIFO, we know the remaining waiting time.)
Solution to exercise 9.22 (exam 2001)
Question 1:
If we denote the mean service time by s, the offered traffic becomes:

A = λ · s = 2 · 0.5 = 1 [erlang] .
Question 2:
The state transition diagram becomes as follows. The arrival rate is always λ; all states with one server have departure rate µ, and all states with two servers have departure rate 2µ.

[State transition diagram: upper row (0,1) ⇄ (1,1) ⇄ (2,1), lower row (2,2) ⇄ (3,2) ⇄ (4,2) ⇄ ···; arrivals with rate λ, departures with rate µ (one server) or 2µ (two servers).]
Question 3:
The node balance equations yield (λ = µ = 2 events/time unit):

Node (0,1): 2 · p(0,1) = 2 · p(1,1),  p(1,1) = 2/7.
Node (2,1): 4 · p(2,1) = 2 · p(1,1),  p(2,1) = 1/7.
Node (2,2): 6 · p(2,2) = 4 · p(3,2),  p(3,2) = 3/28,
p(4,2) = 3/56,
p(5,2) = 3/112,
or in general:

p(i,2) = 3/(7 · 2^{i−1}),  i ≥ 3,

where the last equations are obtained by simple cut equations. The tail probabilities form a geometric series with factor 1/2. As a control we have:

p(0,1) + p(1,1) + p(2,1) + p(2,2) + Σ_{i=3}^{∞} p(i,2) = 2/7 + 2/7 + 1/7 + 1/14 + (3/28)/(1 − 1/2) = 1.
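The normalization check above can also be done numerically; a small sketch (my own check, not part of the original solution) using the state probabilities just derived:

```python
# State probabilities for the exam 2001 system (lambda = mu = 2):
# p(0,1) = p(1,1) = 2/7, p(2,1) = 1/7, p(2,2) = 1/14,
# p(i,2) = 3 / (7 * 2**(i-1)) for i >= 3 (geometric tail, truncated here).
total = 2/7 + 2/7 + 1/7 + 1/14 + sum(3 / (7 * 2 ** (i - 1)) for i in range(3, 60))
print(round(total, 9))  # 1.0
```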
Question 4:
The average queue length is obtained from the state probabilities:
L = 1 · p(2,1) + Σ_{i=3}^{∞} (i−2) · p(i,2)

  = 1/7 + 3/28 + (3/28) · (2 · 1/2 + 3 · 1/4 + 4 · 1/8 + ···)

  = 1/7 + 3/28 + (3/28) · ( ∂/∂x Σ_{i=2}^{∞} x^i )|_{x=1/2}

  = 1/7 + 3/28 + (3/28) · ( ∂/∂x ( x²/(1−x) ) )|_{x=1/2}

  = 1/7 + 3/28 + (3/28) · ( (2x − x²)/(1−x)² )|_{x=1/2}

  = 1/7 + 3/28 + (3/28) · 3

  = 4/7.
(The most important step is the first equation, which follows directly from the definition.)
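A corresponding numerical sketch (again my own check, not part of the exam solution) of the queue-length sum, truncating the geometric tail:

```python
# Mean queue length for the exam 2001 system, using the state probabilities
# found above: L = 1 * p(2,1) + sum_{i>=3} (i - 2) * p(i,2); expect 4/7.
p21 = 1 / 7
L = p21 + sum((i - 2) * 3 / (7 * 2 ** (i - 1)) for i in range(3, 80))
print(round(L, 6))  # 0.571429, i.e. 4/7
```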
Question 5:
As we have the FCFS queueing discipline, a customer arriving in state (i,2), i ≥ 3, bringing the system into state (i+1,2), experiences an Erlang-(i−1) distributed waiting time. (This answer is sufficient).
Additional: If we consider an arbitrary one of the customers arriving in states (i,2), i ≥ 3, then the waiting time becomes Cox-distributed with constant branching probabilities after the second phase, as shown in the following figure (we have to exclude state (2,2) because the probability of this state is not twice the probability of state (3,2)):

[Phase diagram: a series of exponential phases, each with intensity 2µ; after the second phase the customer continues to the next phase with probability 1/2.]

Thus customers arriving in state (3,2) leave (end waiting) by the first branch, customers arriving in state 4 leave by the second branch, etc.
This Cox-distribution is equivalent to a single exponential distribution with intensity 2µ in series with an exponential distribution with intensity µ, as the latter is a weighted sum of Erlang-k distributions with geometric weight factors.

[Phase diagram: an exponential phase with intensity 2µ in series with an exponential phase with intensity µ.]
Question 6:
We call the customer bringing the system into state (2,1) the tagged customer. The waiting time of the tagged customer terminates when the next event occurs. The next event is either a departure or an arrival. If a customer departs, then the tagged customer starts service. If a new customer arrives, then the second server opens and the tagged customer is served (FCFS). So in both cases the tagged customer initiates service.
So the waiting time is exponentially distributed with intensity λ+ µ = 4:
F(t) = 1 − e^{−4t},  t ≥ 0.
The phase diagram becomes:
[Phase diagram: a single exponential phase with intensity λ + µ = 4.]
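As a sanity check of this argument (my own addition, not part of the exam solution): the minimum of two independent exponential variables with rates λ and µ is exponential with rate λ + µ. A small Monte-Carlo sketch with λ = µ = 2:

```python
import random

random.seed(1)
lam, mu = 2.0, 2.0   # arrival and service rate from the exercise

# The tagged customer's waiting time is the time to the next event,
# i.e. min(Exp(lam), Exp(mu)), which is Exp(lam + mu) = Exp(4).
n = 200_000
mean_w = sum(min(random.expovariate(lam), random.expovariate(mu))
             for _ in range(n)) / n
print(round(mean_w, 2))  # close to 1/(lam + mu) = 0.25
```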
Updated: 2005-03-03
Exercise 9.25 (Exam 2006)
Preferential traffic
We consider a system with full accessibility and n = 4 servers. Two different types of customers arrive according to Poisson processes.

Type one customers arrive with rate λ1 = 1 customer per time unit. If all servers are busy, then type one customers wait in an infinite queue until they are served. (These customers are preferential customers, for example hand-over calls in wireless systems.)

Type two customers arrive with rate λ2 = 2 customers per time unit. If all servers are busy, then type two customers are blocked (Lost-Calls-Cleared). (These customers are ordinary customers, for example new calls in wireless systems.)
All customers have the same mean service time µ⁻¹ = 1 [time unit].
1. Find the offered traffic A1 for type one, A2 for type two, and the total offered traffic. Which restrictions should Ai (i = 1, 2) fulfill to ensure statistical equilibrium?

2. Construct a one-dimensional state transition diagram when the state of the system is defined as the total number of customers in the system (either being served or waiting).

3. Assume statistical equilibrium and find the state probabilities. Find the probability that
– a type one customer is delayed.
– a type two customer is blocked.
4. Find the mean queue length, the mean waiting time for all customers of type one, and themean waiting time for delayed customers of type one.
5. Assume the queueing discipline is FCFS. Write down the waiting time distribution for delayed type one customers (use the analogy with the state transition diagram of an Erlang-C system when all servers are busy).
Solution to Exercise 9.25 Exam 2006
Question 1:
By definition the offered traffic is the average number of calls per mean service time:
A1 = λ1/µ = 1 [erlang],
A2 = λ2/µ = 2 [erlang],
At = A1 + A2 = 3 [erlang].

We have the following restrictions on the offered traffic:

0 ≤ A1 < 4,
0 ≤ A2 < ∞.
A1 has to be less than the number of channels, as this traffic waits until it is served. A2 may have any value, as it is lost when all channels are busy. This is also seen from the following state transition diagram.
Question 2:
The state transition diagram becomes as follows:
[State transition diagram: states 0, 1, 2, 3, 4, 5, 6, 7, ... with arrival rates 3, 3, 3, 3, 1, 1, 1, 1, ... (λ1 + λ2 = 3 below state 4, λ1 = 1 from state 4 on) and departure rates 1, 2, 3, 4, 4, 4, 4, 4, ... .]
Question 3:
The relative state probabilities q(i) = p(i)/p(0) and the absolute state probabilities p(i) become as follows:

q(0) = 1             p(0) = 2/35
q(1) = 3             p(1) = 6/35
q(2) = 9/2           p(2) = 9/35
q(3) = 9/2           p(3) = 9/35
q(4) = 27/(2·4)      p(4) = 27/140
q(5) = 27/(2·4²)     p(5) = 27/560
q(6) = 27/(2·4³)     p(6) = 27/2240
q(7) = 27/(2·4⁴)     p(7) = 27/8960
q(8) = 27/(2·4⁵)     p(8) = 27/35840
q(9) = 27/(2·4⁶)     p(9) = 27/143360
...                  ...
Total = 35/2         Total = 1
We obtain the total sum of the q(i)'s as follows. The tail of the distribution q(i) is a geometric series. If we sum the terms from q(4) we get (cf. the derivations leading to (9.28)):

Σ_{i=4}^{∞} q(i) = (27/8) · ( 1 + 1/4 + (1/4)² + (1/4)³ + ··· ) = (27/8) · 1/(1 − 1/4) = 9/2.
When all channels are busy, calls of the first traffic stream are delayed and calls of the second stream are lost:

P{delay, type 1} = P{blocking, type 2} = Σ_{i=4}^{∞} p(i) = (9/2) · (2/35) = 9/35.
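The table and the delay probability can be reproduced with a short numerical sketch (my own check, not part of the original solution), building the birth-death chain and truncating the geometric tail:

```python
# Birth-death chain for the exam 2006 system: n = 4 servers, mu = 1;
# arrival rate 3 (both streams) below state 4, 1 (type one only) from state 4.
n, mu = 4, 1.0
lam1, lam2 = 1.0, 2.0
N = 200                      # truncation point; the tail decays like (1/4)**i

q = [1.0]
for i in range(1, N):
    arrival = lam1 if i - 1 >= n else lam1 + lam2
    q.append(q[-1] * arrival / (min(i, n) * mu))

total = sum(q)               # should be 35/2 = 17.5
p = [x / total for x in q]

d_delay = sum(p[n:])         # P{type one delayed} = P{type two blocked}
print(round(total, 4), round(d_delay, 6))  # 17.5 0.257143
```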
Question 4:

The mean queue length becomes:

L = Σ_{i=5}^{∞} (i−4) · p(i) = 1 · p(5) + 2 · p(6) + 3 · p(7) + ··· = 48/560 = 3/35 [customers].
This may be obtained as follows. The probability of a queue length greater than zero is:

P{L > 0} = Σ_{i=5}^{∞} p(i) = 9/140.   (13.19)

Given that we have a positive queue, the mean queue length is the mean value of a geometric distribution (Table 3.1) starting with class one and p = 1 − 1/4 = 3/4, i.e. 4/3. This can also be obtained using (9.13) with n = 4 and A = 1 (when we have a queue). The mean value thus becomes:

L = (4/3) · (9/140) = 3/35.
The arrival rate of type one customers is λ1 = 1. Using Little's theorem we find the mean waiting time for all customers:

W = (1/λ1) · L = 3/35 [time units].

The probability of delay D1 for type one customers was obtained in Question 3, and we find the mean waiting time for delayed customers:

w = (1/D1) · W = (35/9) · (3/35) = 1/3 [time units].
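A self-contained numerical sketch (my own check, not part of the original solution) of L, W, and w:

```python
# L, W and w for the exam 2006 system (n = 4, mu = 1, lambda1 = 1, lambda2 = 2).
n, mu, lam1, lam2 = 4, 1.0, 1.0, 2.0
N = 200

q = [1.0]
for i in range(1, N):
    arrival = lam1 if i - 1 >= n else lam1 + lam2
    q.append(q[-1] * arrival / (min(i, n) * mu))
p = [x / sum(q) for x in q]

L = sum((i - n) * p[i] for i in range(n + 1, N))  # mean queue length
W = L / lam1                                      # Little's law, all type one
D = sum(p[n:])                                    # delay probability, 9/35
w = W / D                                         # delayed customers only

print(round(L, 6), round(W, 6), round(w, 6))  # 0.085714 0.085714 0.333333
```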
Question 5:

Comparing the state transition diagram of Fig. 9.1, the waiting time distribution for delayed customers (9.28), and the waiting time distribution for all customers (9.28), we conclude that the waiting time distribution is exponential with intensity nµ − λ1 = 3 (mean value 1/3, in agreement with the above result). The waiting time distribution for delayed customers becomes:

F(t) = 1 − e^{−3t},  t ≥ 0.

The waiting time distribution for all customers becomes:

F(t) = 1 − (9/35) · e^{−3t},  t ≥ 0.
Updated 2010-04-21
Exercise 9.26 (Exam 2007)
Palm’s call center model
We consider a queueing system with full accessibility, n servers, and an unlimited number of queueing positions.

The system is offered PCT-I traffic with arrival intensity λ [calls/time unit] and mean service time µ⁻¹ [time units]. A customer waiting in the queue has limited patience and chooses to leave (renege) the queue with a constant rate λr [time unit⁻¹]. Thus a customer gives up waiting after an exponential time interval with mean value λr⁻¹ [time units] if the waiting time becomes longer than this time interval. Let the state of the system be defined as the total number of customers which are being served or are waiting.
1. Construct the one-dimensional state transition diagram of the system.
2. Find the state probabilities, expressing all states by state p(0), under the assumption of statistical equilibrium.
3. Which classical traffic models correspond to:
(a) λr = 0 ?
(b) λr = µ ?
(c) λr =∞ ?
Which restrictions must be fulfilled by λ and µ in each of the three cases for the system to enter statistical equilibrium?
4. Show that the probability that a random call attempt experiences a positive waiting time is given by:

D = p(0) · (A^n/n!) · (1 + Q) = [ (A^n/n!) · (1 + Q) ] / [ Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!) · (1 + Q) ],

where

Q = Σ_{i=1}^{∞} A^i / Π_{j=1}^{i} (n + j · λr/µ)   and   A = λ/µ.
5. Find, expressed by state probabilities, the average queue length L at a random point of time.
6. Find, expressed by L, the proportion of call attempts which become impatient and leave the queue.

The following question was not included at the exam:

7. Find, expressed by L, the average waiting time for all customers. Find the average waiting time for waiting customers (which are either served or leave the queue).
Solution to exercise 9.26
Question 1:
The state transition diagram of the system becomes as follows:

[State transition diagram: states 0, 1, 2, ..., n, n+1, ..., n+i, ...; arrival rate λ in every state; departure rate iµ from state i for i ≤ n, and nµ + iλr from state n+i for i ≥ 1.]
Question 2:
Under the assumption of statistical equilibrium, the simplest way to obtain the state probabilities is from cut equations:

Cut 0 ↔ 1:  λ · p(0) = µ · p(1)
Cut 1 ↔ 2:  λ · p(1) = 2µ · p(2)
...
Cut (i−1) ↔ i:  λ · p(i−1) = iµ · p(i)
...
Cut n ↔ (n+1):  λ · p(n) = (nµ + λr) · p(n+1)
...
Cut (n+i−1) ↔ (n+i):  λ · p(n+i−1) = (nµ + iλr) · p(n+i)
...

These equations are supplemented with the normalization condition Σ_{i=0}^{∞} p(i) = 1.
As we have the offered traffic A = λ/µ, we find:

p(i) = (A^i/i!) · p(0),  0 ≤ i ≤ n,

p(n+i) = [ A^i / Π_{j=1}^{i} (n + j · λr/µ) ] · p(n),  0 < i ≤ ∞.

The above, with a remark on normalization, is the answer.
By normalization we find p(0):

1 = p(0) · [ 1 + A + A²/2 + ··· + A^{n−1}/(n−1)! ] + p(n) · [ 1 + Σ_{i=1}^{∞} A^i / Π_{j=1}^{i} (n + j · λr/µ) ],

1 = p(0) · [ Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!) · (1 + Q) ].

Then (cf. Question 4):

p(0) = 1 / [ Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!) · (1 + Q) ],

where

Q = Σ_{i=1}^{∞} A^i / Π_{j=1}^{i} (n + j · λr/µ).
Question 3:
a) λr = 0: In this case the customers have infinite patience and never give up waiting. This corresponds to Erlang's classical waiting time system (Sec. 9.1).

b) λr = µ: This corresponds to the Poisson distribution (Sec. 4.2), as the waiting positions "serve" customers with the same rate as the servers. The state probabilities thus correspond to the state probabilities of M/G/∞.

c) λr = ∞: The customers give up waiting immediately, and this corresponds to the classical Erlang-B loss system (truncated Poisson, Sec. 4.3).
Statistical equilibrium is only attained if the departure rate is bigger than the arrival rate for all states above some state (n + k):

nµ + iλr > λ,  i > k.

For cases (b) and (c), where λr > 0, this is fulfilled for all i > k for some constant k, so we always attain statistical equilibrium. For λr = 0 we only attain statistical equilibrium when nµ > λ, i.e. A < n, which is the condition for statistical equilibrium in Erlang's waiting time system.

These conditions correspond to the requirement that Q be finite.
Question 4:

The probability D of experiencing a positive waiting time is equal to the probability of arriving in a state n + ν, ν ≥ 0:

D = Σ_{ν=0}^{∞} p(n + ν).

When the arrival process is a Poisson process, time averages are equal to call averages (PASTA property):

D = p(0) · (A^n/n!) · (1 + Q)   (see Question 2),

D = [ (A^n/n!) · (1 + Q) ] / [ Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!) · (1 + Q) ].
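A numerical sketch (my own, with illustrative parameters n = 2, λ = 1.5, µ = 1, λr = 0.5 that are not from the exercise) evaluating Q by truncating its fast-converging series and cross-checking D against the direct sum of the tail probabilities:

```python
from math import factorial

# Palm's model: n servers, offered traffic A = lam/mu, impatience rate lam_r.
# Illustrative parameters (not from the exercise text):
n, mu, lam, lam_r = 2, 1.0, 1.5, 0.5
A = lam / mu

# Q = sum_{i>=1} A**i / prod_{j=1..i}(n + j*lam_r/mu); terms decay fast.
Q, term = 0.0, 1.0
for i in range(1, 200):
    term *= A / (n + i * lam_r / mu)
    Q += term

p0 = 1.0 / (sum(A**v / factorial(v) for v in range(n))
            + A**n / factorial(n) * (1 + Q))
pn = p0 * A**n / factorial(n)
D = pn * (1 + Q)

# Cross-check: D must equal the direct sum p(n) + p(n+1) + ...
direct, term = pn, pn
for i in range(1, 200):
    term *= A / (n + i * lam_r / mu)
    direct += term
print(abs(D - direct) < 1e-12, 0 < D < 1)
```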
Question 5:
The average queue length at a random point of time is equal to the traffic carried by the queueing positions:

L = 0 · Σ_{i=0}^{n−1} p(i) + Σ_{i=0}^{∞} i · p(n+i),

L = Σ_{i=n}^{∞} (i−n) · p(i).
Question 6:
Customers leave the queue with drop-out rate λr. The average number of customers which drop out of the queue per time unit is therefore L · λr. The proportion of all customers dropping out of the queue becomes:

D2 = L · λr / λ.
By considering all possible states we get directly:

D2 = [ Σ_{x=n+1}^{∞} (x−n) · λr · p(x) ] / [ Σ_{x=0}^{∞} λ · p(x) ] = λr · L / λ,   q.e.d.
Question 7: (added after exam)
According to Little’s law (Sec. 3.3) we have for all customers:
L = λ ·W ,
and the mean waiting time W for all customers:
W =1λ· L .
The mean waiting time w for customers who experience a positive waiting time becomes:
w =W
D,
w =L
λ ·D ,
where D has been obtained in Question 4.
L, and thus W , w, D2 etc., can be expressed by Q in the same way as D. It can be shown that:
L = D · λλr·[1− n
A· Q
1 +Q
]
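The closed form for L can be verified numerically against the direct sum Σ i·p(n+i); a sketch with the same illustrative parameters as before (n = 2, λ = 1.5, µ = 1, λr = 0.5, my own choice, not from the exercise):

```python
from math import factorial

# Cross-check of the closed form for L against the direct sum over states.
n, mu, lam, lam_r = 2, 1.0, 1.5, 0.5
A = lam / mu

terms = []          # terms[i-1] = A**i / prod_{j=1..i}(n + j*lam_r/mu)
t = 1.0
for i in range(1, 300):
    t *= A / (n + i * lam_r / mu)
    terms.append(t)
Q = sum(terms)

p0 = 1.0 / (sum(A**v / factorial(v) for v in range(n))
            + A**n / factorial(n) * (1 + Q))
pn = p0 * A**n / factorial(n)
D = pn * (1 + Q)

L_direct = pn * sum(i * t for i, t in enumerate(terms, 1))
L_closed = D * (lam / lam_r) * (1 - (n / A) * Q / (1 + Q))
print(abs(L_direct - L_closed) < 1e-9)  # True
```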
The above model is widely applied for call center planning and was first dealt with by Conny Palm in 1937:

Palm, C. (1937): Några undersökningar över väntetider vid telefonanläggningar. Tekniska Meddelanden från Kungl. Telegrafstyrelsen, 1937, No. 7–9, pp. 109–127.

Palm, C. (1937): Étude des délais d'attente (Some investigations into waiting times in telephone plants). Ericsson Technics, 1937, No. 2, pp. 39–56.

In the first edition of R.B. Cooper's book "Introduction to Queueing Theory" (New York 1972, 277 pp.) the model is mentioned on p. 100, exercise 18, as R.I. Wilkinson's "j factor", and he refers to an unpublished work from 1937. No doubt Wilkinson "adopted" the idea from Palm.
2008-04-14
Exercise 10.19 (Exam 1997)
LEAKY BUCKET: M/D/1/2 QUEUEING SYSTEM
Background: Leaky Bucket is a mechanism for controlling the cell (packet) arrival process of a connection in an ATM system. The mechanism corresponds to a queueing system with constant service time (cell length = 53 bytes) and a limited buffer. If the arrival process is a Poisson process, then we have an M/D/1/k system. The size of the leak corresponds to the average arrival rate accepted in the long run, whereas the size of the bucket (buffer) denotes the excess allowed during a short time interval. When implemented in an ATM system the mechanism operates as a virtual queueing system, where a cell is either accepted immediately or rejected. A counter indicates the value of the load function. A contract between the operator (network) and the user (connection) agrees upon the size of the leak and the bucket, and based on this information the network is able to guarantee a certain quality of service.

Exercise: We first consider the queueing system M/D/1, which has a Poisson arrival process with intensity λ = 0.6931 calls per time unit, constant service time which we choose as the time unit, and one server. The number of queueing positions is unlimited, and we assume that the system is in statistical equilibrium.
1. Find the first state probabilities p(0), p(1) and p(2) (notice that e^{0.6931} = 2).
We now assume that there is only one queueing position (M/D/1/2).
2. Find, from the state probabilities in Question 1, by applying Keilson's formulæ in Sec. 10.3.4, the state probabilities p2(0), p2(1) and p2(2) in the finite system.
3. What is the probability that a call:
(a) is served immediately?
(b) is delayed before service?
(c) is rejected?
4. Find by using Little’s theorem the mean waiting time for customers which experience a positivewaiting time.
5. What is the probability that a busy period (a period where the server is busy) has the durationone time unit?
6. Find the probability that a busy period has the duration i time units.
Solution to Exercise 10.19 (Exam 1997)
The constant service time is chosen as the time unit (s = h = 1) and λ = 0.6931. We then find A = λ · s = 0.6931.
Question 1:
For an M/D/1 system we obtain the state probabilities p(0), p(1) and p(2) from the formulæ in Sec. 10.4.2:

p(0) = 1 − A = 0.3069,
p(1) = (1 − A)(e^A − 1) = 0.3069,
p(2) = (1 − A)(e^{2A} − e^A(1 + A)) = 0.1884.
Question 2:
We now consider an M/D/1/2 system, and we apply the formulæ in Sec. 10.3.4.

First we find Q2 (10.11):

Q2 = Σ_{j=2}^{∞} p(j) = 1 − p(0) − p(1) = 0.3862.
Then the state probabilities of the truncated system p2(0), p2(1) and p2(2) are obtained by rescaling as given in (10.9) and (10.10):

p2(0) = p(0) / (1 − A · Q2) = 0.3069 / (1 − 0.6931 · 0.3862) = 0.4191,

p2(1) = p(1) / (1 − A · Q2) = 0.3069 / (1 − 0.6931 · 0.3862) = 0.4191,

p2(2) = (1 − A) · Q2 / (1 − A · Q2) = 0.3069 · 0.3862 / (1 − 0.6931 · 0.3862) = 0.1618.
These state probabilities of course add to one.
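Both computations can be reproduced with a few lines (a sketch of my own, not part of the original solution):

```python
from math import exp

# M/D/1 with A = lambda * s = 0.6931, chosen so that exp(A) is close to 2.
A = 0.6931
p0 = 1 - A
p1 = (1 - A) * (exp(A) - 1)

# Keilson rescaling to the finite M/D/1/2 system:
Q2 = 1 - p0 - p1                  # tail probability of the infinite system
denom = 1 - A * Q2
f0 = p0 / denom                   # P{served immediately}
f1 = p1 / denom                   # P{delayed before service}
f2 = (1 - A) * Q2 / denom         # P{rejected}

print(round(f0, 3), round(f1, 3), round(f2, 3))  # close to 0.419 0.419 0.162
```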
We may also apply the solution given in Sec. 10.4.8. Using Fry's equation of state for state zero we get:

p2(0) = {p2(0) + p2(1)} · p(0, h) = {p2(0) + p2(1)} · e^{−0.6931},
p2(0) = {p2(0) + p2(1)} / 2,
p2(1) = p2(0).

From (10.37) we get:

A = 1 − p2(0) + A · p2(2),
0.6931 = 1 − p2(0) + 0.6931 · p2(2),
p2(2) = 1 − 1/0.6931 + p2(0)/0.6931.

Using the normalization restriction (10.36) we find p2(0):

p2(0) + p2(0) + 1 − 1/0.6931 + p2(0)/0.6931 = 1,
p2(0) = 0.4191,

and thus the same results as above.
Question 3:
The probabilities in question become:

1. P{immediate service} = p2(0) = 0.4191,
2. P{waiting before service} = p2(1) = 0.4191,
3. P{blocking} = p2(2) = 0.1618.
Question 4:
Customers arriving when the system is in state 0 are served immediately, and customers arriving when the system is in state 2 are rejected. Thus only customers arriving in state 1 experience a positive waiting time. The arrival intensity of these customers is λx = λ · p2(1). The average queue length is given by:

L = 0 · p2(0) + 0 · p2(1) + 1 · p2(2) = p2(2).

We thus find:

w = L/λx = 0.1618 / (0.6931 · 0.4191) = 0.5570.
Question 5:
The probability that a busy period has the duration one (and only one) time unit is:

P{busy period = one time unit} = P{no arrivals during one time unit}
= 1 − F(1) = 1 − (1 − e^{−λ·1}) = e^{−0.6931}
= 0.5.

This is the same as class zero in a Poisson distribution with mean value λ · 1. This result is used in the following question.
Question 6:
We notice that there is only one queueing position. Therefore, at least one customer must arrive during each service time (only the first is accepted) to maintain a busy period. The number of arriving customers during a service time is independent of the number of customers arriving during other service times.
Immediately after start of a new service time, the queueing position is idle.
We define pa and pb as:

pa = P{during the first i−1 periods at least one customer arrives per period}
   = (1 − P{busy period = 1 time unit})^{i−1} = 0.5^{i−1},

pb = P{no arrivals during the i-th period}
   = P{busy period = 1 time unit} = 0.5.

The probability that a busy period has the duration i time units then becomes:

P{busy period = i time units} = pa · pb = 0.5^{i−1} · 0.5 = 0.5^i.
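A quick numerical sketch (my own check, not part of the exam solution) that these geometric probabilities sum to one and give a mean busy period of 1/0.5 = 2 time units:

```python
# Busy-period distribution for the M/D/1/2 system: P{i time units} = 0.5**i.
probs = [0.5 ** i for i in range(1, 60)]

total = sum(probs)                                      # should be ~1
mean_busy = sum(i * p for i, p in enumerate(probs, 1))  # geometric mean, = 2

print(round(total, 6), round(mean_busy, 6))  # 1.0 2.0
```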
Updated: 2007-04-18
Exercise 10.22 (Exam 2004)
Airport priority queueing system
Let us consider an airport where the server is a single runway with Poisson arrival processes. The traffic during one morning rush hour consists of arriving aircraft and departing aircraft, both using the runway, as follows:
a. Landing: 10 aircraft arrive per hour. The mean service time is m1,1 = 2 [minutes] and the second moment of the service time is m2,1 = 6 [minutes²].

b. Starting: 20 aircraft depart per hour. The mean service time is m1,2 = 1.5 [minutes] and the second moment of the service time is m2,2 = 3 [minutes²].
1. Find the offered traffic for each type, and the total offered traffic.
2. Find the total arrival rate and the mean service time for all aircraft, and use this to control the total offered traffic.
3. Find the mean waiting time for an arbitrary aircraft when there is no priority.
4. Find the mean waiting time for both types when landing aircraft have non-preemptive priority over starting aircraft. Show that the conservation law is fulfilled for this system.
Solution to Exercise 10.22 (exam 2004)
Priority queueing systems are dealt with in Sec. 10.6.
Question 1:
We have to choose a common time unit and in the following we choose minutes.
Landing: The arrival rate is λ1 = 1/6 aircraft per minute, and the mean service time is s1 = 2 minutes. So the offered traffic becomes:

A1 = λ1 · s1 = 1/3 [erlang].

Starting: The arrival rate is λ2 = 1/3 aircraft per minute, and the mean service time is s2 = 1.5 minutes. So the offered traffic becomes:

A2 = λ2 · s2 = 1/2 [erlang].

The total offered traffic becomes:

A = A1 + A2 = 5/6 [erlang].

We notice that the total offered traffic is less than one erlang, so the system is in statistical equilibrium.
Question 2:
The total arrival rate is obtained by adding the two arrival rates (10.54):

λ = λ1 + λ2 = 1/6 + 1/3 = 1/2.

The total mean service time is obtained by weighting the mean values with the relative number of calls of each type (10.55). Considering one minute we find:

s = (λ1/λ) · s1 + (λ2/λ) · s2 = (1/3) · 2 + (2/3) · 1.5 = 5/3 [minutes].
From the total process we find the total offered traffic:

A = λ · s = (1/2) · (5/3) = 5/6 [erlang],   q.e.d.
Question 3:
To find the mean waiting time for all customers in a single-server system we use the Pollaczek-Khintchine formula (10.3) or (10.2). We first have to find the second moment m2 or the form factor ε of the total traffic process.

The second moment is obtained by weighting, as for the mean value (10.56) (random variables in parallel). We find:

m2 = (λ1/λ) · m2,1 + (λ2/λ) · m2,2 = (1/3) · 6 + (2/3) · 3 = 4 [minutes²].

The parameter V = V1,2 becomes:

V = (λ/2) · m2 = ((1/2)/2) · 4 = 1.

We could of course also obtain V by summation over all traffic classes (10.59):

V = Σ_{i=1}^{2} Vi = V1 + V2 = V1,2 = ((1/6)/2) · 6 + ((1/3)/2) · 3 = 1.

The Pollaczek-Khintchine formula (10.3) yields:

W = V/(1 − A) = 1/(1 − 5/6),

W = 6 [minutes].
This is the mean waiting time for all aircraft. If we only consider delayed aircraft, then the mean waiting time is:

w = W/A = 7.2 [minutes].

We of course get the same result if we use (10.2). The form factor becomes:

ε = m2/s² = 4/(5/3)² = 36/25.
Question 4:
We now consider a non-preemptive queueing system where landing aircraft have priority over starting aircraft. The formulæ for this are given in Sec. 10.6.3. For the highest priority class the mean waiting time becomes (10.66):

W1 = V/(1 − A1) = 1/(1 − 1/3),

W1 = 3/2 [minutes].

For the lowest priority we find (10.69):

W2 = V/[(1 − A1)(1 − (A1 + A2))] = W1/(1 − (A1 + A2)),

W2 = 9 [minutes].

The conservation law (10.63) for this system relates the mean waiting time W without priority to the mean waiting times W1 and W2 with a non-preemptive priority discipline:

A · W = A1 · W1 + A2 · W2,

(5/6) · 6 = (1/3) · (3/2) + (1/2) · 9,

5 = 1/2 + 9/2,   q.e.d.
Thus the conservation law is fulfilled.
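All of the above numbers, including the conservation-law check, can be reproduced with a short sketch (my own, not part of the original solution):

```python
# Airport runway as a non-preemptive priority M/G/1 queue (time in minutes).
lam1, s1, m21 = 1 / 6, 2.0, 6.0   # landing: rate, mean, second moment
lam2, s2, m22 = 1 / 3, 1.5, 3.0   # starting: rate, mean, second moment

A1, A2 = lam1 * s1, lam2 * s2
A = A1 + A2
V = (lam1 * m21 + lam2 * m22) / 2  # mean residual work in the system

W = V / (1 - A)                    # no priority (Pollaczek-Khintchine)
W1 = V / (1 - A1)                  # landing, non-preemptive priority
W2 = V / ((1 - A1) * (1 - A))      # starting, lowest priority

diff = abs(A * W - (A1 * W1 + A2 * W2))
print(round(W, 6), round(W1, 6), round(W2, 6))  # 6.0 1.5 9.0
print(diff < 1e-9)  # True: conservation law A*W = A1*W1 + A2*W2
```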
Updated: 2009-05-04
Exercise 10.24 (Exam 2007)
IPP/M/1 queueing system
We consider a GI/M/1 single-server queueing system with an Interrupted Poisson arrival Process (IPP) and an infinite number of waiting positions. The exponential service times have mean value µ⁻¹ [time units]. The IPP has On → Off rate γ [time units⁻¹] and Off → On rate ω [time units⁻¹]. The arrival rate of the Poisson process during On-periods is λ calls per time unit. Note that almost all questions are independent of previous questions.
1. Find the offered traffic.
The states of the system are defined as (i, j), where i is the number of customers in the system (being served or waiting) (i = 0, 1, ...), and j is the state of the arrival process (j = a (On) or j = b (Off)). The structure of the state transition diagram is shown in the following figure:
[State transition diagram: two rows of states (i, a) (On) and (i, b) (Off), i = 0, 1, 2, ...; arrivals with rate λ occur only in On-states, services with rate µ in both rows, and the arrival process switches On → Off with rate γ and Off → On with rate ω.]
.............................................................................................................................................................
..................... .................
......................................
..................... .................
......................................
..................... .................
......................................
..................... .................
............................................................................
.............
.........................
......................................
.............
.........................
.....................
.................
.............
.........................
......................................
.............
.........................
................................................................................................................................................................................. ................................................................................................................................................................................. ................................................................................................................................................................................. .................................................................................................................................................................................0, b 1, b 2, b 3, b
0, a 1, a 2, a 3, a
· · ·
· · ·
· · ·
2. Fill in all transition rates in the state transition diagram.
We assume that we know the state probabilities p(i, j) (time average) of the above system. The probability that a customer immediately before arriving observes the system in state i is denoted by π(i) (call average). Notice that a customer always arrives in states (i, a).
3. Find π(i) expressed by p(i, j).
We now let π(0) = √2 − 1 and µ = 1 [time unit⁻¹] (and λ = γ = ω = 1 [time unit⁻¹]).
4. Find the state probability distribution (numerical values) of π(i), applying the theory for GI/M/1.

5. Find the mean waiting time W for all customers, and the mean waiting time w for delayed customers.

The following question was not included in the exam:
6. Find the waiting distribution of a delayed call.
Technical University of Denmark, Teletraffic Engineering & Network Planning
DTU–Photonics, Networks group, Course 34 340
Solution to exercise 10.24 (exam 2007)
Question 1:
The offered traffic is λ/µ when the IPP-process is on, and zero when it is off. We find
A = p(on)/(p(on) + p(off)) · λ/µ
  = (1/γ)/(1/γ + 1/ω) · λ/µ
  = ω/(ω + γ) · λ/µ
Question 2:
Remember there is only one server.
(Figure: state transition diagram of the solution. In states (i, a) the IPP source is on: arrivals occur with rate λ and the server works with rate µ. In states (i, b) the source is off: there are no arrivals, but service continues with rate µ. Transitions (i, a) → (i, b) have rate γ, and (i, b) → (i, a) have rate ω.)
Question 3:
Per time unit we have λ · p(i, a), i = 0, 1, 2, . . . , call attempts in state (i, a). There are no call attempts in states (i, b), when the IPP process is off. Thus the proportion of call attempts arriving in state (i, a) becomes:
π(i) = λ · p(i, a) / ∑_{j=0}^{∞} λ · p(j, a) = p(i, a) / ∑_{j=0}^{∞} p(j, a) = ((γ + ω)/ω) · p(i, a) ,
as the denominator is the probability that the IPP process is on.
Question 4:
The system considered is a GI/M/1 system, and in Section 10.5.2 (10.43) it is given that the state probabilities just before an arrival are geometrically distributed:

π(i) = (1 − α) · α^i , i = 0, 1, 2, . . .

We know that π(0) = 1 − α = √2 − 1. Thus α = 2 − √2 = 0.5858, and

π(i) = (√2 − 1) · (2 − √2)^i , i = 0, 1, 2, . . .
From this we may find the state probabilities p(i, a) according to Question 3.
Question 5:
From (10.51), respectively (10.53), we have:
W = (1/µ) · α/(1 − α) = (2 − √2)/(√2 − 1) = √2 = 1.4142 .

w = W/D = (1/µ) · 1/(1 − α) = √2/(2 − √2) = √2 + 1 = 2.4142 ,

as D = 1 − π(0) = 2 − √2 .
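The numbers above are easy to cross-check; the following sketch (plain Python, not part of the original solution; variable names are ours) recomputes α, W, and w from π(0) = √2 − 1 with µ = 1:

```python
import math

mu = 1.0                           # service rate [time unit^-1]
alpha = 1.0 - (math.sqrt(2) - 1)   # pi(0) = 1 - alpha  =>  alpha = 2 - sqrt(2)

# Mean waiting time for all customers, (10.51):
W = (1 / mu) * alpha / (1 - alpha)

# Probability of delay D = 1 - pi(0) = alpha; mean wait of delayed customers, (10.53):
D = alpha
w = W / D

print(round(alpha, 4), round(W, 4), round(w, 4))   # 0.5858 1.4142 2.4142
```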
Question 6:
As mentioned in Section 10.5.4, the waiting time of a delayed call is exponentially distributed. It is given by the following Cox distribution, which is equivalent to an exponential distribution with intensity µ · (1 − α) (cf. Fig. 2.12), in agreement with the mean value given above.
(Figure: Cox diagram with phases of intensity µ; after each phase the process continues to the next phase with probability α and terminates with probability 1 − α.)
Updated: 2009-05-06
Exercise 10.25 (Exam 2008)
M/E2/1 queueing system
We consider an M/E2/1 single-server queueing system with a Poisson arrival process and Erlang-2 distributed service times. There is an infinite number of waiting positions. The arrival rate is λ = 0.4 calls per time unit. The service times are Erlang-2 distributed with the same rate µ = 1 [time units⁻¹] in both phases.
1. Find the offered traffic.
2. Find by using Pollaczek-Khintchine's formula the mean waiting time W for all calls. Find also the mean queue length L.
3. Set up a state transition diagram for the above system, where the states of the system are given below. Index a and b specify whether the call being served is in phase a (first phase) or phase b (second phase), respectively.
(Figure: state transition diagram with states 0, 1a, 2a, 3a, 4a, . . . in the first row and 1b, 2b, 3b, 4b, . . . in the second row; the transition rates are to be filled in.)
We now assume that arriving calls have different priorities (two classes): high-priority calls arrive with rate λ1 = 0.1 calls per time unit, and low-priority calls arrive with rate λ2 = 0.3 calls per time unit. The total arrival rate (λ = λ1 + λ2) and the service time distribution are the same as before.
4. Assume non-preemptive priority and find the mean waiting time for each class.
5. Assume preemptive-resume priority and find the mean waiting time for each class.
Solution to exercise 10.25 (exam 2008)
Question 1:
The offered traffic is given by A = λ · s, where s is the mean service time. The arrival rate is λ = 0.4 calls per time unit, and the mean service time is s = 2 time units, as each of the two exponential phases has mean value one time unit. Thus we get

A = 0.4 · 2 = 0.8 [erlang]
Question 2:
The form factor of an Erlang-2 distribution is ε = 3/2 (2.56). Using (10.2) we get:

W = (A · s)/(2(1 − A)) · ε = (0.8 · 2)/(2(1 − 0.8)) · 3/2 ,

W = 6 [time units]

L = λ · W = 0.4 · 6 = 12/5 = 2.4
We may also use (10.3):

W = V/(1 − A) where V = (λ/2) · m2 .

m2 is the second moment of the Erlang-2 distribution (2.52) with parameter µ = 1 in each phase:

m2 = k(k + 1)/µ² = (2 · 3)/1² = 6 .

Thus we get V = 1.2 and W = 1.2/0.2 = 6, the same result as above.
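The two routes through Pollaczek-Khintchine's formula can be checked against each other numerically; a small sketch (plain Python, ours, not part of the original solution):

```python
lam, mu, k = 0.4, 1.0, 2            # arrival rate, phase rate, Erlang-k phases

s = k / mu                          # mean service time, s = 2
A = lam * s                         # offered traffic, A = 0.8
eps = (k + 1) / k                   # form factor of Erlang-k, eps = 3/2

# Route 1, (10.2):
W1 = A * s / (2 * (1 - A)) * eps

# Route 2, (10.3): W = V/(1 - A) with V = (lam/2)*m2 and m2 = k(k+1)/mu^2
m2 = k * (k + 1) / mu**2
V = lam / 2 * m2
W2 = V / (1 - A)

L = lam * W1                        # mean queue length (Little's law)
print(W1, W2, L)
```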
Question 3:
(Figure: the state transition diagram with all intensities filled in: arrival intensity λ = 0.4 one step to the right in each row, and intensity µ = 1 for each phase transition ka → kb and for each service completion out of a b-state.)
Question 4:
Non-preemptive queueing discipline
We have to find the remaining service time at a random point of time (10.59):
V = V1,2 = ∑_{i=1}^{2} Vi = V1 + V2 = ∑_{i=1}^{2} (λi/2) · m2,i

The second moment of the Erlang-2 distribution is obtained above as m2 = 6. We have λ1 = 0.1 and λ2 = 0.3. Thus we get:

V1 = (0.1/2) · 6 = 0.3 ,
V2 = (0.3/2) · 6 = 0.9 ,
V = V1 + V2 = 1.2 = 6/5 .
We have A1 = λ1 · s = 0.2 [erlang] and A2 = λ2 · s = 0.6 [erlang]. Thus we get from (10.66), respectively (10.69):
W1 = V/(1 − A1) = (6/5)/(1 − 0.2) ,

W1 = 3/2 [time units]

W2 = W1/(1 − (A1 + A2)) = (3/2)/(1 − 0.8) ,

W2 = 15/2 [time units]
As a control we may use the conservation law (10.63) and the result from question 2:
A · W = A1 · W1 + A2 · W2

0.8 · 6 = 0.2 · (3/2) + 0.6 · (15/2)

4.8 = 0.3 + 4.5   q.e.d.
Question 5:
Preemptive-resume queueing discipline

Above we have already obtained V1 and V. For the preemptive-resume system we have (10.77):

W1 = V1/(1 − A1) = 0.3/(1 − 0.2) ,

W1 = 3/8 [time units]

W2 = V1,2/((1 − A1)(1 − (A1 + A2))) + (A1/(1 − A1)) · s2
   = (6/5)/((1 − 0.2)(1 − 0.8)) + (0.2/0.8) · 2 ,

W2 = 8 [time units]
In this case we cannot use the conservation law as control, because for preemptive-resumequeueing discipline it is only valid for exponential service time distributions.
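Both priority disciplines of this exercise, and the conservation-law check, can be reproduced with a few lines; a sketch (plain Python, ours) assuming both classes share the Erlang-2 service time with s = 2 and m2 = 6, and using W = 6 from Question 2:

```python
lam1, lam2 = 0.1, 0.3
s, m2 = 2.0, 6.0                    # mean and second moment of the E2 service time

A1, A2 = lam1 * s, lam2 * s         # 0.2 and 0.6 erlang
V1 = lam1 / 2 * m2                  # 0.3
V2 = lam2 / 2 * m2                  # 0.9
V = V1 + V2                         # 1.2

# Non-preemptive priority, (10.66) and (10.69):
W1_np = V / (1 - A1)                # 3/2
W2_np = W1_np / (1 - (A1 + A2))     # 15/2

# Conservation law (10.63), with W = 6 from Question 2:
assert abs((A1 + A2) * 6 - (A1 * W1_np + A2 * W2_np)) < 1e-9

# Preemptive-resume priority, (10.77):
W1_pr = V1 / (1 - A1)                                          # 3/8
W2_pr = V / ((1 - A1) * (1 - (A1 + A2))) + A1 / (1 - A1) * s   # 8
print(W1_np, W2_np, W1_pr, W2_pr)
```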
Updated: 2009-04-28
Exercise 10.26 (Exam 2009)
Priority queueing system
We consider a single-server queueing system with two types of customers arriving according to Poisson processes.

• Customers of type one have arrival rate λ1 = 0.2 [customers/time unit]. The service time is constant with mean value m1,1 = 1 [time unit].

• Customers of type two have arrival rate λ2 = 0.2 [customers/time unit]. The service times are hyper-exponentially distributed with two branches (H2): 90 % of the customers have mean service time m1,a = 1 [time unit], and 10 % of the customers have mean service time m1,b = 21 [time units].
1. Show that the service time distribution of type two customers has mean value m1,2 = 3 [time units], second moment m2,2 = 90 [time units²], and form factor ε = 10.
2. Find the offered traffic for each type of customer, and the total offered traffic.
3. Find by using Pollaczek-Khintchine's formula the mean waiting time W for all customers. Also find the mean waiting time w for delayed customers.
We now assume that type one customers have higher priority than type two customers.
4. Assume non-preemptive priority and find the mean waiting time for each type of customer.

5. Assume preemptive-resume priority and find the mean waiting time for each type of customer.
Solution to exercise 10.26 (exam 2009)
Question 1:
Type two customers have a hyper-exponentially distributed service time, which is dealt with in Sec. 2.3.2.
(Figure: phase diagram of the H2 service time: branch a is chosen with probability 9/10 and has mean m1a = 1; branch b is chosen with probability 1/10 and has mean m1b = 21.)
m1 = 0.9 · 1 + 0.1 · 21 = 3 [time units]

m2 = 0.9 · 2/1² + 0.1 · 2/(1/21)² = 90 [time units²]

ε = m2/m1² = 90/3² = 10
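The moment calculation generalizes to any hyper-exponential mix; a quick numerical check (plain Python, ours):

```python
p_a, p_b = 0.9, 0.1                 # branch probabilities
m_a, m_b = 1.0, 21.0                # branch means

m1 = p_a * m_a + p_b * m_b          # mean service time
# An exponential branch with mean m has second moment 2*m^2:
m2 = p_a * 2 * m_a**2 + p_b * 2 * m_b**2
eps = m2 / m1**2                    # form factor
print(m1, m2, eps)
```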
Question 2:
A1 = λ1 · s1 = 0.2 · 1 = 0.2 [erlang]
A2 = λ2 · s2 = 0.2 · 3 = 0.6 [erlang]
A = A1 + A2 = 0.2 + 0.6 = 0.8 [erlang]
Question 3:
We may obtain the mean waiting time for all customers, W, from (10.2). Then we have to find the mean service time and form factor for all customers; above we found these for type two customers only. Considering the following questions, it is easier to find W from (10.3). V in this version of Pollaczek-Khintchine's formula is for the total traffic process and is obtained from (10.59). We find:
V1,2 = V1 + V2 = (λ1/2) · m2,1 + (λ2/2) · m2,2 = 0.1 · (1² + 90) = 9.1 [time units]

W = V1,2/(1 − A) = 9.1/(1 − 0.8) = 45.5 [time units]

Since a Poisson arrival finds the single server busy with probability A (PASTA), the probability of delay is D = A = 0.8, and the mean waiting time for delayed customers becomes w = W/D = 45.5/0.8 = 56.875 [time units].
Question 4:
Non-preemptive queueing is dealt with in Sec. 10.6.3. We get by using (10.66) for class one and (10.68) for class two:
W1 = V1,2/(1 − A1) = 9.1/(1 − 0.2) = 11.375 [time units]

W2 = W1/(1 − A1 − A2) = 11.375/(1 − 0.2 − 0.6) = 56.875 [time units]
Question 5:
Preemptive-resume queueing discipline is dealt with in Sec. 10.6.6, and we get by using (10.78) for class one and the general formula (10.77) for class two:
W1 = V1/(1 − A1) = (0.1 · 1²)/(1 − 0.2) = 0.125 [time units]

W2 = V1,2/((1 − A1)(1 − A1 − A2)) + (A1/(1 − A1)) · s2
   = 9.1/((1 − 0.2)(1 − 0.8)) + (0.2/(1 − 0.2)) · 3 = 57.625 [time units]
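The same recipe as in exercise 10.25, now with class-dependent service time moments (constant for class 1, H2 for class 2), reproduces all the numbers of this solution; a sketch (plain Python, ours):

```python
lam1, lam2 = 0.2, 0.2
s1, m2_1 = 1.0, 1.0                 # constant service time: second moment = s1^2
s2, m2_2 = 3.0, 90.0                # H2 service time, moments from Question 1

A1, A2 = lam1 * s1, lam2 * s2       # 0.2 and 0.6 erlang
V1 = lam1 / 2 * m2_1                # 0.1
V12 = V1 + lam2 / 2 * m2_2          # 9.1

W = V12 / (1 - (A1 + A2))           # 45.5, Question 3

# Non-preemptive priority, (10.66) and (10.68):
W1_np = V12 / (1 - A1)              # 11.375
W2_np = W1_np / (1 - A1 - A2)       # 56.875

# Preemptive-resume priority, (10.78) and (10.77):
W1_pr = V1 / (1 - A1)                                           # 0.125
W2_pr = V12 / ((1 - A1) * (1 - A1 - A2)) + A1 / (1 - A1) * s2   # 57.625
print(W, W1_np, W2_np, W1_pr, W2_pr)
```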
Updated: 2010-04-21
Exercise 11.1 (Exam 1984)
M/H2/1 QUEUEING SYSTEM WITH PROCESSOR SHARING
Jobs arrive to a computer system according to a Poisson process with intensity λ. The service time follows a hyper-exponential distribution with two phases, denoted by a and b respectively:
F(t) = p (1 − e^{−µa t}) + (1 − p) (1 − e^{−µb t}) .
(Figure: phase diagram of the hyper-exponential service time: phase a with intensity µa is chosen with probability p, phase b with intensity µb with probability 1 − p = q.)
1. Find the offered traffic A.
In the following we assume that A < 1. The computer system operates as a single-server system with processor-sharing queueing discipline, i.e. if there are x jobs in the system, then a job in phase a is served with rate µa/x and a job in phase b is served with rate µb/x.
The state of the system is defined as (i, j), where i is the number of jobs in phase a, and j is the number of jobs in phase b. The state transition diagram becomes two-dimensional with the structure shown in the following figure.
2. Find the missing intensities in connection with the states: (1,1), (1,2), (2,2) and (2,1).
3. Show by considering the above-mentioned four states that the state transition diagram is reversible.
An M/M/1 queueing system with the offered traffic A (A < 1) has the equilibrium state probabilities:

p(i) = p(0) · A^i , i = 0, 1, 2, . . .
4. Show, by expressing the state probabilities by state p(0, 0), that the above M/H2/1 processor-sharing system has the same state probabilities as M/M/1 when we let:

p(i) = ∑_{x=0}^{i} p(x, i − x) , i = 0, 1, 2, . . . ,

and we only consider i = 1 and 2.
(Figure: two-dimensional state transition diagram with states (i, j): bottom row 00, 10, 20, 30, . . ., then 01, 11, 21, 31, . . ., 02, 12, 22, . . ., and 03, 13, . . . . Horizontal transitions have arrival intensity p·λ and departure intensities µa, (1/2)µa, (1/3)µa, (1/4)µa, . . .; vertical transitions have arrival intensity q·λ and departure intensities µb, (1/2)µb, (1/3)µb, (1/4)µb, . . .; the remaining intensities are to be found in Question 2.)
Solution to Exercise 11.1
Question 1:
The average service time s = m1 is (2.67):

s = p/µa + (1 − p)/µb .

The offered traffic then becomes:

A = λ · (p/µa + (1 − p)/µb) .
Question 2: The departure rates are obtained as follows. We first calculate the service rate as if the capacity is infinite; this term is put inside brackets in the figure. For state (i, j) this is (i µa) to the left and (j µb) downwards.

(Figure: the state transition diagram of the exercise with all transition intensities filled in; not reproduced.)
Then we reduce the service rate by a factor equal to the number of jobs in the system: for state (i, j) both of the above service rates are divided by (i + j).

Note: The capacity of the server is constant, independent of the state, but the service rate (the intensity at which customers leave the server) depends on the mix of customers. If most customers in the system have a short service time, the service rate will be high, whereas if most customers have a long service time, the service rate will be low.
Question 3:
For the four states considered we have:
Flow clockwise: qλ · pλ · (2/4)µb · (2/3)µa ,

Flow counter-clockwise: pλ · qλ · (2/4)µa · (2/3)µb .
We notice that they are equal. Thus the process is reversible. The same is valid for the other squares, and we may express all state probabilities by state p(0, 0) (7.15).
Question 4:
We find (q = 1 − p):

p(0, 1) = ((1 − p)λ/µb) · p(0, 0) ,

p(1, 0) = (p·λ/µa) · p(0, 0) ,

and thus:

p(1) = p(0, 1) + p(1, 0) = p(0, 0) · λ · (p/µa + (1 − p)/µb) ,

p(1) = p(0) · A   q.e.d.

where p(0) = p(0, 0) and A is obtained in Question 1.
In a similar way we find:

p(0, 2) = ((1 − p)λ · (1 − p)λ)/(µb · µb) · p(0, 0) ,

p(1, 1) = ((1 − p)λ · pλ)/(µa · (1/2)µb) · p(0, 0) ,

p(2, 0) = (pλ · pλ)/(µa · µa) · p(0, 0) .

p(2) = p(0, 2) + p(1, 1) + p(2, 0)
     = p(0, 0) · λ² · [ ((1 − p)/µb)² + 2p(1 − p)/(µa µb) + (p/µa)² ] ,

p(2) = p(0, 0) · λ² · (p/µa + (1 − p)/µb)² ,

p(2) = p(0) · A²   q.e.d.
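The identities p(1) = p(0)·A and p(2) = p(0)·A² hold for any parameter values; a numerical spot check (plain Python, ours; the values of p, µa, µb, λ are our own arbitrary choice):

```python
p, mu_a, mu_b, lam = 0.3, 2.0, 0.5, 0.4
q = 1 - p

s = p / mu_a + q / mu_b             # mean service time (2.67)
A = lam * s                         # offered traffic (here A < 1)

# Detailed-balance expressions relative to p(0,0) = 1:
p01 = q * lam / mu_b
p10 = p * lam / mu_a
p02 = (q * lam / mu_b) ** 2
p11 = (q * lam) * (p * lam) / (mu_a * 0.5 * mu_b)   # note the 1/2 sharing factor
p20 = (p * lam / mu_a) ** 2

assert abs(p01 + p10 - A) < 1e-9           # p(1) = p(0) * A
assert abs(p02 + p11 + p20 - A**2) < 1e-9  # p(2) = p(0) * A^2
print(A, p01 + p10, p02 + p11 + p20)
```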
Updated: 2008-04-21
Exercise 11.2 (Exam 2004)
Palm’s machine-repair model and generalized processor sharing
We consider Palm’s machine-repair model with four terminals and two servers in parallel.The thinking times are exponentially distributed with mean value γ−1 = 2 time units, andthe service times are exponentially distributed with mean value µ−1 = 1 time unit. The stateof the system is defined in the usual way as the number of terminals being served or waiting.
1. Find the traffic offered to the two servers.
2. Construct the state transition diagram, and find, assuming statistical equilibrium, the state probabilities p(i), i = 0, 1, . . . , 4 .
(Figure: illustration of Palm's machine-repair model with four terminals and two parallel servers; not reproduced.)
.
........
........
........
........
........
........
........
.
........
........
........
........
........
........
........
.
............. ............. ............. ............. ............. ............. ............. ............. ............. ............. ............. ............. ............. ............. ............. ............................................................................................................................................................................................................................................................................................................................................................................
.............
.............
.............
.............
.............
.............
.............
.............
.............
.............
.............
..
γ
γ
γ
γ
µ
µ
Queue Servers
Terminals Queueing system
3. Find the average number of terminals which are:
a. thinking,
b. waiting,
c. being served.
4. Find the traffic congestion C.
5. Find, by applying Little's theorem to the queue and to both servers, the response time, that is, the average waiting + service time.
We now assume that the service times are hyper-exponentially distributed as follows:
F(t) = (1/10) · (1 − e^{−t/7}) + (9/10) · (1 − e^{−3t}) ,   t ≥ 0 .
The state of the system is now defined as (i, j), where i (i = 0, 1, . . . , 4) is the number of jobs being served in phase one, and j (j = 0, 1, . . . , 4) is the number of jobs being served in phase two, 0 ≤ i + j ≤ 4. We also assume that the two servers operate in processor-sharing mode when more than two terminals are in the queueing system. Thus the service rate in state (i, j) when i + j > 2 is:
(2/(i + j)) · iµ1 + (2/(i + j)) · jµ2 ,
where the first term is the total service rate for the i jobs being served in phase one, and the second term is the total service rate for the j jobs being served in phase two.
When two or fewer terminals are being served, each terminal has its own server.
6. Construct the two-dimensional state transition diagram.
7. Consider the state transition diagram:
a. show it is reversible,
b. does it have product form?
8. Show that the aggregated state probabilities p(i + j = x), x = 0, 1, . . . , 4, are the same as the state probabilities obtained in question 2.
Technical University of Denmark
Teletraffic Engineering & Network Planning, DTU–Photonics, Networks group, Course 34 340
Solution to Exercise 11.2 (exam 2004)
This is Palm’s machine-repair model with multiple repairmen which is dealt with in Sec. 9.6.4.
Question 1:
The offered traffic is defined as the traffic carried when the capacity is unlimited. This will be the case when we have at least 4 servers. Thus we get the same offered traffic as in the Engset case (5.11):
A = S · α = S · β/(1 + β) = S · (γ/µ)/(1 + γ/µ) = 4 · 0.5/(1 + 0.5) ,

A = 4/3 .
Question 2:
[One-dimensional state transition diagram: states 0, 1, 2, 3, 4; arrival rates (4 − i)γ = 4/2, 3/2, 2/2, 1/2 and service rates min(i, 2)µ = 1, 2, 2, 2.]
The relative state probabilities are obtained by using cut equations:
q(0) = 1        p(0) = 16/87 = 0.1839
q(1) = 2        p(1) = 32/87 = 0.3678
q(2) = 3/2      p(2) = 24/87 = 0.2759
q(3) = 3/4      p(3) = 12/87 = 0.1379
q(4) = 3/16     p(4) =  3/87 = 0.0345

Sum = 87/16     Sum = 1.0000
We could also apply the theory of queueing networks to this model, but the approach above, based on the state transition diagram, is the one indicated in the question.
Question 3:
a. The average number of terminals thinking becomes:
nt = Σ_{i=0}^{4} (4 − i) p(i) = 4·p(0) + 3·p(1) + 2·p(2) + 1·p(3) + 0·p(4) ,

nt = 220/87 = 2.5287 .
b. The average number of terminals waiting becomes:
nw = Σ_{i=2}^{4} (i − 2) p(i) = 0·p(2) + 1·p(3) + 2·p(4) ,

nw = 18/87 = 0.2069 .
c. The average number of terminals being served is:
ns = Σ_{i=0}^{2} i·p(i) + 2·Σ_{i=3}^{4} p(i) = 0·p(0) + 1·p(1) + 2·p(2) + 2·p(3) + 2·p(4) ,

ns = 110/87 = 1.2644 .
As a control, the three numbers add up to S = 4.
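As a numerical cross-check of questions 2 and 3, the cut equations and the three mean values can be reproduced with exact rational arithmetic. The sketch below is illustrative only; the variable names are my own:

```python
from fractions import Fraction

S, gamma, mu, servers = 4, Fraction(1, 2), Fraction(1), 2

# Relative state probabilities from the cut equations:
# q(i+1) * min(i+1, servers) * mu = q(i) * (S - i) * gamma
q = [Fraction(1)]
for i in range(S):
    q.append(q[i] * (S - i) * gamma / (min(i + 1, servers) * mu))

p = [x / sum(q) for x in q]   # 16/87, 32/87, 24/87, 12/87, 3/87

n_thinking = sum((S - i) * p[i] for i in range(S + 1))                 # 220/87
n_waiting = sum((i - servers) * p[i] for i in range(servers, S + 1))   #  18/87
n_served = sum(min(i, servers) * p[i] for i in range(S + 1))           # 110/87
```

Using `Fraction` avoids rounding, so the results can be compared directly with the exact fractions in the text.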
Question 4:
The offered traffic obtained in question one is A = 4/3, and the carried traffic is Y = ns = 110/87. The traffic congestion then becomes:
C = (A − Y)/A = (4/3 − 110/87)/(4/3) ,

C = 6/116 = 0.0517 .
[Two-dimensional state transition diagram for the hyper-exponential case: states (i, j) with 0 ≤ i + j ≤ 4, arranged as 00 10 20 30 40 / 01 11 21 31 / 02 12 22 / 03 13 / 04. Transitions follow the arrival split (1/10 to the phase with mean 7, 9/10 to the phase with mean 1/3) and the processor-sharing service rates given above.]
Question 5:
We find the following relations (the mean service time is one):
1/λ = W/nw = 1/ns = (W + 1)/(nw + ns) = R/(nw + ns) ,

R = 1 + nw/ns ,

R = 64/55 .
We thus have:
thinking time: 2
waiting time: 9/55
service time: 1
circulation time: 174/55
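The response-time relations can be verified with exact arithmetic; a small illustrative sketch (variable names assumed):

```python
from fractions import Fraction

n_w, n_s = Fraction(18, 87), Fraction(110, 87)   # from question 3
s = Fraction(1)                                  # mean service time

lam = n_s / s            # Little's theorem applied to the two servers
W = n_w / lam            # Little's theorem applied to the queue: 9/55
R = W + s                # mean response time: 64/55
cycle = Fraction(2) + R  # adding the mean thinking time 2 gives 174/55
```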
The number of terminals in each stage (thinking, waiting, service) is proportional to the time spent in that stage. This is Little's theorem, as we have the same flow through each stage. It agrees with question 3.
Question 6:
We let state (i, j) correspond to i terminals being served in the phase with mean value 7 and j terminals being served in the phase with mean value 1/3. The state transition diagram is shown in the figure above. For example, in state (0, 0) the total arrival rate is 4/2. With probability 1/10 this is a terminal with mean holding time 7, and with probability 9/10 it is a terminal with mean holding time 1/3.
Question 7:
We show that the flow clockwise equals the flow counter-clockwise in each square. This also confirms that the state transition diagram is correct. The state transition diagram does not have the product-form property.
Question 8:
As the state transition diagram is reversible, we may express all state probabilities in terms of p(0, 0).
We find the following relative state probabilities when we let q(0, 0) = 160 000 (for convenience, to get integer values that are easy to work with):
j = 4 :  243
j = 3 :  3 240    2 268
j = 2 :  21 600   22 680    7 938
j = 1 :  96 000   100 800   52 920   12 348
j = 0 :  160 000  224 000   117 600  41 160   7 203

(rows: j jobs in phase two; columns: i jobs in phase one, starting from i = 0)
We find:
q(0) = q(0, 0) = 160 000
q(1) = q(1, 0) + q(0, 1) = 320 000
q(2) = q(2, 0) + q(1, 1) + q(0, 2) = 240 000
q(3) = q(3, 0) + q(2, 1) + q(1, 2) + q(0, 3) = 120 000
q(4) = q(4, 0) + q(3, 1) + q(2, 2) + q(1, 3) + q(0, 4) = 30 000
The sum of the relative state probabilities is 870 000. After normalisation we get:
p(0) = 16/87 ,  p(1) = 32/87 ,  p(2) = 24/87 ,  p(3) = 12/87 ,  p(4) = 3/87 .
We see that the aggregated state probabilities for hyper-exponential service times are the same as for exponential service times in question 2. This indicates that the machine-repair model with processor sharing is insensitive to the service time distribution. This is analogous to the fact that the two systems M/M/1 and M/G/1–PS also have the same state probabilities (Sec. 10.7).
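As an illustrative cross-check (not part of the original solution), the aggregation in question 8 can be done numerically from the triangle of relative values above:

```python
from fractions import Fraction

# Relative probabilities q(i, j) from the table, one row per j (phase two),
# entries for i = 0, 1, ... (phase one), 0 <= i + j <= 4, with q(0, 0) = 160000.
q = [
    [160000, 224000, 117600, 41160, 7203],   # j = 0
    [96000, 100800, 52920, 12348],           # j = 1
    [21600, 22680, 7938],                    # j = 2
    [3240, 2268],                            # j = 3
    [243],                                   # j = 4
]

# Aggregate over the anti-diagonals i + j = x and normalise.
agg = [0] * 5
for j, row in enumerate(q):
    for i, v in enumerate(row):
        agg[i + j] += v
total = sum(agg)                         # 870000
p = [Fraction(v, total) for v in agg]    # 16/87, 32/87, 24/87, 12/87, 3/87
```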
Updated: 2009-05-05
Exercise 12.6 (exam 2001)
MACHINE–REPAIR MODEL AS A CYCLIC QUEUEING NETWORK
We consider a machine–repair model with 4 customers (sources, terminals). The customers have exponentially distributed thinking times with intensity µ1 = 0.5 customers per time unit (node 1). The customers are served by two successive single servers. A customer is first served at node 2, which is a single server with exponentially distributed service times with mean value µ2^{−1} = 1 time unit. Then the customer continues to node 3, which is a single server with exponentially distributed service times with mean value µ3^{−1} = 1/2 time unit. After finishing service at node 3 the customer returns to node 1 and starts a new thinking time.
This system is a single-chain cyclic queueing network with 3 nodes and 4 identical sources. Node 1 (the terminals) corresponds to an M/M/∞ queueing system, whereas both node 2 and node 3 are M/M/1 single-server systems. The customers circulate between the nodes in the cyclic sequence 1, 2, 3, 1, 2, . . .
1. Let the relative load of node 3 be equal to one and find the relative loads of node 1 and node 2.
2. Find the relative state probabilities of each node considered in isolation.
3. Apply the convolution algorithm to find the absolute state probabilities of each node.
4. Find the average number of customers in each node.
5. Find the average sojourn (waiting + service) time for a customer in each node, and the average total cycle time.
6. Find the mean queue length observed by a customer at the three nodes if we increase the number of customers from 4 to 5.
Solution to exercise 12.6 (exam 2001)
Question 1:
As all three nodes serve the same number of customers (λ per time unit), the relative loads are proportional to the mean service times. The relative load is the relative offered traffic, which in queueing systems equals the carried traffic. The offered traffic is A = λ · s, where s is the mean holding time, but we do not know λ. We get:
α3 = 1 , α2 = 2 , α1 = 4 .
Question 2:
The first node is of type M/M/∞ and has the relative state probabilities (12.3):
p(i) = p(0) · α1^i / i! ,   i = 0, 1, . . . .
The second and the third node are both of type M/M/1 and have the relative state probabilities (12.4):
p(i) = p(0) · αi , i = 0, 1, . . . .
Thus we get the following relative state probabilities:
i     q1(i)    q2(i)    q3(i)
0     1        1        1
1     4        2        1
2     8        4        1
3     32/3     8        1
4     32/3     16       1
In the following we multiply the relative state probabilities q1(i) of node 1 by 3 to work with integers.
Question 3:
We now apply the convolution algorithm to find the absolute state probabilities of each node. The node we want to consider should be the last one convolved, i.e. we first aggregate all other nodes into one node by convolution. We find:
Node 1:
i q2(i) q3(i) q23(i) = q2 ∗ q3 q1(i) q123(i) = q23 ∗ q1
0 1 1 1 3 3
1 2 1 3 12 21
2 4 1 7 24 81
3 8 1 15 32 233
4 16 1 31 32 569
The term of interest is q123(4) = 569, which is made up of the following contributions:
q123(4) = q1(0) · q23(4) + q1(1) · q23(3) + q1(2) · q23(2) + q1(3) · q23(1) + q1(4) · q23(0)
= 3 · 31 + 12 · 15 + 24 · 7 + 32 · 3 + 32 · 1 = 569 .
From this we get the state probabilities of node 1:
p1(0) = 93/569 = 0.1634 ,
p1(1) = 180/569 = 0.3163 ,
p1(2) = 168/569 = 0.2953 ,
p1(3) = 96/569 = 0.1687 ,
p1(4) = 32/569 = 0.0562 .
In a similar way we find the state probabilities of node 2 and node 3.
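The whole convolution can be sketched in a few lines of Python; this is an illustrative aid, with the truncation at 4 customers and the scaling of node 1 by 3 taken from the text above:

```python
from fractions import Fraction

def convolve(a, b, N=4):
    """Truncated convolution, keeping total populations 0..N."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

q1 = [3, 12, 24, 32, 32]   # node 1 (M/M/infinity, alpha1 = 4, scaled by 3)
q2 = [1, 2, 4, 8, 16]      # node 2 (M/M/1, alpha2 = 2)
q3 = [1, 1, 1, 1, 1]       # node 3 (M/M/1, alpha3 = 1)

q23 = convolve(q2, q3)     # aggregate of nodes 2 and 3: [1, 3, 7, 15, 31]
q123 = convolve(q23, q1)   # [3, 21, 81, 233, 569]

# State probabilities of node 1: the terms of the last convolution step.
norm = q123[4]
p1 = [Fraction(q1[i] * q23[4 - i], norm) for i in range(5)]
```

Repeating this with node 2 or node 3 convolved last reproduces the other two tables.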
Node 2:
i q1(i) q3(i) q13(i) = q1 ∗ q3 q2(i) q123(i) = q13 ∗ q2
0 3 1 3 1 3
1 12 1 15 2 21
2 24 1 39 4 81
3 32 1 71 8 233
4 32 1 103 16 569
The term of interest is q123(4) = 569, which is made up of the following elements:
q123(4) = q2(0) · q13(4) + q2(1) · q13(3) + q2(2) · q13(2) + q2(3) · q13(1) + q2(4) · q13(0)
= 1 · 103 + 2 · 71 + 4 · 39 + 8 · 15 + 16 · 3 = 569 .
From this we get the state probabilities of node 2:
p2(0) = 103/569 = 0.1810 ,
p2(1) = 142/569 = 0.2496 ,
p2(2) = 156/569 = 0.2742 ,
p2(3) = 120/569 = 0.2109 ,
p2(4) = 48/569 = 0.0844 .
Node 3:
i q1(i) q2(i) q12(i) = q1 ∗ q2 q3(i) q123(i) = q12 ∗ q3
0 3 1 3 1 3
1 12 2 18 1 21
2 24 4 60 1 81
3 32 8 152 1 233
4 32 16 336 1 569
The term of interest is q123(4) = 569, which is made up of the following elements:
q123(4) = q3(0) · q12(4) + q3(1) · q12(3) + q3(2) · q12(2) + q3(3) · q12(1) + q3(4) · q12(0)
= 1 · 336 + 1 · 152 + 1 · 60 + 1 · 18 + 1 · 3 = 569 .
From this we get the state probabilities of node 3:
p3(0) = 336/569 = 0.5905 ,
p3(1) = 152/569 = 0.2671 ,
p3(2) = 60/569 = 0.1054 ,
p3(3) = 18/569 = 0.0316 ,
p3(4) = 3/569 = 0.0053 .
Question 4:
The average number of customers in each node is obtained from the state probabilities:
Lj = Σ_{i=0}^{4} i · pj(i) ,   j = 1, 2, 3 .
We find:
L1 = 932/569 = 1.6380 ,
L2 = 1006/569 = 1.7680 ,
L3 = 338/569 = 0.5940 .
As a control we have the total number of customers:
L1 + L2 + L3 = 4 .
Question 5:
The utilisation of the nodes is obtained from the state probabilities. The first node is an infinite-server system with carried traffic
Y1 = Σ_{i=1}^{4} i · p1(i) ,
and the two other nodes are single-server systems, which have the carried traffic
Yi = 1− pi(0) , i = 2, 3 .
For a single-server system the carried traffic equals the utilisation. The carried traffic becomes (we know the ratios between the carried traffic in the nodes from question 1):
Node 3:  Y3 = 1 − 336/569 = 0.4095 [erlang] ,
Node 2:  Y2 = 1 − 103/569 = 0.8190 [erlang] ,
Node 1:  Y1 = 932/569 = 1.6380 [erlang] .
The circulation rate is therefore:
Λ = (1/2) · 1.6380 = 1 · 0.8190 = 2 · 0.4095 = 0.8190 [customers/time unit] ,

where, as a control, all three values are equal.
The sojourn (waiting + service) time in each node becomes, by Little's law, using Li from question 4:
W1 = L1/Λ = 2 [time units] ,
W2 = L2/Λ = (1006/569)/0.8190 = 2.1587 [time units] ,
W3 = L3/Λ = (338/569)/0.8190 = 0.7253 [time units] .
The total circulation time becomes:
R = 2 + 2.1587 + 0.7253 = 4.8840 [time units] .

As a control we have:

R = 4/0.8190 = 4.8840 [time units] .
Question 6:
We use the arrival theorem, which says that the fifth customer observes the system as if he does not belong to the system himself. So the fifth customer will see the mean values calculated in question 4 for a system with 4 customers.
Updated: 2009-05-05
Exercise 12.7 (Exam 2002)
ENGSET’s MODEL AS A QUEUEING NETWORK
We consider Engset’s loss system with S = 6 sources and n = 3 channels. The arrival rate ofan idle source is γ = 2 calls per time unit, and the mean service time is µ−1 = 1 (chosen astime unit).
1. Find the offered traffic.
2. Calculate the time congestion E using a formula recursive in n. Each step in the recursion should be visible.
We consider a closed queueing network with K = 2 nodes and S = 6 customers. Node one is an infinite server (IS), and the service rate is µ1 per server. Node two is an M/M/3–loss system with service rate µ2 per server, which corresponds to an infinite server (IS) truncated at state 3. The routing is cyclic, so that a customer served in node one goes to node two, and a customer served in node two goes to node one. A customer blocked in node two returns to (i.e. stays in) node one. Assume the circulation rate is λc, and let α1 = λc/µ1 and α2 = λc/µ2. This is a queueing network with blocking, and it has product form.
3. Find the relative state probabilities of each node as independent systems.
4. Convolve the two nodes into one under the assumption that the total number of customers is 6, and show that the state probabilities p(i), (i = 0, 1, 2, 3) of node two correspond to an Engset loss system with S = 6 sources, n = 3 channels, and β = µ1/µ2.
Solution to Exercise 12.7 (exam 2002)
Question 1:
As given in the textbook (5.10) & (5.11) we have:
β = γ/µ = 2 ,

α = β/(1 + β) = 2/3 ,

A = S · β/(1 + β) = 6 · 2/3 = 4 [erlang] .
Question 2:
Using the recursion formula for E (5.52) we get:
E_{x,S}(β) = (S − x + 1) β · E_{x−1,S}(β) / (x + (S − x + 1) β · E_{x−1,S}(β)) ,   E_{0,S}(β) = 1 .

E_{0,6}(2) = 1 ,

E_{1,6}(2) = 12 · 1 / (1 + 12 · 1) = 12/13 ,

E_{2,6}(2) = 10 · (12/13) / (2 + 10 · (12/13)) = 60/73 ,

E_{3,6}(2) = 8 · (60/73) / (3 + 8 · (60/73)) = 160/233 = 0.6867 .
We may also use the corresponding recursive formula for the inverse blocking probability I_{x,S}(β) = 1/E_{x,S}(β) (5.53) given in the textbook:

I_{x,S}(β) = 1 + (x / ((S − x + 1) β)) · I_{x−1,S}(β) ,   I_{0,S}(β) = 1 .
Question 3:
According to the description in the exercise we get the following state probabilities:
State   Node 1       Node 2
0       1            1
1       α1           α2
2       α1^2/2!      α2^2/2!
3       α1^3/3!      α2^3/3!
4       α1^4/4!      0
5       α1^5/5!      0
6       α1^6/6!      0
Question 4:
By convolution we get for a network with 6 customers the following contributions:
C = (α1^6/6!) · 1 + (α1^5/5!) · α2 + (α1^4/4!) · (α2^2/2!) + (α1^3/3!) · (α2^3/3!)

  = (α1^6/6!) · [ 1 + (6/1) · (α2/α1) + ((6 · 5)/(1 · 2)) · (α2/α1)^2 + ((6 · 5 · 4)/(1 · 2 · 3)) · (α2/α1)^3 ]

  = (α1^6/6!) · [ q(0) + q(1) + q(2) + q(3) ]

  = (α1^6/6!) · Q ,

where α2/α1 = (λc/µ2)/(λc/µ1) = µ1/µ2 and Q = q(0) + q(1) + q(2) + q(3) .
Term 1 corresponds to 6 customers in the first node and 0 customers in the second node. Term 2 corresponds to 5 customers in the first node and 1 customer in the second node. Term 3 corresponds to 4 customers in the first node and 2 customers in the second node. Term 4 corresponds to 3 customers in the first node and 3 customers in the second node.
We can at most have 3 customers in the second node.
We notice that the normalised state probabilities correspond to an Engset system (5.27) with S = 6 customers and n = 3 channels:
p(i) = q(i)/Q ,

p(i) = (6 choose i) · (µ1/µ2)^i / Σ_{j=0}^{3} (6 choose j) · (µ1/µ2)^j .
The offered traffic per idle source, usually called β, is:
β = µ1/µ2 .
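The equivalence in question 4 can be checked numerically for any positive β; the value β = 3/2 below is an arbitrary assumption for the test:

```python
from fractions import Fraction
from math import comb, factorial

S, n = 6, 3
beta = Fraction(3, 2)         # mu1/mu2; any positive value works (assumed)
a1, a2 = Fraction(1), beta    # relative loads, taking lambda_c/mu1 = 1

# Convolution terms with all S customers: i in node two, S - i in node one.
q = [a1 ** (S - i) / factorial(S - i) * a2 ** i / factorial(i) for i in range(n + 1)]
Q = sum(q)
p = [x / Q for x in q]

# Engset distribution (5.27) with the same beta.
e = [comb(S, i) * beta ** i for i in range(n + 1)]
engset = [x / sum(e) for x in e]
```

Both lists are proportional term by term, so the normalised distributions coincide exactly.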
Set up the state transition diagram for this system!
Updated: 2004-05-04
Exercise 12.8 (Exam 2006)
Closed and mixed queueing networks
We consider a closed queueing network with one chain of customers and two nodes. Node one is of type M/M/1, and node two is of type M/M/2. After finishing service in a node, a customer passes on to the other node. There is a total of 4 customers in the queueing network. The mean service times are s1 = 1 [time unit] in node one and s2 = 2 [time units] in node two.
We assume the system is in statistical equilibrium. The state of the system is defined as the number of customers in node one.
1. Construct the one-dimensional state transition diagram of the system.
2. Find the state probabilities of the system and the average number of customers in eachnode.
3. Find from the above state probabilities the intensity Λ by which the customers circulate in the system.
4. Find the mean sojourn times (sojourn time = waiting time + service time) in the two nodes (apply Little's law), and find the mean cycle time of a customer.
5. Apply the convolution algorithm to find the state probabilities of the system and calculate the average number of customers in the two nodes from the convolution terms.
We add an open chain which loads node one with a0 erlang (0 ≤ a0 < 1). We denote the unknown load from the closed chain in node one by a1. The state of node one is now given by p(i, j), where i is the number of customers from the closed chain and j is the number of customers from the open chain.
6. Write down the two-dimensional state probabilities of node one expressed by p(0, 0):
p(i, j) ,   0 ≤ i ≤ 4 , 0 ≤ j < ∞ .
7. Show, by finding the marginal distribution:

p(i, ·) = Σ_{j=0}^{∞} p(i, j) ,

and using the identity:

Σ_{j=0}^{∞} ((i + j)!/(i! · j!)) · a^j = 1/(1 − a)^{i+1} ,

that we can eliminate the open chain by reducing the service rate of node one by a factor (1 − a0).
Solution to Exercise 12.8 (exam 2006)
Question 1:
The state transition diagram becomes as follows. Observe that the arrival process to node one is the departure process from node two, and that both servers of node two work in states 0, 1, 2, only one server works in state 3, and no server of node two works in state 4.
[State transition diagram: states 0, 1, 2, 3, 4 (customers in node one); upward rates 1, 1, 1, 1/2 (departures from node two) and downward rates 1, 1, 1, 1 (departures from node one).]
Question 2:
The relative state probabilities q(i) = p(i)/p(0), respectively the absolute state probabilities p(i), for node one become as follows:
q(0) = 1      p(0) = 2/9
q(1) = 1      p(1) = 2/9
q(2) = 1      p(2) = 2/9
q(3) = 1      p(3) = 2/9
q(4) = 1/2    p(4) = 1/9
The average number of customers in node one, respectively node two, is:
L1 = Σ_{i=0}^{4} i · p(i) = 16/9 ,

L2 = Σ_{i=0}^{4} (4 − i) · p(i) = 20/9 ,

L = L1 + L2 = 4 .
Question 3:
Node one is working with rate µ = 1 except in state zero. So the average number of customers flowing through node one per time unit becomes:

Λ1 = 1 · (1 − p(0)) = 7/9 .
If we consider node two we get the same result:
Λ2 = 0 · p(4) + (1/2) · p(3) + 1 · (p(2) + p(1) + p(0)) = 7/9 .
We of course have:
Λ = Λ1 = Λ2
(This will also be fulfilled even if the state probabilities are erroneous).
Question 4:
The mean sojourn times Wi in the two nodes become (time units):
W1 =L1
Λ=
16
7,
W2 =L2
Λ=
20
7.
The circulation time becomes:
R = W1 + W2 = 36/7 = 4/Λ = 4/(7/9) .
The term 4/Λ is obtained by applying Little's law to the total system. The service times in the nodes are given, so the average waiting time becomes 9/7 in node one and 6/7 in node two.
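Questions 2 to 4 can be reproduced with exact arithmetic; an illustrative sketch using the rates read off the diagram (variable names are my own):

```python
from fractions import Fraction

# Birth-death chain for the closed two-node network: state = customers in node one.
up = [Fraction(1), Fraction(1), Fraction(1), Fraction(1, 2)]  # departures from node two (M/M/2, s2 = 2)
down = [Fraction(1)] * 4                                      # departures from node one (M/M/1, s1 = 1)

q = [Fraction(1)]
for i in range(4):
    q.append(q[i] * up[i] / down[i])
p = [x / sum(q) for x in q]                 # 2/9, 2/9, 2/9, 2/9, 1/9

L1 = sum(i * p[i] for i in range(5))        # 16/9
Lam = 1 - p[0]                              # 7/9, node one is busy except in state 0
W1, W2 = L1 / Lam, (4 - L1) / Lam           # 16/7 and 20/7 by Little's law
R = W1 + W2                                 # 36/7
```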
Question 5:
If we let the relative load of node one be α1 = 1, then the relative load of node two becomes α2 = 2. Node one is a single-server node, whereas node two has two servers. The relative state probabilities become as follows:
State Node 1 Node 2
0 1 1
1 1 2
2 1 2
3 1 2
4 1 2
Only the convolution term with 4 customers is of interest:
q12(4) = 1 · 2 + 1 · 2 + 1 · 2 + 1 · 2 + 1 · 1 = 9 .
We notice that these terms are proportional to the relative state probabilities in question 2, and thus we get the same result for the average number of customers in the two nodes.
Question 6:
The two-dimensional state probabilities become (11.22):
p(i, j) = (a1^i/i!) · (a0^j/j!) · (i + j)! · p(0, 0) ,   0 ≤ i ≤ 4 , 0 ≤ j < ∞ .
Question 7:
Summing over all values of j we get:
p(i, ·) = Σ_{j=0}^{∞} p(i, j)

        = Σ_{j=0}^{∞} (a1^i/i!) · (a0^j/j!) · (i + j)! · p(0, 0)

        = a1^i · p(0, 0) · Σ_{j=0}^{∞} ((i + j)!/(i! · j!)) · a0^j

        = a1^i · p(0, 0)/(1 − a0)^{i+1}

        = (a1/(1 − a0))^i · p(0, 0)/(1 − a0) .
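The identity used in the last step can be spot-checked numerically; the values of a0 and i below are arbitrary assumptions for the test:

```python
from math import comb

a0, i, terms = 0.25, 3, 200    # sample values (assumed); a0 < 1 ensures convergence

# (i + j)! / (i! j!) is the binomial coefficient comb(i + j, j).
lhs = sum(comb(i + j, j) * a0 ** j for j in range(terms))
rhs = 1 / (1 - a0) ** (i + 1)
# lhs and rhs agree to floating-point accuracy
```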
We notice that state zero has changed its relative weight, compared with the other nodes in the network, from 1 to 1/(1 − a0), and that the state probability p(i) of this single-server M/M/1 queueing system equals the state probability of a single-server queueing system with offered traffic a1/(1 − a0), i.e. the service rate is reduced by a factor (1 − a0). The state probabilities should add to one, as for M/M/1. Thus we find p(0, 0) as follows:
p(0, 0)/(1 − a0) = 1 − a1/(1 − a0) ,

p(0, 0) = (1 − a0) − a1 .
Updated 2008-05-04
Exercise 12.9 (Exam 2009)
Queueing network with three nodes
We consider an open queueing network with three nodes as shown in the figure.
[Figure: open queueing network. Poisson arrivals with rate λ1 = 2 enter node 1 (n1 = 3 servers, µ1 = 1); routing probabilities p1,2 = 1/2 and p1,3 = 1/2 from node 1; node 2 (n2 = 1, µ2 = 2) with p2,3 = 1/2 and p2,∞ = 1/2; node 3 (n3 = 1, µ3 = 2) with p3,∞ = 1.]
• Node one is an M/M/3 queueing system with mean service time s1 = 1 [time unit]. Calls arrive from outside to node one according to a Poisson process with rate λ1 = 2 [customers/time unit]. From node one the routing probability is p1,2 = 1/2 to node two, and p1,3 = 1/2 to node three.
• Node two is an M/M/1 queueing system with mean service time s2 = 1/2 [time units]. From node two the routing probability is p2,3 = 1/2 to node three, and with probability p2,∞ = 1/2 a customer leaves the network.
• Node three is an M/M/1 queueing system with mean service time s3 = 1/2 [time units]. From node three customers leave the network (p3,∞ = 1).
1. Find the traffic offered to each node.
2. Find the state probabilities pi(j) for states j = 0, 1, 2, 3, 4 for each node (i = 1, 2, 3), and the state probability p(x1, x2, x3) = p(1, 1, 1) for the whole queueing network.
3. Find the mean waiting time for all customers in each node.
We now close the network by fixing the total number of customers to 4. Thus we only look at states with a total of 4 customers. (Customers which leave the network in nodes 2 and 3 immediately go to node one.)
4. Find by convolving the above state probabilities the state probabilities of node three.
5. Find the carried traffic in node three, and then the carried traffic in the other two nodes.
Solution to exercise 12.9 (exam 2009)
Question 1:
The solution to the flow balance equations (12.5) is easy to obtain. The arrival rates to the three nodes become:
λ1 = 2 [customers per time unit] ,

λ2 = (1/2) · λ1 = 1 [customers per time unit] ,

λ3 = (1/2) · λ1 + (1/2) · λ2 = 1 + 1/2 = 3/2 [customers per time unit] .
Then the offered traffic to the three nodes become:
A1 = λ1 · s1 = 2 · 1 = 2 [erlang] ,

A2 = λ2 · s2 = 1 · (1/2) = 1/2 [erlang] ,

A3 = λ3 · s3 = (3/2) · (1/2) = 3/4 [erlang] .
We notice that for all three nodes Ai < ni, so the conditions for statistical equilibrium are fulfilled.
Question 2:
All three nodes are M/M/n queueing systems with state probabilities given by (9.2), where p(0) is given by (9.4). This is used for node 1 with n = 3. Nodes 2 and 3 are M/M/1 single-server systems, which have the simple state probabilities given by (9.30).
State   Node 1           Node 2           Node 3
0       p1(0) = 1/9      p2(0) = 1/2      p3(0) = 1/4
1       p1(1) = 2/9      p2(1) = 1/4      p3(1) = 3/16
2       p1(2) = 2/9      p2(2) = 1/8      p3(2) = 9/64
3       p1(3) = 4/27     p2(3) = 1/16     p3(3) = 27/256
4       p1(4) = 8/81     p2(4) = 1/32     p3(4) = 81/1024
Due to the product-form property of this Jackson network we have:

p(1, 1, 1) = p1(1) · p2(1) · p3(1) = (2/9) · (1/4) · (3/16) = 1/96 .
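Question 2 can be reproduced numerically; `mmn_state_probs` is an illustrative helper of my own (it returns the first N + 1 states but normalises over the full, infinite state space):

```python
from fractions import Fraction

def mmn_state_probs(A, n, N):
    """State probabilities p(0..N) of an M/M/n queue with offered traffic A < n."""
    q = [Fraction(1)]
    for i in range(1, N + 1):
        q.append(q[i - 1] * A / min(i, n))
    # States beyond N form a geometric tail with ratio A/n; include it in the norm.
    tail = q[N] * (A / n) / (1 - A / n)
    norm = sum(q) + tail
    return [x / norm for x in q]

p1 = mmn_state_probs(Fraction(2), 3, 4)      # node 1: 1/9, 2/9, 2/9, 4/27, 8/81
p2 = mmn_state_probs(Fraction(1, 2), 1, 4)   # node 2: 1/2, 1/4, 1/8, 1/16, 1/32
p3 = mmn_state_probs(Fraction(3, 4), 1, 4)   # node 3: 1/4, 3/16, 9/64, 27/256, 81/1024

p111 = p1[1] * p2[1] * p3[1]                 # product form: 1/96
```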
Question 3:
From the Erlang-C model we have, for a system with n servers, the mean waiting time (9.15):

W = E2,n(A) · s/(n − A) .

For single-server systems this simplifies to:

W = A · s/(1 − A) = V/(1 − A) ,

where (10.4):

V = (λ/2) · m2 = (λ/2) · (2/µ^2) = A · s .
For the three nodes we get the following mean waiting times Wi, i = 1, 2, 3:

W1 = (4/9) · 1/(3 − 2) = 4/9 ,

W2 = (1/2) · (1/2)/(1 − 1/2) = 1/2 ,

W3 = (3/4) · (1/2)/(1 − 3/4) = 3/2 .
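The Erlang-B and Erlang-C values used above can be computed with the standard recursions; an illustrative sketch:

```python
from fractions import Fraction

def erlang_b(n, A):
    """Erlang-B blocking probability by the standard recursion."""
    E = Fraction(1)
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def erlang_c(n, A):
    """Erlang-C waiting probability E2,n(A) from Erlang-B."""
    B = erlang_b(n, A)
    return n * B / (n - A * (1 - B))

A1, A2, A3 = Fraction(2), Fraction(1, 2), Fraction(3, 4)
W1 = erlang_c(3, A1) * Fraction(1) / (3 - A1)       # (4/9) * 1/1     = 4/9
W2 = erlang_c(1, A2) * Fraction(1, 2) / (1 - A2)    # (1/2) * 1       = 1/2
W3 = erlang_c(1, A3) * Fraction(1, 2) / (1 - A3)    # (3/4) * 2       = 3/2
```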
Question 4:
To get the state probabilities of node three we choose the order of convolution ((1 ∗ 2) ∗ 3).
State   Node 1    Node 2    Node 1 ∗ 2    Node 3
0       1/9       1/2       1/18          1/2
1       2/9       1/4       5/36          1/4
2       2/9       1/8       13/72         1/8
3       4/27      1/16      71/432        1/16
4       8/81      1/32      341/2592      1/32

(Only the convolution term with all 4 customers, p123(4), is needed.)
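The Node 1 ∗ 2 column can be reproduced by direct convolution of the question-2 state probabilities of nodes 1 and 2; an illustrative sketch:

```python
from fractions import Fraction

p1 = [Fraction(1, 9), Fraction(2, 9), Fraction(2, 9), Fraction(4, 27), Fraction(8, 81)]
p2 = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 16), Fraction(1, 32)]

# Convolution of node 1 and node 2, truncated at 4 customers.
q12 = [sum(p1[i] * p2[n - i] for i in range(n + 1)) for n in range(5)]
# q12 == [1/18, 5/36, 13/72, 71/432, 341/2592]
```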
Question 5:
We have:

q123(4) = (144 · 1 + 360 · 2 + 468 · 4 + 426 · 8 + 341 · 16)/(2592 · 32) = 11 600/(2592 · 32) ,

where the terms of the numerator are the products q12(i) · q3(4 − i), with q12 and q3 scaled by 2592 and 32, respectively. The last term in the numerator corresponds to node three being idle:

p(Node 3 idle) = (341 · 16)/11 600 = 341/725 ,

Y3 = 1 − p(Node 3 idle) = 1 − 341/725 = 384/725 = 0.5297 [erlang] .
As we know the relative loads of the nodes, we then get:

Y1 = (8/3) · Y3 = 1024/725 = 1.4124 [erlang] ,

Y2 = (2/3) · Y3 = 256/725 = 0.3531 [erlang] .
Updated: 2009-06-23
Number of Servers n
A 1 2 3 4 5 6 7 8 9 10
0.25 0.2000 0.0244 0.0020 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.50 0.3333 0.0769 0.0127 0.0016 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000
0.75 0.4286 0.1385 0.0335 0.0062 0.0009 0.0001 0.0000 0.0000 0.0000 0.0000
1.00 0.5000 0.2000 0.0625 0.0154 0.0031 0.0005 0.0001 0.0000 0.0000 0.0000
1.25 0.5556 0.2577 0.0970 0.0294 0.0073 0.0015 0.0003 0.0000 0.0000 0.0000
1.50 0.6000 0.3103 0.1343 0.0480 0.0142 0.0035 0.0008 0.0001 0.0000 0.0000
1.75 0.6364 0.3577 0.1726 0.0702 0.0240 0.0069 0.0017 0.0004 0.0001 0.0000
2.00 0.6667 0.4000 0.2105 0.0952 0.0367 0.0121 0.0034 0.0009 0.0002 0.0000
2.25 0.6923 0.4378 0.2472 0.1221 0.0521 0.0192 0.0061 0.0017 0.0004 0.0001
2.50 0.7143 0.4717 0.2822 0.1499 0.0697 0.0282 0.0100 0.0031 0.0009 0.0002
2.75 0.7333 0.5021 0.3152 0.1781 0.0892 0.0393 0.0152 0.0052 0.0016 0.0004
3.00 0.7500 0.5294 0.3462 0.2061 0.1101 0.0522 0.0219 0.0081 0.0027 0.0008
3.25 0.7647 0.5541 0.3751 0.2336 0.1318 0.0666 0.0300 0.0120 0.0043 0.0014
3.50 0.7778 0.5765 0.4021 0.2603 0.1541 0.0825 0.0396 0.0170 0.0066 0.0023
3.75 0.7895 0.5968 0.4273 0.2860 0.1766 0.0994 0.0506 0.0232 0.0096 0.0036
4.00 0.8000 0.6154 0.4507 0.3107 0.1991 0.1172 0.0627 0.0304 0.0133 0.0053
4.25 0.8095 0.6324 0.4725 0.3343 0.2213 0.1355 0.0760 0.0388 0.0180 0.0076
4.50 0.8182 0.6480 0.4929 0.3567 0.2430 0.1542 0.0902 0.0483 0.0236 0.0105
4.75 0.8261 0.6624 0.5119 0.3781 0.2643 0.1730 0.1051 0.0587 0.0301 0.0141
5.00 0.8333 0.6757 0.5297 0.3983 0.2849 0.1918 0.1205 0.0700 0.0375 0.0184
5.25 0.8400 0.6880 0.5463 0.4176 0.3048 0.2106 0.1364 0.0821 0.0457 0.0234
5.50 0.8462 0.6994 0.5618 0.4358 0.3241 0.2290 0.1525 0.0949 0.0548 0.0293
5.75 0.8519 0.7101 0.5764 0.4531 0.3426 0.2472 0.1688 0.1082 0.0646 0.0358
6.00 0.8571 0.7200 0.5902 0.4696 0.3604 0.2649 0.1851 0.1219 0.0751 0.0431
6.25 0.8621 0.7293 0.6031 0.4851 0.3775 0.2822 0.2013 0.1359 0.0862 0.0511
6.50 0.8667 0.7380 0.6152 0.4999 0.3939 0.2991 0.2174 0.1501 0.0978 0.0598
6.75 0.8710 0.7462 0.6267 0.5140 0.4096 0.3155 0.2332 0.1644 0.1098 0.0690
7.00 0.8750 0.7538 0.6375 0.5273 0.4247 0.3313 0.2489 0.1788 0.1221 0.0787
7.25 0.8788 0.7611 0.6478 0.5400 0.4392 0.3467 0.2642 0.1932 0.1347 0.0889
7.50 0.8824 0.7679 0.6575 0.5521 0.4530 0.3615 0.2792 0.2075 0.1474 0.0995
7.75 0.8857 0.7744 0.6667 0.5637 0.4663 0.3759 0.2939 0.2216 0.1602 0.1105
8.00 0.8889 0.7805 0.6755 0.5746 0.4790 0.3898 0.3082 0.2356 0.1731 0.1217
8.25 0.8919 0.7863 0.6838 0.5851 0.4912 0.4031 0.3221 0.2493 0.1860 0.1331
8.50 0.8947 0.7918 0.6917 0.5951 0.5029 0.4160 0.3356 0.2629 0.1989 0.1446
8.75 0.8974 0.7970 0.6992 0.6047 0.5141 0.4285 0.3488 0.2761 0.2117 0.1563
9.00 0.9000 0.8020 0.7064 0.6138 0.5249 0.4405 0.3616 0.2892 0.2243 0.1680
9.25 0.9024 0.8067 0.7133 0.6226 0.5353 0.4521 0.3740 0.3019 0.2368 0.1797
9.50 0.9048 0.8112 0.7198 0.6309 0.5452 0.4633 0.3860 0.3143 0.2491 0.1914
9.75 0.9070 0.8155 0.7261 0.6390 0.5548 0.4741 0.3977 0.3265 0.2613 0.2030
10.00 0.9091 0.8197 0.7321 0.6467 0.5640 0.4845 0.4090 0.3383 0.2732 0.2146
Erlang’s B–formula E1,n(A)
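The tabulated values follow from Erlang's classical recursion E1,0(A) = 1 and E1,x(A) = A·E1,x−1(A) / (x + A·E1,x−1(A)), which is numerically stable even for large n and A. A minimal sketch in Python (the function name is ours):

```python
def erlang_b(n: int, a: float) -> float:
    """Erlang's B-formula E_{1,n}(A): time congestion for n servers
    offered a erlang of PCT-I traffic, computed by Erlang's recursion."""
    e = 1.0  # E_{1,0}(A) = 1
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

# Reproduce one table entry: A = 10 erlang, n = 10 servers
print(round(erlang_b(10, 10.0), 4))  # prints 0.2146
```

The recursion avoids the overflow-prone direct evaluation of A^n/n! divided by the partial sum of the same terms.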
Number of Servers n
A 1 2 3 4 5 6 7 8 9 10
10.25 0.9111 0.8236 0.7378 0.6541 0.5728 0.4946 0.4200 0.3499 0.2849 0.2260
10.50 0.9130 0.8274 0.7433 0.6612 0.5813 0.5043 0.4307 0.3611 0.2964 0.2374
10.75 0.9149 0.8310 0.7486 0.6680 0.5895 0.5137 0.4410 0.3721 0.3077 0.2486
11.00 0.9167 0.8345 0.7537 0.6745 0.5974 0.5227 0.4510 0.3828 0.3187 0.2596
11.25 0.9184 0.8378 0.7586 0.6809 0.6050 0.5315 0.4607 0.3931 0.3295 0.2704
11.50 0.9200 0.8410 0.7633 0.6869 0.6124 0.5400 0.4701 0.4033 0.3400 0.2811
11.75 0.9216 0.8441 0.7678 0.6928 0.6195 0.5482 0.4792 0.4131 0.3504 0.2916
12.00 0.9231 0.8471 0.7721 0.6985 0.6264 0.5561 0.4880 0.4227 0.3604 0.3019
12.25 0.9245 0.8499 0.7763 0.7039 0.6330 0.5638 0.4966 0.4320 0.3703 0.3120
12.50 0.9259 0.8527 0.7804 0.7092 0.6394 0.5712 0.5049 0.4410 0.3799 0.3220
12.75 0.9273 0.8553 0.7843 0.7143 0.6456 0.5784 0.5130 0.4498 0.3892 0.3317
13.00 0.9286 0.8579 0.7880 0.7192 0.6516 0.5854 0.5209 0.4584 0.3984 0.3412
13.25 0.9298 0.8603 0.7917 0.7239 0.6574 0.5921 0.5285 0.4667 0.4073 0.3505
13.50 0.9310 0.8627 0.7952 0.7285 0.6630 0.5987 0.5359 0.4749 0.4160 0.3596
13.75 0.9322 0.8650 0.7986 0.7330 0.6684 0.6050 0.5431 0.4828 0.4245 0.3686
14.00 0.9333 0.8673 0.8019 0.7373 0.6737 0.6112 0.5500 0.4905 0.4328 0.3773
14.25 0.9344 0.8694 0.8051 0.7415 0.6788 0.6172 0.5568 0.4979 0.4408 0.3858
14.50 0.9355 0.8715 0.8081 0.7455 0.6837 0.6230 0.5634 0.5052 0.4487 0.3942
14.75 0.9365 0.8735 0.8111 0.7494 0.6886 0.6286 0.5698 0.5123 0.4564 0.4024
15.00 0.9375 0.8755 0.8140 0.7532 0.6932 0.6341 0.5761 0.5193 0.4639 0.4103
15.25 0.9385 0.8774 0.8169 0.7569 0.6978 0.6394 0.5821 0.5260 0.4713 0.4182
15.50 0.9394 0.8792 0.8196 0.7605 0.7022 0.6446 0.5880 0.5326 0.4784 0.4258
15.75 0.9403 0.8810 0.8222 0.7640 0.7065 0.6497 0.5938 0.5390 0.4854 0.4333
16.00 0.9412 0.8828 0.8248 0.7674 0.7106 0.6546 0.5994 0.5452 0.4922 0.4406
16.25 0.9420 0.8844 0.8273 0.7707 0.7147 0.6594 0.6048 0.5513 0.4988 0.4477
16.50 0.9429 0.8861 0.8297 0.7739 0.7186 0.6640 0.6102 0.5572 0.5053 0.4547
16.75 0.9437 0.8877 0.8321 0.7770 0.7225 0.6685 0.6153 0.5630 0.5117 0.4615
17.00 0.9444 0.8892 0.8344 0.7800 0.7262 0.6729 0.6204 0.5687 0.5179 0.4682
17.25 0.9452 0.8907 0.8366 0.7830 0.7298 0.6772 0.6253 0.5742 0.5239 0.4747
17.50 0.9459 0.8922 0.8388 0.7859 0.7334 0.6814 0.6301 0.5795 0.5298 0.4811
17.75 0.9467 0.8936 0.8410 0.7887 0.7368 0.6855 0.6348 0.5848 0.5356 0.4874
18.00 0.9474 0.8950 0.8430 0.7914 0.7402 0.6895 0.6394 0.5899 0.5413 0.4935
18.25 0.9481 0.8964 0.8450 0.7940 0.7435 0.6934 0.6438 0.5949 0.5468 0.4995
18.50 0.9487 0.8977 0.8470 0.7966 0.7467 0.6972 0.6482 0.5998 0.5522 0.5053
18.75 0.9494 0.8990 0.8489 0.7992 0.7498 0.7009 0.6525 0.6046 0.5574 0.5111
19.00 0.9500 0.9002 0.8508 0.8016 0.7529 0.7045 0.6566 0.6093 0.5626 0.5167
19.25 0.9506 0.9015 0.8526 0.8040 0.7558 0.7080 0.6607 0.6139 0.5677 0.5222
19.50 0.9512 0.9027 0.8544 0.8064 0.7587 0.7115 0.6647 0.6183 0.5726 0.5275
19.75 0.9518 0.9038 0.8561 0.8087 0.7616 0.7148 0.6685 0.6227 0.5774 0.5328
20.00 0.9524 0.9050 0.8578 0.8109 0.7644 0.7181 0.6723 0.6270 0.5822 0.5380
Erlang’s B–formula E1,n(A)
Number of Servers n
A 11 12 13 14 15 16 17 18 19 20
0.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.00 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.00 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.75 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.00 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.25 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.50 0.0007 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.75 0.0012 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
4.00 0.0019 0.0006 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
4.25 0.0029 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
4.50 0.0043 0.0016 0.0006 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
4.75 0.0060 0.0024 0.0009 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
5.00 0.0083 0.0034 0.0013 0.0005 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000
5.25 0.0111 0.0048 0.0019 0.0007 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000
5.50 0.0144 0.0066 0.0028 0.0011 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000
5.75 0.0184 0.0087 0.0038 0.0016 0.0006 0.0002 0.0001 0.0000 0.0000 0.0000
6.00 0.0230 0.0114 0.0052 0.0022 0.0009 0.0003 0.0001 0.0000 0.0000 0.0000
6.25 0.0282 0.0145 0.0069 0.0031 0.0013 0.0005 0.0002 0.0001 0.0000 0.0000
6.50 0.0341 0.0181 0.0090 0.0042 0.0018 0.0007 0.0003 0.0001 0.0000 0.0000
6.75 0.0406 0.0223 0.0115 0.0055 0.0025 0.0010 0.0004 0.0002 0.0001 0.0000
7.00 0.0477 0.0271 0.0144 0.0071 0.0033 0.0014 0.0006 0.0002 0.0001 0.0000
7.25 0.0554 0.0324 0.0177 0.0091 0.0044 0.0020 0.0008 0.0003 0.0001 0.0000
7.50 0.0636 0.0382 0.0216 0.0114 0.0057 0.0027 0.0012 0.0005 0.0002 0.0001
7.75 0.0722 0.0446 0.0259 0.0141 0.0072 0.0035 0.0016 0.0007 0.0003 0.0001
8.00 0.0813 0.0514 0.0307 0.0172 0.0091 0.0045 0.0021 0.0009 0.0004 0.0002
8.25 0.0907 0.0587 0.0359 0.0207 0.0113 0.0058 0.0028 0.0013 0.0006 0.0002
8.50 0.1005 0.0665 0.0416 0.0247 0.0138 0.0073 0.0036 0.0017 0.0008 0.0003
8.75 0.1106 0.0746 0.0478 0.0290 0.0166 0.0090 0.0046 0.0022 0.0010 0.0005
9.00 0.1208 0.0831 0.0544 0.0338 0.0199 0.0111 0.0058 0.0029 0.0014 0.0006
9.25 0.1313 0.0919 0.0614 0.0390 0.0235 0.0134 0.0072 0.0037 0.0018 0.0008
9.50 0.1418 0.1010 0.0687 0.0445 0.0274 0.0160 0.0089 0.0047 0.0023 0.0011
9.75 0.1525 0.1103 0.0764 0.0505 0.0318 0.0190 0.0108 0.0058 0.0030 0.0014
10.00 0.1632 0.1197 0.0843 0.0568 0.0365 0.0223 0.0129 0.0071 0.0037 0.0019
Erlang’s B–formula E1,n(A)
Number of Servers n
A 11 12 13 14 15 16 17 18 19 20
10.25 0.1740 0.1294 0.0926 0.0635 0.0416 0.0259 0.0154 0.0087 0.0047 0.0024
10.50 0.1847 0.1391 0.1010 0.0704 0.0470 0.0299 0.0181 0.0105 0.0058 0.0030
10.75 0.1954 0.1490 0.1097 0.0777 0.0527 0.0342 0.0212 0.0125 0.0070 0.0038
11.00 0.2061 0.1589 0.1185 0.0852 0.0588 0.0389 0.0245 0.0148 0.0085 0.0046
11.25 0.2167 0.1688 0.1275 0.0929 0.0651 0.0438 0.0282 0.0173 0.0101 0.0057
11.50 0.2271 0.1788 0.1365 0.1009 0.0718 0.0491 0.0321 0.0201 0.0120 0.0069
11.75 0.2375 0.1887 0.1457 0.1090 0.0786 0.0546 0.0364 0.0232 0.0141 0.0082
12.00 0.2478 0.1986 0.1549 0.1172 0.0857 0.0604 0.0409 0.0265 0.0165 0.0098
12.25 0.2579 0.2084 0.1641 0.1256 0.0930 0.0665 0.0457 0.0302 0.0191 0.0116
12.50 0.2679 0.2182 0.1734 0.1341 0.1005 0.0728 0.0508 0.0341 0.0219 0.0135
12.75 0.2777 0.2278 0.1826 0.1426 0.1081 0.0793 0.0561 0.0383 0.0250 0.0157
13.00 0.2874 0.2374 0.1919 0.1512 0.1159 0.0860 0.0617 0.0427 0.0284 0.0181
13.25 0.2969 0.2469 0.2010 0.1598 0.1237 0.0929 0.0675 0.0474 0.0320 0.0207
13.50 0.3062 0.2562 0.2102 0.1685 0.1317 0.1000 0.0736 0.0523 0.0358 0.0236
13.75 0.3154 0.2655 0.2192 0.1772 0.1397 0.1072 0.0798 0.0574 0.0399 0.0267
14.00 0.3244 0.2746 0.2282 0.1858 0.1478 0.1145 0.0862 0.0628 0.0442 0.0300
14.25 0.3333 0.2835 0.2371 0.1944 0.1559 0.1219 0.0927 0.0684 0.0488 0.0336
14.50 0.3419 0.2924 0.2459 0.2030 0.1640 0.1294 0.0994 0.0741 0.0536 0.0374
14.75 0.3504 0.3011 0.2546 0.2115 0.1722 0.1370 0.1062 0.0801 0.0585 0.0414
15.00 0.3588 0.3096 0.2632 0.2200 0.1803 0.1446 0.1132 0.0862 0.0637 0.0456
15.25 0.3670 0.3180 0.2717 0.2284 0.1884 0.1523 0.1202 0.0924 0.0690 0.0500
15.50 0.3750 0.3263 0.2801 0.2367 0.1965 0.1599 0.1273 0.0988 0.0746 0.0546
15.75 0.3828 0.3344 0.2883 0.2449 0.2046 0.1676 0.1344 0.1052 0.0802 0.0594
16.00 0.3905 0.3424 0.2965 0.2531 0.2126 0.1753 0.1416 0.1118 0.0861 0.0644
16.25 0.3981 0.3503 0.3045 0.2611 0.2205 0.1830 0.1489 0.1185 0.0920 0.0696
16.50 0.4055 0.3580 0.3124 0.2691 0.2284 0.1906 0.1561 0.1252 0.0981 0.0749
16.75 0.4127 0.3655 0.3202 0.2770 0.2362 0.1983 0.1634 0.1320 0.1042 0.0803
17.00 0.4198 0.3729 0.3278 0.2847 0.2440 0.2059 0.1707 0.1388 0.1105 0.0859
17.25 0.4268 0.3802 0.3353 0.2924 0.2516 0.2134 0.1780 0.1457 0.1168 0.0915
17.50 0.4336 0.3874 0.3427 0.2999 0.2592 0.2209 0.1853 0.1526 0.1232 0.0973
17.75 0.4402 0.3944 0.3500 0.3074 0.2667 0.2283 0.1925 0.1595 0.1297 0.1032
18.00 0.4468 0.4012 0.3571 0.3147 0.2741 0.2357 0.1997 0.1665 0.1362 0.1092
18.25 0.4532 0.4080 0.3642 0.3219 0.2814 0.2430 0.2069 0.1734 0.1428 0.1153
18.50 0.4594 0.4146 0.3711 0.3290 0.2887 0.2502 0.2140 0.1803 0.1493 0.1214
18.75 0.4656 0.4211 0.3779 0.3360 0.2958 0.2574 0.2211 0.1872 0.1559 0.1275
19.00 0.4716 0.4275 0.3845 0.3429 0.3028 0.2645 0.2282 0.1941 0.1625 0.1338
19.25 0.4775 0.4337 0.3911 0.3497 0.3098 0.2715 0.2351 0.2009 0.1691 0.1400
19.50 0.4833 0.4399 0.3975 0.3564 0.3166 0.2784 0.2421 0.2078 0.1757 0.1463
19.75 0.4889 0.4459 0.4038 0.3629 0.3233 0.2853 0.2489 0.2145 0.1823 0.1526
20.00 0.4945 0.4518 0.4101 0.3694 0.3300 0.2920 0.2557 0.2213 0.1889 0.1589
Erlang’s B–formula E1,n(A)
Number of Servers n
A 0 1 2 3 4 5 6 7 8 9
0.25 0.2000 0.0439 0.0056 0.0005 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.50 0.3333 0.1282 0.0321 0.0055 0.0007 0.0001 0.0000 0.0000 0.0000 0.0000
0.75 0.4286 0.2176 0.0788 0.0204 0.0040 0.0006 0.0001 0.0000 0.0000 0.0000
1.00 0.5000 0.3000 0.1375 0.0471 0.0123 0.0026 0.0004 0.0001 0.0000 0.0000
1.25 0.5556 0.3723 0.2009 0.0845 0.0276 0.0072 0.0016 0.0003 0.0000 0.0000
1.50 0.6000 0.4345 0.2640 0.1296 0.0507 0.0160 0.0042 0.0009 0.0002 0.0000
1.75 0.6364 0.4877 0.3238 0.1792 0.0809 0.0298 0.0091 0.0024 0.0005 0.0001
2.00 0.6667 0.5333 0.3789 0.2306 0.1171 0.0492 0.0173 0.0052 0.0013 0.0003
2.25 0.6923 0.5726 0.4289 0.2815 0.1575 0.0741 0.0293 0.0099 0.0029 0.0007
2.50 0.7143 0.6065 0.4738 0.3306 0.2005 0.1037 0.0456 0.0172 0.0056 0.0016
2.75 0.7333 0.6360 0.5140 0.3770 0.2444 0.1373 0.0662 0.0275 0.0099 0.0032
3.00 0.7500 0.6618 0.5498 0.4201 0.2882 0.1737 0.0909 0.0412 0.0163 0.0057
3.25 0.7647 0.6845 0.5817 0.4599 0.3307 0.2118 0.1190 0.0584 0.0251 0.0095
3.50 0.7778 0.7046 0.6103 0.4964 0.3716 0.2507 0.1501 0.0790 0.0366 0.0150
3.75 0.7895 0.7225 0.6358 0.5298 0.4102 0.2895 0.1832 0.1028 0.0510 0.0224
4.00 0.8000 0.7385 0.6587 0.5601 0.4465 0.3276 0.2177 0.1293 0.0683 0.0321
4.25 0.8095 0.7528 0.6793 0.5877 0.4802 0.3645 0.2528 0.1581 0.0885 0.0442
4.50 0.8182 0.7658 0.6979 0.6128 0.5116 0.3998 0.2880 0.1885 0.1112 0.0588
4.75 0.8261 0.7776 0.7148 0.6357 0.5406 0.4334 0.3227 0.2201 0.1361 0.0759
5.00 0.8333 0.7883 0.7301 0.6566 0.5674 0.4651 0.3566 0.2524 0.1630 0.0954
5.25 0.8400 0.7981 0.7440 0.6756 0.5920 0.4949 0.3894 0.2847 0.1912 0.1170
5.50 0.8462 0.8070 0.7567 0.6930 0.6148 0.5227 0.4209 0.3168 0.2205 0.1405
5.75 0.8519 0.8153 0.7683 0.7090 0.6357 0.5487 0.4508 0.3484 0.2503 0.1656
6.00 0.8571 0.8229 0.7790 0.7236 0.6550 0.5729 0.4792 0.3791 0.2804 0.1920
6.25 0.8621 0.8299 0.7888 0.7370 0.6728 0.5954 0.5060 0.4087 0.3104 0.2193
6.50 0.8667 0.8364 0.7979 0.7494 0.6892 0.6163 0.5313 0.4372 0.3399 0.2472
6.75 0.8710 0.8424 0.8063 0.7608 0.7043 0.6357 0.5550 0.4644 0.3689 0.2754
7.00 0.8750 0.8481 0.8141 0.7714 0.7184 0.6537 0.5772 0.4903 0.3970 0.3035
7.25 0.8788 0.8533 0.8213 0.7812 0.7314 0.6705 0.5980 0.5149 0.4243 0.3314
7.50 0.8824 0.8583 0.8281 0.7903 0.7434 0.6861 0.6175 0.5382 0.4504 0.3589
7.75 0.8857 0.8629 0.8343 0.7987 0.7546 0.7006 0.6357 0.5601 0.4755 0.3857
8.00 0.8889 0.8672 0.8402 0.8066 0.7650 0.7141 0.6527 0.5808 0.4994 0.4118
8.25 0.8919 0.8713 0.8457 0.8140 0.7747 0.7266 0.6686 0.6002 0.5222 0.4371
8.50 0.8947 0.8751 0.8509 0.8208 0.7838 0.7383 0.6835 0.6185 0.5438 0.4614
8.75 0.8974 0.8788 0.8557 0.8273 0.7922 0.7493 0.6974 0.6357 0.5643 0.4847
9.00 0.9000 0.8822 0.8603 0.8333 0.8001 0.7595 0.7104 0.6518 0.5837 0.5070
9.25 0.9024 0.8854 0.8646 0.8389 0.8075 0.7691 0.7226 0.6670 0.6021 0.5283
9.50 0.9048 0.8885 0.8686 0.8443 0.8145 0.7781 0.7340 0.6813 0.6194 0.5486
9.75 0.9070 0.8914 0.8724 0.8493 0.8210 0.7865 0.7447 0.6946 0.6357 0.5679
10.00 0.9091 0.8942 0.8761 0.8540 0.8271 0.7944 0.7547 0.7072 0.6511 0.5863
Erlang-B improvement function F1,n(A) = A · [E1,n(A) − E1,n+1(A)]
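The improvement function gives the additional traffic carried when one more server is added, and follows directly from the B-formula recursion. A minimal sketch (function names are ours):

```python
def erlang_b(n: int, a: float) -> float:
    """Erlang's B-formula E_{1,n}(A) by Erlang's recursion."""
    e = 1.0
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

def improvement(n: int, a: float) -> float:
    """F_{1,n}(A) = A * (E_{1,n}(A) - E_{1,n+1}(A)):
    extra carried traffic gained by adding server number n+1."""
    return a * (erlang_b(n, a) - erlang_b(n + 1, a))

# Reproduce one table entry: A = 10 erlang, n = 10 servers
print(round(improvement(10, 10.0), 4))  # prints 0.5135
```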
Number of Servers n
A 0 1 2 3 4 5 6 7 8 9
10.25 0.9111 0.8968 0.8795 0.8585 0.8329 0.8018 0.7642 0.7191 0.6656 0.6036
10.50 0.9130 0.8993 0.8828 0.8627 0.8383 0.8088 0.7731 0.7302 0.6793 0.6201
10.75 0.9149 0.9017 0.8859 0.8667 0.8435 0.8154 0.7814 0.7407 0.6923 0.6357
11.00 0.9167 0.9040 0.8888 0.8705 0.8483 0.8216 0.7893 0.7505 0.7045 0.6505
11.25 0.9184 0.9062 0.8916 0.8741 0.8529 0.8274 0.7967 0.7598 0.7160 0.6644
11.50 0.9200 0.9083 0.8943 0.8775 0.8573 0.8330 0.8037 0.7686 0.7268 0.6777
11.75 0.9216 0.9103 0.8969 0.8808 0.8614 0.8382 0.8103 0.7769 0.7371 0.6902
12.00 0.9231 0.9122 0.8993 0.8838 0.8653 0.8432 0.8165 0.7847 0.7468 0.7020
12.25 0.9245 0.9141 0.9016 0.8868 0.8691 0.8479 0.8224 0.7921 0.7559 0.7132
12.50 0.9259 0.9158 0.9038 0.8896 0.8726 0.8523 0.8280 0.7991 0.7646 0.7238
12.75 0.9273 0.9175 0.9060 0.8923 0.8760 0.8566 0.8334 0.8057 0.7728 0.7339
13.00 0.9286 0.9191 0.9080 0.8949 0.8792 0.8606 0.8384 0.8119 0.7805 0.7434
13.25 0.9298 0.9207 0.9100 0.8973 0.8823 0.8644 0.8432 0.8179 0.7879 0.7524
13.50 0.9310 0.9222 0.9119 0.8997 0.8852 0.8681 0.8477 0.8235 0.7948 0.7609
13.75 0.9322 0.9237 0.9137 0.9019 0.8880 0.8716 0.8520 0.8289 0.8014 0.7690
14.00 0.9333 0.9251 0.9154 0.9041 0.8907 0.8749 0.8562 0.8340 0.8077 0.7767
14.25 0.9344 0.9264 0.9171 0.9061 0.8932 0.8780 0.8601 0.8388 0.8137 0.7840
14.50 0.9355 0.9277 0.9187 0.9081 0.8957 0.8811 0.8638 0.8434 0.8194 0.7910
14.75 0.9365 0.9290 0.9202 0.9100 0.8980 0.8840 0.8674 0.8478 0.8248 0.7976
15.00 0.9375 0.9302 0.9217 0.9119 0.9003 0.8867 0.8708 0.8520 0.8299 0.8038
15.25 0.9385 0.9314 0.9232 0.9136 0.9025 0.8894 0.8741 0.8560 0.8348 0.8098
15.50 0.9394 0.9325 0.9246 0.9153 0.9045 0.8919 0.8772 0.8598 0.8395 0.8155
15.75 0.9403 0.9336 0.9259 0.9170 0.9065 0.8944 0.8802 0.8635 0.8439 0.8209
16.00 0.9412 0.9347 0.9272 0.9185 0.9085 0.8967 0.8830 0.8670 0.8482 0.8261
16.25 0.9420 0.9357 0.9285 0.9201 0.9103 0.8990 0.8858 0.8703 0.8522 0.8310
16.50 0.9429 0.9367 0.9297 0.9215 0.9121 0.9011 0.8884 0.8735 0.8561 0.8357
16.75 0.9437 0.9377 0.9308 0.9229 0.9138 0.9032 0.8909 0.8766 0.8598 0.8402
17.00 0.9444 0.9386 0.9320 0.9243 0.9155 0.9052 0.8933 0.8795 0.8634 0.8445
17.25 0.9452 0.9395 0.9331 0.9256 0.9171 0.9072 0.8957 0.8823 0.8668 0.8486
17.50 0.9459 0.9404 0.9341 0.9269 0.9186 0.9090 0.8979 0.8850 0.8700 0.8526
17.75 0.9467 0.9413 0.9352 0.9281 0.9201 0.9108 0.9001 0.8876 0.8731 0.8563
18.00 0.9474 0.9421 0.9362 0.9293 0.9215 0.9125 0.9021 0.8901 0.8761 0.8599
18.25 0.9481 0.9429 0.9371 0.9305 0.9229 0.9142 0.9041 0.8925 0.8790 0.8634
18.50 0.9487 0.9437 0.9381 0.9316 0.9243 0.9158 0.9060 0.8948 0.8818 0.8667
18.75 0.9494 0.9445 0.9390 0.9327 0.9256 0.9173 0.9079 0.8970 0.8844 0.8699
19.00 0.9500 0.9453 0.9399 0.9338 0.9268 0.9188 0.9097 0.8992 0.8870 0.8729
19.25 0.9506 0.9460 0.9408 0.9348 0.9280 0.9203 0.9114 0.9012 0.8895 0.8759
19.50 0.9512 0.9467 0.9416 0.9358 0.9292 0.9217 0.9131 0.9032 0.8918 0.8787
19.75 0.9518 0.9474 0.9424 0.9368 0.9304 0.9230 0.9147 0.9051 0.8941 0.8814
20.00 0.9524 0.9481 0.9432 0.9377 0.9315 0.9244 0.9162 0.9070 0.8963 0.8840
Erlang-B improvement function F1,n(A) = A · [E1,n(A) − E1,n+1(A)]
Number of Servers n
A 10 11 12 13 14 15 16 17 18 19
0.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.00 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.00 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.25 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.50 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.75 0.0009 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.00 0.0018 0.0005 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.25 0.0032 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.50 0.0055 0.0018 0.0005 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.75 0.0088 0.0031 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
4.00 0.0135 0.0051 0.0018 0.0006 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000
4.25 0.0198 0.0080 0.0030 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000
4.50 0.0280 0.0120 0.0047 0.0017 0.0006 0.0002 0.0000 0.0000 0.0000 0.0000
4.75 0.0382 0.0174 0.0072 0.0027 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000
5.00 0.0505 0.0242 0.0106 0.0042 0.0016 0.0005 0.0002 0.0001 0.0000 0.0000
5.25 0.0650 0.0328 0.0151 0.0064 0.0025 0.0009 0.0003 0.0001 0.0000 0.0000
5.50 0.0816 0.0432 0.0209 0.0093 0.0038 0.0014 0.0005 0.0002 0.0001 0.0000
5.75 0.1003 0.0555 0.0281 0.0131 0.0056 0.0022 0.0008 0.0003 0.0001 0.0000
6.00 0.1209 0.0698 0.0369 0.0179 0.0080 0.0033 0.0013 0.0005 0.0002 0.0001
6.25 0.1431 0.0859 0.0473 0.0240 0.0112 0.0049 0.0020 0.0008 0.0003 0.0001
6.50 0.1668 0.1038 0.0595 0.0314 0.0153 0.0069 0.0029 0.0012 0.0004 0.0002
6.75 0.1915 0.1234 0.0734 0.0403 0.0205 0.0096 0.0042 0.0017 0.0007 0.0002
7.00 0.2172 0.1445 0.0890 0.0507 0.0267 0.0131 0.0060 0.0026 0.0010 0.0004
7.25 0.2434 0.1668 0.1061 0.0626 0.0342 0.0174 0.0082 0.0037 0.0015 0.0006
7.50 0.2699 0.1901 0.1248 0.0761 0.0431 0.0227 0.0111 0.0051 0.0022 0.0009
7.75 0.2965 0.2143 0.1448 0.0911 0.0533 0.0290 0.0148 0.0070 0.0031 0.0013
8.00 0.3230 0.2391 0.1659 0.1075 0.0650 0.0366 0.0192 0.0095 0.0044 0.0019
8.25 0.3491 0.2642 0.1881 0.1254 0.0780 0.0453 0.0246 0.0125 0.0060 0.0027
8.50 0.3748 0.2894 0.2109 0.1444 0.0925 0.0554 0.0310 0.0163 0.0080 0.0037
8.75 0.3999 0.3146 0.2344 0.1645 0.1082 0.0667 0.0385 0.0208 0.0106 0.0051
9.00 0.4243 0.3396 0.2582 0.1855 0.1253 0.0793 0.0471 0.0263 0.0138 0.0068
9.25 0.4479 0.3642 0.2823 0.2072 0.1434 0.0932 0.0569 0.0326 0.0176 0.0090
9.50 0.4706 0.3884 0.3064 0.2295 0.1625 0.1084 0.0679 0.0400 0.0222 0.0116
9.75 0.4925 0.4120 0.3303 0.2522 0.1825 0.1246 0.0801 0.0485 0.0276 0.0149
10.00 0.5135 0.4349 0.3540 0.2752 0.2032 0.1420 0.0935 0.0581 0.0340 0.0188
Erlang-B improvement function F1,n(A) = A · [E1,n(A) − E1,n+1(A)]
Number of Servers n
A 10 11 12 13 14 15 16 17 18 19
10.25 0.5336 0.4571 0.3773 0.2982 0.2245 0.1602 0.1080 0.0687 0.0413 0.0234
10.50 0.5528 0.4786 0.4002 0.3212 0.2462 0.1793 0.1236 0.0805 0.0495 0.0288
10.75 0.5710 0.4992 0.4225 0.3441 0.2682 0.1991 0.1402 0.0934 0.0588 0.0350
11.00 0.5885 0.5191 0.4442 0.3666 0.2903 0.2194 0.1576 0.1073 0.0692 0.0422
11.25 0.6050 0.5381 0.4652 0.3888 0.3124 0.2402 0.1759 0.1223 0.0806 0.0503
11.50 0.6208 0.5563 0.4855 0.4105 0.3344 0.2612 0.1948 0.1381 0.0930 0.0593
11.75 0.6357 0.5738 0.5051 0.4317 0.3562 0.2825 0.2142 0.1548 0.1063 0.0693
12.00 0.6499 0.5904 0.5240 0.4523 0.3778 0.3038 0.2342 0.1723 0.1207 0.0803
12.25 0.6634 0.6062 0.5421 0.4723 0.3989 0.3251 0.2544 0.1904 0.1359 0.0922
12.50 0.6762 0.6213 0.5595 0.4916 0.4196 0.3462 0.2748 0.2091 0.1519 0.1051
12.75 0.6883 0.6357 0.5762 0.5103 0.4398 0.3671 0.2954 0.2282 0.1686 0.1189
13.00 0.6998 0.6494 0.5921 0.5283 0.4595 0.3877 0.3160 0.2477 0.1860 0.1334
13.25 0.7107 0.6625 0.6073 0.5457 0.4786 0.4080 0.3365 0.2674 0.2039 0.1488
13.50 0.7211 0.6748 0.6219 0.5623 0.4971 0.4278 0.3568 0.2872 0.2223 0.1649
13.75 0.7309 0.6866 0.6357 0.5783 0.5150 0.4471 0.3769 0.3072 0.2411 0.1816
14.00 0.7403 0.6978 0.6490 0.5936 0.5322 0.4659 0.3967 0.3270 0.2601 0.1988
14.25 0.7492 0.7085 0.6616 0.6083 0.5488 0.4842 0.4161 0.3468 0.2793 0.2165
14.50 0.7576 0.7187 0.6736 0.6223 0.5649 0.5020 0.4351 0.3664 0.2986 0.2346
14.75 0.7656 0.7283 0.6851 0.6357 0.5802 0.5191 0.4537 0.3857 0.3179 0.2530
15.00 0.7732 0.7375 0.6961 0.6486 0.5950 0.5357 0.4717 0.4048 0.3371 0.2715
15.25 0.7805 0.7462 0.7065 0.6608 0.6092 0.5517 0.4893 0.4234 0.3562 0.2902
15.50 0.7874 0.7545 0.7164 0.6726 0.6227 0.5671 0.5064 0.4417 0.3751 0.3090
15.75 0.7940 0.7625 0.7259 0.6837 0.6358 0.5820 0.5229 0.4596 0.3938 0.3277
16.00 0.8002 0.7700 0.7349 0.6944 0.6482 0.5963 0.5389 0.4770 0.4121 0.3463
16.25 0.8062 0.7772 0.7435 0.7046 0.6601 0.6100 0.5543 0.4939 0.4301 0.3648
16.50 0.8119 0.7841 0.7517 0.7144 0.6716 0.6231 0.5692 0.5104 0.4478 0.3831
16.75 0.8173 0.7906 0.7596 0.7237 0.6825 0.6358 0.5836 0.5263 0.4650 0.4011
17.00 0.8225 0.7969 0.7670 0.7325 0.6929 0.6479 0.5974 0.5418 0.4818 0.4189
17.25 0.8275 0.8028 0.7742 0.7410 0.7029 0.6595 0.6107 0.5567 0.4982 0.4363
17.50 0.8322 0.8085 0.7810 0.7491 0.7125 0.6706 0.6235 0.5711 0.5141 0.4533
17.75 0.8367 0.8140 0.7875 0.7569 0.7216 0.6813 0.6358 0.5850 0.5295 0.4700
18.00 0.8411 0.8192 0.7937 0.7643 0.7303 0.6915 0.6476 0.5984 0.5444 0.4862
18.25 0.8452 0.8241 0.7997 0.7714 0.7387 0.7013 0.6589 0.6114 0.5589 0.5020
18.50 0.8492 0.8289 0.8054 0.7781 0.7467 0.7107 0.6698 0.6238 0.5729 0.5174
18.75 0.8530 0.8335 0.8108 0.7846 0.7544 0.7197 0.6802 0.6358 0.5864 0.5324
19.00 0.8567 0.8378 0.8160 0.7908 0.7617 0.7283 0.6903 0.6473 0.5994 0.5469
19.25 0.8602 0.8420 0.8210 0.7967 0.7687 0.7365 0.6999 0.6584 0.6120 0.5609
19.50 0.8636 0.8460 0.8258 0.8024 0.7754 0.7444 0.7091 0.6690 0.6241 0.5745
19.75 0.8668 0.8499 0.8304 0.8078 0.7819 0.7520 0.7179 0.6792 0.6358 0.5876
20.00 0.8699 0.8536 0.8348 0.8130 0.7880 0.7593 0.7264 0.6891 0.6470 0.6003
Erlang-B improvement function F1,n(A) = A · [E1,n(A) − E1,n+1(A)]
Blocking probability E
n 0.001 0.002 0.005 0.01 0.02 0.05 0.10 0.20 0.50
1 0.0010 0.0020 0.0050 0.0101 0.0204 0.0526 0.1111 0.2500 1.0000
2 0.0458 0.0653 0.1054 0.1526 0.2235 0.3813 0.5954 1.0000 2.7321
3 0.1938 0.2487 0.3490 0.4555 0.6022 0.8994 1.2708 1.9299 4.5914
4 0.4393 0.5350 0.7012 0.8694 1.0923 1.5246 2.0454 2.9452 6.5011
5 0.7621 0.8999 1.1320 1.3608 1.6571 2.2185 2.8811 4.0104 8.4369
6 1.1459 1.3252 1.6218 1.9090 2.2759 2.9603 3.7584 5.1086 10.3886
7 1.5786 1.7984 2.1575 2.5009 2.9354 3.7378 4.6662 6.2302 12.3505
8 2.0513 2.3106 2.7299 3.1276 3.6270 4.5430 5.5971 7.3692 14.3197
9 2.5575 2.8549 3.3326 3.7825 4.3447 5.3702 6.5464 8.5217 16.2942
10 3.0920 3.4265 3.9607 4.4612 5.0840 6.2157 7.5106 9.6850 18.2726
11 3.6511 4.0215 4.6104 5.1599 5.8415 7.0764 8.4871 10.8570 20.2541
12 4.2314 4.6368 5.2789 5.8760 6.6147 7.9501 9.4740 12.0364 22.2381
13 4.8305 5.2700 5.9638 6.6072 7.4015 8.8349 10.4699 13.2218 24.2240
14 5.4464 5.9190 6.6632 7.3517 8.2003 9.7295 11.4735 14.4126 26.2116
15 6.0772 6.5822 7.3755 8.1080 9.0096 10.6327 12.4838 15.6079 28.2005
16 6.7215 7.2582 8.0995 8.8750 9.8284 11.5436 13.5001 16.8071 30.1906
17 7.3781 7.9457 8.8340 9.6516 10.6558 12.4613 14.5217 18.0098 32.1816
18 8.0459 8.6437 9.5780 10.4369 11.4909 13.3852 15.5480 19.2156 34.1734
19 8.7239 9.3514 10.3308 11.2301 12.3330 14.3147 16.5786 20.4241 36.1660
20 9.4115 10.0680 11.0916 12.0306 13.1815 15.2493 17.6132 21.6351 38.1592
21 10.1077 10.7929 11.8598 12.8378 14.0360 16.1885 18.6512 22.8484 40.1530
22 10.8121 11.5253 12.6349 13.6513 14.8959 17.1320 19.6925 24.0636 42.1472
23 11.5241 12.2649 13.4164 14.4705 15.7609 18.0795 20.7367 25.2807 44.1418
24 12.2432 13.0110 14.2038 15.2950 16.6306 19.0307 21.7836 26.4994 46.1369
25 12.9689 13.7634 14.9968 16.1246 17.5046 19.9853 22.8331 27.7196 48.1322
26 13.7008 14.5216 15.7949 16.9588 18.3828 20.9430 23.8850 28.9413 50.1279
27 14.4385 15.2852 16.5980 17.7974 19.2648 21.9037 24.9390 30.1643 52.1239
28 15.1818 16.0540 17.4057 18.6402 20.1504 22.8672 25.9950 31.3884 54.1201
29 15.9304 16.8277 18.2177 19.4869 21.0394 23.8333 27.0529 32.6137 56.1165
30 16.6839 17.6060 19.0339 20.3373 21.9316 24.8018 28.1126 33.8400 58.1132
31 17.4420 18.3887 19.8539 21.1912 22.8268 25.7726 29.1740 35.0672 60.1100
32 18.2047 19.1755 20.6777 22.0483 23.7249 26.7457 30.2369 36.2954 62.1070
33 18.9716 19.9663 21.5050 22.9086 24.6257 27.7207 31.3013 37.5244 64.1042
34 19.7426 20.7609 22.3356 23.7720 25.5291 28.6978 32.3672 38.7542 66.1015
35 20.5174 21.5591 23.1694 24.6381 26.4349 29.6767 33.4343 39.9847 68.0990
36 21.2960 22.3607 24.0063 25.5070 27.3431 30.6573 34.5027 41.2159 70.0966
37 22.0781 23.1656 24.8461 26.3785 28.2536 31.6397 35.5722 42.4478 72.0943
38 22.8636 23.9737 25.6887 27.2525 29.1661 32.6236 36.6429 43.6803 74.0921
39 23.6523 24.7847 26.5340 28.1288 30.0808 33.6090 37.7147 44.9134 76.0900
40 24.4442 25.5987 27.3818 29.0074 30.9973 34.5960 38.7874 46.1470 78.0880
Erlang’s B–formula E1,n(A): offered traffic A for a fixed value of the blocking probability E
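For dimensioning, this table solves the inverse problem: the offered traffic A for which n servers yield a given blocking E. Since E1,n(A) is strictly increasing in A, simple bisection suffices; a minimal sketch (function names and the bracketing interval are ours):

```python
def erlang_b(n: int, a: float) -> float:
    """Erlang's B-formula E_{1,n}(A) by Erlang's recursion."""
    e = 1.0
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

def offered_traffic(n: int, blocking: float, tol: float = 1e-9) -> float:
    """Invert E_{1,n}(A) = blocking for A by bisection
    (E_{1,n} is strictly increasing in A)."""
    lo, hi = 0.0, 10.0 * n + 10.0  # upper bracket well above any tabulated case
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erlang_b(n, mid) < blocking:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Reproduce one table entry: n = 10 servers, E = 1% -> close to the tabulated 4.4612
print(round(offered_traffic(10, 0.01), 4))
```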
Blocking probability E
n 0.001 0.002 0.005 0.01 0.02 0.05 0.10 0.20 0.50
41 25.2391 26.4155 28.2321 29.8882 31.9158 35.5843 39.8612 47.3812 80.0861
42 26.0369 27.2350 29.0848 30.7712 32.8360 36.5739 40.9359 48.6158 82.0842
43 26.8374 28.0570 29.9397 31.6561 33.7580 37.5648 42.0114 49.8509 84.0825
44 27.6407 28.8815 30.7969 32.5430 34.6817 38.5570 43.0878 51.0865 86.0808
45 28.4466 29.7085 31.6561 33.4317 35.6069 39.5503 44.1650 52.3225 88.0792
46 29.2549 30.5377 32.5175 34.3223 36.5337 40.5447 45.2430 53.5589 90.0776
47 30.0657 31.3692 33.3807 35.2146 37.4619 41.5403 46.3218 54.7957 92.0761
48 30.8789 32.2029 34.2459 36.1086 38.3916 42.5369 47.4012 56.0328 94.0747
49 31.6943 33.0387 35.1129 37.0042 39.3227 43.5345 48.4813 57.2703 96.0733
50 32.5119 33.8764 35.9818 37.9014 40.2551 44.5331 49.5621 58.5082 98.0720
51 33.3316 34.7162 36.8523 38.8001 41.1889 45.5326 50.6435 59.7463 100.0707
52 34.1533 35.5578 37.7245 39.7003 42.1238 46.5330 51.7256 60.9848 102.0695
53 34.9771 36.4013 38.5983 40.6019 43.0600 47.5343 52.8082 62.2236 104.0683
54 35.8028 37.2466 39.4737 41.5049 43.9973 48.5364 53.8914 63.4626 106.0671
55 36.6305 38.0936 40.3506 42.4092 44.9358 49.5394 54.9751 64.7019 108.0660
56 37.4599 38.9424 41.2290 43.3149 45.8754 50.5431 56.0594 65.9415 110.0649
57 38.2911 39.7927 42.1089 44.2218 46.8160 51.5477 57.1441 67.1813 112.0639
58 39.1241 40.6447 42.9901 45.1299 47.7577 52.5529 58.2294 68.4214 114.0629
59 39.9587 41.4982 43.8727 46.0392 48.7004 53.5589 59.3151 69.6617 116.0619
60 40.7950 42.3532 44.7566 46.9497 49.6441 54.5656 60.4013 70.9023 118.0610
61 41.6328 43.2097 45.6418 47.8613 50.5887 55.5730 61.4880 72.1430 120.0600
62 42.4723 44.0676 46.5283 48.7740 51.5342 56.5810 62.5750 73.3840 122.0591
63 43.3132 44.9270 47.4160 49.6878 52.4807 57.5897 63.6625 74.6251 124.0583
64 44.1557 45.7876 48.3049 50.6026 53.4280 58.5989 64.7504 75.8665 126.0574
65 44.9995 46.6497 49.1949 51.5185 54.3762 59.6088 65.8387 77.1080 128.0566
66 45.8448 47.5130 50.0861 52.4353 55.3252 60.6193 66.9274 78.3497 130.0558
67 46.6915 48.3776 50.9783 53.3531 56.2750 61.6304 68.0164 79.5916 132.0551
68 47.5395 49.2434 51.8717 54.2718 57.2256 62.6420 69.1058 80.8337 134.0543
69 48.3888 50.1104 52.7661 55.1915 58.1770 63.6541 70.1956 82.0759 136.0536
70 49.2394 50.9786 53.6615 56.1120 59.1291 64.6668 71.2857 83.3182 138.0529
71 50.0913 51.8480 54.5579 57.0335 60.0820 65.6800 72.3761 84.5608 140.0522
72 50.9444 52.7185 55.4554 57.9558 61.0355 66.6937 73.4668 85.8035 142.0515
73 51.7987 53.5901 56.3537 58.8789 61.9898 67.7079 74.5579 87.0463 144.0508
74 52.6542 54.4628 57.2530 59.8029 62.9448 68.7225 75.6492 88.2892 146.0502
75 53.5108 55.3365 58.1533 60.7276 63.9004 69.7377 76.7409 89.5323 148.0496
76 54.3685 56.2113 59.0544 61.6531 64.8567 70.7532 77.8328 90.7755 150.0490
77 55.2274 57.0871 59.9564 62.5794 65.8136 71.7693 78.9250 92.0189 152.0484
78 56.0873 57.9638 60.8593 63.5065 66.7712 72.7857 80.0175 93.2624 154.0478
79 56.9483 58.8416 61.7630 64.4343 67.7293 73.8026 81.1103 94.5060 156.0472
80 57.8104 59.7203 62.6676 65.3628 68.6881 74.8199 82.2033 95.7497 158.0467
Erlang’s B–formula E1,n(A): offered traffic A for a fixed value of the blocking probability E
Number of Servers n
A 1 2 3 4 5 6 7 8 9 10
0.25 0.2500 0.0278 0.0022 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.50 0.5000 0.1000 0.0152 0.0018 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000
0.75 0.7500 0.2045 0.0441 0.0077 0.0011 0.0001 0.0000 0.0000 0.0000 0.0000
1.00 1.0000 0.3333 0.0909 0.0204 0.0038 0.0006 0.0001 0.0000 0.0000 0.0000
1.25 1.0000 0.4808 0.1555 0.0422 0.0097 0.0019 0.0003 0.0001 0.0000 0.0000
1.50 1.0000 0.6429 0.2368 0.0746 0.0201 0.0047 0.0010 0.0002 0.0000 0.0000
1.75 1.0000 0.8167 0.3337 0.1184 0.0364 0.0098 0.0023 0.0005 0.0001 0.0000
2.00 1.0000 1.0000 0.4444 0.1739 0.0597 0.0180 0.0048 0.0011 0.0002 0.0000
2.25 1.0000 1.0000 0.5678 0.2412 0.0908 0.0303 0.0090 0.0024 0.0006 0.0001
2.50 1.0000 1.0000 0.7022 0.3199 0.1304 0.0474 0.0154 0.0045 0.0012 0.0003
2.75 1.0000 1.0000 0.8467 0.4095 0.1788 0.0702 0.0248 0.0079 0.0023 0.0006
3.00 1.0000 1.0000 1.0000 0.5094 0.2362 0.0991 0.0376 0.0129 0.0040 0.0012
3.25 1.0000 1.0000 1.0000 0.6191 0.3026 0.1348 0.0546 0.0201 0.0068 0.0021
3.50 1.0000 1.0000 1.0000 0.7379 0.3778 0.1775 0.0762 0.0299 0.0107 0.0035
3.75 1.0000 1.0000 1.0000 0.8650 0.4618 0.2274 0.1029 0.0427 0.0163 0.0057
4.00 1.0000 1.0000 1.0000 1.0000 0.5541 0.2848 0.1351 0.0590 0.0238 0.0088
4.25 1.0000 1.0000 1.0000 1.0000 0.6545 0.3495 0.1731 0.0793 0.0336 0.0131
4.50 1.0000 1.0000 1.0000 1.0000 0.7625 0.4217 0.2172 0.1039 0.0460 0.0189
4.75 1.0000 1.0000 1.0000 1.0000 0.8778 0.5010 0.2675 0.1331 0.0616 0.0265
5.00 1.0000 1.0000 1.0000 1.0000 1.0000 0.5875 0.3241 0.1673 0.0805 0.0361
5.25 1.0000 1.0000 1.0000 1.0000 1.0000 0.6809 0.3871 0.2066 0.1031 0.0481
5.50 1.0000 1.0000 1.0000 1.0000 1.0000 0.7809 0.4564 0.2512 0.1298 0.0628
5.75 1.0000 1.0000 1.0000 1.0000 1.0000 0.8874 0.5320 0.3013 0.1606 0.0804
6.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.6138 0.3570 0.1960 0.1013
6.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7017 0.4182 0.2360 0.1257
6.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7954 0.4850 0.2807 0.1537
6.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8949 0.5574 0.3304 0.1857
7.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.6353 0.3849 0.2217
7.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7186 0.4445 0.2620
7.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8073 0.5091 0.3066
7.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9011 0.5788 0.3556
8.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.6533 0.4092
8.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7328 0.4672
8.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8171 0.5299
8.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9062 0.5970
9.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.6687
9.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7449
9.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8256
9.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9106
10.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Erlang’s C–formula E2,n(A)
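Erlang's C-formula need not be tabulated independently: for A < n it follows from the B-formula through the standard identity E2,n(A) = n·E1,n(A) / (n − A·(1 − E1,n(A))); for A ≥ n the M/M/n queue has no statistical equilibrium, which is why the table shows 1.0000 there. A minimal sketch (function names are ours):

```python
def erlang_b(n: int, a: float) -> float:
    """Erlang's B-formula E_{1,n}(A) by Erlang's recursion."""
    e = 1.0
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

def erlang_c(n: int, a: float) -> float:
    """Erlang's C-formula E_{2,n}(A): probability of waiting in M/M/n.
    For a >= n no equilibrium exists; return 1.0 as in the table."""
    if a >= n:
        return 1.0
    b = erlang_b(n, a)
    return n * b / (n - a * (1.0 - b))

# Reproduce one table entry: A = 5 erlang, n = 6 servers
print(round(erlang_c(6, 5.0), 4))  # prints 0.5875
```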
(For 10.25 ≤ A ≤ 20.00 and n = 1, 2, …, 10, every entry equals 1.0000, since E2,n(A) = 1 whenever A ≥ n.)
Number of Servers n
A 11 12 13 14 15 16 17 18 19 20
0.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.00 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.50 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
1.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.00 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.25 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.50 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
2.75 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.00 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.25 0.0006 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.50 0.0011 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
3.75 0.0018 0.0006 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
4.00 0.0030 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
4.25 0.0048 0.0016 0.0005 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
4.50 0.0072 0.0026 0.0008 0.0003 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
4.75 0.0106 0.0039 0.0014 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
5.00 0.0151 0.0059 0.0021 0.0007 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000
5.25 0.0210 0.0085 0.0033 0.0012 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000
5.50 0.0284 0.0121 0.0048 0.0018 0.0006 0.0002 0.0001 0.0000 0.0000 0.0000
5.75 0.0378 0.0166 0.0069 0.0027 0.0010 0.0003 0.0001 0.0000 0.0000 0.0000
6.00 0.0492 0.0225 0.0096 0.0039 0.0015 0.0005 0.0002 0.0001 0.0000 0.0000
6.25 0.0630 0.0298 0.0132 0.0055 0.0022 0.0008 0.0003 0.0001 0.0000 0.0000
6.50 0.0795 0.0388 0.0178 0.0077 0.0032 0.0012 0.0005 0.0002 0.0001 0.0000
6.75 0.0988 0.0496 0.0236 0.0106 0.0045 0.0018 0.0007 0.0002 0.0001 0.0000
7.00 0.1211 0.0626 0.0306 0.0142 0.0062 0.0026 0.0010 0.0004 0.0001 0.0000
7.25 0.1467 0.0779 0.0392 0.0187 0.0084 0.0036 0.0015 0.0006 0.0002 0.0001
7.50 0.1758 0.0958 0.0495 0.0243 0.0113 0.0050 0.0021 0.0008 0.0003 0.0001
7.75 0.2085 0.1164 0.0617 0.0311 0.0149 0.0068 0.0029 0.0012 0.0005 0.0002
8.00 0.2450 0.1398 0.0760 0.0393 0.0193 0.0090 0.0040 0.0017 0.0007 0.0003
8.25 0.2853 0.1664 0.0925 0.0490 0.0247 0.0119 0.0054 0.0024 0.0010 0.0004
8.50 0.3296 0.1962 0.1115 0.0605 0.0312 0.0154 0.0072 0.0032 0.0014 0.0006
8.75 0.3780 0.2294 0.1331 0.0738 0.0390 0.0197 0.0095 0.0044 0.0019 0.0008
9.00 0.4305 0.2660 0.1575 0.0892 0.0482 0.0249 0.0123 0.0058 0.0026 0.0011
9.25 0.4871 0.3063 0.1848 0.1067 0.0590 0.0312 0.0157 0.0076 0.0035 0.0015
9.50 0.5479 0.3502 0.2151 0.1267 0.0714 0.0386 0.0199 0.0098 0.0046 0.0021
9.75 0.6129 0.3979 0.2485 0.1491 0.0857 0.0472 0.0249 0.0126 0.0061 0.0028
10.00 0.6821 0.4494 0.2853 0.1741 0.1020 0.0573 0.0309 0.0159 0.0079 0.0037
10.25 0.7555 0.5047 0.3253 0.2019 0.1205 0.0690 0.0379 0.0200 0.0101 0.0049
10.50 0.8329 0.5639 0.3688 0.2326 0.1412 0.0823 0.0461 0.0248 0.0128 0.0063
10.75 0.9144 0.6270 0.4158 0.2662 0.1642 0.0975 0.0556 0.0304 0.0160 0.0081
11.00 1.0000 0.6939 0.4664 0.3029 0.1898 0.1145 0.0665 0.0371 0.0199 0.0103
11.25 1.0000 0.7647 0.5205 0.3428 0.2180 0.1337 0.0789 0.0448 0.0245 0.0129
11.50 1.0000 0.8393 0.5782 0.3858 0.2489 0.1550 0.0930 0.0538 0.0299 0.0160
11.75 1.0000 0.9178 0.6395 0.4321 0.2826 0.1786 0.1089 0.0640 0.0362 0.0197
12.00 1.0000 1.0000 0.7044 0.4817 0.3192 0.2046 0.1266 0.0756 0.0435 0.0241
12.25 1.0000 1.0000 0.7729 0.5347 0.3587 0.2331 0.1464 0.0888 0.0519 0.0293
12.50 1.0000 1.0000 0.8451 0.5910 0.4013 0.2641 0.1682 0.1035 0.0615 0.0353
12.75 1.0000 1.0000 0.9208 0.6507 0.4469 0.2978 0.1922 0.1200 0.0724 0.0422
13.00 1.0000 1.0000 1.0000 0.7138 0.4957 0.3343 0.2185 0.1383 0.0847 0.0501
13.25 1.0000 1.0000 1.0000 0.7803 0.5476 0.3735 0.2472 0.1585 0.0984 0.0591
13.50 1.0000 1.0000 1.0000 0.8502 0.6026 0.4156 0.2783 0.1808 0.1138 0.0692
13.75 1.0000 1.0000 1.0000 0.9234 0.6609 0.4606 0.3120 0.2052 0.1308 0.0807
14.00 1.0000 1.0000 1.0000 1.0000 0.7223 0.5085 0.3483 0.2317 0.1496 0.0936
14.25 1.0000 1.0000 1.0000 1.0000 0.7870 0.5594 0.3872 0.2605 0.1702 0.1079
14.50 1.0000 1.0000 1.0000 1.0000 0.8548 0.6133 0.4288 0.2917 0.1928 0.1237
14.75 1.0000 1.0000 1.0000 1.0000 0.9258 0.6702 0.4731 0.3253 0.2175 0.1412
15.00 1.0000 1.0000 1.0000 1.0000 1.0000 0.7301 0.5203 0.3613 0.2442 0.1604
15.25 1.0000 1.0000 1.0000 1.0000 1.0000 0.7930 0.5702 0.3999 0.2731 0.1814
15.50 1.0000 1.0000 1.0000 1.0000 1.0000 0.8590 0.6230 0.4410 0.3043 0.2043
15.75 1.0000 1.0000 1.0000 1.0000 1.0000 0.9280 0.6787 0.4848 0.3378 0.2292
16.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7372 0.5312 0.3736 0.2561
16.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7986 0.5803 0.4118 0.2850
16.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8628 0.6320 0.4525 0.3162
16.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9300 0.6865 0.4956 0.3495
17.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7437 0.5413 0.3851
17.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8037 0.5896 0.4229
17.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8664 0.6404 0.4632
17.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9318 0.6938 0.5058
18.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7498 0.5508
18.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8084 0.5982
18.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8696 0.6481
18.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9335 0.7005
19.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.7554
19.25 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8128
19.50 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.8727
19.75 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9351
20.00 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Erlang’s C–formula E2,n(A)
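The tabulated values can be reproduced numerically. The sketch below (an illustration, not code from the book) computes Erlang's C-formula from Erlang's B-formula using the standard recursion, which avoids evaluating A^n/n! directly and so does not overflow for large n:

```python
def erlang_b(a: float, n: int) -> float:
    """Erlang's B-formula (blocking probability) for offered traffic a
    and n servers, via the recursion B(0) = 1,
    B(i) = a*B(i-1) / (i + a*B(i-1))."""
    b = 1.0
    for i in range(1, n + 1):
        b = a * b / (i + a * b)
    return b

def erlang_c(a: float, n: int) -> float:
    """Erlang's C-formula E2,n(A) (probability of waiting), obtained
    from the B-formula; equals 1 when a >= n, as in the table above."""
    if a >= n:
        return 1.0
    b = erlang_b(a, n)
    return n * b / (n - a * (1.0 - b))

# Spot-check against the table: A = 10.00, n = 11
print(round(erlang_c(10.0, 11), 4))  # → 0.6821
```

Any entry of the table can be checked the same way, e.g. `erlang_c(5.0, 6)` reproduces the entry 0.5875 for A = 5.00, n = 6.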