Random geometry of wireless cellular networks
B. Błaszczyszyn (Inria/ENS Paris)
PROBABILITY AND STATISTICS FOR SCIENCE AND TECHNOLOGY
Special Session under the auspices of the Committee on Mathematics of the Polish Academy of Sciences (PAN)
Toruń, June 5, 2013
Cellular Network
Infrastructure of base stations (BS) provided by an operator.
Individual users talk to these stations and listen to them; they send/receive data (Internet, VOD, mobile TV, etc.).
Technology and geometry
Radio communications between users and BSs share some part of the electromagnetic spectrum.
Successive “generations” of cellular networks (1G, ..., 4G, GSM, CDMA, HSDPA, LTE, etc.) use different technologies to “separate” individual communications (in time and/or frequency and/or coding).
Performance of the radio part of a cellular network depends very much on its geometry (relative locations of BSs and users).
Geometry and dynamics
[Figure: SINR cells — Honeycomb vs. Poisson-Voronoi.]
Geometry: the (static) pattern of BSs with their path-loss fields; dynamics: arrivals/departures/mobility of users.
Questions: SINR-based QoS prediction ⇒ capacity models ⇒ operator dimensioning tools.
Shadowing and network geometry
Shadowing — signal power loss due to reflection, diffraction, and scattering. Modeled by a random field with log-normal marginals with mean 1 and some variance.
It impacts the geometry of cellular networks: the serving BS ≡ the one with the smallest path-loss ≢ the closest one.
Problems: Shadowing is believed to degrade QoS (?) ⇒ Not always! How does it harm the “perfect” honeycomb? ⇒ It makes it more Poisson-like. Poisson analysis may be useful!
OUTLINE of the remaining part of the talk
Foundations of wireless communications
Geometry of cellular networks
Poisson pp — basic facts
Poisson pp as a model for BS
When everything is similar to Poisson — a convergence result
When shadowing improves performance — heavy tails in action
Foundations of wireless communications
Signal propagation
Maxwell’s electromagnetic field equations? — Not needed. Simple statistical models relate the received power Prec to the emitted one Pem.
[Figure: direct path, shadowing (diffraction), fading (multipath).]
Deterministic path-loss: Prec ≈ Pem · (distance)^−β for some β > 0 (path-loss exponent).
Random path-loss: Prec ≈ S · F · Pem · (distance)^−β, with S, F random variables for shadowing and fading. Typically S ∼ log-normal, F ∼ exponential (Rayleigh fading).
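A minimal numerical sketch of this random path-loss model (all parameter values below are illustrative assumptions, not taken from the talk): log-normal shadowing S with mean 1 and exponential (Rayleigh) fading F with mean 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def received_power(p_em, dist, beta=3.5, sigma_db=8.0, rng=rng):
    """P_rec ~ S * F * P_em * dist**(-beta), with log-normal S (mean 1) and exponential F (mean 1)."""
    sigma = sigma_db * np.log(10) / 10                    # log-SD in dB -> natural-log scale
    S = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma)    # shadowing, E[S] = 1
    F = rng.exponential(1.0)                              # Rayleigh fading on the power scale
    return S * F * p_em * dist ** (-beta)

# one illustrative sample: BS at distance 500 (arbitrary units), emitted power 1
print(received_power(p_em=1.0, dist=500.0))
```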
Signal detection and processing
Basic radio channel characteristic: the Signal-to-Interference-and-Noise Ratio,

$$ \mathrm{SINR} \;=\; \frac{\overbrace{\text{power of the signal from the serving BS}}^{\text{useful signal}}}{\underbrace{\text{noise power} \,+\, \text{total power received from the non-serving BSs}}_{\text{interference}}}. $$
Signal coding
Constant bit-rate (CBR) coding (e.g. for voice, mobile TV): 1(SINR > Const) — existence of a connection.
Variable bit-rate (VBR) coding (e.g. for data transfer): f(SINR) — channel throughput (number of bits per second that can be sent in the channel with a given bit-error probability).
Shannon’s theorem: f(x) ∼ log(1 + x) is the maximal theoretical throughput in a Gaussian channel.
More recent information-theoretic concepts (not considered in this presentation): MIMO (multiple-antenna systems), broadcast and MAC channels (joint coding to and from several users), interference cancellation.
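A hedged sketch linking the two coding regimes to the SINR of the previous slide; the received powers, noise level and CBR threshold below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative received powers from 10 BSs (they could come from the propagation model above)
powers = rng.exponential(1e-9, size=10)      # watts, assumed values
noise = 1e-12                                # watts, assumed noise power

useful = powers.max()                        # signal from the serving (strongest) BS
interference = powers.sum() - useful         # total power from the non-serving BSs
sinr = useful / (noise + interference)

cbr_ok = sinr > 1.0                          # CBR coding: 1(SINR > Const), threshold Const = 1 assumed
vbr_rate = np.log2(1.0 + sinr)               # VBR coding: Shannon bound, bits/s/Hz

print(f"SINR = {10 * np.log10(sinr):.1f} dB, CBR connection: {cbr_ok}, throughput <= {vbr_rate:.2f} bit/s/Hz")
```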
Geometry of cellular networks
Honeycomb model for BS placement
Traditionally considered the optimal placement of BSs.
Why? Hexagons can tile the plane and have the smallest ratio of perimeter to area (compared to equilateral triangles and squares, which tile the plane too) → this minimizes the relative number of users next to the cell boundary (who are more difficult to serve due to smaller SINR).
Problems with the Honeycomb
Real deployments of BSs are never a honeycomb (due to obvious architectural constraints). They often “look like a random pattern”.
4G deployment according to H. S. Dhillon [UT Austin].
Honeycomb cellular networks may also be hard to evaluate analytically. E.g., what is the distribution function of the SINR of the “typical” user in the network?
“Wireless cell” is not (always) Voronoi
Voronoi cell of X in {Xi}: the set of “locations” closer to X than to any other point of {Xi}: {x : |x − X| ≤ inf_i |x − Xi|}.
Users (usually) connect to the BS received with the strongest signal.
Deterministic path-loss model → BSs serve users in their Voronoi cells.
Random path-loss model (accounting for shadowing and/or fading) → the serving BS is not necessarily the closest one.
Shadowing randomly “perturbs” the geometry.
If not Honeycomb then what?
Enough arguments to adopt a stochastic model of the BSs!
A bit of formalism: random point patterns (random elements in the space of locally finite subsets of some space, here the plane) are called point processes (pp). They are usually considered as random (purely atomic) measures (in point process theory), and can also be seen as random closed sets (in stochastic geometry).
Point processes exhibiting some “point repulsion” are potentially suitable to model BS locations: Gibbs pp (with appropriate conditional density), hard-core models (e.g. arising from random sequential packing of balls), determinantal pp (arising in physics, random matrix theory, combinatorics), zeros of Gaussian analytic functions, perturbed lattices, etc.
:-( Usually not amenable to explicit quantitative analysis of the SINR!
And if we assume a “completely random” network?
Completely random point pattern ≡ Poisson pp.
Poisson pp versus a “perturbed Honeycomb”
[Figure: a Poisson pp vs. a perturbed Honeycomb.]
The Poisson pp is easy to work with but exhibits more clustering.
Poisson point process — basic facts
Poisson pp
Definition: Φ = {Xi} is a Poisson pp of intensity measure Λ in a (Polish) space E if
(1) the number of points of Φ in any set A, Φ(A), is a Poisson random variable with mean Λ(A);
(2) the numbers of points of Φ in disjoint sets, Φ(A1), ..., Φ(An), n ≥ 1, are independent random variables.
“Complete independence” (2) characterizes the Poisson pp (provided it is simple, without fixed atoms).
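A minimal simulation sketch following directly from properties (1)–(2): on a bounded window, draw a Poisson number of points and place them independently and uniformly (intensity and window are arbitrary choices).

```python
import numpy as np

def poisson_pp(lam, width, height, rng):
    """Homogeneous Poisson pp of intensity lam on [0, width] x [0, height]."""
    n = rng.poisson(lam * width * height)      # (1): Poisson count with mean Lambda(A)
    x = rng.uniform(0.0, width, size=n)        # given the count, points are iid uniform,
    y = rng.uniform(0.0, height, size=n)       # which yields the independence over disjoint sets (2)
    return np.column_stack([x, y])

rng = np.random.default_rng(2)
bs = poisson_pp(lam=3.0, width=10.0, height=10.0, rng=rng)   # e.g. 3 BS per km^2 on a 10 x 10 km window
print(len(bs), "points")
```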
Poisson pp and random mapping of points
Consider a probability kernel p(x, ·) from E to some (Polish) space E′. Given a pp Φ on E, consider the independent mapping of each X ∈ Φ to E′ with distribution p(X, ·).
Displacement Theorem: The independent mapping of the points of a Poisson pp of intensity Λ on E to E′, with kernel p(x, ·), is a Poisson pp on E′ with intensity Λ′(·) = ∫_E p(x, ·) Λ(dx).
Corollary:
Independent thinning (removal of points) of a Poisson pp remains Poisson.
Independent marking, i.e. attaching “auxiliary” random elements Ki ∈ K to the Poisson points {Xi}, makes {(Xi, Ki)} Poisson on E × K.
Application: Ki — radio channel conditions from BS Xi.
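A small sketch of the corollary, with an assumed retention probability and an assumed mark distribution (log-normal “radio channel” marks):

```python
import numpy as np

rng = np.random.default_rng(3)

# homogeneous Poisson pp of intensity lam on a 10 x 10 window (as in the previous sketch)
lam, side = 3.0, 10.0
points = rng.uniform(0.0, side, size=(rng.poisson(lam * side**2), 2))

# independent thinning: keep each point with probability p  ->  Poisson pp of intensity p * lam
p = 0.4
thinned = points[rng.random(len(points)) < p]

# independent marking: attach iid "radio channel" marks K_i (illustrative log-normal values)
marks = rng.lognormal(mean=0.0, sigma=1.0, size=len(points))
marked = list(zip(map(tuple, points), marks))   # {(X_i, K_i)}: a Poisson pp on R^2 x R_+

print(len(points), "points;", len(thinned), "kept after thinning")
```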
Poisson pp as a weak limit
A “spatial” extension of the “classical” Poisson theorem on the convergence of binomial variables to the Poisson distribution.
Poisson Theorem: Independent thinning of an arbitrary pp converges weakly to a Poisson pp, provided the retention probability goes to 0 and the process is rescaled to preserve a constant intensity.
Theorem: Independent, successive translations of the points of an arbitrary pp converge weakly to a Poisson process, provided ... (cf. Daley & Vere-Jones 2008).
Application: The latter result will be used to show that “increasing variability” of the radio channel conditions makes any given network geometry (including the Honeycomb) be “perceived” by a given user as a Poisson pp.
Linear and extremal “shot-noise”
Φ = {Xi} — a pp on E, f — a real function on E. Define:

$$ I = I_{\Phi,f} = \int f(x)\,\Phi(dx) = \sum_{X_i\in\Phi} f(X_i) \qquad \text{(shot-noise)}, $$

$$ Z = Z_{\Phi,f} = \sup\{f(X_i) : X_i \in \Phi\} \qquad \text{(extremal shot-noise)}. $$

Fact: For a Poisson pp of intensity Λ(·),

$$ \mathcal{L}_{\Phi}(f) = \mathbf{E}\big[e^{-I}\big] = \exp\Big\{-\int_E \big(1-e^{-f(x)}\big)\,\Lambda(dx)\Big\}, $$

$$ \mathbf{P}\{Z \le t\} = \exp\Big\{-\int_E \mathbf{1}\big(f(x) > t\big)\,\Lambda(dx)\Big\}. $$

Application: Z — signal from the strongest BS, I — interference.
Question: distribution of the SINR? The ratio Z/(w + I) for w > 0 can be evaluated as well.
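A Monte Carlo sketch of the extremal shot-noise formula for Z, with f(x) = (K|x|)^−β and arbitrary illustrative parameters; only points within distance r_t = t^(−1/β)/K can push Z above t, so a finite simulation window suffices.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, K, beta, t = 1.0, 1.0, 4.0, 20.0        # illustrative parameters
f = lambda r: (K * r) ** (-beta)             # response function, here a pure path-loss

# only points closer than r_t = t**(-1/beta)/K can give f(x) > t, so a finite disc suffices
r_t = t ** (-1.0 / beta) / K
R = 1.5 * r_t

def sample_Z():
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.random(n))           # radii of uniformly placed points in B(0, R)
    return f(r).max() if n else 0.0

emp = np.mean([sample_Z() <= t for _ in range(20000)])
theory = np.exp(-lam * np.pi * r_t**2)       # exp(-integral of 1(f(x) > t) dLambda)
print(f"empirical P(Z <= t) = {emp:.3f}, formula = {theory:.3f}")
```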
“Typical point” of Poisson pp
Consider a stationary pp Φ on R^d of intensity λ ∈ (0, ∞) (mean number of points per unit volume). Stationary (homogeneous) Poisson pp: Λ(dx) = λ dx.
The “typical point” of Φ is supposed to be a point uniformly sampled from Φ.
This is formalized in Palm theory. One defines the Palm distribution P^0 of Φ by

$$ \mathbf{P}^0\{\Gamma\} = \frac{1}{\lambda}\,\mathbf{E}\Big[\int_{[0,1]^d} \mathbf{1}(\Phi - x \in \Gamma)\,\Phi(dx)\Big], $$

where E corresponds to the stationary distribution of Φ. P^0 is the distribution of the points of Φ “seen by an observer sitting at its typical point”.
Theorem [Slivnyak–Mecke characterization of the Poisson pp]: P^0{Γ} = P{Φ + δ_0 ∈ Γ} iff Φ is a Poisson pp.
“The typical Poisson point sees a stationary configuration.”
Neighbours of the typical point
For a stationary point process Φ of intensity λ on R² define

$$ K(r) = \frac{1}{\lambda}\,\mathbf{E}^0\big[\Phi(\{x : |x| \le r\}) - 1\big] \qquad \text{(Ripley's } K \text{ function)}, $$

the expected number of “additional” points of Φ within distance r of its typical point.
Define also L(r) = √(K(r)/π) (Ripley's L function).
Fact: For the Poisson pp, K(r) = πr², L(r) = r (Slivnyak–Mecke).
Application: Estimators of Ripley's K and L functions are used to compare (regularity/clustering in) observed point patterns ... to be used for BS patterns.
[Figure: the L function for the Honeycomb.]
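A rough sketch of such an estimator (a naive version, without edge correction) applied to a simulated Poisson sample, to be read against the Poisson benchmark L(r) = r; the window size and intensity are arbitrary.

```python
import numpy as np

def ripley_K(points, r_values, window_area):
    """Naive Ripley K estimator: K(r) = |W| / (n(n-1)) * sum_{i != j} 1(|X_i - X_j| <= r).
    No edge correction, so it is slightly biased downwards near the window boundary."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[~np.eye(n, dtype=bool)]                       # keep ordered pairs with i != j
    return np.array([window_area * np.mean(d <= r) for r in r_values])

rng = np.random.default_rng(5)
lam, side = 5.0, 10.0
pts = rng.uniform(0.0, side, size=(rng.poisson(lam * side**2), 2))   # Poisson sample

r = np.linspace(0.05, 0.5, 10)
L = np.sqrt(ripley_K(pts, r, side**2) / np.pi)
print(np.round(L - r, 3))    # for Poisson this should be close to 0 (small negative edge bias)
```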
Poisson pp as a model for BS
Poisson or not Poisson
[Figure lifted from a talk by Harpreet S. Dhillon of the University of Texas at Austin.]
More (counter-)arguments for Poisson modeling of real network deployments in:
C.-H. Lee, C.-Y. Shih, and Y.-S. Chen, “Stochastic geometry based models for modeling cellular networks in urban areas”, Wireless Networks, 2012.
Comparison of Ripley's functions
[Figure: empirical Ripley's L function for the real positions of BSs in a big European city — a “Poisson-like” network vs. a more “regular” network.]
Thanks to M. Jovanovic and M. K. Karray [Orange Labs].
User in Poisson network
Consider one user located at, say, the origin of the plane R².
Locations of BSs on the plane are represented by a homogeneous Poisson pp Φ = {Xi ∈ R²}.
Usual path-loss model: the received power of a signal originating from a base station at Xi is

$$ P_{X_i} = \frac{S_{X_i}}{\ell(|X_i|)} = \frac{S_{X_i}}{(K|X_i|)^{\beta}}, $$

where ℓ(x) = (K|x|)^β is a deterministic path-loss function, with constants K > 0 and β > 2, and the random variable S_{X_i} represents shadowing.
Path-loss to serving BS
The smallest path-loss (to the serving BS):

$$ L_* = \min_{X_i\in\Phi} \frac{(K|X_i|)^{\beta}}{S_{X_i}}. $$

Fact: The distribution function of L_* admits a simple expression in the Poisson model:

$$ \mathbf{P}(L_* > t) = \exp\big\{-\lambda\pi\,\mathbf{E}[S^{2/\beta}]\,t^{2/\beta}/K^2\big\}, $$

for a general distribution of S with E[S^{2/β}] < ∞.
Proof: extremal shot-noise distribution.
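A Monte Carlo sketch checking this tail formula in the Poisson model; all parameter values are illustrative, and the window radius is taken large enough that truncation is negligible for the chosen t.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, K, beta, sigma_db, t = 1.0, 1.0, 4.0, 8.0, 1.0     # illustrative parameters
sigma = sigma_db * np.log(10) / 10
R = 10.0                                                # window radius, large enough for this t

def smallest_path_loss():
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.random(n))                      # BS distances, uniform in the disc B(0, R)
    S = rng.lognormal(-sigma**2 / 2, sigma, size=n)     # iid shadowing with E[S] = 1
    return ((K * r) ** beta / S).min() if n else np.inf

emp = np.mean([smallest_path_loss() > t for _ in range(10000)])
ES = np.exp(-sigma**2 / beta + 2 * sigma**2 / beta**2)  # E[S^(2/beta)] for this log-normal S
theory = np.exp(-lam * np.pi * ES * t ** (2 / beta) / K**2)
print(f"empirical P(L* > t) = {emp:.3f}, formula = {theory:.3f}")
```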
SINR to serving BS
The SINR corresponding to L_*:

$$ \mathrm{SINR}_* = \frac{1/L_*}{W + \sum_{X_i\in\Phi} S_{X_i}/(K|X_i|)^{\beta} - 1/L_*}. $$

Fact:

$$ \mathbf{P}\{\mathrm{SINR}_* > t\} = \sum_{n=1}^{\lceil 1/t\rceil} (-1)^{n-1}\, t_n^{-2n/\beta}\, \mathcal{I}_{n,\beta}\big(W a^{-\beta/2}\big)\, \mathcal{J}_{n,\beta}(t_n), $$

where a = λπE[S^{2/β}]/K², t_n = t/(1 − (n−1)t),

$$ \mathcal{I}_{n,\beta}(x) = \frac{2^{n}\int_0^{\infty} u^{2n-1} e^{-u^2 - u^{\beta} x\,\Gamma(1-2/\beta)^{-\beta/2}}\,du}{\beta^{n-1}\,(C'(\beta))^{n}\,(n-1)!}, \qquad C'(\beta) = \frac{2\pi}{\beta\sin(2\pi/\beta)}, $$

$$ \mathcal{J}_{n,\beta}(x) = \int_{[0,1]^{n-1}} \frac{\prod_{i=1}^{n-1} v_i^{\,i(2/\beta+1)-1}(1-v_i)^{2/\beta}}{\prod_{i=1}^{n-1}(x+\eta_i)}\; dv_1\ldots dv_{n-1}, \qquad \eta_i := (1-v_i)\prod_{k=i+1}^{n-1} v_k. $$

cf. H. P. Keeler, B.B., and M. K. Karray, Proc. of IEEE ISIT, 2013
Shadowing makes everything similar to Poisson — a convergence result
Claim
A hexagonal, or any spatially homogeneous, network with shadowing of sufficiently large variance is perceived at a given user location as an equivalent Poisson network without shadowing.
This means that the distribution of the path-loss, interference, SINR, and many more characteristics of the network “measured” by this user can be approximated using the equivalent Poisson network model.
Arbitrary network with log-normal shadowing
Locations of BSs on the plane are represented by a locally finite point pattern φ = {Xi ∈ R²}; arbitrary, subject to a general condition to be specified.
Let S_{X_i} = S_i^{(σ)} = exp(−σ²/2 + σZ_i) be iid (across BSs) log-normal random variables with mean one and log-variance σ², where the Z_i are standard normal random variables.
Assume that the path-loss constant depends on σ:

$$ K(\sigma) = K\,\exp\Big(-\frac{\sigma^2(\beta-2)}{2\beta^2}\Big). $$

One usually parametrizes the shadowing variance by the logarithmic standard deviation (log-SD) v = σ · 10/log 10, i.e. the SD of the path-loss expressed in dB.
User’s perception of the network
Denote by

$$ \mathcal{N} = \mathcal{N}^{(\sigma)} := \Big\{\, Y_i^{(\sigma)} = \frac{\big(K(\sigma)|X_i|\big)^{\beta}}{S_i^{(\sigma)}} \;:\; X_i \in \varphi \Big\} $$

the path-loss process of the typical user, i.e. the values of the path-loss it has with all BSs.
For convenience, we consider N^{(σ)} as a point process on R_+ having unit-mass atoms located at the Y_i^{(σ)}.
Note: N is a one-dimensional image of the planar network geometry φ perceived by the user at the origin.
Homogeneous network assumption
Let B_0(r) = {x : |x| < r} be the ball of radius r centered at the origin.
(Empirical) homogeneity condition: we require that

$$ \frac{\#\{X_i \in B_0(r)\}}{\pi r^2} \;\to\; \lambda \qquad \text{as } r \to \infty $$

for some constant λ, 0 < λ < ∞; λ is the (empirical) density of the network.
This condition is satisfied in particular by any regular lattice (e.g. honeycomb) network model, by all reasonable perturbed-lattice models, and by almost any realization φ of a random ergodic point process Φ.
Convergence result
Theorem: Let φ be a given pattern of BSs satisfying the homogeneity condition with empirical density λ. Then the one-dimensional image N^{(σ)} of the network φ perceived by the user at the origin converges weakly, as σ → ∞, to the Poisson point process on R_+ with intensity measure

$$ \Lambda([0,t]) = \frac{\lambda\pi}{K^2}\, t^{2/\beta}. $$

This image is characteristic of a planar Poisson network of BSs with intensity λ.
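A hedged simulation sketch of the theorem for a hexagonal lattice (illustrative λ, K, β and a 12 dB log-SD; window radius and number of realizations are arbitrary): the mean number of path-loss values falling in [0, t] should be close to Λ([0, t]) of the limiting Poisson model.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, K, beta, t = 1.0, 1.0, 4.0, 1.0       # illustrative parameters
v_db = 12.0                                # log-SD of the shadowing in dB
sigma = v_db * np.log(10) / 10
K_sigma = K * np.exp(-sigma**2 * (beta - 2) / (2 * beta**2))   # the K(sigma) rescaling above

# hexagonal lattice of density lam, truncated to a disc of radius 30, user at a generic point
a = np.sqrt(2 / (np.sqrt(3) * lam))        # lattice spacing giving density lam
i, j = np.meshgrid(np.arange(-40, 41), np.arange(-40, 41))
x = a * (i + 0.5 * (j % 2)) + 0.31 * a
y = a * (np.sqrt(3) / 2) * j + 0.17 * a
r = np.hypot(x, y).ravel()
r = r[r < 30.0]

def count_in_0_t():
    """Number of points of the path-loss process N^(sigma) falling in [0, t]."""
    S = rng.lognormal(-sigma**2 / 2, sigma, size=r.size)
    return np.sum((K_sigma * r) ** beta / S <= t)

emp = np.mean([count_in_0_t() for _ in range(5000)])
theory = lam * np.pi * t ** (2 / beta) / K**2        # Lambda([0,t]) of the limiting Poisson model
print(f"mean N([0,t]) = {emp:.2f}  vs  Lambda([0,t]) = {theory:.2f}")
```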
Proof idea
Increasing the variance of the log-normal shadowing by Δ corresponds, on the logarithmic scale, to adding independent Gaussian terms to all the path-loss values Y_i^{(σ)} received by the given user (i.e. to the points of N^{(σ)}). Indeed,

$$ S_i^{(\sigma+\Delta)} = \exp\big(-(\sigma+\Delta)^2/2 + (\sigma+\Delta)Z_i\big) \;\stackrel{distr.}{=}\; S_i^{(\sigma)} \times \exp\big(-\sigma\Delta - \Delta^2/2 + \Delta Z_i'\big). $$

For N^{(σ)}, on the logarithmic scale, one can use the Poisson convergence result for successive translations of points.
A sufficient condition for such a result,

$$ \sup_i \mathbf{P}\big(Y_i^{(\sigma)} \in [0,t]\big) \to 0 \qquad \text{and} \qquad \mathbf{E}\big[\mathcal{N}^{(\sigma)}([0,t])\big] \to \Lambda([0,t]), $$

can be verified; cf. B.B., H. P. Keeler and M. K. Karray, Proc. of IEEE Infocom, 2013.
Statistical confirmation
How large should σ be in order to use the Poisson approximation?
In a given (simulated or real-data) scenario, one can compare the empirical distributions of L_* and SINR_* to the theoretical distributions (just presented) in the Poisson model.
The values of L_* and SINR_* are measured by users and reported to the operator in a real network; operators usually have data on the empirical distributions of L_* and SINR_*.
L∗, honeycomb versus Poisson
We assume some value of the path-loss exponent β between 2.5 and 5.
We simulate the honeycomb with shadowing of increasing variance σ and calculate the empirical DF of L_*.
We statistically test (Kolmogorov–Smirnov, K-S) the goodness of fit of this empirical DF to the theoretical DF in the Poisson model.
Result: Starting from a log-SD v of 7 dB to 15 dB (depending on β, from 2.5 to 5), the K-S test does not distinguish the honeycomb from Poisson at a reasonable confidence level.
[Figure: empirical CDF of the propagation loss L_* (in dB).]
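A sketch of exactly this procedure under assumed parameters (β = 4, log-SD 12 dB, arbitrary window and sample sizes); lowering the log-SD should make the test start rejecting the Poisson fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
lam, K, beta = 1.0, 1.0, 4.0
v_db = 12.0                                 # shadowing log-SD in dB (illustrative)
sigma = v_db * np.log(10) / 10

# honeycomb (hexagonal lattice) of density lam in a disc of radius 30, user at a generic point
a = np.sqrt(2 / (np.sqrt(3) * lam))
i, j = np.meshgrid(np.arange(-40, 41), np.arange(-40, 41))
r = np.hypot(a * (i + 0.5 * (j % 2)) + 0.31 * a, a * (np.sqrt(3) / 2) * j + 0.17 * a).ravel()
r = r[r < 30.0]

def sample_L_star():
    S = rng.lognormal(-sigma**2 / 2, sigma, size=r.size)   # fresh shadowing per realization
    return ((K * r) ** beta / S).min()

samples = np.array([sample_L_star() for _ in range(2000)])

# theoretical DF of L* in the Poisson model with the same density and shadowing
ES = np.exp(-sigma**2 / beta + 2 * sigma**2 / beta**2)     # E[S^(2/beta)] for this log-normal S
cdf = lambda u: 1.0 - np.exp(-lam * np.pi * ES * u ** (2 / beta) / K**2)

# a large p-value means the K-S test cannot tell the honeycomb from the Poisson model here
print(stats.kstest(samples, cdf))
```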
SINR∗ real network data versus Poisson
[Figure: distribution function of the SINR (in dB) from the strongest BS — infinite Poisson model vs. measurements.]
Thanks to M. Jovanovic and M. K. Karray [Orange Labs].
When shadowing improves performance — heavy tails in action
Shadowing: a stochastic resonance? Blocking probability vs. shadowing variance
[Figure: mean blocking probability vs. the logarithmic standard deviation of the shadowing, for β = 2.5, 3, 3.5, 4, 4.5, 5.]
OFDMA, downlink, hexagonal network of 36 BSs, cell radius 0.5 km; log-normal shadowing, path-loss exponent β, traffic 34.6 Erlang per km². [BB–Karray (2011)]
Elements of explanation
User call-blocking depends on:
the signal from the strongest (serving) BS, max_i P_i;
the signal-to-interference-and-noise ratio from it,

$$ \mathrm{SINR}_* = \frac{\max_i P_i}{W + \sum_i P_i - \max_i P_i}, $$

where P_i = S_i (K|X_i|)^{-β}. Assume log-normal S with E[S] = 1 and increasing variance.
The signal from the strongest station decreases in the variance of S. (No surprise.) Indeed, S converges in probability (though not in L1) to 0 as Var(S) → ∞.
For negligible noise W = 0, the mean SINR increases in the variance of S. (Surprise?) Not really: the single big-jump principle of heavy-tailed variables! For log-normal (heavy-tailed) S_i: max_i S_i ∼ Σ_i S_i, hence max_i S_i / (Σ_i S_i − max_i S_i) ∼ ∞.
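A small simulation of the single big-jump effect in a deliberately simplified setting (a fixed number n of BSs taken at a common distance, so only the shadowing matters; n and the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50                                      # number of contributing BSs considered (illustrative)

for v_db in [3, 6, 12, 24]:                 # increasing shadowing log-SD in dB
    sigma = v_db * np.log(10) / 10
    S = rng.lognormal(-sigma**2 / 2, sigma, size=(20000, n))      # iid shadowing, E[S_i] = 1
    ratio = S.max(axis=1) / (S.sum(axis=1) - S.max(axis=1))       # max_i S_i / (sum - max)
    print(f"v = {v_db:2d} dB: mean ratio = {ratio.mean():10.2f}, median = {np.median(ratio):8.3f}")
```

The ratio (the noise-free SINR in this simplified setting) grows with the shadowing variance, because a single term increasingly dominates the sum.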
Conclusions
CONCLUSIONS
Deployments of BSs are usually not regular (far from the “optimal” Honeycomb). Often a Poisson pp can be used to model them.
The Poisson pp makes it possible to capture explicitly the distribution of the path-loss and of the SINR of a given user with respect to its serving (strongest) BS, which is a fundamental cellular-network characteristic.
Shadowing impacts the geometry of cellular networks: it makes it “even more” Poisson. It can also “separate” the strongest signal from the interference, thus increasing the SINR (which is a good thing).
thank you