Distributed Learning in Games for Wireless Networks
Ph.D. Defense
M. Shabbir Ali
Advisors¹: Marceau Coupechoux (Telecom ParisTech) & Pierre Coucheney (UVSQ)
NETLEARN Project
June 27, 2017
¹ Jason R. Marden, UCSB, USA.
Shabbir (Telecom ParisTech) Distributed Learning June 27, 2017 1 / 32
Motivation
Figure: 5G Network requirements
Some Key Network Technologies
1. Ultra dense small cell networks
2. Device-to-device networks
3. Sensor networks
Some Classical Questions
1. Resource allocation?
2. Load balancing?
3. Coverage maximization?
Solution Approaches
1. Centralized approaches
2. Distributed learning
Distributed Learning in Games for Wireless Networks
Figure: Small cell networks
• Game: interactions can be modeled as a game
• Players: base stations and users
• Strategies: parameters
• Cost/Utility: related to data rate, load, ...
• Nash Equilibrium (NE): no player gains by unilaterally deviating
• Distributed learning: players learn from their actions to reach the optimal NE
Outline of Presentation
1. Potential Games (PG)
2. Distributed learning in near-potential game
3. Distributed learning in noisy-potential game
4. Application 1: Load balancing in small cell networks
5. Application 2: Channel assignment in D2D networks
Potential Games
Game and Nash Equilibrium
Definition (Game)
A finite game G = {S, {Xi}i∈S, {Ui}i∈S}, where S is the set of players, X = X1 × X2 × … × X|S| is the set of action profiles, and Ui : X → R is a cost (or utility) function. An action profile is denoted x = (xi, x−i).
Definition (ε-Nash Equilibrium)

An action profile (x∗i, x∗−i) ∈ X is an ε-NE if

Ui(x∗i, x∗−i) − Ui(xi, x∗−i) ≤ ε, ∀i ∈ S, ∀xi ∈ Xi. (1)

If ε = 0, it is a Pure NE (PNE).
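Condition (1) can be checked by brute force on a small finite game. The sketch below is a hypothetical illustration (the two-player coordination game is not from the slides): no unilateral deviation may lower a player's cost by more than ε.

```python
def is_epsilon_ne(cost, profile, action_sets, eps=0.0):
    """Check condition (1): no unilateral deviation lowers
    any player's cost by more than eps."""
    for i, acts in enumerate(action_sets):
        for xi in acts:
            deviation = list(profile)
            deviation[i] = xi
            if cost(i, tuple(profile)) - cost(i, tuple(deviation)) > eps:
                return False
    return True

# Toy 2-player coordination game: matching actions costs 0, else 1.
cost = lambda i, x: 0.0 if x[0] == x[1] else 1.0
assert is_epsilon_ne(cost, (0, 0), [[0, 1], [0, 1]])          # pure NE
assert not is_epsilon_ne(cost, (0, 1), [[0, 1], [0, 1]])      # profitable deviation
assert is_epsilon_ne(cost, (0, 1), [[0, 1], [0, 1]], eps=1.0) # a 1-NE
```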
Potential Games
Potential Games
Definition (Potential games)
A game G is a potential game if there exists a potential function Φ : X → R such that, ∀i ∈ S, ∀xi, x′i ∈ Xi, and ∀x−i ∈ X−i, the relevant condition below holds. The game is an exact-PG if

Ui(xi, x−i) − Ui(x′i, x−i) = Φ(xi, x−i) − Φ(x′i, x−i), (2)

the game is a near-PG (ξ-PG) if

|Ui(xi, x−i) − Ui(x′i, x−i) + Φ(x′i, x−i) − Φ(xi, x−i)| ≤ ξ, (3)

and the game is a noisy-PG if

E[Ui(xi, x−i)] − E[Ui(x′i, x−i)] = Φ(xi, x−i) − Φ(x′i, x−i). (4)
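The exact-PG condition (2) can be verified exhaustively on a small finite game: every unilateral deviation must change the deviator's cost and the potential by the same amount. A minimal sketch (the coordination game used to exercise it is a hypothetical example):

```python
import itertools

def is_exact_potential(U, Phi, action_sets):
    """Brute-force check of the exact-PG condition (2) over all
    action profiles and all unilateral deviations."""
    for x in itertools.product(*action_sets):
        for i, acts in enumerate(action_sets):
            for xi in acts:
                y = list(x)
                y[i] = xi
                y = tuple(y)
                if abs((U(i, x) - U(i, y)) - (Phi(x) - Phi(y))) > 1e-9:
                    return False
    return True

# Toy coordination game: identical costs, so Phi can be the cost itself.
Phi = lambda x: 0.0 if x[0] == x[1] else 1.0
U = lambda i, x: Phi(x)
assert is_exact_potential(U, Phi, [[0, 1], [0, 1]])
```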
Potential Games
Potential Games
• Potential games are a generalization of common-payoff games
• Potential games have at least one NE [Monderer and Shapley, '96]
• All optimal points of the potential function are NEs [Monderer and Shapley, '96]
• We call the global optimizer of the potential function the optimal NE
• The log-linear learning algorithm (LLA) is known to converge to the optimal NE in exact-PGs
Potential Games
Related Work in Potential Games
• (Exact-PG) Log-linear learning algorithm (LLA) [Marden and Shamma, 2012]
• (Near-PG) LLA [Candogan et al., 2011]
• (Noisy-PG) Binary-LLA [Leslie and Marden, 2011]
• We extend the results on LLA and Binary-LLA to near-PGs and noisy-PGs
• We apply the obtained results to wireless network problems
Distributed learning in near-potential game
Binary Log-linear Learning Algorithm (BLLA)
BLLA in near-PG
1: Initialization: start with an arbitrary action profile x.
2: Set the parameter τ.
3: while t ≥ 1 do
4:   Select a player i and a trial action xi ∈ Xi uniformly at random.
5:   Player i plays action xi(t − 1) to compute the cost Ui(x(t − 1)).
6:   Player i plays the trial action xi to compute the cost Ui(xi, x−i(t − 1)).
7:   Player i sets xi(t) to the trial action xi with probability

     (1 + e^(∆i/τ))⁻¹, (5)

     where ∆i = Ui(xi, x−i(t − 1)) − Ui(x(t − 1)); otherwise xi(t) = xi(t − 1).
8:   All other players repeat their actions, i.e., x−i(t) = x−i(t − 1).
9: end while

• τ controls the perturbation
• τ → 0: BLLA ≡ better response
• τ → ∞: BLLA ≡ uniformly random choice
• For every τ > 0, the underlying Markov chain is ergodic
• Stochastically Stable States (SSSs) are the support of the stationary distribution as τ → 0
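One BLLA update can be sketched as follows, assuming a cost-minimization game and the logistic switch probability (5); the coordination game driving the toy run is hypothetical, not from the thesis.

```python
import math, random

random.seed(0)  # reproducible toy run

def blla_step(x, U, action_sets, tau):
    """One BLLA iteration: a random player compares its current
    cost with a random trial action and switches with the
    logistic probability (1 + exp(Delta/tau))^{-1}."""
    i = random.randrange(len(x))
    trial = random.choice(action_sets[i])
    current_cost = U(i, tuple(x))
    y = list(x)
    y[i] = trial
    trial_cost = U(i, tuple(y))
    delta = trial_cost - current_cost          # Delta_i in (5)
    if random.random() < 1.0 / (1.0 + math.exp(delta / tau)):
        x[i] = trial                           # accept the trial action
    return x

# Toy run: two players minimizing a coordination cost.
U = lambda i, x: 0.0 if x[0] == x[1] else 1.0
x = [0, 1]
for _ in range(200):
    x = blla_step(x, U, [[0, 1], [0, 1]], tau=0.05)
# With small tau, the chain settles on a matched (optimal) profile.
```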
Distributed learning in near-potential game
Rules for Resistance Computation
• Roots of minimum-resistance trees are the SSSs
• Resistance is the cost of a transition of BLLA
• We propose resistance rules that enable the convergence analysis of BLLA
Proposition (Rules for Resistance Computation)
Let f, f1, and f2 be strictly positive functions. Let the resistances Res(f), Res(f1), and Res(f2) exist. Let κ be a positive constant.
Rule 1. f1(τ) is sub-exponential if and only if Res(f1) = 0. In particular Res(κ) = 0.
Rule 2. Res(e−κ/τ ) = κ.
Rule 3. Res(f1 + f2) = min {Res(f1),Res(f2)}.
Rule 4. If Res(f1) < Res(f2) then Res(f1 − f2) = Res(f1).
Rule 5. Res(f1f2) = Res(f1) + Res(f2).
Rule 6. Res(1/f) = −Res(f).
Rule 7. If ∀τ, f1(τ) ≤ f2(τ) then Res(f2) ≤ Res(f1).
Rule 8. If ∀τ, f1(τ) ≤ f (τ) ≤ f2(τ) and if Res(f1) = Res(f2) then Res(f ) exists andRes(f ) = Res(f1).
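The rules can be sanity-checked numerically, assuming the standard definition Res(f) = lim_{τ→0} −τ log f(τ) (this definition is an assumption here, but it is consistent with Rule 2). Evaluating at a small finite τ makes the check approximate:

```python
import math

def resistance(f, tau=0.05):
    """Numerical estimate of Res(f) = lim_{tau->0} -tau*log f(tau);
    a finite tau makes the estimate approximate."""
    return -tau * math.log(f(tau))

# Rule 2: Res(e^{-kappa/tau}) = kappa
assert abs(resistance(lambda t: math.exp(-3.0 / t)) - 3.0) < 1e-9
# Rule 1: constants have zero resistance (loose at finite tau)
assert abs(resistance(lambda t: 5.0, tau=1e-6)) < 1e-3
# Rule 3: Res(f1 + f2) = min of the two resistances
f12 = lambda t: math.exp(-3.0 / t) + math.exp(-5.0 / t)
assert abs(resistance(f12) - 3.0) < 1e-3
# Rule 5: Res(f1 * f2) = sum of the two resistances
prod = lambda t: math.exp(-3.0 / t) * math.exp(-5.0 / t)
assert abs(resistance(prod) - 8.0) < 1e-9
```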
Distributed learning in near-potential game
Convergence of BLLA in Near-PG
Let Φ∗ and Φ† be the smallest and second-smallest values of the function Φ. Let |X| be the cardinality of the action space.

Theorem (Convergence to Optimal ε-NE)

For any ξ-potential game G with potential Φ and any ε > 0, the stochastically stable states of LLA and BLLA correspond to a set of ε-NEs with potential less than Φ∗ + ε if

ξ < ε / (2(|X| − 1)). (6)

Corollary (Convergence to Optimal PNE)

The stochastically stable states of LLA and BLLA correspond to the set of PNEs whose potential is Φ∗ if

ξ < (Φ† − Φ∗) / (2(|X| − 1)). (7)

• If the distance to an exact-PG is small, then BLLA converges to an optimal ε-NE
• If the distance is sufficiently small, then BLLA converges to the optimal PNE
Distributed learning in noisy-potential game
BLLA in Noisy-PG
• (Key idea) Use multiple samples of the noisy utilities
• When a sufficient number of samples is used, BLLA converges
Theorem
For any noisy-potential game GN with potential Φ, the stochastically stable states of BLLA are the global maximizers of the potential Φ if one of the following holds.

1. The estimation noise is bounded in an interval of size ℓ and the number of estimation samples used is

   N ≥ (log(4/ξ) + 2/τ) ℓ² / (2(1 − ξ)² τ²), (8)

   where 0 < ξ < 1.

2. The estimation noise is unbounded with finite mean and variance. Let M(θ) be the (finite) moment generating function of the noise, and let θ∗ = argmax_θ [θ(1 − ξ)τ − log M(θ)]. The number of samples used is

   N ≥ (log(4/ξ) + 2/τ) / log(e^{θ∗(1−ξ)τ} / M(θ∗)). (9)
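Assuming the reconstruction of bound (8) for the bounded-noise case, the required sample count can be computed directly; note how quickly N grows as τ shrinks (roughly like 1/τ³), which is why a small N matters in applications.

```python
import math

def num_samples_bounded(ell, xi, tau):
    """Sample count N from bound (8): noise bounded in an interval
    of size ell, with 0 < xi < 1 and temperature tau."""
    assert 0.0 < xi < 1.0
    n = (math.log(4.0 / xi) + 2.0 / tau) * ell ** 2 \
        / (2.0 * (1.0 - xi) ** 2 * tau ** 2)
    return math.ceil(n)

# N explodes as tau decreases.
assert num_samples_bounded(1.0, 0.5, 0.5) == 49
assert num_samples_bounded(1.0, 0.5, 0.1) > num_samples_bounded(1.0, 0.5, 0.5)
```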
Distributed learning in noisy-potential game
BLLA in Noisy-PG
Theorem (Almost sure convergence)
Consider BLLA in a noisy-PG with a decreasing parameter τ(t) = 1/log(1 + t) and the number of samples N(τ) given by the Theorem above. Then BLLA converges with probability 1 to the global maximizer of the potential function.

• As τ decreases, the number of samples N increases
• In applications, a small N is desired
Application 1: Load balancing in small cell networks
Outline of Presentation
1. Potential Games (PG)
2. Distributed learning in near-potential game
3. Distributed learning in noisy-potential game
4. Application 1: Load balancing in small cell networks
5. Application 2: Channel assignment in D2D networks
Application 1: Load balancing in small cell networks
Load balancing in small cell networks
Figure: Load balancing in small cell network
• Association is based on the maximum received power Pi gi(x)
Application 1: Load balancing in small cell networks
Load balancing in small cell networks
Figure: Load balancing in small cell network
• Cell Range Extension (CRE) bias increases the coverage range.
• With CRE, association is based on the maximum biased received power Pi gi(x) ci.
• (Outage) High interference may lead to a user not being served.
Application 1: Load balancing in small cell networks
Load balancing in small cell networks
Figure: Load balancing in small cell network
• Cell Range Extension (CRE) bias increases the coverage range.
• (Outage) High interference may lead to a user not being served.
• Almost Blank Subframes (ABS) help reduce outages.
• θi is the proportion of blank subframes.
Application 1: Load balancing in small cell networks
Load balancing in small cell networks
Global Objective Function
φα(c, θ) = Σ_{i∈S} fα(ρi(c, θ)) (10)

Minimization Problem

{c∗, θ∗} = arg min_F φα(c, θ) (11)
• (α = 0) : Rate optimal policy
• (α = 1) : Proportional fair policy
• (α = 2) : Delay optimal policy
• (α→∞) : Min-max load policy
Related Work:
• Convex optimization [Kim et al. 2012]
• Integer programming [Ye et al. 2013]
• Other work includes simulations, heuristics, and sub-optimal algorithms
Game Model

The interactions of the BSs are modeled as a game Γ = {S, {Xi}i∈S, {Ui}i∈S}, where S is the set of BSs, Xi is the strategy set, and Ui are the cost functions:

Ui(ai, a−i) = Σ_{j∈Nϖi} fα(ρj(ai, a−i)). (12)

• Nϖi is a neighborhood of BS i, and ϖ is the neighborhood control parameter:
  ϖ = 0 ⟹ N0i = S
  ϖ = 1 ⟹ N1i = {BS i}
  0 < ϖ < 1 ⟹ Nϖi ⊂ S
• The game Γ is a ξϖ-PG
• We adapt BLLA to the near-PG Γ
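The neighborhood cost (12) can be sketched as below. The slides do not spell out fα, so the standard α-fair form used here is an assumption, and the loads and neighborhoods are toy values; the point is only how the neighborhood parameter restricts which loads enter BS i's cost.

```python
import math

def f_alpha(rho, alpha):
    """Hypothetical alpha-fair cost of a load rho in [0, 1);
    the slides do not define f_alpha, so this form is an assumption."""
    if alpha == 1.0:
        return -math.log(1.0 - rho)
    return ((1.0 - rho) ** (1.0 - alpha) - 1.0) / (alpha - 1.0)

def neighborhood_cost(loads, neighborhood, alpha):
    """U_i from (12): sum of f_alpha over the loads of the BSs in
    BS i's neighborhood."""
    return sum(f_alpha(loads[j], alpha) for j in neighborhood)

# Toy loads for 3 BSs; varpi = 0 gives the full set S, varpi = 1 only BS i.
loads = {1: 0.9, 2: 0.1, 3: 0.2}
full_cost = neighborhood_cost(loads, [1, 2, 3], alpha=2.0)   # varpi = 0
local_cost = neighborhood_cost(loads, [1], alpha=2.0)        # varpi = 1
assert local_cost <= full_cost
```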
Application 1: Load balancing in small cell networks
Neighborhood Scenario
Figure: Large neighborhood with pathloss + fading (BSs 1–8, axes in meters ×10)

• All BSs can be neighbours with non-zero probability because of fading
• This leads to a centralised algorithm
• A distributed algorithm is obtained by limiting the neighbors
• Near-PG is best suited to this scenario
Application 1: Load balancing in small cell networks
Results
Theorem (Sufficient Condition for Optimal ε-NE)
The constraint ξϖ < ε/(2(|X| − 1)) is satisfied if

ϖ ≤ εM(1 − ρmax)^α,

where ρmax is the maximum load of a BS and M is a constant that depends on the system parameters.

Corollary (Sufficient Condition for Optimal PNE)

The constraint ξϖ < (φ†α − φ∗α)/(2|X|) is satisfied if

ϖ ≤ M(φ†α − φ∗α)(1 − ρmax)^α.
• As ϖ decreases, the neighborhood expands, which decreases the distance to the exact-PG
• A smaller neighborhood is desired so as to limit the information exchange
• There is a tradeoff between the size of the neighborhood and the quality of the solution obtained
Application 1: Load balancing in small cell networks
Convergence of BLLA
Figure: Convergence of BLLA (τ = 0.001, ϖ = 10⁻²², one macro BS and 7 small BSs, maximum bias 16). Panels: (a) α = 0, rate-optimal (φ0); (b) α = 2, delay-optimal (φ2); (c) α = 50, min-max (φ50). Each panel compares LLA and BLLA over 500 iterations.
BS i  | α = 0: c∗i, ρ∗i (%) | α = 2: c∗i, ρ∗i (%) | α → ∞: c∗i, ρ∗i (%)
MBS 1 | 1, 92 | 1, 61 | 1, 45
SBS 2 | 1, 7  | 3, 20 | 8, 42
SBS 3 | 1, 4  | 3, 9  | 9, 23
SBS 4 | 1, 9  | 3, 18 | 8, 37
SBS 5 | 1, 11 | 3, 21 | 7, 37
SBS 6 | 1, 8  | 3, 20 | 7, 43
SBS 7 | 1, 5  | 3, 11 | 8, 30
SBS 8 | 1, 7  | 3, 19 | 6, 37
Application 1: Load balancing in small cell networks
Optimal Coverage Regions
Figure: Optimal coverage regions of BSs 1–8 (axes in meters ×10). Panels: (a) α = 0, rate-optimal; (b) α = 2, delay-optimal; (c) α = 50, min-max.
• In this setting, the optimal coverage regions of the small BSs grow with α, as the optimal CRE bias increases.
Application 1: Load balancing in small cell networks
Effect of ABS
Figure: Improvement with ABS. Panel (a): outage probability of BSs 1–8 over iterations in three settings (no outage constraint and no ABS; outage constraint without ABS; outage constraint with ABS). Panel (b): global cost φ50 over iterations for the same three settings.
• The outage constraint is met by restricting the actions.
• ABS provides new flexibility to obtain better load balancing with a lower global cost.
Application 2: Channel Assignment in D2D Network
Outline of Presentation
1. Potential Games (PG)
2. Distributed learning in near-potential game
3. Distributed learning in noisy-potential game
4. Application 1: Load balancing in small cell networks
5. Application 2: Channel assignment in D2D networks
Application 2: Channel Assignment in D2D Network
Channel Assignment Problem (CAP) in D2D Networks
D2D Network Model:
• D is the set of all users (UEs)
• F is the set of channels
• c = [c1, c2, …, c|D|] is the channel vector
• νj(c) is the noisy estimated data rate of user j
• Average sum data rate: φ(c) = E[Σ_{j∈D} νj(c)]

Figure: D2D network model showing signal and interference links.

Goal (Maximize expected sum data rate)

c∗ ∈ argmax_{c∈F^|D|} φ(c)
• Dynamic programming [Wang, 2016], graph-theoretic and heuristic solutions [Maghsudi, 2016], game theory [Song, 2014], etc.
• Even in the centralized, full-information, no-noise case, the problem is NP-hard
Application 2: Channel Assignment in D2D Network
Noisy-potential Game Model
Definition (CAP Game)
A CAP game G := {D, {Xi}i∈D, {Ui}i∈D}, where D is the set of UEs, which are the players of the game, {Xi}i∈D are the action sets consisting of orthogonal channels, Ui : X → R are random utility functions, and X := X1 × X2 × … × X|D|.
Proposition
The CAP game GN := {D, {Xi}i∈D, {UNi}i∈D} is a noisy-potential game with potential φ(a) if UNi = (1/N) Σ_{k=1}^{N} Ui, with the marginal-contribution utility

Ui(ai, a−i) = Σ_{j∈D(ai)} νj(ai, a−i) − Σ_{j∈D(ai)\{i}} νj(a0i, a−i),

where D(ai) is the set of UEs on channel ai and a0i is the null action.
• We apply BLLA to the noisy-PG GN using the number of samples N given by the Theorem above.
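The marginal-contribution utility can be sketched as follows. The `rate` model (equal sharing of a unit rate among the UEs on a channel) is a hypothetical stand-in for the noisy estimated rates νj; only the structure of the utility (rates of channel sharers with player i active, minus their rates when i plays the null action) mirrors the proposition.

```python
def marginal_contribution(i, a, rate, null_action=None):
    """Marginal contribution of player i: rates of all UEs sharing
    i's channel, minus their rates when i plays the null action.
    `rate(j, a)` is assumed to return UE j's rate under profile a."""
    ch = a[i]
    sharers = [j for j in a if a[j] == ch]
    a_off = dict(a)
    a_off[i] = null_action
    with_i = sum(rate(j, a) for j in sharers)
    without_i = sum(rate(j, a_off) for j in sharers if j != i)
    return with_i - without_i

# Toy rate model: each active UE on a channel gets 1/(number of sharers).
def rate(j, a):
    if a[j] is None:
        return 0.0
    return 1.0 / sum(1 for k in a if a[k] == a[j])

a = {1: "f1", 2: "f1", 3: "f2"}
u1 = marginal_contribution(1, a, rate)
# UE1 and UE2 share f1 (1/2 each); without UE1, UE2 alone gets 1.
assert abs(u1 - (0.5 + 0.5 - 1.0)) < 1e-9
```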
Application 2: Channel Assignment in D2D Network
Convergence of BLLA in noisy-PG
Figure: Sum data rate (bits/s, ×10⁷) over 500 iterations. (a) Convergence of BLLA for a fixed temperature (τ = 0.1) and a decreasing temperature (τ(t) = 0.1/log(1+t)). (b) Effect of the number of samples (1, 100, and N chosen w.r.t. τ = 0.05) on the convergence of BLLA.
(a) A decreasing temperature results in smoother convergence than a fixed one.

(b) For τ = 0.05, the number of samples N from the Theorem gives smooth convergence; otherwise, high fluctuations are observed.

(c) There is no guarantee of convergence for other numbers of samples.
Application 2: Channel Assignment in D2D Network
Sum Data Rate vs. Number of Channels and UEs
Figure: Sum data rate (bits/s, ×10⁷) with BLLA, τ(t) = 0.1/log(1+t). (a) Effect of the number of channels (4–20) on the sum rate, with the number of UEs fixed at 20. (b) Effect of the number of UEs (up to 100) on the sum rate, with the number of channels fixed at 10.
(a) The sum rate increases with the number of channels, since BLLA assigns channels optimally, leading to lower interference per channel.

(b) The sum rate increases linearly up to 60 UEs, as BLLA manages to assign channels optimally and maintain low interference.
Conclusion and Future Works
Conclusion and Future Works
Conclusion

• Extended the results to broader classes of potential games: near-PG and noisy-PG
• Provided new rules for proving convergence
• Proposed an algorithm to learn the best parameter τ of LLA
• Load balancing in small cell networks
• Channel assignment in D2D networks
• Characterized the efficiency of the distributed greedy algorithm for submodular maximization

Future Work

• Relationship between near-PG and noisy-PG
• Analysis of the greedy algorithm with other utility functions
• Application to sensor networks
Publications
Publications I
Journal Papers
J1 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Load Balancing in Heterogeneous Networks Based on Distributed Learning in Near-Potential Games", IEEE Trans. Wireless Commun., vol. 15, no. 7, pp. 5046-5059, 2016.

J2 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Optimal distributed channel assignment in D2D networks using learning in noisy potential games", to be submitted to IEEE Trans. Wireless Commun., 2017.

J3 Mohd. Shabbir Ali, David Grimsman, Joao P. Hespanha, and Jason R. Marden, "Efficiency and Information Trade-off of Submodular Maximization Problems", to be submitted to IEEE Transactions on Automatic Control, 2017.

J4 Mohd. Shabbir Ali and Neelesh B. Mehta, "Modeling Time-Varying Aggregate Interference in Cognitive Radio Systems, and Application to Primary Exclusive Zone Design", IEEE Trans. Wireless Commun., vol. 13, no. 1, pp. 429-439, Jan. 2014.

arXiv Paper

X1 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Optimal distributed channel assignment in D2D networks using learning in noisy potential games", arXiv preprint arXiv:1701.04577, 2017.
Publications
Publications II
Conference Papers
C1 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Load Balancing in Heterogeneous Networks Based on Distributed Learning in Potential Games", Proc. IEEE WiOpt, pp. 371-378, May 2015.

C2 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Optimal distributed channel assignment in D2D networks using learning in noisy potential games", accepted at the INFOCOM 5G and Beyond Workshop, May 2017.

C3 Mohd. Shabbir Ali and N. B. Mehta, "Modeling time-varying aggregate interference from cognitive radios and implications on primary exclusive zone design", IEEE Global Communications Conference (GLOBECOM), pp. 3760-3765, 2013.

C4 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Learning Annealing Schedule of Log-Linear Algorithms for Load Balancing in HetNets", Proc. European Wireless Conference, pp. 1-6, 2016.

C5 Mohd. Shabbir Ali, Pierre Coucheney, and Marceau Coupechoux, "Rules for Computing Resistance of Transitions of Learning Algorithms in Games", accepted at the International Conference on Game Theory for Networks (GAMENETS), May 2017.

C6 Mohd. Shabbir Ali, David Grimsman, Joao P. Hespanha, and Jason R. Marden, "Efficiency and Information Trade-off of Submodular Maximization Problems", in review, IEEE Conference on Decision and Control (CDC), 2017.
Thank you
THANK YOU!