
Designing Games for Distributed Optimization

Na Li and Jason R. Marden

IEEE Journal of Selected Topics in Signal Processing, Vol. 7, No. 2, pp. 230-242, 2013

Presenter: Seyyed Shaho Alaviani


Outline:

- Introduction: advantages of game theory
- Problem Formulation and Preliminaries: potential games; state based potential games; stationary state Nash equilibrium
- Main Results: state based game design; analytical properties of the designed game; learning algorithm
- Numerical Examples
- Conclusions

Network consensus, rendezvous, formation, schooling, flocking: all are special cases of distributed optimization.

Game theory: a powerful tool for the design and control of multi-agent systems

Using game theory requires two steps:

1- Modelling the agents as self-interested decision makers in a game-theoretic environment, i.e., defining a set of choices and a local objective function for each decision maker

2- Specifying a distributed learning algorithm that enables the agents to reach a Nash equilibrium of the designed game

Introduction

Core advantage of game theory: it provides a hierarchical decomposition between the distributed optimization problem (game design) and the specific local decision rules (distributed learning algorithm).

Example: Lagrangian

The goal of this paper:

To establish a methodology for the design of local agent objective functions that leads to desirable system-wide behavior

[Figure: examples of connected vs. disconnected graphs and directed vs. undirected graphs]

Problem Formulation and Preliminaries

Consider a multi-agent system with agent set N = {1, ..., n}. Each agent i selects a decision x_i from a set X_i, a nonempty convex subset of the real numbers.

Optimization problem:

min Σ_{i∈N} f_i(x)   s.t.   x = (x_1, ..., x_n) ∈ X_1 × ... × X_n

where each f_i is a convex function of the full decision profile x, and the communication graph is undirected and connected.
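As a concrete illustration (not from the paper), the formulation above can be instantiated with hypothetical quadratic objectives f_i(x) = ‖x − c_i‖², whose sum is minimized at the average of the target points c_i; a centralized gradient descent then serves as a baseline against which distributed designs can be compared:

```python
import numpy as np

# Hypothetical instance: f_i(x) = ||x - c_i||^2, so F(x) = sum_i f_i(x)
# is minimized at the average of the target points c_i.
rng = np.random.default_rng(0)
n = 4                                # number of agents (and dimension of x here)
c = rng.normal(size=(n, n))          # c[i]: agent i's target point (assumed)

def grad_F(x):
    # Gradient of sum_i ||x - c_i||^2 is 2 * (n * x - sum_i c_i).
    return 2 * (n * x - c.sum(axis=0))

x = np.zeros(n)
for _ in range(300):                 # centralized gradient descent baseline
    x -= 0.05 * grad_F(x)

print(np.allclose(x, c.mean(axis=0)))   # optimum is the mean of the targets
```

The point of the state based game design that follows is to recover this optimum without any agent evaluating the global gradient.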

Potential games (an analogy from physics): a game is a potential game if there is a single potential function 𝜙 such that, whenever one agent unilaterally changes its action, the change in that agent's objective equals the change in 𝜙.

Main properties of potential games:

1- A pure-strategy Nash equilibrium (PSNE) is guaranteed to exist

2- Several distributed learning algorithms have proven asymptotic guarantees

3- Learning a PSNE in potential games is robust: heterogeneous clock rates and informational delays are not problematic

Stochastic games (L. S. Shapley, 1953): in a stochastic game, play proceeds by steps from position to position, according to transition probabilities controlled jointly by the two players.

State Based Potential Games (J. Marden, 2012): a simplification of stochastic games that represents an extension of strategic form games, in which an underlying state space is introduced into the game-theoretic environment.

Main Results

State based game design: the goal is to establish a state based game formulation of our distributed optimization problem that satisfies the following properties: (i) each agent's cost function depends only on information available from its neighbors; (ii) the resulting game is a state based potential game; (iii) the resulting equilibria are solutions of the optimization problem.

A State Based Game Design for Distributed Optimization:

- State Space

- Action sets

- State dynamics

- Invariance associated with state dynamics

- Agent cost functions

State Space: a state is x = (v, e), where v = (v_1, ..., v_n) collects the agents' values and e = (e_1, ..., e_n), with e_i ∈ ℝ^n, is agent i's local estimate of the full value profile.

Action sets: an action for agent i is defined as a tuple a_i = (v̂_i, ê_i), where v̂_i indicates a change in the agent's value and ê_i = (ê_i^j)_{j∈N_i} indicates changes in the agent's estimation terms, exchanged with its neighbors j ∈ N_i.

State Dynamics: for a state x = (v, e) and an action a = (v̂, ê), the ensuing state x̃ = (ṽ, ẽ) is given by

ṽ_i = v_i + v̂_i

ẽ_i = e_i + n·v̂_i·𝟙_i − Σ_{j∈N_i} ê_i^j + Σ_{j∈N_i} ê_j^i

i.e., each agent adds its (scaled) value change to its own estimate and exchanges estimation terms with its neighbors.

Invariance associated with state dynamics: let v(0) be the initial values of the agents, and define the initial estimation terms to satisfy Σ_{i∈N} e_i(0) = n·v(0). Then Σ_{i∈N} e_i(t) = n·v(t) for all t ≥ 0, i.e., the average of the agents' estimates always equals the true value profile.
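A minimal numerical sketch of this invariance (assuming the update rule above, on a hypothetical 4-agent ring graph with arbitrary actions): the estimation exchanges cancel in the column sums, so Σ_i e_i = n·v is preserved at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Undirected ring graph: neighbors of agent i (hypothetical topology).
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

v = rng.normal(size=n)          # agents' values v_i
e = n * np.diag(v)              # e[i]: agent i's estimate of the full profile
assert np.allclose(e.sum(axis=0), n * v)   # invariant holds initially

for _ in range(100):
    vhat = rng.normal(size=n)                          # arbitrary value changes
    ehat = {(i, j): rng.normal(size=n)                 # estimation terms sent i -> j
            for i in range(n) for j in nbrs[i]}
    v = v + vhat
    e_new = e.copy()
    for i in range(n):
        e_new[i, i] += n * vhat[i]                     # scale own change into estimate
        for j in nbrs[i]:
            e_new[i] -= ehat[(i, j)]                   # term passed to neighbor j
            e_new[j] += ehat[(i, j)]                   # term received by neighbor j
    e = e_new

# Exchanges cancel in the column sums, so sum_i e_i = n * v is preserved.
print("invariance holds:", np.allclose(e.sum(axis=0), n * v))
```

Note the invariance holds for any choice of actions, which is what lets the cost functions safely treat each agent's estimate as a proxy for the true profile.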

Agent cost functions: each agent i is assigned a cost that evaluates the local objectives of its neighborhood at the ensuing estimates and adds a penalty, weighted by a parameter 𝛼 > 0, on disagreement between its estimate and those of its neighbors.

Analytical Properties of Designed Game

Theorem 2 shows that the designed game is a state based potential game.

Theorem 2: The state based game is a state based potential game with potential function

Φ(x, a) = Σ_{i∈N} f_i(ẽ_i) + 𝛼 Σ_{i∈N} Σ_{j∈N_i} ‖ẽ_i − ẽ_j‖²

where x̃ = (ṽ, ẽ) represents the ensuing state.

Theorem 3: Let G be the designed state based game. Suppose that each f_i is a differentiable convex function, the communication graph is connected and undirected, and at least one of the following conditions is satisfied:

Theorem 3 shows that all equilibria of the designed game are solutions to the optimization problem.

Question: could the results in Theorems 2 and 3 have been attained using the framework of strategic form games? No: this is impossible.

Learning Algorithm

We prove that the gradient-play learning algorithm converges to a stationary state Nash equilibrium (NE).

Assumptions:

Theorem 4: Let G be a state based potential game with a potential function satisfying the assumptions. If the step size 𝜖_i is sufficiently small for all i ∈ N, then the state-action pair of gradient play asymptotically converges to a stationary state NE.
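A minimal sketch of gradient play, shown on a simple two-agent exact potential game rather than the state based game itself (the cost functions below are illustrative, not from the paper): each agent repeatedly steps down the gradient of its own cost, and the joint play converges to the Nash equilibrium.

```python
# Two-agent potential game (illustrative):
#   J1(a) = (a1 - a2)^2 + a1^2
#   J2(a) = (a1 - a2)^2 + (a2 - 1)^2
# with exact potential Phi(a) = (a1 - a2)^2 + a1^2 + (a2 - 1)^2.
def grad_J1(a1, a2):
    return 2 * (a1 - a2) + 2 * a1

def grad_J2(a1, a2):
    return -2 * (a1 - a2) + 2 * (a2 - 1)

a1, a2, eps = 0.0, 0.0, 0.05   # step size must be small enough, as in Theorem 4
for _ in range(2000):
    g1, g2 = grad_J1(a1, a2), grad_J2(a1, a2)
    a1, a2 = a1 - eps * g1, a2 - eps * g2   # simultaneous gradient steps

# The unique NE solves grad Phi = 0, giving a1 = 1/3, a2 = 2/3.
print(round(a1, 3), round(a2, 3))   # → 0.333 0.667
```

With a too-large step size the simultaneous updates can oscillate or diverge, which is why Theorem 4 places a bound on 𝜖_i.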

Numerical Examples

Example 1: consider the following function to be minimized:

Example 2: Distributed Routing Problem

[Figure: a network with a common source and destination connected by m parallel routes]

Application: the Internet

Each agent i routes a fixed amount of traffic; x_i^r denotes the percentage of traffic that agent i designates to route r.

For each route r, there is an associated congestion function that reflects the cost of using the route as a function of the amount of traffic on that route.

The total congestion in the network is then Σ_r y_r·c_r(y_r), where y_r is the total traffic on route r.
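A short sketch of evaluating total congestion for an instance of the slide's size; the traffic amounts, routing shares, and affine congestion functions c_r(y) = a_r·y + b_r are all assumed for illustration (the paper does not fix these values):

```python
import numpy as np

rng = np.random.default_rng(1)
R, N = 5, 10                       # routes and agents, matching the slide example
T = rng.uniform(1, 5, size=N)      # T[i]: amount of traffic of agent i (assumed)
P = rng.dirichlet(np.ones(R), N)   # P[i, r]: share of agent i's traffic on route r

# Assumed affine congestion functions c_r(y) = a_r * y + b_r (illustrative only).
a = rng.uniform(0.5, 2.0, size=R)
b = rng.uniform(0.0, 1.0, size=R)

y = T @ P                                           # total traffic on each route
total_congestion = float(np.sum(y * (a * y + b)))   # sum_r y_r * c_r(y_r)
print(total_congestion)
```

Each row of P sums to one, so every agent's traffic is fully distributed over the R routes.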

Simulation parameters: R = 5 routes, N = 10 agents, 𝛼 = 900, with the communication graph shown in the accompanying figure.

Conclusions:

- This work presents an approach to distributed optimization using the framework of state based potential games.

- We provide a systematic methodology for localizing the agents' objective functions while ensuring that the resulting equilibria are optimal with respect to the system-level objective function.

- It is proved that the gradient-play learning algorithm guarantees convergence to a stationary state NE in any state based potential game.

- The approach is robust, e.g., to heterogeneous clock rates and informational delays.

Many thanks for your attention.
