

    Game Theory, Lecture 6

Lecture outline:

Topic                                                        Time    Slides
Multi-stage Games with Observable Actions                     30      13
A Finitely Repeated Game with Multiple Static Equilibria      20      10
A Finitely Repeated Game with a Unique Static Equilibrium      5       3
The One-Stage-Deviation-Principle                             30      13
Total                                                         85      39


    Multi-Stage Games with Observed Actions

Multi-stage game with observed actions: a game that can be divided into stages, where:

At each stage all the players move simultaneously.

Before a new stage starts, the players observe what happened in the previous stages.


It includes the case where only a subset of players moves (the action set of the players that don't move is "do nothing").

At stage k players play an action profile $a^k = (a_1^k, a_2^k, \ldots, a_I^k)$.

This action profile becomes known at stage k+1.


At the start of stage k everyone knows the history of the game up to stage k: $h^k = (a^0, a^1, \ldots, a^{k-1})$.

To define strategies we specify a function from information sets to the set of actions.

At the start of stage k we are at an information set where each player knows everything that happened earlier in the game, that is, he knows $h^k$.


A pure strategy $s_i$ for player i defines what action player i should take at each stage k and for any possible history of the game up to stage k, with $k = 0, 1, \ldots, K$.

Hence, a pure strategy is a sequence of actions specified for every possible contingency:

$s_i = (s_i^0, \ldots, s_i^k, \ldots, s_i^K)$, where $s_i^k : H^k \to A_i^k$,

with $H^k$ the set of possible histories at stage k and $A_i^k$ the set of available actions at stage k.
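To make the objects $H^k$, $A_i^k$ and $s_i^k$ concrete, here is a minimal Python sketch representing a pure strategy of a two-stage, two-player game as one function per stage from histories to actions; the action labels L/R and the particular strategy used are illustrative assumptions, not from the slides:

```python
# A history at stage k is a tuple of past action profiles; an action profile
# is one action per player. A pure strategy s_i is one function per stage,
# each mapping histories in H^k to an action in A_i^k.

def s1_stage0(history):
    # Stage 0: the history is empty; player 1 simply picks an action.
    return "L"

def s1_stage1(history):
    # Stage 1: the history is ((a1_0, a2_0),); condition on player 2's stage-0 action.
    a1_0, a2_0 = history[0]
    return "L" if a2_0 == "L" else "R"

s1 = (s1_stage0, s1_stage1)   # s_1 = (s_1^0, s_1^1)

# Play the strategy out against a fixed sequence of opponent actions.
history = ()
for k, opponent_action in enumerate(("R", "L")):
    my_action = s1[k](history)
    history += ((my_action, opponent_action),)
print(history)   # (('L', 'R'), ('R', 'L'))
```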


Mixed strategies specify probability mixtures over the actions in each stage for any given contingency.

The payoff of player i, $u_i$, is a function from terminal histories $h^{K+1}$ (i.e., the entire sequence of actions from the initial stage 0 through the terminal stage K) to the real numbers.

Usually, the payoffs of a multi-stage game are the discounted sum of the stage payoffs $g_i^k(a^k)$:

$u_i = \sum_{k=0}^{K} \delta^k g_i^k(a^k)$,

where $\delta$ is the discount factor.
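As a quick numerical illustration of this payoff formula, here is a minimal Python sketch (the stage payoffs and discount factors below are made-up numbers, not taken from the lecture):

```python
# u_i = sum_{k=0}^{K} delta^k * g_i^k(a^k): discounted sum of stage payoffs.

def discounted_payoff(stage_payoffs, delta):
    """stage_payoffs[k] is player i's stage-k payoff g_i^k(a^k)."""
    return sum((delta ** k) * g for k, g in enumerate(stage_payoffs))

# Hypothetical stage payoffs for a three-stage game.
print(discounted_payoff([4, 1, 1], delta=1.0))   # no discounting: 6.0
print(discounted_payoff([4, 1, 1], delta=0.9))   # 4 + 0.9 + 0.81 = 5.71 (up to float rounding)
```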


The game might have a finite or an infinite number of stages (horizon), that is, K might be infinite.

If the game has an infinite horizon we assume that the property Continuity at Infinity of Payoffs (CIP) holds.

This property says that stage-game payoffs in the distant future are relatively unimportant. It is satisfied if (i) overall payoffs are a discounted sum of per-period payoffs, and (ii) the per-period payoffs are bounded.
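Why (i) and (ii) imply continuity at infinity, in one line (a sketch; the discount factor $\delta < 1$ and the payoff bound $M$ are symbols introduced here for the argument): the contribution of all stages from some stage $T$ onward satisfies

```latex
\left|\sum_{k \ge T} \delta^{k} g_i^{k}(a^{k})\right|
  \;\le\; \sum_{k \ge T} \delta^{k} M
  \;=\; \frac{M\,\delta^{T}}{1-\delta}
  \;\longrightarrow\; 0 \quad \text{as } T \to \infty ,
```

so two strategy profiles that agree up to stage $T$ yield overall payoffs differing by an amount that vanishes as $T$ grows.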


Example 1: [Game tree omitted. Players 1 and 2 simultaneously choose L or R in the 1st stage and, after observing the stage-1 outcome, simultaneously choose L or R again in the 2nd stage.]


Example 2: Any dynamic game of perfect information is also a multi-stage game. [Game tree omitted. Player 1 chooses L or R in the 1st stage; player 2 observes this choice and chooses L or R in the 2nd stage.]


Many of the applications of game theory to economics, political science and biology use multi-stage games with observed actions.

A stage does not necessarily correspond to a period:

A period might have one or more stages;

A stage takes place within a single period.


Consider Rubinstein-Stahl's bargaining model:

A pie of size 1 is to be split between two players.

In periods 0, 2, 4, ... player 1 proposes a sharing rule (x1, 1-x1) that player 2 can accept or reject.

If player 2 accepts an offer the game ends; if he rejects, then he can propose a sharing rule (x2, 1-x2) in the subsequent period, which player 1 can accept or reject.

If player 1 accepts one of 2's offers the game ends; if he rejects, then he can make a new offer in the subsequent period.
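A minimal Python sketch of this alternating-offer protocol (the stationary proposal of 0.6 and the acceptance threshold of 0.4 used below are purely illustrative assumptions, not part of the model):

```python
# Alternating-offer bargaining over a pie of size 1:
# even periods: player 1 (index 0) proposes; odd periods: player 2 (index 1).

def bargain(propose, accept, max_periods=10):
    """propose[i](t) -> proposer i's own share; accept[i](t, share) -> bool."""
    for t in range(max_periods):
        proposer, responder = (0, 1) if t % 2 == 0 else (1, 0)
        own_share = propose[proposer](t)
        shares = {proposer: own_share, responder: 1 - own_share}
        if accept[responder](t, shares[responder]):
            return t, (shares[0], shares[1])   # agreement reached at period t
    return None                                 # no agreement within max_periods

# Hypothetical stationary strategies: always demand 0.6, accept any share >= 0.4.
propose = [lambda t: 0.6, lambda t: 0.6]
accept = [lambda t, s: s >= 0.4, lambda t, s: s >= 0.4]
print(bargain(propose, accept))   # (0, (0.6, 0.4)): player 2 accepts immediately
```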


Example 3: [Game tree omitted. The Rubinstein-Stahl game drawn period by period: each period consists of two stages, a proposal x (1st stage) followed by the other player's accept/reject response (2nd stage); player 1 proposes in period 1 and player 2 proposes in period 2.]


We can classify multi-stage games into subclasses:

Repeated games, with a finite or an infinite horizon;

Non-repeated games, with a finite or an infinite horizon.


A repeated game has the property that the same stage game is played again and again, for example repeated play of the Prisoner's Dilemma (PD) or the Matching Pennies (MP) game.

We will see that repeated games are well suited to describe situations where players interact many times (repeated interaction).

The Rubinstein-Stahl game can be classified as a non-repeated game with an infinite horizon.


    A Finitely Repeated Game with Multiple Static Equilib.

Consider the multi-stage game corresponding to two repetitions of the stage game below (let $\delta = 1$, no discounting).

1\2     L      M      R
U       0,0    3,4    6,0
M       4,3    0,0    0,0
D       0,6    0,0    5,5


    What are the SPE of the two-stage repeated game?


Applying the solution concept of SPE, we first need to find the Nash equilibria of the second-stage subgame.

Since the second-stage subgame is the static game, its equilibria are (M,L), (U,M), and the mixed equilibrium (σ1,σ2), in which player 1 mixes between U and M, player 2 mixes between L and M, and each player earns 12/7.

These are the only possible (pure or mixed) action profiles that can be played in the second stage.
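A small Python check of these claims: a brute-force best-response scan for the pure equilibria, plus the indifference computation behind the 12/7 payoff (the code is an added illustration; only the payoff matrix comes from the slides):

```python
from fractions import Fraction

# Stage-game payoffs: u1 and u2 for each (row action of 1, column action of 2).
U1 = {('U','L'):0, ('U','M'):3, ('U','R'):6,
      ('M','L'):4, ('M','M'):0, ('M','R'):0,
      ('D','L'):0, ('D','M'):0, ('D','R'):5}
U2 = {('U','L'):0, ('U','M'):4, ('U','R'):0,
      ('M','L'):3, ('M','M'):0, ('M','R'):0,
      ('D','L'):6, ('D','M'):0, ('D','R'):5}
ROWS, COLS = ('U', 'M', 'D'), ('L', 'M', 'R')

# Pure Nash equilibria: profiles where neither player gains by a unilateral deviation.
pure_ne = [(r, c) for r in ROWS for c in COLS
           if U1[(r, c)] == max(U1[(rr, c)] for rr in ROWS)
           and U2[(r, c)] == max(U2[(r, cc)] for cc in COLS)]
print(pure_ne)                      # [('U', 'M'), ('M', 'L')]

# Mixed equilibrium: 2 plays L with prob p and M with prob 1-p so that 1 is
# indifferent between U and M: 3(1-p) = 4p  =>  p = 3/7, payoff 12/7.
p = Fraction(3, 7)
print(3 * (1 - p), 4 * p)           # both 12/7
```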


What actions will be played in the first stage?

If players play a NE of the static game in the first stage, there is no unilateral incentive to deviate in the first stage: second-stage play is a static NE regardless of the first-stage outcome, so a first-stage deviation only changes the first-stage payoff.

We already know that players will play a NE in the second stage, so we have found several SPE.


All combinations of the static equilibria:

(M,L) and (M,L)
(M,L) and (U,M)
(M,L) and (σ1,σ2)
(U,M) and (U,M)
(U,M) and (M,L)
(U,M) and (σ1,σ2)
(σ1,σ2) and (σ1,σ2)
(σ1,σ2) and (M,L)
(σ1,σ2) and (U,M)


Are there more SPE?

For example, is there an SPE where it is possible to support the play of (5,5) in the first stage of the game?

Let's see.


Player 1:

Play D in the first stage;

If the action profile (D,R) was played in stage 1, then play M in the second stage; otherwise play σ1.

Player 2:

Play R in the first stage;

If the action profile (D,R) was played in stage 1, then play L in the second stage; otherwise play σ2.


Let's check whether these strategies are an SPE.

In the second stage, players play (M,L) on the equilibrium path and the mixed equilibrium (σ1,σ2) after any other first-stage outcome; both are NE of the static game, so no player wants to deviate in the second stage.

Can player 2 gain by deviating in the first stage, given that 1 sticks to his strategy?

Deviation payoff: u2(D,L) + u2(σ1,σ2) = 6 + 12/7 = 54/7.

No-deviation payoff: u2(D,R) + u2(M,L) = 5 + 3 = 8 = 56/7.

Since 54/7 < 56/7, the deviation is not profitable; by the symmetry of the game the same holds for player 1, so these strategies are an SPE in which (D,R), with payoffs (5,5), is played in the first stage.
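For completeness, a short numerical check of this deviation calculation in Python (the enumeration over player 2's possible first-stage deviations is an added illustration; the numbers are the stage-game payoffs and the 12/7 mixed-equilibrium payoff above):

```python
from fractions import Fraction

# Player 2's stage-game payoffs when player 1 plays D in the first stage.
u2_vs_D = {'L': 6, 'M': 0, 'R': 5}
mixed_eq_payoff = Fraction(12, 7)      # second-stage payoff after a deviation
reward_payoff = 3                      # u2(M,L), the second-stage payoff on path

on_path = u2_vs_D['R'] + reward_payoff          # 5 + 3 = 8 = 56/7
for a2 in ('L', 'M'):                           # player 2's first-stage deviations
    deviation = u2_vs_D[a2] + mixed_eq_payoff
    print(a2, deviation, deviation < on_path)   # L 54/7 True, M 12/7 True
```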


Since we have three NE in the stage game, we can construct strategies where each player uses the NE actions of the stage game to punish or reward the rival in the second stage.

This example shows that repeated play expands the set of equilibrium outcomes.

This is not always the case.


    A Finitely Repeated Game with a Unique Static Equilib.

If there is a single Nash equilibrium of the stage game, we cannot support the play of other payoffs.

Consider the PD repeated twice. The stage game is:

1\2     C      D
C       4,4    0,5
D       5,0    1,1


In the second stage both players can only play D, so we have (D,D).

In the first stage, no matter what they do today there is no consequence tomorrow, so they should just maximize their payoff today, which again gives (D,D).

The argument can be repeated for K stages, where K is finite.
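A tiny Python sketch of this backward-induction argument for the twice-repeated PD (the brute-force best-response scan is an added illustration; the payoffs are the ones in the table above):

```python
# Stage-game payoffs of the PD: (u1, u2) for each (action of 1, action of 2).
G = {('C','C'): (4,4), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}
ACTS = ('C', 'D')

def stage_ne():
    """All pure NE of the stage game (pairs of mutual best responses)."""
    return [(a1, a2) for a1 in ACTS for a2 in ACTS
            if G[(a1, a2)][0] == max(G[(b1, a2)][0] for b1 in ACTS)
            and G[(a1, a2)][1] == max(G[(a1, b2)][1] for b2 in ACTS)]

second_stage = stage_ne()   # [('D', 'D')]: unique NE, played after any history
first_stage = stage_ne()    # same incentives today, so (D, D) again
print(first_stage, second_stage)   # unique SPE path: (D, D) in both stages
```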


Whenever the stage game has a unique NE and is repeated a finite number of times, the only SPE of the finitely repeated game is the repetition of the stage-game NE at every stage.


    The One-Stage-Deviation-Principle

How do we check whether a strategy profile is an SPE of a multi-stage game with observed actions?

We only need to check whether there is a profitable one-stage deviation from that strategy profile at each stage.

This is called the one-stage-deviation principle.


For multi-stage games with observed actions the following result holds:

For a given combination of strategies of the opponents, a player's strategy is optimal from any stage of the game onward if and only if there is no stage of the game at which the player can gain by changing his strategy there, keeping it fixed at all other stages.


Theorem 1: In a finite-horizon multi-stage game with observed actions, a pure-strategy profile s is subgame perfect if and only if it satisfies the one-stage-deviation condition that no player i can gain by deviating from s in a single stage and conforming to s thereafter.


Intuition for the result:

We can always divide a two-stage deviation into two one-stage deviations. If we can show that no one-stage deviation is profitable, then a two-stage deviation is also not profitable, and by induction the same holds for any finite sequence of deviations.


Implication of the result:

To check whether a strategy profile s is an SPE of a multi-stage game with observed actions, we need to check, for every stage, every player, and every history of the game, whether the player can gain by deviating at that stage and conforming to the strategy s thereafter.
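Here is a minimal Python sketch of such an exhaustive check for finite games, under simplifying assumptions added here (history-independent action sets, payoffs given as a function of the terminal history); the encoding and function names are mine, not the lecture's:

```python
from itertools import product

# Same PD stage-game payoffs as above, reused for the usage example below.
G = {('C','C'): (4,4), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}

def play_out(history, strategies, n_stages, deviation=None):
    """Terminal history when everyone follows `strategies` from `history` on,
    except that `deviation=(player, action)` replaces that player's action
    at the current stage only."""
    h = list(history)
    for k in range(len(history), n_stages):
        profile = [s(tuple(h)) for s in strategies]
        if deviation is not None and k == len(history):
            profile[deviation[0]] = deviation[1]
        h.append(tuple(profile))
    return tuple(h)

def passes_one_stage_deviation(strategies, actions, payoffs, n_stages):
    """True iff no player gains by deviating in a single stage at any history
    and conforming to `strategies` thereafter."""
    histories, frontier = [()], [()]
    for _ in range(n_stages - 1):                 # enumerate all non-terminal histories
        frontier = [h + (p,) for h in frontier for p in product(*actions)]
        histories += frontier
    for h in histories:
        base = payoffs(play_out(h, strategies, n_stages))
        for i, acts_i in enumerate(actions):
            for a in acts_i:                      # one-stage deviation to action a
                dev = payoffs(play_out(h, strategies, n_stages, deviation=(i, a)))
                if dev[i] > base[i]:
                    return False
    return True

# Usage: the twice-repeated PD, both players always defecting.
actions = [('C', 'D'), ('C', 'D')]
total = lambda hist: tuple(sum(G[a][i] for a in hist) for i in range(2))
always_D = [lambda h: 'D', lambda h: 'D']
print(passes_one_stage_deviation(always_D, actions, total, n_stages=2))  # True
```

With the unique stage NE played after every history, the profile passes every one-stage check, consistent with the finitely repeated PD result above.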


Example:

Player 1's strategy: (A, L if A, R if B)

Player 2's strategy: (R if 1 plays A, L if 1 plays B)

[Game tree omitted. In the 1st stage player 1 chooses A or B while player 2 does nothing; in the 2nd stage players 1 and 2 simultaneously choose L or R. The eight terminal payoff vectors are (a,b), (c,d), (e,f), (g,h) after A and (i,j), (k,l), (m,n), (o,p) after B; in particular, after A the profiles (L,L), (L,R), (R,R) give (a,b), (e,f), (g,h), and after B the profiles (L,L), (R,L), (R,R) give (i,j), (k,l), (o,p), with player 1's action listed first.]


This game has two possible histories at the end of the first stage: (A, do nothing) or (B, do nothing).

For history (A, do nothing) we need to check:

Whether 1 can gain by playing R instead of L given that 2 is playing R, that is, whether g > e.

Whether 2 can gain by playing L instead of R given that 1 is playing L in stage 2, that is, whether b > f.


For history (B, do nothing) we need to check:

Whether 1 can gain by playing L instead of R given that 2 is playing L, that is, whether i > k.

Whether 2 can gain by playing R instead of L given that 1 is playing R in stage 2, that is, whether p > l.

Finally, we need to verify:

Whether 1 wants to play B instead of A in the first stage, assuming that 2 plays his strategy, that is, whether k > e.
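To see the five conditions in action, here is a check with made-up numbers for the payoffs a through p (the values are purely hypothetical; any assignment satisfying the inequalities below would do):

```python
# Hypothetical terminal payoffs for the example tree; the letters match the
# slides, the numbers are invented so that the profile (A, L if A, R if B) /
# (R if A, L if B) passes all five one-stage checks.
pay = dict(a=1, b=0, c=0, d=0, e=3, f=2, g=2, h=1,
           i=0, j=0, k=1, l=2, m=0, n=0, o=0, p=1)

checks = {
    "1 plays R instead of L after A":    not (pay['g'] > pay['e']),
    "2 plays L instead of R after A":    not (pay['b'] > pay['f']),
    "1 plays L instead of R after B":    not (pay['i'] > pay['k']),
    "2 plays R instead of L after B":    not (pay['p'] > pay['l']),
    "1 plays B instead of A in stage 1": not (pay['k'] > pay['e']),
}
print(all(checks.values()), checks)   # True: no profitable one-stage deviation
```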


We can use this example to illustrate why we don't need to check for a two-stage deviation.

In this game only player 1 can make a two-stage deviation, since player 2 moves only once.

Suppose 1 plays the following two-stage deviation, (B, R if A, L if B), whereas player 2 sticks to his equilibrium strategy (R if 1 plays A, L if 1 plays B).


Example (the two-stage deviation):

Player 1's strategy: (B, R if A, L if B)

Player 2's strategy: (R if 1 plays A, L if 1 plays B)

[Same game tree as in the previous example.]


The payoff of 1 under his equilibrium strategy is e. The payoff of 1 under the two-stage deviation is i. For the two-stage deviation not to be profitable we must have i ≤ e. But the two one-stage deviation conditions already give i ≤ k (no gain from deviating after history B) and k ≤ e (no gain from deviating to B in the first stage), so i ≤ k ≤ e and the two-stage deviation cannot be profitable.


What happens if the horizon is infinite?

Maybe a player can gain by some infinite sequence of deviations, even though he cannot gain by a single deviation in any subgame.

This possibility is excluded by the CIP property.


Theorem 2: In an infinite-horizon multi-stage game with observed actions that is continuous at infinity, a pure-strategy profile s is subgame perfect if and only if it satisfies the one-stage-deviation condition that no player i can gain by deviating from s in a single stage and conforming to s thereafter.