
Obvious Dominance and Random Priority

Marek Pycia and Peter Troyan∗

June 2019

Abstract

We introduce a general class of simplicity concepts that vary the foresight abilities required of agents in extensive-form games, and use it to provide characterizations of simple mechanisms in social choice environments with and without transfers. We show that obvious strategy-proofness—an important simplicity concept included in our class—is characterized by clinch-or-pass games we call millipede games. Some millipede games are indeed simple and widely-used, though others may be complex, requiring significant foresight on the part of the agents, and are rarely observed. Weakening the foresight abilities assumed of the agents eliminates these complex millipede games, leaving monotonic games as the only simple games, a class which includes ascending auctions. As an application, we explain the widespread popularity of the well-known Random Priority mechanism by showing it is the unique mechanism that is efficient, fair, and simple to play.

1 Introduction

Consider a group of agents who must come together to make a choice from some set of potential outcomes that will affect each of them. This can be modeled as having the agents play a “game”, taking turns choosing from sets of actions (possibly simultaneously), with the final outcome determined by the decisions made by all of the agents each time they were called to play. To ensure that the ultimate decision taken satisfies desirable normative properties (e.g., efficiency), the incentives given to the agents are crucial. The standard route taken in mechanism design is to appeal to the revelation principle and use a strategy-proof direct mechanism where agents are simply asked to report their private information, and it is always in their interest to do so truthfully. However, this is useful only to the extent the participants understand that a given mechanism is strategy-proof, and indeed, there is evidence many real-world agents do not tell the truth, even in strategy-proof mechanisms. In other words, strategy-proof mechanisms, while theoretically appealing, may not actually be easy for participants to play in practice.

∗ Pycia: University of Zurich; Troyan: University of Virginia. First presentation: April 2016. First posted draft: June 2016. For their comments, we would like to thank Itai Ashlagi, Eduardo Azevedo, Ben Golub, Yannai Gonczarowski, Ed Green, Fuhito Kojima, Shengwu Li, Giorgio Martini, Stephen Morris, Nick Netzer, Erling Skancke, Utku Unver, the Eco 514 students at Princeton, and the audiences at the 2016 NBER Market Design workshop, NEGT’16, NC State, ITAM, NSF/CEME Decentralization, the Econometric Society Meetings, the University of British Columbia, the University of Virginia, the 2019 ASSA Meetings, and 2019 Match UP. Simon Lazarus provided excellent research assistance. Pycia gratefully acknowledges the support of the UCLA Department of Economics and the William S. Dietrich II Economic Theory Center at Princeton. Troyan gratefully acknowledges support from the Bankard Fund for Political Economy and the Roger Sherman Fellowship at the University of Virginia.

What mechanisms, then, are actually “simple to play”? And further, what simple mechanisms satisfy other normatively desirable properties such as efficiency and fairness? We address these questions in a general class of environments with and without transfers. We introduce a general class of simplicity concepts that vary the foresight abilities required of agents in extensive-form imperfect-information games, and use these concepts to provide characterizations of simple mechanisms including such popular mechanisms as posted prices and Random Priority.

Our general approach relies on the idea that a game player plans for only those nodes (or information sets) of the game that he or she perceives as simple. For a strategic plan to be dominant for such a player, the called-for action needs to be unambiguously better than the alternatives, irrespective of what happens at nodes that are not simple for the agent. As the game progresses, the agent’s perception of which nodes are simple may change, and we take the sets of nodes perceived as simple by the players at various histories of the game as a primitive of our approach. The smaller (in the inclusion sense) these sets of simple nodes are, the stronger is the resulting simplicity requirement. We allow for the possibility that the agent can change his or her strategic plan along the path of the game.

One important simplicity concept covered by our approach is Li’s (2017) obvious dominance.1 He defines a strategy in an imperfect-information extensive-form game to be obviously dominant if, whenever an agent is called to play, even the worst possible final outcome from following the action prescribed by the strategy is at least as good as the best possible outcome from taking any other action at the node in question, where the best and worst cases are determined by fixing the agent’s strategy and considering all possible strategies that could be played by her opponents in the future. In terms of our general approach, this means that an obviously dominant strategy is simple for agents who perceive all their own information sets as simple and all information sets of other agents as not simple. Seeing all own future moves as simple means that obviously dominant strategies presume that agents can perform demanding backward induction. For instance, consider chess. Assuming that White can always force a win, any winning strategy of White is obviously dominant, and yet the strategic choices in chess are far from obvious.2

1 Li (2017b) provides both a theoretical behavioral foundation for obvious dominance as well as experimental evidence that, in certain settings, obviously dominant strategies are indeed played more often than their counterparts that are dominant, but not obviously so.

We thus also study more demanding simplicity concepts, in particular one-step-foresight dominance and strong obvious dominance. A strategy in a game is one-step-foresight dominant if it is dominant for players who perceive as simple their current information set and only the first information sets of the continuation game at which they may be called to play. For instance, in an ascending auction, planning to stay in is dominant for such players as long as the current price is below their value, because they can foresee the next round of bidding and they can always drop out at the next round.3
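To make the ascending-auction illustration concrete, the following is a minimal sketch (ours, not the paper’s; the bidder names, valuations, and price increment are made-up parameters) of an ascending clock auction in which every bidder follows the one-step plan just described: stay in while the clock price is below her value, and reassess at the next price.

# Minimal sketch (not from the paper): an ascending clock auction in which each
# bidder uses the one-step-foresight plan "stay in iff the current price is
# below my value"; the plan is re-evaluated at every price step.

def ascending_clock_auction(values, increment=1):
    """Run an ascending clock auction; returns (winner, final_price)."""
    price = 0
    active = set(values)                      # bidders still in the auction
    while len(active) > 1:
        price += increment
        # Each active bidder looks only one step ahead: staying is safe because
        # she can still drop out at the next price if the clock passes her value.
        active = {b for b in active if values[b] > price}
        if not active:                        # all remaining bidders drop together
            return None, price                # (ties left unmodeled in this sketch)
    winner = next(iter(active)) if active else None
    return winner, price

if __name__ == "__main__":
    # Hypothetical valuations for three bidders.
    values = {"Ann": 7, "Bob": 5, "Carol": 3}
    print(ascending_clock_auction(values))    # Ann wins once the others drop out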

A strategy in a game is strongly obviously dominant if it is dominant for players who perceive as simple only their current information set. In other words, the strategy is strongly obviously dominant if, whenever an agent is called to play, even the worst possible final outcome from the prescribed action is at least as good as the best possible outcome from any other action, where what is possible may depend on all future actions, including actions by the agent’s future self. Thus, strongly obviously dominant strategies are those that are weakly better than all alternative strategies even if the agent is concerned that she might tremble in the future or has time-inconsistent preferences.4

For each of these three subclasses of our general simplicity construction we analyze which games are simple. In the case of obvious dominance we study social choice environments without transfers, hence complementing Li (2017b) who focused on the case with transfers.5

We call the class of obviously dominant games in these environments millipede games. In a millipede game, Nature moves first and chooses a deterministic subgame, after which agents engage in a game of passing and clinching that resembles the well-studied centipede game (Rosenthal, 1981). To describe this deterministic subgame in this introduction, for expositional ease, we focus on allocation problems with agents who demand at most one unit; the construction for more general social choice environments is similar. An agent is presented with some subset of objects that she can ‘clinch’, or, take immediately and leave the game; she also may be given the opportunity to ‘pass’, and remain in the game.6 If this agent passes, another agent is presented with an analogous choice. Agents keep passing until one of them clinches some object and leaves the game. While some millipede games, such as sequential dictatorships, are frequently encountered and are indeed simple to play, others are rarely observed in market-design practice, and their strategy-proofness is not necessarily immediately clear. In particular, similar to chess, some millipede games require agents to look far into the future and to perform potentially complicated backward induction reasoning.

2 Even determining whether White actually has a winning strategy remains an open question. We would like to thank Eduardo Azevedo and Ben Golub for the chess example.

3 Notice that the bidder’s strategic plan for the next round of bidding may be adjusted at the next round.

4 For rich explorations of agents with time-inconsistent preferences, see e.g., Laibson (1997), Gul and Pesendorfer (2001; 2004), Jehiel (1995; 2001); for bounded horizon backward induction see also Ke (2015).

5 Social choice problems without transfers are ubiquitous in the real world, and examples include refugee resettlement, school choice, organ exchange, public housing allocation, course allocation, and voting, among others. See e.g. Roth (2015) and Jones and Teytelboym (2016) for resettlement, Abdulkadiroğlu and Sönmez (2003) for school choice, Roth, Sönmez, and Ünver (2004) for transplants, Sönmez and Ünver (2010) and Budish and Cantillon (2012) for course allocation, and Arrow (1963) for voting and social choice.

We study one-step-foresight dominance in our general model, which allows environments both with and without transfers. We establish that all such games are monotonic. In the terminology of the above allocation example: at the next move (if there is one), the player can either clinch all the payoffs that were clinchable previously, or, in the last move, the player can clinch any other still possible payoff. Ascending auctions provide an example of such monotonicity: the only clinchable payoff is that associated with dropping out, except if the agent wins. We further show that in environments such as single-good auctions and binary public good choice any social choice rule that is implementable in obviously dominant strategies is also implementable in one-step-foresight dominant strategies.7

We study strong obvious dominance also in our general model. We show that strongly obviously strategy-proof games do not require agents to look far into the future and perform lengthy backwards induction: in all such games, each agent has essentially at most one payoff-relevant move. Building on this insight, we show that every strongly obviously strategy-proof mechanism can be implemented as a sequential personalized price game in which each agent is offered, at most once, a choice from a menu of outcome-price pairs. If the menu has three or more undominated object-price pairs, then the agent’s final outcome is what she chooses from the menu.8 If the menu has only two undominated pairs, then the final outcome might depend on other agents’ choices, but truthfully indicating the preferred object-price pair is the dominant choice. In this way, strong obvious dominance gives us a microfoundation for posted prices, the ubiquitous sale mechanism.9

6 While there can be many clinching actions, there can be at most one passing action.

7 This result builds on Li’s (2017) characterization of obvious dominance implementation in these environments.

8 An object-price pair is dominated if the menu contains another pair with the same object and a lower price. In environments without transfers, strongly obviously strategy-proof and Pareto efficient games take the simple form of almost-sequential dictatorships, studied earlier in a different context by Pycia and Ünver (2016).

9 For an earlier microfoundation of posted prices, see Hagerty and Rogerson (1987). Armstrong (1996) shows that posted prices (combined with bundling) can achieve good revenues (for other analyses of revenues of posted price mechanisms see also e.g. Chawla et al. (2010) and Feldman et al. (2014)). Empirically, even on eBay, which began as an auction website, Einav et al. (2018) document a dramatic shift in the 2000s from auctions to posted prices as the predominant selling mechanism on the platform.

10 For discussion of efficiency and fairness see, e.g., Abdulkadiroğlu and Sönmez (1998), Bogomolnaia and Moulin (2001), Che and Kojima (2010), and Liu and Pycia (2011). Obvious strategy-proofness of Random Priority was established by Li (2017b).

11 In single-unit demand allocation with at most three agents and three objects, Bogomolnaia and Moulin (2001) proved that Random Priority is the unique mechanism that is strategy-proof, efficient, and symmetric. In markets in which each object is represented by many copies, Liu and Pycia (2011) and Pycia (2011) proved that Random Priority is the asymptotically unique mechanism that is symmetric, asymptotically strategy-proof, and asymptotically ordinally efficient. While these earlier results looked at either very small or very large markets, ours is the first characterization that holds for any number of agents and objects.

While most of our focus is on incentives and simplicity, there are other important criteria when designing a mechanism, such as efficiency and fairness. In the final section of the paper we study these normative criteria in the context of no-transfer allocation problems and show that only one mechanism satisfies all three requirements: Random Priority. Random Priority has a long history and is extensively used in a wide variety of practical allocation problems. School choice, worker assignment, course allocation, and the allocation of public housing are just a few of many examples, both formal and informal. Random Priority is well-known to have good efficiency, fairness, and incentive properties: it is Pareto efficient, it gives the same lotteries over outcomes to agents who reported the same preference rankings (a fairness property known as equal treatment of equals), and it is obviously strategy-proof.10 However, it has until now remained unknown whether there are other such mechanisms, and if so, what explains the relative popularity of RP over these alternatives.11 We show that there are none, thus resolving positively the quest to establish Random Priority as the unique mechanism with good incentive, efficiency, and fairness properties and thereby explaining its popularity in practical market design settings.

Our results build on the key contributions of Li (2017b), who formalized obvious strategy-proofness and established its desirability as an incentive property (see the discussion above). Our construction of the simplicity criteria—while being more general and allowing us to select more precisely simple mechanisms—is inspired by his work. Li’s work generated a substantive interest focused on his simplicity concept. Following up on Li’s work, but preceding ours, Ashlagi and Gonczarowski (2016) show that stable mechanisms such as Deferred Acceptance are not obviously strategy-proof, except in very restrictive environments where Deferred Acceptance simplifies to an obviously strategy-proof game with a ‘clinch or pass’ structure similar to simple millipede games (though they do not describe it in these terms). Other related papers include Troyan (2016), who studies obviously strategy-proof allocation via the popular Top Trading Cycles (TTC) mechanism, and provides a characterization of the priority structures under which TTC is OSP-implementable.12 Following our work, Arribillaga et al. (2017) characterize the voting rules that are obviously strategy-proof on the domain of single-peaked preferences and, in an additional result, in environments with two alternatives, and Bade and Gonczarowski (2017) study obviously strategy-proof and efficient social choice rules in several environments.13 There has been less work that goes beyond Li’s obvious dominance. Li (2017a) extends his ideas to the ex post equilibrium context, while Zhang and Levin (2017a; 2017b) provide decision-theoretic foundations for obvious dominance and explore weaker incentive concepts.

More generally, our work also contributes to the understanding of limited foresight and limits on backward induction. Other work in this area—with a very different approach from ours—includes Jehiel’s (1995; 2001) studies of limited foresight, Ke’s (2015) axiomatic approach to bounded horizon backward induction, as well as the rich literature on time-inconsistent preferences (e.g., Laibson (1997) and Gul and Pesendorfer (2001; 2004)). The paper also adds to our understanding of dominant incentives, efficiency, and fairness in settings with and without transfers. In settings with transfers, these questions were studied by e.g. Vickrey (1961), Clarke (1971), Groves (1973), Green and Laffont (1977), Holmstrom (1979), Dasgupta et al. (1979), and Hagerty and Rogerson (1987). In settings without transfers, in addition to Gibbard (1973, 1977) and Satterthwaite (1975) and the allocation papers mentioned above, the literature on mechanisms satisfying these key objectives includes Pápai (2000), Ehlers (2002) and Pycia and Ünver (2017; 2016) who characterized efficient and group strategy-proof mechanisms in settings with single-unit demand, and Pápai (2001) and Hatfield (2009) who provided such characterizations for settings with multi-unit demand.14 Liu and Pycia (2011), Pycia (2011), Morrill (2014), Hakimov and Kesten (2014), Ehlers and Morrill (2017), and Troyan et al. (2018) characterize mechanisms that satisfy certain incentive, efficiency, and fairness objectives.

12 Li showed that the classic top trading cycles (TTC) mechanism of Shapley and Scarf (1974), in which each agent starts by owning exactly one object, is not obviously strategy-proof. Also of note is Loertscher and Marx (2015) who study environments with transfers and construct a prior-free obviously strategy-proof mechanism that becomes asymptotically optimal as the number of buyers and sellers grows.

13 This last paper builds on our work characterizing obvious dominance (released in 2016), but some of our characterization results on Random Priority (added in 2019) in turn build on their insights into Pareto efficient obvious strategy-proof mechanisms. Other papers that follow on our work include Mackenzie (2017), who introduces the notion of a “round table mechanism” for OSP implementation and draws parallels with the standard Myerson-Riley revelation principle for direct mechanisms.

14 Pycia and Ünver (2016) characterized individually strategy-proof and Arrovian efficient mechanisms. For an analysis of these issues under additional feasibility constraints, see also Dur and Ünver (2015).


2 Model

Let N = {i1, . . . , iN} be a set of agents, and X a finite set of outcomes.15 Each agent has a preference ranking over outcomes, where we write x ≿i y to denote that x is weakly preferred to y. We allow for indifferences, and write x ∼i y if x ≿i y and y ≿i x. The domain of preferences of agent i ∈ N is denoted Pi. Without loss of generality we assume that each domain Pi is associated with a partition over the set of outcomes such that, for all x, y ∈ X that belong to the same element of the partition, we have x ∼i y for all ≿i ∈ Pi.16 Each preference relation of agent i may then be associated with the corresponding strict ranking of the elements of the partition. We will generally work with this strict ranking, denoted ≻i, and will sometimes refer to ≻i as an agent’s type.

We study both the settings with and without transfers and the main assumption we make on the preference domains is that they are rich. Without transfers the richness assumption takes the simple form: every strict ranking of the elements of the partition is in Pi. This assumption is satisfied for a wide variety of preference structures.17 Two important examples are the canonical voting environment (where every agent can strictly rank all alternatives) and allocating a set of indivisible goods (where each agent cares only about the set of goods she receives). In the former case, each agent partitions X into |X| singleton subsets and an agent’s type ≻i is a strict preference relation over X. In the latter case, an outcome x ∈ X describes the entire allocation to each of the agents. Since each agent is indifferent over how objects she does not receive are assigned to others, each element of agent i’s partition of X can be identified with her own allocation, and her type ≻i is then a (strict) ranking of these allocations.

15 The assumption that X is finite simplifies the exposition and it is satisfied in such examples of our setting as voting and the no-transfer allocation environments listed in the introduction. This assumption can be relaxed. For instance, our analysis goes through with no substantive changes if we allow infinite X endowed with a topology such that agents’ preferences are continuous in this topology and the relevant sets of outcomes are compact.

16 This assumption is without loss of generality as it is satisfied for every domain Pi by the partition in which each element is a singleton set. The partition will become meaningful only in conjunction with the richness assumption introduced next. Thanks to the partitional structure of outcomes, our model encompasses both standard social choice models in which all preference rankings are possible as well as standard allocation models in which each agent’s preferences are determined only by the object (or set of objects) the agent receives and in which, keeping the agent’s allocation constant, the agent is indifferent among how the remaining objects are allocated among other agents.

17 We might slightly relax this assumption. For instance, in the presence of outside options we would say that a game is individually rational if each agent can obtain at least his outside option. To obtain the analogues of many of our results for individually rational games it is sufficient to assume that the domain of each agent’s preferences satisfies the richness condition restricted to sets X′ ⊆ X that do not contain the outside option of this agent. Also, many of our proofs only rely on the existence of all preference relations between any min{3, |X|} equivalence classes.

With transfers the richness assumption incorporates the idea that players prefer to make smaller transfers over larger ones; that is, we normalize the transfers as payments from the players, with negative transfers playing the role of payments to the players. Formally, we fix a non-empty finite set W ⊊ R of possible transfers, and a non-empty finite set of substantive outcomes X0. The set of outcomes X is then assumed to take the form X = X0 × W. We embed the model without transfers in this more general setting by assuming that only one level of transfer is possible, |W| = 1. Richness now entails the following assumptions on Pi:

(1) For any outcome x and transfers w, w′, if w < w′ then each agent strictly prefers (x, w) over (x, w′).

(2) Every strict ranking of the elements of the partition that does not violate (1) is in Pi.

The first of these requirements operationalizes the idea that elements of W are transfers, and normalizes the transfers as money that the agent pays. The second requirement extends richness to environments with transfers.18 A natural further restriction, which we do not impose, is separability: for any outcomes x and x′ and transfers w, w′, if an agent is indifferent between (x, w) and (x′, w), then this agent is indifferent between (x, w′) and (x′, w′).

When dealing with lotteries, we are agnostic as to how agents evaluate them, as long as the following property holds: an agent prefers lottery µ over ν if for any outcomes x ∈ supp(µ) and y ∈ supp(ν) this agent weakly prefers x over y; the preference between µ and ν is strict if, additionally, at least one of the preferences between x ∈ supp(µ) and y ∈ supp(ν) is strict. This mild assumption is satisfied for expected utility agents; it is also satisfied for agents who prefer µ to ν as soon as µ first-order stochastically dominates ν.
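In displayed form, and writing supp(µ) for the support of lottery µ, the assumption just stated reads as follows (this is only a restatement, not an additional requirement):

\[
\bigl( x \succsim_i y \ \text{ for all } x \in \operatorname{supp}(\mu) \text{ and } y \in \operatorname{supp}(\nu) \bigr)
\;\Longrightarrow\; \mu \succsim_i \nu,
\]
\[
\text{with } \mu \succ_i \nu \text{ whenever, in addition, } x \succ_i y \text{ for at least one such pair } (x, y).
\]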

To determine the outcome that will be implemented, the planner designs a game Γ for the agents to play. Formally, we consider imperfect-information, extensive-form games with perfect recall, which are defined in the standard way: there is a finite collection of partially ordered histories (sequences of moves), H. At every non-terminal history h ∈ H, one agent is called to play and has a finite set of actions A(h) from which to choose. To avoid trivialities we assume that no agent moves twice in a row and that at each history at which the agent moves there are at least two moves to choose from. We also allow for random moves by Nature, and we use the notation ω(h) for a realization of Nature’s move at history h and ω := (ω(h)){h∈H : Nature moves at h} to denote a realization of Nature’s moves throughout the game at every history at which Nature is called to play. Each terminal history is associated with an outcome in X, and agents receive payoffs at each terminal history that are consistent with their preferences over outcomes ≻i.

18 As in the setting without transfers, an analogue of this condition for sets of min{3, |X|} outcomes is sufficient.

We use the notation h′ ⊆ h to denote that h′ is a subhistory of h (equivalently, h is a continuation history of h′), and write h ⊂ h′ when h ⊆ h′ but h ≠ h′. Hi(h) denotes the set of (strict) subhistories h′ ⊂ h at which agent i is called to move. When useful, we sometimes write h′ = (h, a) to denote the history h′ that is reached by starting at history h and following the action a ∈ A(h).

The collection Ii of information sets of agent i is a partition of the set of histories at which i moves such that for any information set I ∈ Ii, any h, h′ ∈ I, and any subhistories h̄ ⊆ h and h̄′ ⊆ h′ at which i moves, at least one of the following two symmetric conditions obtains: either (i) there is a history h∗ ⊆ h̄ such that h∗ and h̄′ are in the same information set, A(h∗) = A(h̄′), and i makes the same move at h∗ and h̄′, or (ii) there is a history h∗ ⊆ h̄′ such that h∗ and h̄ are in the same information set, A(h∗) = A(h̄), and i makes the same move at h∗ and h̄. We denote by I(h) the information set containing history h.19 These imperfect information games allow us to incorporate incomplete information in the standard way in which Nature moves first and determines agents’ types.

A strategy for a player i in game Γ is a function Si that specifies an action at each one of her information sets.20 When we want to refer to the strategies of different types ≻i of agent i, we write Si(≻i) for the strategy followed by agent i for whom Nature drew type ≻i; in particular, Si(≻i)(I) denotes the action chosen by agent i with type ≻i at information set I. We use SN(≻N) = (Si(≻i))i∈N to denote the strategy profile for all of the agents when the type profile is ≻N = (≻i)i∈N. An extensive-form mechanism, or simply a mechanism, is an extensive-form game Γ together with a profile of strategies SN. Two extensive-form mechanisms (Γ, SN) and (Γ′, S′N) are equivalent if for every profile of types ≻N = (≻i)i∈N, the resulting distribution over outcomes when agents follow SN(≻N) in Γ is the same as when agents follow S′N(≻N) in Γ′.21

Remark 1. In the sequel we establish several equivalences. Each of them is an equivalence of two mechanisms which also satisfy additional criteria such as obvious strategy-proofness or strong obvious strategy-proofness. We can thus formally strengthen these results to C-equivalence defined as follows. A solution concept C(·) maps any game Γ into a subset of strategy profiles C(Γ). The interpretation is that the strategy profiles in C(Γ) are those profiles that satisfy the solution concept C; for example, if we were concerned with strategy-proof implementation (C = SP), then SP(Γ) would be all strategy profiles SN such that Si(≻i) is a weakly dominant strategy for all i and all types ≻i in game Γ. Two extensive-form mechanisms (Γ, SN) and (Γ′, S′N) are C-equivalent if the following two conditions are satisfied: (i) for every profile of types ≻N = (≻i)i∈N, the resulting distribution over outcomes when agents play SN(≻N) in Γ is the same as when agents play S′N(≻N) in Γ′, and (ii) SN(≻N) ∈ C(Γ) and S′N(≻N) ∈ C(Γ′). In other words, two mechanisms are C-equivalent if they are equivalent and the strategies satisfy the solution concept C in the respective games.

19 We will see shortly that it is essentially without loss of generality to assume all information sets are singletons, and so we will be able to drop the I(h) notation and identify each information set with the unique sequence of actions (i.e., history) taken to reach it.

20 We consider pure strategies, but the analysis can be easily extended to mixed strategies.

21 The equivalence concept here is outcome-based, and hence different from the procedural equivalence concept of Kohlberg and Mertens (1986).

3 Example: Obvious Dominance and Millipede Games

How to define the concept of games that are “simple to play”? As an example, we re-examine obvious strategy-proofness, the seminal simplicity concept proposed by Li (2017b). Given a game Γ, a strategy Si obviously dominates another strategy S′i for player i if, starting from any earliest information set I at which these two strategies diverge,22 the worst possible payoff to the agent from playing Si is at least as good as the best possible payoff from S′i, where the best/worst case outcomes are determined over all possible (S−i, ω). A profile of strategies SN(·) = (Si(·))i∈N is obviously dominant if for every player i and every type ≻i, the strategy Si(≻i) obviously dominates every other strategy S′i. Γ is obviously strategy-proof (OSP) if there exists a profile of strategies SN(·) that is obviously dominant. As Li showed, obvious strategy-proofness successfully differentiates between ascending auctions—in which the standard bidders’ strategies are obviously dominant—and second-price auctions—in which the standard strategies are not obviously dominant even when the two auction formats are payoff equivalent.
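As an illustration of the definition, the following minimal sketch (ours, not the paper’s) checks obvious dominance of a given strategy for a single agent on a small perfect-information game tree; the tree encoding, the strategy encoding, and the toy clinch-or-pass example are all hypothetical, and payoffs for the agent are encoded as numbers.

# Minimal sketch (not from the paper): checking obvious dominance of one agent's
# strategy on a small perfect-information game tree.  A node is either a terminal
# payoff for the agent (a number) or a tuple (mover, {action: subtree}).  The
# strategy maps the agent's decision nodes, identified by the history of actions
# leading to them, to a chosen action.

def minmax_payoffs(node):
    """Return (min, max) payoff for the agent over all plays in the subtree."""
    if not isinstance(node, tuple):
        return node, node
    lo, hi = float("inf"), float("-inf")
    for child in node[1].values():
        clo, chi = minmax_payoffs(child)
        lo, hi = min(lo, clo), max(hi, chi)
    return lo, hi

def worst_if_following(node, agent, strategy, history=()):
    """Worst payoff for `agent` when she follows `strategy` and others play anything."""
    if not isinstance(node, tuple):
        return node
    mover, actions = node
    if mover == agent:
        a = strategy[history]
        return worst_if_following(actions[a], agent, strategy, history + (a,))
    return min(worst_if_following(child, agent, strategy, history + (a,))
               for a, child in actions.items())

def obviously_dominant(node, agent, strategy, history=()):
    """True iff, at every on-path node of `agent`, the worst case from following
    the strategy is at least the best case from any deviating action."""
    if not isinstance(node, tuple):
        return True
    mover, actions = node
    if mover != agent:
        return all(obviously_dominant(child, agent, strategy, history + (a,))
                   for a, child in actions.items())
    chosen = strategy[history]
    worst = worst_if_following(actions[chosen], agent, strategy, history + (chosen,))
    for a, child in actions.items():
        if a != chosen and worst < minmax_payoffs(child)[1]:
            return False
    return obviously_dominant(actions[chosen], agent, strategy, history + (chosen,))

if __name__ == "__main__":
    # Agent i can clinch a payoff of 1 now or pass; if she passes, agent j decides
    # whether i ends with 1 or 0.  Clinching is (weakly) obviously dominant for i,
    # while passing is not.
    game = ("i", {"clinch": 1, "pass": ("j", {"left": 1, "right": 0})})
    print(obviously_dominant(game, "i", {(): "clinch"}))  # True
    print(obviously_dominant(game, "i", {(): "pass"}))    # False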

We show that some obviously dominant strategies are not necessarily simple: as we discussed in the introduction, if White has a winning strategy in chess, then this strategy is obviously dominant. Despite this classification, not only can we not calculate White’s winning strategy, we do not even know whether it exists. In the chess example, it is only White that possibly has an obviously dominant strategy; Black does not have one. This example thus opens two questions: in what games do all players have obviously dominant strategies, and how can the obvious dominance concept be refined?

We first tackle the first of these questions and characterize the entire class of obviously strategy-proof mechanisms in environments without transfers as a class of games that we call “millipede games”. The construction of millipede games plays a role in our analysis of the second question and of the more demanding simplicity concepts; by definition, games that are simple in the more demanding sense belong to the millipede class.

22 That is, information set I is on the path of play under both Si and S′i and both strategies choose the same action at all earlier information sets but choose a different action at I. Li (2017b) refers to such an information set as an earliest point of departure. Note that for two strategies, there will in general be multiple earliest points of departure.



Figure 1: An example of a millipede game in the context of object allocation.

Roughly speaking, a millipede game is a take-or-pass game similar to a centipede game, but with more players and more actions (i.e., “legs”) at each node. Figure 1 shows the extensive form of a millipede game for the special case of object allocation with single-unit demand, where the agents are labeled i, j, k, . . . and the objects are labeled w, x, y, . . .. At the start of the game, the first mover, agent i, has three options: he can take x, take y, or pass to agent j. If he takes an object, he leaves the game and it continues with a new agent. If he passes, then agent j can take x, take z, or pass back to i. If he passes back to i, then i’s possible choices increase from his previous move (he can now take z). The game continues in this manner until all objects have been allocated.

While Figure 1 considers an object allocation environment, millipede games can be defined more generally on any preference structure that satisfies our assumptions of Section 2. Recall that each agent’s preference domain Pi partitions the outcome space X into indifference classes. We use the term payoff to refer to the indifference class associated with a particular element of the partition. We say that a payoff x is possible for agent i at history h if there is a strategy profile of all the agents (including choices made by Nature) such that h is on the path of the game and, under this strategy profile, agent i obtains payoff x (i.e., the outcome that obtains under the given strategy profile is in the indifference class that gives her payoff x). For any history h, Pi(h) denotes the set of payoffs that are possible for i at h. We say agent i has clinched payoff x at history h if agent i receives payoff x at all terminal histories h̄ ⊇ h. If, after following action a ∈ A(h), an agent receives the same payoff for every terminal h̄ ⊇ (h, a), we say that a is a clinching action. Clinching actions are generalizations of the “taking actions” of Figure 1 to environments where the outcomes/payoff structure may be different from object allocation (where, more generally, what i is clinching is a particular indifference class for herself). We denote the set of payoffs that i can clinch at history h by Ci(h).23 If an action a ∈ A(h) is not a clinching action, then it is called a passing action.

A millipede game is a finite extensive-form game of perfect information that satisfies the following properties: Nature either moves once, at the empty history ∅, or Nature has no moves. At any history h at which an agent, say i, moves, all but at most one action are clinching actions; the remaining action (if there is one) is a passing action.24 And finally, for all i, all histories h at which i moves or which are terminal, and all payoffs x, at least one of the following holds:25 (a) x ∈ Pi(h); or (b) x ∉ Pi(h′) for some h′ ∈ Hi(h); or (c) x ∈ ∪h′∈Hi(h) Ci(h′); or (d) ∪h′∈Hi(h) Ci(h′) ⊆ Ci(h).

Conditions (a)-(d) ensure that, if an agent’s top still-possible outcome is not clinchable at some history h, then she is able to guarantee that the payoff she ultimately receives is at least as good as any payoff she could have clinched at h, which is necessary for passing to be obviously dominant. To see this, consider an agent whose top payoff is x, and a history h such that (a), (b) and (c) fail for x. This means at all prior moves, x was possible (by “not (b)”), but not clinchable (by “not (c)”), and so obvious dominance requires i to pass. However, x has disappeared as a possibility at h (by “not (a)”), and so we must offer her the opportunity to clinch anything she could have clinched previously (condition (d)), or else she may regret her decision to pass at an earlier node.

Notice that millipede games have a recursive structure: the continuation game that follows any action is also a millipede game. A simple example of a millipede game is a deterministic serial dictatorship in which no agent ever passes and there is only one active agent at each node.26 A more complex example is sketched in Figure 1.27

Our first main result is to characterize the class of OSP games and mechanisms as the class of millipede games with greedy strategies. A strategy is called greedy if at each move at which the agent can clinch the best still-possible outcome for her, the strategy has the agent clinch this outcome; otherwise, the agent passes.
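As an illustration, the greedy rule can be stated at a single move as a tiny decision function; the sketch below is ours, not the paper’s, and the object labels in the usage example are hypothetical.

# Minimal sketch (not from the paper): the greedy rule of a millipede game,
# stated at a single move.  `ranking` lists the agent's payoffs from best to
# worst; `possible` is P_i(h), the payoffs still possible at the current
# history; `clinchable` is C_i(h), the payoffs she can clinch right now.

def greedy_move(ranking, possible, clinchable):
    """Return ('clinch', x) if the best still-possible payoff x is clinchable,
    and 'pass' otherwise (assuming a passing action is available)."""
    best_possible = next(x for x in ranking if x in possible)
    if best_possible in clinchable:
        return ("clinch", best_possible)
    return "pass"

if __name__ == "__main__":
    # Hypothetical single-unit-demand example in the spirit of Figure 1: the
    # agent ranks objects x > y > z; x is still possible but only y and z can
    # be clinched now, so the greedy strategy passes.
    print(greedy_move(["x", "y", "z"], possible={"x", "y", "z"},
                      clinchable={"y", "z"}))          # 'pass'
    print(greedy_move(["x", "y", "z"], possible={"y", "z"},
                      clinchable={"y", "z"}))          # ('clinch', 'y')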

23 That is, x ∈ Ci(h) if there is some action a ∈ A(h) such that i receives payoff x for all terminal h̄ ⊇ (h, a). At a terminal history h, no agent is called to move and there are no actions; however, for the purposes of constructing millipede games below, it will be useful to define Ci(h) = {x} for all i, where x is the payoff associated with the unique outcome that obtains at terminal history h.

24 There may be several clinching actions associated with the same final payoff.

25 Recall that Hi(h) = {h′ ⊊ h : i moves at h′}.

26 An agent is active at history h of a millipede game if the agent moves at h, or the agent moved prior to history h and has not yet clinched an outcome.

27 The first more complex example of a millipede game we know of is due to Ashlagi and Gonczarowski (2016). They construct an example of OSP-implementation of deferred acceptance on some restricted preference domains. On these restricted domains, DA reduces to a millipede game (though they do not classify the actions as “passing” or “clinching” actions).


Theorem 1. In environments without transfers, every obviously strategy-proof mechanism (Γ, SN) is equivalent to a millipede game with the greedy strategy. Every millipede game with the greedy strategy is obviously strategy-proof.

This theorem is applicable in many environments. This includes allocation problems in which agents care only about the object(s) they receive, in which case clinching actions correspond to taking a specified object and leaving the remaining objects to be distributed amongst the remaining agents. Theorem 1 also applies to standard social choice problems in which no agent is indifferent between any two outcomes (e.g., voting), in which case clinching corresponds to determining the final outcome for all agents. In such environments, Theorem 1 implies the following:

Corollary 1. If each agent has strict preferences among all outcomes, then each OSP game is equivalent to a game in which either there are only two outcomes that are possible when the first agent moves (and the first mover can either clinch any of them, or can clinch one of them or pass to a second agent, who is presented with an analogous choice, etc.), or the first agent to move can clinch any possible outcome and has no passing action.

The latter case is the standard dictatorship, with a possibly restricted set of outcomes, while the former case gives the almost-sequential dictatorships we study in Section 4.3. In particular, this corollary gives an analogue of the Gibbard and Satterthwaite dictatorship result, with no efficiency assumption.

We now outline the main ideas of the proof of Theorem 1. First, to show that greedy strategies are obviously dominant in a millipede game, note that if, at some history h, an agent can clinch her top still-possible outcome, it is clearly obviously dominant to do so. It is harder to show that if an agent cannot clinch her top still-possible outcome at h, then passing is obviously dominant. Formally, this follows from conditions (a)-(d) above. The complete technical details can be found in the appendix.

The more interesting (and difficult) part of Theorem 1 is the first part: for any OSP game Γ, we can find an equivalent millipede game with greedy strategies. The proof in the appendix breaks the argument down into three main steps.

Step 1. Every OSP game Γ is equivalent to a perfect information OSP game Γ′ in which Nature moves once, as the first mover.

Intuitively, this follows because if we break any information set with imperfect information into several different information sets with perfect information, the set of outcomes that are possible shrinks. For an action a to be obviously dominant, the worst possible outcome from a must be (weakly) better than the best possible outcome from any other a′. If the set of possibilities shrinks, then the worst case from a only improves, and the best case from a′ worsens; thus, if a was obviously dominant in Γ, it will remain so in Γ′.28

Figure 2: An example of a millipede game.

Step 2. At every history, all actions except for possibly one are clinching actions.

Step 2 allows us to greatly simplify the class of OSP games to “clinch or pass” games. Indeed, if there were two passing actions a and a′ at some history h, then following each of a and a′ there are at least two outcomes that are possible for i. There will always be a type of agent i for which one of the possibilities following a is at best his second choice, while one of the possibilities following a′ is his first choice. This implies that i must have some strategy that can guarantee himself this first-choice outcome in the continuation game following a′ (by OSP). We can then construct an equivalent game in which i is able to clinch this first-choice outcome already at h. Proceeding in this way, we are able to eliminate one of the actions a or a′.

Step 3. If agent i passes at a history h, then the payoff she ultimately receives must be at least as good as any of the payoffs she could have clinched at h.

An agent may follow the passing action if she cannot clinch her favorite possible outcome today, and so she passes, hoping she will be able to move again in the future and get it then. To retain obvious strategy-proofness, the game needs to promise agent i that she can never be made worse off by passing. This implies that one of the conditions (a)-(d) in the definition above must obtain. Combining Steps 1-3 implies that any OSP game Γ is equivalent to a millipede game.

28 That every OSP game is equivalent to an OSP game with perfect information was first pointed out in a footnote by Ashlagi and Gonczarowski (2016). The same footnote also states that de-randomizing an OSP game leads to an OSP game. For completeness, the appendix contains the (straightforward) proofs of these statements.

Theorem 1 characterizes the entire class of obviously strategy-proof games. We have already briefly mentioned some familiar dictatorship-like games that fit into this class (e.g., Random Priority, also known as Random Serial Dictatorship, in an object allocation environment). Another example of a millipede game is sketched in Figure 2. Here, there are 100 agents {i, j, k1, . . . , k98} and 100 objects {o1, o2, . . . , o100} to be assigned. The game begins with agent i being offered the opportunity to clinch o2, or pass to j. Agent j can then either clinch o99, in which case the next mover is k2, or pass back to i, and so on. Now, consider the type of agent i that prefers the objects in the order of their index: o1 ≻i o2 ≻i · · · ≻i o100. At the very first move of the game, i is offered her second-favorite object, o2, even though her top choice, o1, is still available. The obviously dominant strategy here requires i to pass. However, if she passes, she may not be offered the opportunity to clinch her top object(s) for hundreds of moves. Further, when considering all of the possible moves of the other agents, if i passes, the game has the potential to go off into thousands of different directions, and in many of them, she will never be able to clinch better than o2. Thus, while passing is formally obviously dominant, fully comprehending this still requires the ability to reason far into the future of the game and perform lengthy backwards induction.
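For concreteness, here is a minimal sketch (ours, not the paper’s) of the Random Priority mechanism mentioned above in its simplest single-unit-demand form; the preference profile in the usage example is made up. Each agent, in a uniformly random order, clinches her favorite object among those still available, so the induced extensive-form game is a millipede game with only clinching actions.

# Minimal sketch (not from the paper): Random Priority (random serial
# dictatorship) for single-unit-demand object allocation.

import random

def random_priority(preferences, rng=random):
    """preferences: dict mapping each agent to her ranking (best first) of objects.
    Returns a dict mapping agents to the objects they receive."""
    order = list(preferences)
    rng.shuffle(order)                       # Nature's move: a uniform priority order
    remaining = {obj for ranking in preferences.values() for obj in ranking}
    assignment = {}
    for agent in order:                      # each dictator clinches and leaves
        for obj in preferences[agent]:
            if obj in remaining:
                assignment[agent] = obj
                remaining.remove(obj)
                break
    return assignment

if __name__ == "__main__":
    # Hypothetical preferences over three objects.
    prefs = {"i": ["a", "b", "c"], "j": ["a", "c", "b"], "k": ["b", "a", "c"]}
    print(random_priority(prefs))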

4 Simple Dominance

The upshot of the previous section is that some OSP mechanisms, such as Random Priority, are indeed quite simple to play; however, the full class of millipede games is much larger, and contains OSP mechanisms that may be quite complex to play. The reason is that OSP relaxes the assumption that agents fully comprehend how the choices of other agents will translate into outcomes, but it still presumes that they understand how their own future actions affect outcomes. Thus, while OSP guarantees that when taking an action, agents do not have to reason carefully about what their opponents will do, it still may require that they reason carefully about the continuation game, in particular with regard to their own “future self”.

We propose a class of simplicity concepts that relax the assumption that players can analyze and plan their own actions arbitrarily far into the future of the play. The proposed conceptualization offers a way to relax the foresight assumptions embedded not only in obvious dominance but also in other game-theoretic concepts. Otherwise we maintain the approach pioneered by Li (2017b) and in particular the assumptions that players cannot fully analyze the actions of other players but understand the set of possible outcomes following their own actions.29

29 Our construction allows a relaxation of the first of these assumptions.

In order to analyze agents who can plan for only part of the game, we need to allow the agent’s perception of the strategic situation—and hence the planned actions (referred to as the strategic plan)—to vary across nodes at which the agent moves. First we operationalize these ideas for games of perfect information. For each player i and node h∗ ∈ Hi at which i moves, let us define a set of nodes Hi,h∗ ⊆ {h ∈ Hi | h ⊇ h∗} that are perceived as simple from the perspective of node h∗.30 For instance, Hi,h∗ might be the set of all continuations of h∗ at which i moves. The strategic plan Si,h∗ of agent i at node h∗ is a mapping from simple nodes h ∈ Hi,h∗ to actions at these nodes.31 The strategic plan Si,h∗ is simple at node h∗ if the worst outcome for i of the continuation game in which i follows Si,h∗(h) at all h ∈ Hi,h∗ is weakly preferred by i to the best outcome for i of the continuation game in which i does not play Si,h∗(h∗) at node h∗. A strategic collection (Si,h∗)h∗∈Hi is simple if all its strategic plans are simple.
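The simplicity requirement just stated can be summarized in one displayed condition. Writing Z_i^follow(h∗) for the set of payoffs i can obtain in continuation games in which she follows Si,h∗(h) at every h ∈ Hi,h∗ (with all other behavior unrestricted), and Z_i^dev(h∗) for the set of payoffs she can obtain in continuation games in which she does not play Si,h∗(h∗) at h∗, the plan Si,h∗ is simple at h∗ exactly when the following holds (the Z-notation is ours, introduced only for this restatement):

\[
  z \;\succsim_i\; z' \qquad \text{for all } z \in Z^{\mathrm{follow}}_i(h^*) \ \text{and all } z' \in Z^{\mathrm{dev}}_i(h^*).
\]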

This approach to simplicity takes the collection (Hi,h∗)h∗∈Hi of simple-node sets to be a parameter of the definition. The smaller (in the inclusion sense) the set of simple nodes is at every relevant node, the stronger is the resulting simplicity requirement. A natural requirement on the collection of simple-node sets is that if an agent classifies a node h ⊃ h1 as simple from the perspective of node h1, then the agent continues to classify the node h as simple from the perspective of all nodes h2 ⊋ h1 such that h ⊃ h2; we do not impose this requirement, but it is satisfied in our main examples, to which we turn now.

A collection is obviously strategy-proof when Hi,h∗ is the set of all continuations of h∗ at which i moves; that is, when i can plan all his future moves. A collection is strongly obviously strategy-proof (SOSP) when Hi,h∗ = {h∗}; that is, when i cannot plan any future moves. Another natural instance of the concept—which we call One-Step Foresight (OSF)—obtains when Hi,h∗ = {h ∈ Hi : h ⊇ h∗ and no h′ ∈ Hi satisfies h∗ ⊊ h′ ⊊ h}; that is, when i can plan one move ahead but not more.

A strategic collection is consistent if Si,h∗(h) = Si,h(h) for h ∈ Hi,h∗. Obviously Dominant strategic collections and Strongly Obviously Dominant strategic collections are consistent, while One-Step Foresight strategic collections need not be consistent. For all consistent strategic collections, we can naturally define the induced (global) strategies Si(h∗) = Si,h∗(h∗). The OSP collections induce Li’s OSP strategies and SOSP collections induce strategies that are strongly obviously strategy-proof in the sense we introduced in the 2016 draft of our paper.32 Furthermore, OSP strategies allow us to define strategic collections that are OSP, and the same holds for SOSP strategies.

30 The assumption that Hi,h∗ ⊆ Hi is made for simplicity; in its absence we need to endow players with beliefs about what other players are doing.

31 We focus on pure strategies; the extension to mixed strategies is straightforward.

32 For an agent i with preferences ≻i, strategy Si strongly obviously dominates strategy S′i in game Γ if, starting at any earliest point of departure I between Si and S′i, the best possible outcome from following S′i at I is weakly worse than the worst possible outcome from following Si at I, where the best and worst cases are determined by considering any future play by other agents (including Nature) and agent i. If a strategy Si strongly obviously dominates all other S′i, then we say that Si is strongly obviously dominant.

The extension to imperfect information games is straightforward: we replace nodes h∗ and h by information sets I∗ and I, and replace the relationship h ⊇ h∗ by the relationship that I is a continuation of I∗ (that is, a possible information set following I∗). The key parameter of the simplicity definition is then the collection (Ii,I∗)I∗∈Ii of simple information sets, and the simple strategic collections (Si,I∗)I∗∈Ii are defined analogously to the discussion above. In the sequel we focus on perfect information games because of the following

Theorem 2. For every imperfect information game Γ, every collection (Ii,I∗)I∗∈Ii of simple information sets, and every simple strategic collection (Si,I∗)I∗∈Ii, in the perfect information game in which all information sets contain exactly one history from game Γ, the induced strategic collection (Si,h∗)h∗∈Hi is simple, where Si,h∗(h) = Si,I∗(I) and I is a continuation of I∗, h is a continuation of h∗, and h∗ ∈ I∗ and h ∈ I.

This result obtains from the analogous observation about obvious strategy-proofness—first mentioned by Ashlagi and Gonczarowski (2016) and formalized in our Lemma 1—and the first part of the following

Theorem 3. For any collection of simple information sets and any strategic collection (Si,I∗)I∗∈Ii, if the strategic collection is simple then the induced strategy Si(I∗) = Si,I∗(I∗) is obviously dominant. Furthermore, if the induced strategy Si(I∗) = Si,I∗(I∗) is strongly obviously dominant, then the strategic collection is simple.

This result establishes obvious dominance and strong obvious dominance as the extreme points of the class of simple dominance concepts we study. It holds true because the smaller (in the inclusion sense) the sets of simple information sets are, the more demanding is the resulting simple dominance requirement.

4.1 Confusion Among Similar Games

We may think of simple strategic collections as providing the right guidance to the playereven if the player is confused about the action sets at non-simple nodes. For expositionalsimplicity, we restrict attention to perfect information games and formalize this idea asfollows. For any game Γ and collection of permutations η = {ηh}h∈H of actions at nodesh ∈ H, we construct the relabeled game η (Γ) by permuting actions at each node h bypermutation ηh; otherwise game η (Γ) has the same game tree as Γ and the same payoffs atterminal nodes. For instance, if (h∗, a1, a2, ..., ak) is a terminal history in game Γ then it isa terminal history in game η (Γ) and all players payoffs are the same in both games. Wesay that Γ and some other game Γ′ are indistinguishable from node-h∗ perspective of agent


i with the set of simple nodes Hi,h∗ if there is a collection of permutations η = {ηh}h∈H such that ηh is the identity for all h ∈ Hi,h∗ and Γ′ = η (Γ).
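To make the relabeling operation concrete, here is a minimal Python sketch; the dictionary-based game-tree encoding, the function name relabel, and the toy example are our own illustrative assumptions and are not part of the paper's formal model.

```python
# Toy encoding: a non-terminal node maps action labels to subtrees; a terminal
# node carries the payoff. Only the action labels are permuted by eta.

def relabel(node, eta, path=()):
    """Return the relabeled game eta(Gamma): at each node h the action a is
    renamed eta[h](a); the tree shape and terminal payoffs are unchanged."""
    if node.get("terminal"):
        return node  # terminal history: payoffs identical in Gamma and eta(Gamma)
    perm = eta.get(path, lambda a: a)  # identity at nodes without a permutation (e.g., simple nodes)
    return {perm(a): relabel(child, eta, path + (a,)) for a, child in node.items()}

# Example: permute the two action labels at the root only.
gamma = {"L": {"terminal": True, "payoff": 1}, "R": {"terminal": True, "payoff": 0}}
eta = {(): lambda a: {"L": "R", "R": "L"}[a]}
print(relabel(gamma, eta))  # same payoffs, labels "L" and "R" swapped at the root
```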

This preparation allows us to relate simple strategic collections to the standard notion of weak dominance. We say that a strategy Si,h∗ of player i is weakly dominant in the continuation game beginning at h∗ if following this strategy leads to weakly better outcomes for i irrespective of the strategies followed by other players.

Theorem 4. For each game Γ, agent i, preference ranking ≻i, and profile of simple histories (Hi,h∗)h∗∈Hi, the strategic plan Si,h∗ is simple from the perspective of h∗ ∈ Hi in Γ if and only if, in every game Γ′ that is indistinguishable from the node-h∗ perspective of agent i with the set of simple nodes Hi,h∗, every strategy S̃i,h∗ such that S̃i,h∗(h) = Si,h∗(h) for all h ∈ Hi,h∗ is weakly dominant.

The straightforward proof is in the appendix, and—similarly to the proofs of the previoustwo theorems—it does not rely on our domain richness assumptions.

This theorem tells us that the strategic collection (Si,h∗)h∗∈Hi is simple in Γ if and only if, for every h∗ ∈ Hi, in every game Γ′ that is indistinguishable from the node-h∗ perspective of agent i with the set of simple nodes Hi,h∗, every strategy S̃i,h∗ such that S̃i,h∗(h) = Si,h∗(h) for all h ∈ Hi,h∗ is weakly dominant. When the strategic collection is consistent, we can express this result equivalently in terms of simplicity of the induced strategies Si(h) = Si,h(h). When expressed in this way, this result contains Li's microfoundation for Obvious Strategy-Proofness.33

4.2 One-Step Foresight

One-Step Foresight (OSF) is a more demanding simplicity concept than Obvious Strategy-Proofness. Indeed, all OSF games are monotonic in the following sense. We initially restrict attention to environments without transfers. Game Γ is monotonic if, for any agent i and any histories h ⊂ h′ such that i moves at h, h′ and does not move at any h′′ such that h ⊊ h′′ ⊊ h′, either (i) Ci(h) ⊆ Ci(h′) or (ii) Pi(h) \ Ci(h) ⊆ Ci(h′).

Theorem 5. In environments without transfers, game Γ is one-step-foresight simple if andonly if it is monotonic.
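As a toy illustration of the monotonicity condition used in Theorem 5, the following sketch (our own; the explicit set representation of Ci(h) and Pi(h) is an assumption made for illustration) checks the condition between two consecutive moves h ⊂ h′ of the same agent.

```python
def monotonic_step(C_h, P_h, C_hnext):
    """Monotonicity between consecutive moves h and h' of agent i:
    either (i) C_i(h) is a subset of C_i(h'), or
    (ii) P_i(h) \\ C_i(h) is a subset of C_i(h')."""
    return C_h <= C_hnext or (P_h - C_h) <= C_hnext

# Ascending-auction flavour: at h the bidder can clinch "lose"; "win" is possible
# but not yet clinchable. At her next move h' both payoffs are clinchable.
print(monotonic_step({"lose"}, {"lose", "win"}, {"lose", "win"}))  # True, condition (i) holds
```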

An analogous result obtains in environments with transfers, though there it requires an adjustment to the definition of monotonicity. We need to impose it on outcomes that are undominated in the following sense: an outcome is dominated within a set of outcomes if for no agent type it is the most preferred outcome. Given our richness assumption, an outcome is dominated within a set if there is another outcome with the same substantive outcome and a lower transfer.

33This result also subsumes the microfoundation for Strong Obvious Strategy-Proofness from the 2016-2018 drafts of our paper.

The analogue is not true for OSP games, as illustrated by the examples of Section 3.

Binary allocation problems

Li (2017b) studies binary allocation problems, and shows that in this environment, any socialchoice rule implementable in OSP strategies is implementable via a personal clock auction.As we now show, personal clock auctions are also OSF-simple, and so, surprisingly, anyOSP-implementable social choice rule is also implementable in OSF strategic collections.

Formally, there is a set Y ⊆ 2N of allocations and a transfer wi for each agent. The set of outcomes is thus X = Y × WN. To be consistent with Li (2017b), in this section we denote agents' types by θi, and assume that preferences are quasilinear, where

ui(θi, y, w) = 1{i∈y} θi + wi

denotes agent i's utility from outcome (y, w) when she has type θi. This environment is beyond our baseline model because it allows a continuum of transfers and a continuum of types; however, our OSF concept extends straightforwardly to this environment.

The following definition of a personal clock auction is adapted from Li (2017b).34

Γ is a personal clock auction if, for every i ∈ N, at every earliest history h∗i at which i moves, there exist a fixed transfer w̄i ∈ R, a going transfer wi : {hi : hi ⊇ h∗i} → R, and a set of "quitting actions" Aq such that

• For all terminal h̄ ⊃ h∗i, either (i) i /∈ gy(h̄) and gw,i(h̄) = w̄i, or (ii) i ∈ gy(h̄) and gw,i(h̄) = min{wi(hi) : h∗i ⊆ hi ⊂ h̄}.

• If h̄ ⊃ (h, a) for some h ∈ Hi and a ∈ Aq, then i /∈ gy(h̄).

• Aq ∩ A(h∗i) = ∅.

• For all h′i, h′′i ∈ {hi : hi ⊇ h∗i}:

– If h′i ⊂ h′′i, then wi(h′i) ≥ wi(h′′i).

– If h′i ⊂ h′′i, wi(h′i) > wi(h′′i), and there is no h′′′i such that h′i ⊂ h′′′i ⊂ h′′i, then Aq ∩ A(h′′i) = ∅.

– If h′i ⊂ h′′i and wi(h′i) > wi(h′′i), then |A(h′i) \ Aq| = 1.

– If |A(h′i) \ Aq| > 1, then there exists a ∈ A(h′i) such that, for all h̄ ⊃ (h′i, a), i ∈ gy(h̄).

34Note that we have made a few simplifications to Li (2017b)'s definition, namely that there are no trivial moves (|A(h)| > 1 for all h) and there is perfect information, which is without substantive loss of generality by Theorem 2.

Theorem 6. In binary allocation environments with transfers, every OSF game is equivalentto a personal clock auction.

This result follows from Li (2017b) because he shows that all OSP games in binary allocationenvironments can be implemented via personal clock auctions, and these auctions are OSF-simple.

In particular, defining ascending clock auctions identically to Li, we have the following.

Theorem 7. Ascending clock auctions are one-step-foresight simple.

The direct proof of this result is immediate because the following collection of strategic plans is OSF: For any h∗i such that the going price p is weakly lower than the bidder's value v: Stay In, with the plan to drop out at any next hi ⊃ h∗i. For any h∗i such that p > v: Drop out immediately.35
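For concreteness, a minimal sketch of an ascending clock auction in which every bidder follows the one-step plan above: stay in while the going price is weakly below her value (planning to drop out at the next price), and drop out immediately once the price exceeds it. The Python encoding and the example values are our own illustrative assumptions.

```python
def ascending_clock_auction(values, increment=1):
    """Toy ascending clock auction: the price rises by `increment`; bidders who
    follow the one-step plan stay in while price <= value and quit otherwise.
    The auction stops when at most one bidder remains; if exactly one remains,
    she wins at the current price (a tie leaves the object unsold in this toy)."""
    price, active = 0, set(values)
    while len(active) > 1:
        price += increment
        active = {i for i in active if values[i] >= price}  # one-step plan
    winner = next(iter(active), None)
    return winner, price

values = {"ann": 7, "bob": 4, "cat": 9}
print(ascending_clock_auction(values))  # ('cat', 8): cat wins once ann quits at price 8
```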

4.3 Strong Obvious Strategy-Proofness and Price Mechanisms

In light of Theorem 3, the strongest simplicity concept in our class is strong obvious dominance. If a mechanism admits a profile of strongly obviously dominant strategies, we say that it is strongly obviously strategy-proof (SOSP). Random Priority is SOSP, but the millipede game depicted in Figure 2 is not. Thus, SOSP mechanisms further delineate the class of games that are simple to play, by eliminating the more complex millipede games that may require significant forward-looking behavior and backward induction.36

Strong obvious strategy-proofness has several appealing features that capture the ideaof a game being simple to play. As mentioned above, SOSP strengthens OSP by looking atthe worst/best case outcomes for i over all possible future actions that could be taken by i’sopponents and agent i herself. Thus, a strongly obviously dominant strategy is one that isweakly better than all alternative strategies even if the agent is concerned that she mighttremble in the future or has time-inconsistent preferences.

35Another OSF-simple collection of strategic plans is: For any h∗i such that the going price p is strictly lower than the bidder's value v: Stay In, with the plan to drop out at any next hi ⊃ h∗i. For any h∗i such that p ≥ v: Drop out immediately.

36Recall also the example of chess discussed in the introduction: if White can force a win, then any winningstrategy of White is obviously dominant, yet the strategic choices in chess seem far from obvious. Chess willnot admit a strongly obviously dominant strategy.


It turns out that SOSP games can be implemented so that each agent is called to move at most once. We show a stronger result that highlights the simplicity of SOSP games: in any SOSP game, each agent can have at most one history at which her choice of action is payoff-relevant. Formally, we say a history h at which agent i moves is payoff-irrelevant for this agent if either (i) there is only one action at h, or (ii) i receives the same payoff at all terminal histories h̄ ⊃ h; if i moves at h and this history is not payoff-irrelevant, then it is payoff-relevant for i. The definition of SOSP and richness of the preference domain give us the following.

Theorem 8. Along each path of a SOSP game that is on the path of the greedy strategiesfor some type profile, there is at most one payoff-relevant history for each agent.

The on-path restriction is not needed in environments without transfers. We provide theargument in the appendix.

This result allows us to further conclude that the unique payoff-relevant history (if itexists) is the first history at which an agent chooses from among two or more actions.While an agent might be called to act later in the game, and her choice might influence thecontinuation game and the payoffs for other agents, it cannot affect her own payoff.

We can refine Theorem 8 further and show that SOSP games are effectively personalized price games: at the typical payoff-relevant history the agent is offered a menu of substantive outcomes and prices, and picks one of the alternatives from the menu. More formally, we say that a game Γ is a sequential personalized price game if it is a perfect-information game in which Nature moves first (if at all). The agents then move sequentially, with each agent called to play at most once. The ordering of the agents and the sets of possible outcomes at each history are determined by Nature's action and the actions taken by earlier agents. As long as there are either at least three possible payoffs that do not differ only by transfers for an agent when he is called to move, or there is exactly one such payoff, the agent can clinch any of the possible payoffs, while at the same time also selecting a message from a pre-determined set of messages.37 When exactly two payoffs—that do not differ only by transfers—are possible for the agent who moves, the agent can be faced either with a choice between them (clinching and picking an accompanying message), or he might be given the possibility of clinching one of these payoffs (and picking an accompanying message) or passing (with no message). We can also refer to personalized price games as curated dictatorships, a term that stresses the connection to dictatorship games in settings without transfers as well as the game designer's freedom in setting the menus.

37As discussed in Section 3, (S)OSP games may present an agent with several different ways to clinch the same payoff; sending a message is a simple way to encode which of the clinching actions the agent takes. Fixing a clinched payoff, each possible message may affect the future of the game, e.g., by determining who the next mover is, though the agent's own payoff is not affected.

Theorem 9. Every strongly obviously strategy-proof mechanism (Γ, SN ) is equivalent to asequential personalized price mechanism with the greedy strategy. Every personalized postedprice mechanism with the greedy strategy is strongly obviously strategy-proof.

That personalized posted price mechanisms are SOSP is immediate. To get a sense for why the other part of the theorem is true, let us restrict attention to the setting without transfers and millipede games that are SOSP (the complete proof is in the appendix). In this context, we refer to the price mechanisms as curated dictatorships. Because we are constructing an equivalent game, we can assume that at no history does an agent have exactly one action. Given the definition of a curated dictatorship, it is sufficient to show that pruned millipede games in which there is a history h at which the acting agent has three or more possible payoffs and a passing move are not SOSP. Consider such a game and let h be such a history. By Theorem 8, h is the first time i is called to act, and so h is on the path of play for all types of agent i. Because there can be at most one passing action, at least one payoff must be clinchable for i, say x, and at least one must not be clinchable at h, say y (if everything were clinchable, then the passing action could be pruned). Let z ≠ x, y be some third payoff that is possible at h. To complete the argument, note that either agent i of type y ≻i x ≻i z ≻i · · · or agent i of type y ≻i z ≻i x ≻i · · · has no strongly obviously dominant action at h, and so the game is not SOSP.
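A minimal sketch of a sequential personalized price mechanism, under our own simplifying assumptions (a fixed order of movers, a designer-supplied menu rule, quasilinear utilities, and no messages): each agent moves once and clinches her preferred (outcome, price) pair from a personalized menu that may depend on earlier choices.

```python
def sequential_personalized_prices(agents, menu_rule, utility):
    """Toy curated dictatorship: agents move once, in order; each is offered a
    personalized menu of (outcome, price) pairs that may depend on the history
    of earlier choices, and clinches her utility-maximizing pair."""
    history, assignment = [], {}
    for i in agents:
        menu = menu_rule(i, history)              # designer's menu for i at this history
        choice = max(menu, key=lambda pair: utility(i, pair))
        assignment[i] = choice
        history.append((i, choice))
    return assignment

# Hypothetical example: two goods with personalized posted prices.
def menu_rule(i, history):
    taken = {pair[0] for _, pair in history}
    prices = {"apple": 3, "pear": 1}
    return [(g, prices[g]) for g in prices if g not in taken] + [("nothing", 0)]

values = {"ann": {"apple": 5, "pear": 2, "nothing": 0},
          "bob": {"apple": 4, "pear": 3, "nothing": 0}}
utility = lambda i, pair: values[i][pair[0]] - pair[1]
print(sequential_personalized_prices(["ann", "bob"], menu_rule, utility))
# {'ann': ('apple', 3), 'bob': ('pear', 1)}
```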

5 Random Priority

Thus far, all of our results have been about incentives and what makes a game “simple toplay”. Two other key goals when designing a mechanism are efficiency and fairness. As anapplication of our study of simplicity, we show that SOSP and OSP combined with naturalfairness and efficiency axioms characterize the popular Random Priority (RP) mechanism.

Random Priority succeeds on these three important dimensions: it is simple to play, efficient, and fair.38 However, this is only a partial explanation of its success, as, until now, it has remained unknown whether there exist other such mechanisms, and, if so, what explains the relative popularity of RP over these alternatives.39 Theorems 12 and 11 provide an

38Pareto efficiency and fairness of RP have been recognized at least since Abdulkadiroğlu and Sönmez(1998) (see Bogomolnaia and Moulin (2001) for analysis of more demanding efficiency concepts), while Li(2017b) established OSP of RP. It is easy to see that RP also satisfies all of our more demanding simplicityrequirements.

39Bogomolnaia and Moulin (2001) provide a characterization of RP in the special case of |N | = 3, but theirresult does not extend to larger markets; Liu and Pycia (2011) provide a characterization using asymptoticversions of standard axioms in replica economies as the market size grows to infinity.


answer to this question: not only does Random Priority have good efficiency, fairness, andincentive properties, it is the only mechanism that does so, thus explaining the widespreadpopularity of Random Priority in practice.

While we have informally been using object allocation as a running example for our previous results, we now make this more concrete. There is a set of agents N and a set of objects O; each agent is to be assigned exactly one of the objects, and we assume |N| = |O|. In this context, an "outcome" is an allocation of objects to agents, and the outcome space X is the space of all such allocations. Agents care only about their own assignment, and are indifferent between allocations in which the assignments of others may vary but their own assignment is unchanged. This indifference structure is consistent with our richness assumption. We consider single-unit demand for simplicity and for comparison with the prior literature that often focuses on this case (Abdulkadiroğlu and Sönmez, 1998; Bogomolnaia and Moulin, 2001; Che and Kojima, 2010; Liu and Pycia, 2011).40

Our efficiency concept is standard Pareto efficiency: an outcome is Pareto efficient when no other outcome is weakly preferred by all participants and strictly preferred by at least one; a mechanism is Pareto efficient if it always generates Pareto efficient outcomes. We start by looking at the standard fairness requirement known as equal treatment of equals: a mechanism satisfies this requirement if, whenever two agents have the same type, they receive the same distribution over payoffs. It is easy to see that deterministic curated dictatorships usually fail this criterion. For instance, a curated dictatorship in which player 1 chooses first among all outcomes and then player 2 chooses among all remaining outcomes fails this condition for these two players if they have the same most preferred object. The natural way to resolve this unfairness is to order the agents randomly, and allow them to pick in this order, which defines the popular Random Priority (RP) mechanism we study.
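For concreteness, a minimal sketch of Random Priority as a uniformly random serial dictatorship; the Python encoding and the example preferences (ordered from best to worst) are our own illustrative assumptions.

```python
import random

def random_priority(agents, objects, preferences, rng=random):
    """Random Priority / random serial dictatorship: agents pick in a uniformly
    random order; each takes her most preferred object among those remaining."""
    order = list(agents)
    rng.shuffle(order)
    remaining, assignment = set(objects), {}
    for i in order:
        best = min(remaining, key=lambda o: preferences[i].index(o))
        assignment[i] = best
        remaining.remove(best)
    return assignment

preferences = {"ann": ["a", "b", "c"],
               "bob": ["a", "c", "b"],
               "cat": ["b", "a", "c"]}
print(random_priority(["ann", "bob", "cat"], ["a", "b", "c"], preferences))
```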

The weakest simplicity concept in our class—Obvious Strategy-Proofness—allows us to establish the individual-outcome equivalence of all OSP and Pareto efficient mechanisms satisfying equal treatment of equals. Two mechanisms are individual-outcome equivalent if, for every agent i and at every preference profile, the distribution of the object allocated to i is exactly the same under the two mechanisms.41

Theorem 10. Every mechanism that is obviously strategy-proof, Pareto efficient, and sat-isfies equal treatment of equals is individual-outcome equivalent to Random Priority withgreedy strategies.

That RP is obviously strategy-proof was recognized by Li (2017b), and its Pareto efficiency and equal treatment of equals is known at least since Abdulkadiroğlu and Sönmez (1998), hence we omit formulating the reverse implication in this theorem. The proof of the theorem builds a companion equivalence which imposes a stronger fairness assumption (extensive-form symmetry) in order to establish the stronger mechanism equivalence defined in Section 2. A game Γ is symmetric if for every permutation of agents σ, game Γ is isomorphic to game σ(Γ), where game σ(Γ) is defined recursively as follows: at every node at which nature moves, it has the same moves in σ(Γ) as in Γ; at every history at which i moves in Γ, agent σ(i) moves in σ(Γ) and has the same moves as i in Γ; at every terminal history, the outcome of i in Γ is the same as the outcome of σ(i) in the corresponding terminal history of σ(Γ). This procedure also defines histories σ(h) for each history h. In a symmetric game, a strategy profile is symmetric if for every permutation σ of agents such that i = σ(i), this agent chooses the same action at histories h and σ(h). A mechanism is symmetric if the underlying game and the strategy profile are symmetric. It is folk knowledge that the standard extensive-form game of Random Priority satisfies the symmetry requirement.42

40For earlier equivalence results beyond the single-unit demand case, see Pycia (2011).

41This equivalence concept was studied e.g. by Lee and Sethuraman (2011) and Pycia (2016); its asymptotic version was studied by Che and Kojima (2010) and Liu and Pycia (2011).

Theorem 11. An obviously strategy-proof mechanism is symmetric and Pareto efficient ifand only if it is equivalent to Random Priority.

Being OSP, the mechanism of the theorem (say ϕ) is a millipede (Theorem 1) and—for each preference profile separately—the proof builds a bijection between the deterministic millipedes chosen in the first move by nature in ϕ and the serial dictatorships over which RP randomizes. Crucially, the deterministic millipedes and serial dictatorships related by the bijection deliver the same outcomes, establishing the equivalence of ϕ and RP. This bijection idea was first employed by Abdulkadiroğlu and Sönmez (1998) and then used by e.g. Pathak and Sethuraman (2011) and Carroll (2014). The construction of the bijection is highly involved and very different from the bijections of the earlier literature. In the construction we rely on the properties of the millipedes established by us, and on the results on Pareto efficient OSP mechanisms obtained by Bade and Gonczarowski (2017).43

Leveraging Theorem 11, we can easily prove Theorem 10. Suppose that a mechanism ϕ satisfies the assumptions of Theorem 10. By Theorem 1, we can assume that in ϕ nature moves only at the root of the game. Let ϕS be the symmetrization of ϕ defined as in footnote 42 (for more details see the proof of Theorem 11). The random mechanism ϕS can be expressed as a lottery over symmetrizations of each deterministic continuation game after the nature move in ϕ. Each of these deterministic continuations is OSP and Pareto efficient, and hence Theorem 11 allows us to conclude that all these symmetrizations are equivalent to RP. Hence any distribution over these symmetrizations is equivalent to RP, and so, in particular, so is ϕS. Because ϕ satisfies equal treatment of equals, ϕS has the same individual outcome distributions as ϕ, and thus we conclude that ϕ has the same individual outcome distributions as RP.44

42We can describe a symmetric game as a symmetrization of an underlying game Γ0, where a symmetrization of Γ0 is the game in which nature moves first and chooses the continuation game σ(Γ0) uniformly at random among all permutations σ of the set of agents. This approach is used by the earlier literature studying symmetrizations—e.g. Abdulkadiroğlu and Sönmez (1998); Pathak and Sethuraman (2011); Carroll (2014)—and it is in this sense that the symmetry of RP is well-known. Notice also that in our definition of the symmetry concept we could look only at symmetry vis-a-vis exchanging the roles of any two agents (i.e. restrict attention to permutations σ that are transpositions of two agents).

43Their paper builds on the 2016 drafts of the present paper, including our characterization of OSP games (now Theorem 1); our Theorem 11—and hence indirectly our Theorem 10—builds on their analysis of efficiency.

Remark 2. This simple proof is of independent interest from the perspective of the literature on symmetrizations. The same procedure we used to derive Theorem 10 from Theorem 11 allows us to conclude that the individual-outcome counterparts of the main results of Abdulkadiroğlu and Sönmez (1998), Pathak and Sethuraman (2011), and Carroll (2014) obtain when we replace symmetry by equal treatment of equals in their results; thus e.g. lotteries over Pathak and Sethuraman's school choice mechanisms are individual-outcome equivalent to RP as soon as they satisfy equal treatment of equals (e.g. they do not need to be symmetrizations). We can also strengthen the main result of Lee and Sethuraman (2011) by relaxing the symmetrization assumption to equal treatment of equals and conclude that lotteries over Pápai's (2000) hierarchical exchange mechanisms are equivalent to RP as soon as they satisfy equal treatment of equals.45

The difference in the equivalence concepts between Theorems 10 and 11 is expected. Equal treatment of equals is an assumption on marginal distributions and it leads to the equivalence of marginal distributions, while symmetry is an assumption about joint distributions and it leads to the equivalence of joint distributions. Against this background it is noteworthy that by strengthening the simplicity requirement to SOSP, we can derive the equivalence of outcomes (joint distributions) from equal treatment of equals (marginal distributions).

Theorem 12. Every mechanism that is strongly obviously strategy-proof, Pareto efficient,and satisfies equal treatment of equals is equivalent to Random Priority with greedy strategies.Furthermore, Random Priority with greedy strategies is strongly obviously strategy-proof,Pareto efficient, and satisfies equal treatment of equals.

44This is the moment in the proof where we are losing the full equivalence.

45A similar strengthening of Che and Kojima (2010) is implied by Liu and Pycia (2011), whose proof approach however is different from ours: they obtain results for mechanisms satisfying equal treatment of equals without going through a symmetrized version of their results. For the discussion of the strong conclusions one can draw from all these results, see Pycia (2016).


The proof does not rely on the bijection approach we employed in earlier results. It builds on our characterization of SOSP mechanisms that are Pareto efficient. An almost-sequential dictatorship is a perfect-information game in which nature moves first and then the agents move in turn, with each agent moving at most once. At his move, an agent picks an object and sends a message. As long as there are at least three objects unallocated, the moving agent can choose from all still-available objects. When there are two objects remaining, an agent can be faced either with a choice between them, or he might be given a choice between getting one of these objects for sure and giving the next agent an opportunity to allocate the remaining two objects among the two of them.46

The difference between a serial dictatorship and a sequential dictatorship is that in a serialdictatorship, the ordering of the agents is fixed after Nature’s move, while in a sequentialdictatorship, the next agent to move may depend on the choices of the earlier agents. Thereason we call the mechanism described above an “almost” sequential dictatorship is becauseit works exactly as a sequential dictatorship if there are three or more objects remaining;however, a small modification is allowed when only two objects remain.
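To illustrate the distinction, here is a small sketch (our own, with a hypothetical next_agent rule) of a sequential dictatorship: each dictator takes her best remaining object, and who moves next may depend on what has been chosen so far; a serial dictatorship is the special case in which next_agent ignores the history.

```python
def sequential_dictatorship(first, objects, preferences, next_agent):
    """Toy sequential dictatorship: each mover takes her best remaining object;
    who moves next may depend on the choices made so far."""
    remaining, assignment, mover = set(objects), {}, first
    while remaining and mover is not None:
        best = min(remaining, key=lambda o: preferences[mover].index(o))
        assignment[mover] = best
        remaining.remove(best)
        mover = next_agent(mover, best, assignment)
    return assignment

# Hypothetical rule: if the current dictator takes object "a", agent "cat"
# moves next (if still unassigned); otherwise the first unassigned agent does.
def next_agent(mover, choice, assignment):
    candidates = [i for i in ("ann", "bob", "cat") if i not in assignment]
    if not candidates:
        return None
    return "cat" if choice == "a" and "cat" in candidates else candidates[0]

preferences = {"ann": ["a", "b", "c"], "bob": ["b", "a", "c"], "cat": ["a", "c", "b"]}
print(sequential_dictatorship("ann", ["a", "b", "c"], preferences, next_agent))
# {'ann': 'a', 'cat': 'c', 'bob': 'b'}
```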

Theorem 13. Every strongly obviously strategy-proof and Pareto efficient mechanism (Γ, SN) is equivalent to an almost-sequential dictatorship with the greedy strategy. Every almost-sequential dictatorship with the greedy strategy is strongly obviously strategy-proof and Pareto efficient.

That almost-sequential dictatorships are SOSP and efficient is immediate. The proof ofthe other direction follows easily from Theorem 9; we provide details in the appendix.47

6 Concluding Summary

We study the question of what makes a game “simple to play” in a general class of environ-ments with and without transfers. We introduce a general class of simplicity concepts thatvary the foresight abilities required of agents in extensive-form imperfect-information games,and use it to provide characterizations of simple mechanisms.

Li's (2017) obvious strategy-proofness is the weakest simplicity concept included in our class. Focusing on settings without transfers, we characterize all mechanisms in this class as clinch-or-pass games we call millipede games. Some millipede games are indeed simple and widely-used, though others may be complex, requiring significant foresight on the part of the agents, and are rarely observed.

46Pycia and Ünver (2016) use the same name for deterministic mechanisms without messages that belong to the class we study; they show that these are exactly the deterministic mechanisms which are strategy-proof and Arrovian efficient with respect to a complete social welfare function. We use the name they introduced because our class is a natural extension of theirs.

47The natural analogue of Theorem 13 remains valid beyond object allocation, for all rich preference domains, with no substantive changes in the proof.

Another natural concept in our class is one-step foresight. It weakens the foresight abili-ties assumed of the agents and eliminates these complex millipede games, leaving monotonicgames as the only simple games, a class which includes ascending auctions. Building on Li’scharacterization of personal clock auctions we show that in the binary environments he stud-ied, every obviously strategy-proof mechanism can be implemented in a one-step-foresightsimple way.

The strongest simplicity concept we study is strong obvious strategy-proofness. Stronglyobviously strategy-proof mechanisms eliminate the need for significant foresight and back-wards induction: we prove that in these mechanisms each agent is asked to make at mostone payoff-relevant choice. While a constraint on the choice of mechanisms, strong obviousstrategy-proofness is still weak enough to allow positive results, and indeed admits mecha-nisms that are seen extensively in practice: we show that SOSP mechanisms are equivalent topersonalized price mechanisms. Posted prices are among the most popular sale mechanisms.Even on eBay, which began as an auction website, Einav et al. (2018) document a dramaticshift in the 2000s from auctions to posted prices as the predominant selling mechanism onthe platform. Our work provides a reason why posted prices are so widely used in practice.48

As a final application we prove a natural characterization of the popular mechanismknown as Random Priority by showing that it is the unique mechanism that is obviouslystrategy-proof, Pareto efficient, and satisfies equal treatment of equals. In other words,our results show that Random Priority is the unique mechanism satisfying these desirableincentive, efficiency, and fairness properties, providing an explanation for its widespread use.

A Proofs

A.1 Proof of Theorem 1

Before proceeding with the proof of Theorem 1, we first define the concepts of possible,guaranteeable, and clinchable outcomes/actions more formally.

48For an earlier microfoundation of posted prices, see Hagerty and Rogerson (1987). Armstrong (1996)shows that posted prices (combined with bundling) can achieve good revenues (for other analyses of revenuesof posted price mechanisms see also e.g. Chawla et al. (2010) and Feldman et al. (2014)). We also refer topersonalized-price mechanisms as curated dictatorships because in the special case of our setting in whichthere are no transfers personalized-price mechanisms resemble sequential dictatorships.


A.1.1 Definitions

Fix a game Γ. Let S = (Si)i∈N denote a strategy profile for the agents, and recall that ω denotes one particular realization of Nature's moves (i.e., at each h at which nature is called to play). Define z(h, S, ω) ∈ X as the unique final outcome reached when play starts at some history h and proceeds according to (S, ω).

We first discuss the distinction between types of payoffs (possible vs. guaranteeable) and then the distinction between types of actions (clinching actions vs. passing actions). Recall that agents may be indifferent between several outcomes. For any outcome x ∈ X, let [x]i = {y ∈ X : y ∼i x} denote the x-indifference class of agent i, and define

Xi(h, Si) = {[x]i : z(h, (Si, S−i), ω) ∈ [x]i for some (S−i, ω)}

to be the possible indifference classes that may obtain for agent i starting at history h if she follows strategy Si. If there exists some Si such that [x]i ∈ Xi(h, Si), then we say that [x]i is possible for i at h. If, further, there exists some Si such that Xi(h, Si) = {[x]i}, then we say [x]i is guaranteeable for i at h. Let

Pi(h) = {[x]i : ∃Si s.t. [x]i ∈ Xi(h, Si)}

Gi(h) = {[x]i : ∃Si s.t. Xi(h, Si) = {[x]i}}

be the sets of payoffs that are possible and guaranteeable at h, respectively.49 Note that Gi(h) ⊆ Pi(h), and the set Pi(h) \ Gi(h) is the set of payoffs that are possible at h, but are not guaranteeable at h. (In the proof of the theorem below, we will generally drop the bracket notation [x]i and, when there is no confusion, simply refer to the "payoff x". Statements such as "x is a possible payoff at h" or "x ∈ Pi(h)" are understood as "some outcome in the indifference class [x]i is possible at h".)

Last, we define a distinction between two kinds of actions: clinching actions and passing actions. Let i be the agent who is to act at a history h. Using our notational convention that (h, a) denotes the history obtained by starting at h and following action a, the set Pi((h, a)) is the set of payoffs that would be possible for i if she were to follow action a at h. If Pi((h, a)) = {[x]i}, then we say that action a ∈ A(h) clinches payoff x for i. If an action a clinches x for i, we call a a clinching action. Note that there can be more than one action that clinches the same payoff x for i, though different choices may lead to different payoffs for other agents. Any action of an agent that is not a clinching action is called a passing action. We let Ci(h) denote the set of payoffs that are clinchable for i at h; that is,

Ci(h) = {[x]i : ∃a ∈ A(h) s.t. Pi((h, a)) = {[x]i}}.

Note that this definition of Ci(h) presumes that agent i is called to play at history h. If h is a terminal history, then no agent is called to play and there are no actions. However, it will be useful in what follows to define Ci(h) = {[x]i} for all i, where x is the unique outcome associated with the terminal history h.

49Note that Pi(h) and Gi(h) are well-defined even if i is not the agent who moves at h.
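To make these definitions concrete, the following sketch (ours; a toy two-player tree in which payoffs are labels rather than indifference classes) computes Pi(h), Gi(h), and Ci(h) by brute force over a small game tree.

```python
def possible(node, i):
    """P_i(h): payoffs for i that arise for some strategy of i and some play
    of the other agents (and Nature) from history `node` on."""
    if node["terminal"]:
        return {node["payoff"][i]}
    return set().union(*(possible(child, i) for child in node["actions"].values()))

def guaranteeable(node, i):
    """G_i(h): payoffs x such that some strategy of i yields x for sure."""
    if node["terminal"]:
        return {node["payoff"][i]}
    kids = [guaranteeable(child, i) for child in node["actions"].values()]
    if node["mover"] == i:          # i picks the action: union over actions
        return set().union(*kids)
    return set.intersection(*kids)  # others pick: x must be guaranteed after every action

def clinchable(node, i):
    """C_i(h): payoffs clinched by a single action a at h, i.e. P_i((h, a)) is a singleton."""
    return {next(iter(possible(child, i)))
            for child in node["actions"].values()
            if len(possible(child, i)) == 1}

# Toy history: i moves; one action clinches payoff "x", the other passes to j,
# after which i's payoff is "y" or "z" depending on j's choice.
t = lambda p: {"terminal": True, "payoff": {"i": p}}
h = {"terminal": False, "mover": "i",
     "actions": {"clinch_x": t("x"),
                 "pass": {"terminal": False, "mover": "j",
                          "actions": {"l": t("y"), "r": t("z")}}}}
print(possible(h, "i"), guaranteeable(h, "i"), clinchable(h, "i"))
# {'x', 'y', 'z'} (in some order), {'x'}, {'x'}
```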

A.1.2 Proof

With the above definitions in hand, we can prove Theorem 1. We start by proving thatmillipede games are OSP (Proposition 1), and then prove that every OSP game is equivalentto a millipede game (Proposition 2).

Proposition 1. Millipede games with greedy strategies are obviously strategy-proof.

Proof. Let Γ be a millipede game. Recall that the greedy strategy for any agent i is defined as follows: for any history h at which i moves, if i can clinch her top payoff in Pi(h), then Si(≻i)(h) instructs i to follow an action that clinches this payoff; otherwise, i passes at h.50

We now show that it is obviously dominant for all agents to follow a greedy strategy. Consider some profile of greedy strategies (Si(·))i∈N. For any subset of outcomes X′ ⊂ X, define Top(≻i, X′) as the best possible payoff in the set X′ according to preferences ≻i, i.e., x ∈ Top(≻i, X′) if and only if x ≿i y for all y ∈ X′ (note that we use our standard convention whereby a payoff x represents the entire indifference class to which x belongs, and so Top(≻i, X′) is effectively unique). Then, Top(≻i, Pi(h)) denotes i's top payoff among all payoffs that are possible at history h, and Top(≻i, Ci(h)) denotes i's top payoff among all of his clinchable payoffs at h. It is clear that if Top(≻i, Ci(h)) = Top(≻i, Pi(h)), then the greedy action of clinching the top payoff is obviously dominant at h. What remains to be shown is that if Top(≻i, Ci(h)) ≠ Top(≻i, Pi(h)), then passing is obviously dominant at h.

Assume that there exists a history h that is on the path of play for type ≻i when she follows the greedy strategy and Top(≻i, Ci(h)) ≠ Top(≻i, Pi(h)), yet passing is not obviously dominant at h; further, let h be any earliest such history for which this is true. To shorten notation, let xP(h) = Top(≻i, Pi(h)), xC(h) = Top(≻i, Ci(h)), and let xW(h) be the worst possible outcome from passing (and following the greedy strategy in the future). Since passing is not obviously dominant, it must be that xC(h) ≻i xW(h).


First, note that h′ ∈ Hi(h) implies xW(h) ≿i xW(h′). Since passing is obviously dominant at all h′ ∈ Hi(h), we have xW(h′) ≿i xC(h′), and together, these imply that xW(h) ≿i xC(h′) for all h′ ∈ Hi(h). At h, since passing is not obviously dominant, we have xC(h) ≻i xW(h), and further, there must be some x′ ∈ Pi(h) \ Gi(h) such that x′ ≻i xC(h) ≻i xW(h).51

The above implies that x′ ≻i xC(h) ≻i xC(h′) for all h′ ∈ Hi(h). Let X0 = {x′ : x′ ∈ Pi(h) and x′ ≻i xC(h)}. In words, X0 is a set of payoffs that are possible at all h′ ⊆ h, and are strictly better than anything that was clinchable at any h′ ⊆ h (and therefore have never been clinchable themselves). Order the elements in X0 according to ≻i, and wlog, let x1 ≻i x2 ≻i · · · ≻i xM.

Definition. Let h′ be a history where either agent i is called to move or h′ is a terminal history. Payoff x becomes impossible for i at h′ if: (i) x ∈ Pi(h′′) for all h′′ ∈ Hi(h′), and (ii) x /∈ Pi(h′).

In other words, the history h′ at which a payoff becomes impossible is the earliest historyat which i moves and where x “disappears” as a possible outcome for her.

Consider a path of play starting from h and ending in a terminal history h̄ at which type ≻i of agent i receives his worst case outcome xW(h). For every xm ∈ X0, let hm denote the history on this path at which xm becomes impossible for i. Note that because i is ultimately receiving payoff xW(h), such a history hm exists for all xm ∈ X0.52 Let ĥ = max{h1, h2, . . . , hM} (ordered by ⊂); in words, ĥ is the earliest history at which everything in X0 is no longer possible. Further, let h−m = max{h1, . . . , hm−1}, i.e., h−m is the earliest history at which all payoffs strictly preferred to xm are no longer possible.

Claim 1. For all xm ∈ X0 and all h′ ⊆ h̄, we have xm /∈ Ci(h′).

Proof. First, note that xm /∈ Ci(h′) for any h′ ⊆ h by construction. We will show that xm /∈ Ci(h′) for any h′ with h̄ ⊇ h′ ⊋ h as well. Start by considering m = 1, and assume x1 ∈ Ci(h′) for some h̄ ⊇ h′ ⊋ h. By definition, x1 = Top(≻i, Pi(h)); since h′ ⊃ h implies that Pi(h′) ⊆ Pi(h), we have that x1 = Top(≻i, Pi(h′)) as well. Since x1 ∈ Ci(h′) by supposition, greedy strategies direct i to clinch x1, which contradicts that she receives xW(h).53

Now, consider an arbitrary m, and assume that for all m′ = 1, . . . , m − 1, payoff xm′ is not clinchable at any h′ ⊆ h̄, but xm is clinchable at some h′ ⊆ h̄. Let xm′ be a payoff that becomes impossible at h−m and is such that xm′ ≻i xm. There are two cases:

51At least one such x′ exists by the assumption that Top(≻i, Ci(h)) ≠ Top(≻i, Pi(h)), though there in general may be multiple such x′.

52It is possible that hm is a terminal history.

53Recall that for terminal histories h, we define Ci(h) = {x}, where x is the unique payoff associated with the terminal history. Thus, if h′ is a terminal history, then i receives payoff x1, which also contradicts that she receives payoff xW(h).


Case (i): h′ ⊂ h−m. This is the case where xm is clinchable while there is some strictly preferred payoff xm′ ≻i xm that is still possible. Note that at history h−m, neither (a) nor (b) in the definition of a millipede game hold for xm′. Further, by the inductive hypothesis, xm′ is nowhere clinchable, and so (c) does not hold either. This implies that (d) must hold, and so xm must be clinchable at h−m. Then, since all preferred payoffs are no longer possible at h−m, xm is the best possible payoff remaining, and is clinchable. Therefore, greedy strategies instruct agent i to clinch xm, which contradicts that she receives xW(h).

Case (ii): h′ ⊇ h−m. In this case, xm becomes clinchable after all strictly preferredpayoffs are no longer possible. Thus, again, greedy strategies instruct i to clinch xm, whichcontradicts that she is receiving xW (h).

To finish the proof, again let ĥ = max{h1, h2, . . . , hM} and let x be a payoff that becomes impossible at ĥ. The claim shows that x is not clinchable at any h′ ⊆ h̄. The preceding two statements imply that conditions (a)-(c) in the definition of a millipede game do not hold at ĥ, and so condition (d) must hold, i.e., xC(h) ∈ Ci(ĥ). Since xC(h) is the best possible remaining payoff at ĥ, greedy strategies direct i to clinch xC(h), which contradicts that she receives xW(h).54

We now prove the second part of the theorem, restated below as Proposition 2. To doso, we first need to introduce the pruning principle of Li (2017b), which will simplify someof the arguments. Given a game Γ and strategy profile (Si(≻i))i∈N , the pruning of Γ withrespect to (Si(≻i))i∈N is a game Γ′ that is defined by starting with Γ and deleting all historiesof Γ that are never reached for any type profile. Then, the pruning principle says thatif (Si(≻i))i∈N is obviously dominant for Γ, the restriction of (Si(≻i))i∈N to Γ′ is obviouslydominant for Γ′, and both games result in the same outcome. Thus, for any OSP mechanism,we can find an equivalent OSP pruned mechanism. When proving this proposition, we assumethat all OSP games have been pruned with respect to the equilibrium strategy profile. Notealso that we actually prove a slightly stronger statement, which is that every OSP game isequivalent to a millipede game that satisfies the following additional property: for all i, allh at which i moves, and all x ∈ Gi(h), there exists an action ax ∈ A(h) that clinches x.55
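As an illustration of the pruning operation (our own sketch; the tree encoding and the representation of realized play as a set of terminal action-paths are assumptions made for brevity), the function below deletes every action whose history lies on no realized path.

```python
def prune(node, reached_paths, path=()):
    """Pruning of a game w.r.t. a strategy profile: delete all histories that are
    never reached for any type profile. `reached_paths` collects the terminal
    histories (tuples of actions) realized by the strategies."""
    if node.get("terminal"):
        return node
    kept = {a: prune(child, reached_paths, path + (a,))
            for a, child in node["actions"].items()
            if any(p[:len(path) + 1] == path + (a,) for p in reached_paths)}
    return {**node, "actions": kept}

# Example: no type ever plays "b", so the pruned game drops that subtree.
t = lambda x: {"terminal": True, "payoff": x}
g = {"terminal": False, "actions": {"a": t(1), "b": t(2), "c": t(3)}}
print(prune(g, {("a",), ("c",)}))  # only actions "a" and "c" remain
```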

Proposition 2. Every obviously strategy-proof mechanism (Γ, SN ) is equivalent to a milli-pede game with the greedy strategy.

Proof. The proof of this proposition is broken down into several lemmas.

54If ĥ is a terminal history, then we make an argument analogous to footnote 53 to reach the same contradiction.

55See Lemma 3 below.


Lemma 1. Every OSP game is equivalent to an OSP game with perfect information in whichNature moves at most once, as the first mover.

Proof. Ashlagi and Gonczarowski (2016) briefly mention this result in a footnote; here, we provide the straightforward proof for completeness. We first show that every OSP game is equivalent to an OSP game with perfect information. Denote by A(I) the set of actions available at information set I to the agent who moves at I. Take an obviously strategy-proof game Γ and consider its perfect-information counterpart Γ′, that is, the perfect-information game in which at every history h of Γ the moving agent's information set in Γ′ is {h}, the available actions are A(I(h)), and the outcomes in Γ′ following any terminal history are the same as in Γ. Notice that the support of possible outcomes at any history h in Γ′ is a subset of the support of possible outcomes at I(h) in Γ. Thus, the worst-case outcome from any action (weakly) increases in Γ′, while the best-case outcome (weakly) decreases. Hence, if there is an obviously dominant strategy in Γ, following the analogous strategy in Γ′ continues to be obviously dominant, so Γ′ is obviously strategy-proof and equivalent to Γ.

We now show that every OSP game is equivalent to a perfect-information OSP game in which Nature moves once, as the first mover. Consider a game Γ, which, by the previous paragraph, we can assume has perfect information. Let Hnature be the set of histories h at which Nature moves in Γ. Consider a modified game Γ′ in which at the empty history Nature chooses actions from ×h∈Hnature A(h). After each of Nature's initial moves, we replicate the original game, except at each history h at which Nature is called to play, we delete Nature's move and continue with the subgame corresponding to the action Nature chose from A(h) at ∅. Again, note that for any agent i and history h at which i is called to act, the support of possible outcomes at h in Γ′ is a subset of the support of possible outcomes at the corresponding history in Γ (where the corresponding histories are defined by mapping the A(h) component of the action taken at ∅ by Nature in Γ′ as an action made by Nature at h in game Γ). Using reasoning similar to the previous paragraph, we conclude that Γ′ is obviously strategy-proof, and Γ and Γ′ are equivalent.

Lemma 2. Let Γ be an obviously strategy-proof game of perfect information that is pruned with respect to the obviously dominant strategy profile (Si(≻i))i∈N. At any history h where an agent i is called to move, there is at most one action a∗ ∈ A(h) such that Pi((h, a∗)) ⊈ Gi(h).

Proof. We prove this result via two steps.

Step 1. Suppose there are two distinct payoffs x, y ∈ Pi(h) \ Gi(h) and a preference type ≻i such that (i) x and y are the first and second ≻i-best possible payoffs in Pi(h), and (ii) h is on the path of the game for type ≻i. Then, there is at most one action a∗ ∈ A(h) such that Pi((h, a∗)) ⊈ Gi(h), and type ≻i must choose action a∗ at h.


To prove the claim of this step, it is enough to consider the case x ≻i y. First, note that if x ∈ Pi((h, a)) for some a ∈ A(h), then y ∈ Pi((h, a)) as well. Indeed, if not, then x ∈ Pi((h, a)) and y /∈ Pi((h, a)). For type ≻i, the worst case outcome from following a is strictly worse than y because x and y are assumed to be the ≻i-best possible payoffs in Pi(h), x is not guaranteeable at h, and y is not possible following a. Because y ∈ Pi(h), action a is not obviously dominant. As x is not guaranteeable at h, the worst case outcome from any other a′ ≠ a is strictly worse than x, while the best case outcome from a is x, and so no a′ can obviously dominate a. Thus, type ≻i has no obviously dominant action, which contradicts that the game is OSP, and proves the claim of this paragraph.

Now, assume that there are two actions a∗1 and a∗2 such that Pi((h, a∗j)) ⊈ Gi(h) for j = 1, 2. Consider some x ∈ Pi(h) \ Gi(h). By the previous paragraph, we know that x ∈ Pi((h, a∗1)) and x ∈ Pi((h, a∗2)). However, by assumption, x is not guaranteeable at h, and so, for type ≻i, the worst case payoff from any action a′ must be strictly worse than x, while the best case outcomes from a∗1 and a∗2 are both x. Therefore, no action a′ ∈ A(h) is obviously dominant, which contradicts that Γ is OSP.56

We can conclude that there is at most one action a∗ that leads to x for some continuationstrategies of players. Because x is only possible following a∗, if an obviously dominantstrategy profile exists, any type ≻i that ranks any x ∈ Pi(h) \Gi(h) first among the payoffsin Pi(h) must select a∗ at this history, concluding the proof of the claim of Step 1.

Step 2. Suppose i moves at history h. Then, there is at most one action a∗ ∈ A(h) such that Pi((h, a∗)) ⊈ Gi(h).

To prove this step, first consider any earliest history hi0 at which i is to move. Note that since this is an earliest history for i, history hi0 is on the path of play for all types of agent i. If Pi(hi0) \ Gi(hi0) = {x} and there were two actions a∗1 and a∗2 as in the statement, then for any type that ranks x first, the worst case from any action is strictly worse than x (because x is not guaranteeable), while the best case from both a∗1 and a∗2 is x, so nothing can obviously dominate a∗1; by similar reasoning, a∗1 does not obviously dominate a∗2. If there are two payoffs x, y ∈ Pi(hi0) \ Gi(hi0), then we can apply Step 1 to type x ≻i y ≻i · · · .

Now, consider any successor history h′ ⊃ hi0 at which i is to move, and make the inductive assumption that at every h ⊂ h′, there is only one possible action a∗ ∈ A(h) such that Pi((h, a∗)) ⊈ Gi(h). First, consider the case where |Pi(h′) \ Gi(h′)| = 1, and let x be the unique payoff that is possible but not guaranteeable. By way of contradiction, assume there were two actions, a∗1 and a∗2, such that x was a possible outcome. By the pruning principle, some type ≻i must be receiving x at some terminal history h ⊃ h′. If x is the top choice among all payoffs in Pi(h′) for some type ≻i for which history h′ is on-path, then, following similar reasoning as above, neither a∗1 nor a∗2 can be obviously dominant. If x is not the best possible payoff in Pi(h′) for any such type, then, since x is the only payoff that is possible but not guaranteeable, every other payoff in Pi(h′) is guaranteeable. This implies that every type of agent i for which h′ is on-path can guarantee herself her best possible outcome in Pi(h′), and so no type should ever play a strategy for which she ends up receiving a payoff of x at any terminal history h ⊃ h′, which is a contradiction.

56Note that if there were only one such action a∗1, then it would still be true that there is no action that obviously dominates a∗1 for this type. However, a∗1 itself might be obviously dominant. When there are two such actions, a∗1 does not obviously dominate a∗2, nor does a∗2 obviously dominate a∗1, and thus there are no obviously dominant actions.

′), and so no type should ever play a strategy for which she ends up receiving a payoffof x at any terminal history h ⊃ h′, which is a contradiction.

Finally, assume |Pi(h′) \ Gi(h

′)| ≥ 2, and let x, y ∈ Pi(h′) \ Gi(h

′). First, consider thecase that there is some x ∈ Pi(h

′) \ Gi(h′) such that x /∈ Gi(h) for all h ⊂ h′ at which i is

to move. Recall the inductive hypothesis says that at all such h, there is a unique actionsuch that Pi((h, a

∗)) ⊆ Gi(h). Thus, if x has never been guaranteeable, all types x ≻i · · ·must have followed this unique action at all such h ⊂ h′, and we can apply Step 1 to thetype x ≻i y ≻i · · · . The last case to consider is where all x, y ∈ Pi(h

′) \ Gi(h′) were also

guaranteeable at some earlier history h. Consider a type ≻i of agent i who receives payoff x

at some terminal history h ⊃ h′. First, note that for any z ≻i x, z /∈ Gi(h′) as otherwise this

type would not follow a strategy whereby x was a possible outcome.57 Let z be the ≻i −bestpayoff in Pi(h

′), and w be the second-best possible payoff in Pi(h′) for this type; we allow

w = x. We can again apply Step 1 to type ≻i and conclude there is at most one action a∗

such that Pi((h, a∗)) ⊆ Gi(h). □

Clinching actions are those for which i's payoff is completely determined after following the action. Lemma 2 shows that if a game is OSP, then at every history, for all actions a with the exception of possibly one special action a∗, all payoffs that are possible following a are also guaranteeable at h; note, however, it does not say that all actions but at most one are clinching actions. Indeed, it leaves open the possibility that there are several actions that can ultimately lead to multiple final payoffs for i, which can happen when different payoffs are guaranteeable for i by following different actions in the future of the game. The next lemma shows that if this is the case, we can always construct an equivalent OSP game such that all actions except for possibly one are clinching actions.

are also guaranteeable at h; note, however, it does not say that all actions but at most oneare clinching actions. Indeed, it leaves open the possibility that there are several actions thatcan ultimately lead to multiple final payoffs for i, which can happen when different payoffsare guaranteeable for i by following different actions in the future of the game. The nextlemma shows that if this is the case, we can always construct an equivalent OSP game suchthat all actions except for possibly one are clinching actions.

Lemma 3. Let Γ be an OSP game of perfect information that is pruned with respect to the (obviously dominant) strategy profile (Si(≻i))i∈N. There exists an equivalent OSP game Γ′ with perfect information such that the following hold at each h:

(i) At least |A(h)| − 1 actions at h are clinching actions.

(ii) For every payoff x ∈ Gi(h), there exists an action ax ∈ A(h) that clinches x for i (where i(h) = i).

(iii) If Pi(h) = Gi(h), then i(h′) ≠ i for any h′ ⊋ h; further, if i(h) = i, then all actions in A(h) are clinching actions.

57If x is the ≻i-best possible payoff in Pi(h′) for all types that reach h′, then apply the same argument to a type that receives y at some terminal history and set z = x.

Proof. We first show parts (i) and (ii). Consider some history h of game Γ at which the mover is i(h) = i. By Lemma 2, all but at most one action (denoted a∗) in A(h) satisfy Pi((h, a)) ⊆ Gi(h); this means that any obviously dominant strategy for type ≻i that does not choose a∗ guarantees the best possible outcome in Pi(h) for type ≻i. Define the set Si(h) = {Si : Si(h) ≠ a∗ and |Xi(h, Si)| = 1}, and notice that each Si ∈ Si(h) guarantees a unique payoff for i if she plays strategy Si starting from history h, no matter what the other agents do.

We create a new game Γ′ that is the same as Γ, except we replace the subgame starting from history h with a new subgame defined as follows. If there is an action a∗ such that Pi((h, a∗)) ⊈ Gi(h) in the original game (of which there can be at most one), then there is an analogous action a∗ in the new game, and the subgame following a∗ is exactly the same as in the original game Γ. Additionally, there are M = |Si(h)| other actions at h, denoted a1, . . . , aM. Each am corresponds to one strategy Smi ∈ Si(h), and following each am, we replicate the original game, except that at any future history h′ ⊇ h at which i is called on to act, all actions (and their subgames) are deleted and replaced with the subgame starting from the history (h′, a′), where a′ = Smi(h′) is the action that i would have played at h′ in the original game had she followed strategy Smi(·). In other words, if i's strategy was to choose some action a ≠ a∗ at h in the original game, then, in the new game Γ′, we ask agent i to choose not only her current action, but all future actions that she would have chosen according to Smi(·) as well. By doing so, we have created a new game in which every action (except for a∗, if it exists) at h clinches some payoff x, and further, agent i is never called upon to move again.58

We construct strategies in Γ′ that are the counterparts of strategies from Γ, so that for all agents j ≠ i, they continue to follow the same action at every history as they did in the original game, and for i, at history h in the new game, she takes the action am that is associated with the strategy Smi in the original game. By definition, if all the agents follow strategies in the new game analogous to their strategies from the original game, the same terminal history will be reached, and so Γ and Γ′ are equivalent under their respective strategy profiles.

We must also show that if a strategy profile is obviously dominant for Γ, this modified strategy profile is obviously dominant for Γ′. To see why the modified strategy profile is obviously dominant for i, note that if her obviously dominant action in the original game was part of a strategy that guarantees some payoff x, she is now able to clinch x immediately, which is clearly obviously dominant; if her obviously dominant strategy was to follow a strategy that did not guarantee some payoff x at h, this strategy must have directed i to follow a∗ at h. However, in Γ′, the subgame following a∗ is unchanged relative to Γ, and so i is able to perfectly replicate this strategy, which obviously dominates following any of the clinching actions at h in Γ′. In addition, the game is also obviously strategy-proof for all j ≠ i because, prior to h, the set of possible payoffs for j is unchanged, while for any history succeeding h where j is to move, having i make all of her choices earlier in the game only shrinks the set of possible outcomes for j, in the set-inclusion sense. When the set of possible outcomes shrinks, the best possible payoff from any given strategy only decreases (according to j's preferences) and the worst possible payoff only increases, and so, if a strategy was obviously dominant in the original game, it will continue to be so in the new game. Repeating this process for every history h, we are left with a new game where, at each history, there are only clinching actions plus (possibly) one passing action, and further, every payoff that is guaranteeable at h is also clinchable at h.

58More precisely, all of i's future moves are trivial moves in which she has only one possible action; hence these histories may further be removed to create an equivalent game in which i is never called on to move again. Note that this applies only to the actions a ≠ a∗; it is still possible for i to follow a∗ at h and be called upon to make a non-trivial move again later in the game.

For part (iii), this follows immediately from part (ii) if i(h) = i. If i(h) ≠ i, we use an analogous construction: let Si(h) = {Si : |Xi(h, Si)| = 1}. We create a new game where i(h) = i, and i has M = |Si(h)| actions, where each am ∈ A(h) corresponds to one strategy Smi ∈ Si(h), and, following each am, we replicate the original game, except that at any future history h′ ⊇ h at which i is called to act, all actions (and their subgames) are deleted and replaced with the subgame starting from the history (h′, a′), where a′ = Smi(h′) is the action i chooses at h′ according to strategy Smi in the original game. This new game continues to be obviously strategy-proof for the same reasons as in the previous paragraph. Note that in the new game, as soon as we reach a history such that Pi(h) = Gi(h), we ask agent i to make all future choices, at which point she clinches her final payoff and leaves the game (i.e., is never called on to move again at any h′ ⊋ h).59

Lemma 4. Let Γ be an obviously strategy-proof game that is pruned with respect to theobviously dominant strategy profile SN and that satisfies Lemmas 1 and 3. At every history hi

at which an agent i moves and every terminal history h, for every payoff z, one of conditions(a)-(d) must hold.

59If some history h is the first history at which Pi(h) = Gi(h) for more than one agent then the choice setsof all these agents must be disjoint and we can ask them to make all future choices in an arbitrary order.

36

Page 37: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

Proof. Assume not. First, consider the case of a non-terminal history hi where i is called tomove and a payoff z be such that (a), (b) and (c) do not hold at hi, i.e., the following aretrue:

(a’) z /∈ Pi(hi)

(b’) z ∈ Pi(h) for all h ∈ Hi(hi)

(c’) z /∈ ∪h∈Hi(hi)Ci(h) and

Points (b’) and (c’) imply that z is possible at every h ⊊ hi where i is to move, but it isnot clinchable at any of them. This implies that for any type of agent i that ranks z first,any obviously dominant strategy must have the agent passing at all h ∈ Hi(h

i).60

Towards a contradiction, assume that (d) did not hold, i.e., there exists some h′ ∈ Hi(hi)

and x ∈ Ci(h′) such that x /∈ Ci(h

i). Consider a type z ≻i x ≻i · · · . We argue that if (d)does not hold at hi, then there is some hi ⊊ hi such that type ≻i has no obviously dominantaction. First, note that at any such hi ⊊ hi, no clinching action can be obviously dominant,because z is always possible following the passing action, but is never clinchable, and so theworst case from clinching is strictly worse than the best case from passing, which is z. Next,there must be some hi ⊊ hi such that the passing action also is not obviously dominant.To see why, note that hi must be on the path of play for type ≻i, since she must pass atall h′ ⊊ hi. By assumption, z /∈ Pi(h

i) and x /∈ Ci(hi), which implies that the worst case

outcome from passing at any h′ ⊊ hi is some y that is strictly worse than x according to≻i. However, we also have x ∈ Ci(h

i) for some hi ⊊ hi, and so, the best case outcome fromclinching x at hi is x. This implies that passing is not obviously dominant, which contradictsthat Γ is OSP.

Last, consider a terminal history h. As above, let z be a payoff such that (a’), (b’), and(c’) hold (i.e., (a), (b), and (c) are false). By definition, Ci(h) = {y} for all i, where y is theunique outcome associated with terminal history h (note also that z /∈ Pi(h) implies thaty = z). Assume that (d) does not hold, i.e., there exists some h′ ∈ Hi(h) and x ∈ Ci(h

′)

such that x /∈ Ci(h). Note that (i) z = y (because z /∈ Pi(h)); (ii) z = x (by (c’)); and (iii)x = y (because x /∈ Ci(h)). In other words, x, y, z must all be distinct payoffs. Consider thetype z ≻i x ≻i y ≻i · · · . By (b’) and (c’), z is possible at every h ⊊ h where i is to move,but is not clinchable at any such history. Thus, any obviously dominant strategy of type ≻i

must have agent i passing at any such history. However, at h′, i could have clinched x, andso passing is not obviously dominant (because y is possible from passing).

60At all such h, since z is not clinchable, but is possible, it must be possible following the (unique) passingaction. This means that best case outcome from passing is z, while the worst case outcome from clinchingis strictly worse than z. Thus, no clinching action can be obviously dominant, so, if an obviously dominantstrategy exists, it must instruct i to pass.

37

Page 38: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

Proposition 2 follows from Lemmas 1, 3, and 4. Theorem 1 then follows from Propositions1 and 2.

A.2 Proof of Theorem 4

The proof follows the same steps as the proof of the analogous result for OSP in Li (2017b).We present it for the case of SOSP. Consider any two strategies Si and S ′

i of agent i; let h

be the earliest point of departure for these two strategies.Suppose Si is strongly obviously strategy-proof in Γ. Then any outcome that is possible

after playing Si is weakly better than any outcome that is possible after playing S ′i in game

Γ′, and hence in the outcome set-equivalent game Γ′. Hence, Si strongly obviously dominatesS ′i in game Γ′, and thus it weakly dominates it.

Now suppose Si weakly dominates S ′i in all games Γ′ that are outcome equivalent to Γ.

Consider such a Γ′ in which all moves of agent i following history h are made by Natureinstead. Since, Si weakly dominates S ′

i in Γ′, we conclude that any outcome that is possibleafter playing Si is weakly better than any outcome that is possible after playing S ′

i in gameΓ′, and hence in the outcome set-equivalent game Γ. Hence, Si strongly obviously dominatesS ′i in game Γ. □

A.3 Proof of Theorem 5

(⇐) Let Γ be a monotonic game. Fix an agent i, and, for any history h∗ at which i moves, letxh∗ = Top(≻i, Pi(h

∗)) and yh∗ = Top(≻i, Ci(h∗)). Let Hi,h∗ = {h ∈ Hi(h

∗)|h∗ ⊊ h′ ⊊ h =⇒h′ /∈ Hi} be the set of one-step-foresight simple nodes. Consider the following strategic planfor any h∗:

• If xh∗ ∈ Ci(h∗), then Si,h∗(h∗) = axh∗ , where axh∗ ∈ A(h∗) is a clinching action for xh∗ .

• If xh∗ /∈ Ci(h∗), then Si,h∗(h∗) = a∗ (i passes at h∗), and, for any other h ∈ Hi,h∗ :

– If Pi(h∗) \ Ci(h

∗) ⊆ Ci(h), then Si,h∗(h∗) = axh∗ .

– If Ci(h∗) ⊆ Ci(h), then Si,h∗(h∗) = ayh∗ .

Note that by monotonicity, at any h ∈ Hi,h∗ , one of the conditions in the last two bulletpoints must hold. It is straightforward to verify that this strategic plan is OSF-simple atany h∗, and thus the corresponding strategic collection (Si,h∗)h∗∈Hi

is also OSF-simple.

38

Page 39: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

(⇒) Let Γ be a game that is not monotonic, which means there exists an agent i, ahistory h∗ at which i moves, another history h ∈ Hi,h∗ , and payoffs x and y such thatx ∈ (Pi(h

∗) \ Ci(h∗)) \ Ci(h) and y ∈ Ci(h

∗) \ Ci(h). Notice that x = y. Without lossof generality we assume that h is the earliest history at which monotonicity is violatedin this way.61 Since both x, y /∈ Ci(h) by definition, there is some third payoff z = x, y

such that z ∈ Ci(h). Let ≻i be a type of agent i such that Top(≻i, Pi(h∗)) = x and

Top(≻i, Pi(h∗) \ {x}) = y, and let ≻′

i be a type of agent such that Top(≻′i, Pi(h

∗)) = x

and Top(≻′i, Pi(h

∗) \ {x}) = z. Note that for both ≻i and ≻′i, for any OSF-simple plan

Si,h∗(h∗) = a∗ (because x is possible, but not clinchable at h∗).There are two cases, depending on what is possible at h.Case (1): y /∈ Pi(h). From above, Si,h∗(h∗) = a∗. However, for any such strategic plan,

the worst case outcome from the perspective of h∗ is some w = x, y.62 Since she can clinchy at h∗, and y ≻i w, Si,h∗(·) is not OSF-simple.

Case (2): y ∈ Pi(h). Here, there are two subcases.Subcase (2).(i): z ∈ Pi((h, a

∗)). In this case, type ≻i has no OSF-simple strategic planat h∗. Again, in any such plan, we have Si,h∗(h) = a∗. But, since z ∈ Pi((h, a

∗)), for anya ∈ A(h), the worst case from the perspective of node h∗ is at best z (since both x, y /∈ Ci(h),by definition), which is worse than clinching y at h∗, and so Si,h∗(·) is not OSF-simple.

Subcase (2).(ii): z /∈ Pi((h, a∗)). If x ∈ Pi(h), then type ≻′

i has no OSF-simplestrategic plan at h.63 To see this, note that at h, for any strategic plan, the worst casefrom passing at h is strictly worse than z (since x is possible, but not clinchable at h, andz /∈ Pi((h, a

∗)), while z ∈ Ci(h), and so Si,h(h) = a∗. However, Si,h(h) must equal a∗ becausethe best case from passing is x; a contradiction.

If x /∈ Pi(h), then type ≻i has no OSF-simple strategic plan at h∗.64 Once again, any suchplan must have Si,h∗(h∗) = a∗ . Since y ∈ Pi(h) but y /∈ Ci(h), it must be that y ∈ Pi((h, a

∗)),and so there is some other w = x, y such that w ∈ Pi((h, a

∗)) (because x /∈ Pi(h)). Therefore,from the perspective of node h∗, for any fixed plan Si,h∗(h), the worst case is at best w, whichis strictly worse than clinching y at h∗, and thus Si,h∗(·) is not OSF-simple.

61This assumption guarantees that, in a pruned game, history h∗ is on path of the play for the two agenttypes we construct.

62If x /∈ Pi(h), then this is obvious (since in this case, y /∈ Pi(h) either). If x ∈ Pi(h), by definitionx /∈ Ci(h), and so x is only possible following a pass at h, x ∈ Pi((h, a

∗)). By definition of a passing action,there is some other w = x such that w ∈ Pi((h, a

∗)). Since y /∈ Pi(h), w = y.63Note that here, we consider h, not h∗; in fact, the argument actually shows something stronger, which

is that that there is no obviously dominant action at h, and so the game in this case is not OSP, let aloneOSF-simple.

64As with the previous case, the statement can actually be made stronger: there is actually no obviouslydominant action at h∗.

39

Page 40: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

A.4 Proof of Theorem 8

Because of the restriction to paths of the game that are on the path of the greedy strategiesfor some type profile, it is sufficient to prove this theorem for pruned games. We provide theproof for environments without transfers; the general proof follows the same steps.

We first note the following lemma, which says that the first time an agent is called toplay (at a nontrivial history) in an SOSP game, all of her available actions are associatedwith a unique possible payoff, except for possibly one action, which may have two possiblepayoffs.

Lemma 5. Let Γ be a pruned SOSP game. Let hi0 be any earliest history at which agent i

is called to play and has at least two actions, i.e., |A(hi0)| ≥ 2. Then, |Pi((h

i0, a))| ≤ 2 for

all a ∈ A(hi0), with equality for at most one a ∈ A(hi

0).

Proof of lemma. Since hi0 is the first time i is called to move, it is on-path for all types

of agent i. We first show that |Pi((hi0, a))| ≤ 2 for all a ∈ A(hi

0). Assume not, which meansthat there exists some a ∈ A(h) such that |Pi((h

i0, a))| ≥ 3. Let x, y, z ∈ Pi((h

i0, a)) be three

distinct payoffs that are possible following a. There must be some payoff in Pi((hi0, a)), say x,

such that for all a′ ∈ A(hi0) , Pi((h

i0, a

′)) = {x}.65 Consider some action a′ = a. If there existssome w ∈ Pi((h

i0, a

′)) such that w = x, y, consider the type x ≻i w ≻i y ≻i · · · , and notethat this type has no strongly obviously dominant action.66 Otherwise, Pi((h

i0, a

′)) = {x, y}or {y}; in either case, type x ≻i y ≻i z ≻i · · · has no strongly obviously dominant action.67

Finally, we show that |Pi((hi0, a))| = 2 for at most one a ∈ A(hi

0). Let a and a′ be twoactions such that there are two possible payoffs for i following each, and take two distinctpayoffs x, y such that x ∈ Pi(h

i0, a) and y ∈ Pi(h

i0, a

′). There are two cases.Case (i): There exists some z = x, y such that z ∈ Pi((h

i0, a))∪Pi((h

i0, a

′)). Without lossof generality, assume that z ∈ Pi((h

i0, a)), which, by the first part of the theorem, means

Pi((hi0, a)) = {x, z}. Similar to the above, we can assume that Pi((h

i0, a

′′)) = {x} for alla′′ ∈ A(hi

0) (by pruning). Thus, type x ≻i y ≻i z ≻i · · · has no strongly obviously dominantaction.68

65If there were such an a′ for each payoff in Pi((hi0, a)), it would not be strongly obviously dominant for

any type of agent i to follow action a, and it can be pruned.66Action a does not strongly obviously dominate a′, since y is possible from a, whereas the best case from

a′ is at least w. Similarly, no other a′′ = a can strongly obviously dominate a, since the worst case from anyother action is strictly worse than x, while the best case from a is x.

67The worst case from a is at best z, while the best case from a′ is at least y, and so a does not stronglyobviously dominate a′. Similarly, the worst case from any a′′ = a is at best y, and the best case from a is x,and so no other a′′ can strongly obviously dominate a.

68The worst case from a is z, while y is possible from a′, and so a is not strongly obviously dominant. Theworst case from any other a′′ = a is strictly worse than x, while the best case from a is x, so no other a′′ isstrongly obviously dominant either.

40

Page 41: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

Case (ii): There does not exist any z as defined in case (i). In this case, Pi((hi0, a)) =

Pi((hi0, a

′)) = {x, y}. Again, pruning implies that (without loss of generality) Pi((hi0, a

′′)) ={x} for all a′′ ∈ A(hi

0). Then, type x ≻i y ≻i · · · has no strongly obviously dominantaction.69 □

Continuing with the main proof, if a history h is payoff-relevant, then by definition|A(h)| ≥ 2 and |Pi(h)| ≥ 2. Assume that there was a path of the game with two payoff-relevant histories h1 ⊂ h2 for some agent i, and note that it is without loss of generality toassume that h1 and h2 are the first and second times i is called to play on the path, and that|Pi((h1, a))| > 1 for some a ∈ A(h1).70 By the lemma, there can only be one such action,denoted a∗1, and |Pi((h1, a

∗1))| = 2. For notational purposes, define Pi(h1, a

∗1) = {x, y}. By

pruning, the following must hold for at least one of w = x,y: there is no action a ∈ A(h1)

such that Pi((h1, a)) = {w}.71 Without loss of generality, assume that this holds for y, andnote that this implies that no action a′1 = a∗1 can be strongly obviously dominant for anytype that ranks y first. In particular, if there exists some z = x, y such that z ∈ Pi(h1), thentype y ≻i z ≻i · · · does not have a strongly obviously dominant action at h1.72 Therefore,at h1, we must have Pi((h1, a

∗1)) = {x, y} and Pi((h1, a

′1)) = {x} for all a′1 = a∗1.

If i follows a′1 = a∗1, then any potential future histories are payoff-irrelevant for i, andso h2 ⊇ (h1, a

∗1). Strong obvious dominance requires that every type ≻i that prefers x to

y must select some action a′1 = a∗1 at h1, and so history h2 is only on the path of play forthose types that prefer y to x. Since h2 is payoff-relevant, it must be that Pi(h2) = {x, y}.Further, there can be no action a2 ∈ A(h2) such that Pi((h2, a2)) = {y}; otherwise, all typeswho prefer y to x would have to follow such an action, and so any action for which x isa possible outcome could be pruned, which contradicts x ∈ Pi(h2). Thus, for all actionsa ∈ A(h2), either Pi((h2, a)) = {x, y} or {x}. Pruning once again rules out the latterpossibility, and so Pi((h2, a)) = {x, y} for all a ∈ A(h2). If there are two such actions, thenneither strongly obviously dominates the other; hence, |A(h2)| = 1, which contradicts thath2 is payoff-relevant.

69Neither a nor a′ strongly obviously dominate the other, since the worst case from either is y, while thebest case from the other is x. Similarly, no other a′′ strongly obviously dominates a, since the worst casefrom any such a′′ is strictly worse than x, while the best case from a is x.

70If |Pi((h1, a))| = 1 for all a ∈ A(h1) the first time i is called to play, then it is clear that |Pi(h′)| = 1 for

any h′ ⊃ h1, and so i has only one payoff-relevant history along any path that includes h1.71If there were such an action for both x and y, then it would not be strongly obviously dominant for any

type to follow action a∗1, and it can be pruned.72Note that h1 is on-path for all types of agent i, since it is the first time i is called to move. Action a∗1 is

not strongly obviously dominant, since the worst-case outcome is x, while by construction, there exists someother action a′1 = a∗1 where z is possible. No action a′1 = a∗1 is strongly obviously dominant either, as justargued.

41

Page 42: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

A.5 Proof of Theorem 9

That curated dictatorships are SOSP is immediate from the definition, and so we focuson proving that every SOSP game is equivalent to a curated dictatorship. Note first thatthe pruning principle (see the proof of Theorem 1) continues to apply to strong obviousdominance. Similarly, deleting trivial histories at which there is at most one action alsopreserves strong obvious dominance. Finally, following the same reasoning as in the proofof Theorem 1, given any SOSP game, we can construct an equivalent SOSP game of perfectinformation in which Nature moves at most once, as the first mover, and so we can focuson the deterministic subgame after any potential move by Nature. Thus, what remains toshow is that every perfect-information, pruned SOSP game in which there are no moves byNature and no trivial histories is equivalent to a curated dictatorship.

Let Γ be such a game. By Theorem 8, each agent i can have at most one payoff-relevanthistory along any path of game Γ, and this history (if it exists) is the first time i is calledto play. Consider any such history hi

0. If there is some other history h′ ⊃ hi0 at which i is

called to play, then history h′ must be payoff-irrelevant for i; in other words, there is somepayoff x such that Pi((h

′, a′)) = {x} for all a′ ∈ A(h′). Using the same technique as inthe proof of Theorem 1, we can construct an equivalent game Γ′ in which at history hi

0, iis asked to also choose her actions for all successor histories h′ ⊃ hi

0 at which she mightbe called to play, and then is not called to play again after hi

0 (see the proof of Theorem1 for a more formal description of this procedure). Since all of these future histories werepayoff-irrelevant for i, the new game continues to be strongly obvious dominant for i. Strongobvious dominance is also preserved for all j = i, since having i make all of her choicesearlier only shrinks the set of possible outcomes any time j is called to move, and thus, ifsome action was strongly obviously dominant in the old game, the analogous action(s) willbe strongly obviously dominant in the new game. Repeating this for every agent and everyhistory, we have constructed a SOSP game Γ′ that is equivalent to Γ and in which each agentis called to move at most once along any path of play.

We claim that Γ′ is a curated dictatorship. Assume not, and let h be a history wherethe definition of a curated dictatorship is violated. First assume that |Pi(h)| ≥ 3. BecauseΓ′ is pruned, richness implies that any rankings of the outcomes in Pi(h) is possible. SinceΓ′ is not a curated dictatorship, there must be some payoff x ∈ Pi(h) that i cannot clinchat h. This means that for any action a ∈ A(h) such that x ∈ Pi((h, a)), there is some othery ∈ Pi((h, a)). Combining this with Lemma 5 (see the proof of Theorem 8) implies thatat h, there is exactly one action a ∈ A(h) such that Pi((h, a)) = {x, y}, and, for all othera′ = a, x /∈ Pi((h, a

′)). Let z = x, y be some third payoff possible at h, and consider the

42

Page 43: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

type x ≻i z ≻i y ≻i · · · .73 For this type, a is not strongly obviously dominant, since theworst-case from a is y, while there is some other action for which z is possible. Similarly, noother action a′ = a is strongly obviously dominant either, since x is not possible from anysuch a′, but is possible from a. Thus, this type has no strongly obviously dominant action,and the game is not SOSP, a contradiction.

Last, consider the case |Pi(h)| = 2,74 and let Pi(h) = {x, y}. Again by Lemma 5, therecan be at most one action a ∈ A(h) such that |Pi((h, a))| = 2. If there is no such action,then h satisfies the requirements of a curated dictatorship. If there is such an action, it mustbe that Pi((h, a)) = {x, y} for some a, and Pi((h, a

′)) = {x} for all other a′ = a,75 whichagain satisfies the requirements of a curated dictatorship.

A.6 Proof of Theorem 11

We first introduce some additional definitions and notation. Let R = {r1, . . . , rN} be a setof roles. For a game form Γ, let ρ : H → R be a function from histories to roles, where ρ(h)

is the role associated with history h. Let σ : R → N be a bijective function, called a roleassignment function, and let Σ be the space of all role assignment functions. Then, wedefine Γσ as the extensive-form game where, at each history h, the agent called to move isagent σ(ρ(h)). Note that |Σ| = N !, and thus, there are N ! possible games Γσ.

Given a deterministic game form Γ that associates histories to roles (rather than specificagents), let Γ∗ be the random mechanism defined as follows: first, Nature chooses a roleassignment function σ uniformly at random from the set of all possible role assignmentfunctions, and then, the agents play Γσ. Let S∗

N (≻N ) denote a strategy profile for Γ∗. Weassume that an agent’s strategy S∗

i (≻i) depends only on her own preferences and her roleassignment, and not on the roles assigned to other agents.76 For a fixed preference profile≻N , this induces a distribution over outcomes (i.e., matchings), µ, denoted PrΓ

∗(µ), where

PrΓ∗(µ) =

|σ : (Γσ, SN (≻N )) results in µ|N !

.

In order to prove Theorem 11 we need to show that for any OSP and Pareto efficient mecha-73Recall that h is the first (and only) time i is called to play in Γ′, and so is on the path of play for all

types.74If |Pi(h)| = 1, the argument is trivial.75That is to say, all other actions must clinch the same payoff. If there were two actions a′, a′′ such that

Pi((h, a′)) = {x} and Pi((h, a

′′)) = {y}, then it would not be strongly obviously dominant for any type toselect a, and it can be pruned.

76In other words, in any two subgames ΓA and ΓB following Nature’s selection of role assignments σA andσB , if σ−1

A (i) = σ−1B (i) = rn, then S∗

i (≻i)(hA) = S∗i (≻i)(hB) for any equivalent histories hA and hB in these

two games.

43

Page 44: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

nism Γ and for any preference profile ≻N , the probability PrΓ∗(µ) is the same as in Random

Priority.

A.6.1 Proof Approach

The proof builds on the bijective argument used by Abdulkadiroğlu and Sönmez (1998) toshow the equivalence of Random Priority and the Core from Random Endowments (seealso Pathak and Sethuraman, 2011 and Carroll, 2014). Throughout, we fix the profile ofpreferences ≻N . Given any Γ that is OSP and Pareto efficient, we construct a bijectionf : Σ → Ord that associates to each role assignment function σ ∈ Σ a total linear orderof the agents fσ ∈ Ord with the property that game Γσ results in the same final allocation(matching) µ as a serial dictatorship where the first agent called to play is fσ(1), the secondagent called to play is fσ(2), etc. We then show that the mapping we constructed is abijection, which proves Theorem 11.

A.6.2 Efficient Millipedes

Given any OSP game Γ, we can use the techniques of Lemma 3 to construct an equivalentmillipede game which has the following properties:

1. At each history h, there is at most one passing action in A(h); this action, if it exists,is denoted a∗ ∈ A(h).

2. For every object x ∈ Gi(h), there exists a clinching action ax ∈ A(h) that clinches x

for i.

3. As soon as an agent’s top still-possible object is guaranteeable at a history h, sheclinches this object at h (that is, agents follow greedy strategies).

For any history h, we say an agent i is active if she has been previously called to play atsome h′ ⊆ h, and further has not yet clinched an object at h. Let A(h) denote the set ofactive agents at h.

In constructing the bijection f , we make use of the concept of a lurker introduced byBade and Gonczarowski (2017, hereafter BG).77 Informally, a lurker is an agent who hasbeen offered to clinch all objects that are possible for him except for exactly one, which heis said to “lurk”. If an agent lurks some object x, then the mechanism can infer that x is hisfavorite (still available) object, and so it is possible to exclude x from other agents without

77They focus on understanding which OSP mechanisms are Pareto efficient. While in this proof we build ontheir insights, in turn their analysis follows our 2016 characterization of OSP mechanisms through millipedegames as well as our analysis of SOSP and efficient mechanisms.

44

Page 45: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

violating Pareto efficiency. The role of lurkers is to allow more than two agents to be activeat any given point of the game; while there can be an arbitrary number of lurkers, at anypoint, at most two active agents are non-lurkers.

To formally define lurker, let C⊆i (h) = {x : x ∈ Ci(h

′) for some h′ ⊆ h} be the set ofobjects agent i has been offered to clinch at some subhistory of h. Similarly, let C⊂

i (h) =

{x : x ∈ Ci(h′) for some h′ ⊊ h} be the set of objects agent i has been offered to clinch at

some strict subhistory of h. We consider a history h and an agent i who has moved at astrict subhistory of h. Let h′ ⊊ h be the maximal strict subhistory such that ih′ = i. Agenti is said to be a lurker for object x at h if (i) x ∈ Pi(h

′), (ii) C⊆i (h

′) = Pi(h′) \ {x} and (iii)

x /∈ C⊆j (h) for any other active j = i that is not a lurker at h.78

At any h, we partition the set of active agents as A(h) = L(h) ∪ L(h), where L(h) =

{ℓh1 , . . . , ℓhm} is the set of lurkers and L(h) is the set of active non-lurkers. With some abuse ofnotation, we let X (h) denote the set of still-available (unclinched) objects at h (rather thanoutcomes), and partition this set as X (h) = X L(h) ∪ X L(h), where X L(h) = {xh

1 , . . . , xhλ(h)}

is the set of lurked objects and X L(h) = X (h) \ X L(h) is the set of unlurked objects at h.Each ℓhm has a unique object that she lurks, xh

m, and the sets are ordered such that if m′ < m,then lurker ℓhm′ is “older” than lurker ℓhm, in the sense that ℓhm′ first became a lurker for xh

m′

at a strict subhistory of the history at which ℓhm became a lurker for xhm.79

In a millipede game, at any history, there is a set of clinching actions and (possibly) onepassing action action. As the game progresses, agents engage in a sequence of passes, andthe set of lurkers/lurked objects continues to grow, until eventually, we reach a history h

where some agent i clinches some object x. By BG Lemma 12 stated below, at most twoactive agents are non-lurkers at any point. When i clinches at h, this initiates a chain ofclinching among the active agents that proceeds as follows:

• If x ∈ X L(h), each lurker ℓhm ∈ L(h) is immediately assigned to her lurked object, xhm.

• If x = xhm1

for some lurked xhm1

∈ X L(h), then all “older” lurkers ℓhm′ for m′ < m1 receivetheir lurked objects xh

m′ ; since lurker ℓhm1’s lurked object was taken by i, she is offered

to clinch anything from the remaining set of unclinched objects, X (h) \ {xh1 , . . . , x

hm1

}.

• If ℓhm1takes an unlurked object, then all remaining lurkers get their lurked objects; if

ℓhm1chooses a lurked object xh

m2for some m2 > m1, then all “older” unmatched lurkers

78This definition of a lurker modifies Definition E.9 of BG—who do not impose condition (iii) and imposeinstead the requirement that Pi(h) = Gi(h)—but importantly when restricted to the millipede games, theirand our definitions of a lurker are equivalent.

79That this entire construction is well-defined follows from a series of lemmas in the appendix of Bade andGonczarowski (2017), which are presented below for ease of reference.

45

Page 46: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

(ℓhm′ for m1 < m′ < m2) get their lurked objects. Lurker ℓhm2gets to choose from

X (h) \ {xh1 , . . . , x

hm2

}, etc.

• This process is repeated until some lurker ℓhm chooses an unlurked object, y, at whichpoint all remaining unassigned lurkers are assigned to their lurked objects.

• Finally, if y ∈ C⊆j (h) for the other active non-lurker j ∈ L(h) \ {i},80 then j is offered

to clinch anything from what remains, XL(h) \ {y}.

Notice that at the end of the above chain of lurker assignments that was initiated at h, allbut at most one active agent in A(h) has clinched and is thus removed from the game. If allactive agents have been assigned, then the continuation game is just a smaller Pareto efficientmillipede game on the remaining sets of unmatched agents and objects, and everythingproceeds similarly. If there is one active agent left, j, then this subgame begins with agentj carrying over anything that she has been previously offered to clinch, C⊆

j (h), to the nextround.

A.6.3 Constructing the bijection

We now construct the bijection from role assignment functions into serial dictatorship or-derings. We start by providing an ordering algorithm that, for a give game Γ and fixedpreference profile/strategy, follows the path of the game from the root node h∅ to the termi-nal node h and outputs outputs a partial ordering of the agents, denoted ▷. This ordering isonly partial because agents may “tie”. Ties will be of size at most 2, and any way of breakingthese ties to create a total linear order will lead to the same serial dictatorship allocation.

Each role assignment function σA will induce a game ΓA and an associated some partialordering, ▷A, via our ordering algorithm. Running the algorithm on all N ! role assignmentfunctions gives N ! partial orderings. We will then show that it is possible to “break ties”consistently in such a way that we recover a bijection f : Σ → Ord, where Ord is the spaceof all total linear orders on N .

The intuitive idea behind constructing the partial order ▷ is as follows. We start byfinding the first agent to clinch some object x (after a series of passes) at some history h.This induces a chain of assignments of the active agents A(h) as described above. We create▷ by ordering agents who receive lurked objects, starting with the agent who receives the“oldest” lurked object, etc. After this is done, there are at most 2 active agents who haveyet to be ordered, one of whom has clinched an unlurked object, say y; if y was previouslyoffered to the other non-lurker, then we add both remaining agents to the order, i.e., these

80Such an agent may or may not exist, but if they do, they are unique.

46

Page 47: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

two agents “tie”; if not, then we just add the agent who clinched y, and the other activeagent (if they exist) carries over their “endowment” (the set C⊆

j (h)) to the next stage. Afterclearing this first segment of agents, we continue along the game path and find the firstunordered agent to clinch an object, and repeat.

Ordering Algorithm. Each step k of this algorithm produces a partial ordering ▷k

on the set of agents who are processed in step k. At the end of the final step K, we willconcatenate the K components to produce ▷, the final partial ordering on the set of all agentsN .

Step 1 Find the first object to be clinched along the path to the terminal node h, say x1

at history h1 by agent i1.81 Let L(h1) = {ℓ1, . . . , ℓλ(h1)} be the set of lurkers, andX L(h1) = {x1, . . . , xλ(h1)} be the set of lurked objects (note that these sets may beempty, in which case skip immediately to step 1.2 below).

1. Let ix1 be the agent who ultimately receives xh11 , ix2 be the agent who ultimately

receives xh12 , up to ixλ(h1)

(note that ixkis not necessarily the agent who lurks xk, but

the agent who ultimately receives xk).

2. Let j ∈ L(h1) ∪ {i1} be the unique agent that is not one of the ixkfrom step 1.1. By

construction, j clinches some unlurked object y /∈ X L(h1). In addition, there may beanother active agent j′ ∈ A(h1) \ (L(h1) ∪ {i1}) (if there is, there is at most one suchj′, and j /∈ L(h1)).

(a) If such a j′ exists and y ∈ C⊆j′ (h

1), then define ▷1 as:

ix1 ▷1ix2 ▷

1 · · · ▷1ixλ(h1)▷1{j, j′}

(b) Otherwise, define ▷1 asix1 ▷

1ix2 ▷1 · · · ▷1ixλ(h1)

▷1j

In particular, we do not yet order agent j′.

Step k Find the first object to be clinched along the path to h by an agent that has not yetbeen ordered, say xk at history hk by agent ik, and carry out a procedure analogousto that from step 1 to produce the step k order ▷k.82

81That is, ih1 = i1, and i1 selects a clinching action ax1 ∈ A(h1) that clinches x1.82Note that at the end of step k − 1, there is at most one active agent j′ ∈ A(hk−1) that was not ordered

in step k−1. This agent j′, if she exists, is the active non-lurker other than the non-lurker ik−1 that clinchedat hk−1 to initiate the step k − 1 assignments. Thus, after the step k − 1 assignments are all made, we

47

Page 48: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

This produces a collection of partial orderings (▷1, . . . , ▷K), where each ▷k is a partial orderon the agents processed in step k. We then create the final ▷ in the natural way: for anytwo agents i, j who were processed in the same step k, i ▷ j if and only if i▷kj. For any twoagents i, j processed in different steps k < k′, respectively, we order i ▷ j.

Remark. The output of the ordering algorithm is a partial order, ▷, on N , the set of agents.If there are two agents j and j′ such that j ⋫ j′ and j′ ⋫ j, then we say j and j′ tie under▷. Note that by construction, all ties are of size at most 2, and agents can only tie if theyare processed in the same step of the algorithm.

A.6.4 Key Lemmas

We will prove Theorem 11 using three key lemmas. The proofs of these lemmas are somewhatinvolved, and so to streamline the presentation of the main argument, we relegate them tothe following subsections.

Take a role assignment function σ, corresponding game Γσ, and the partial ordering ▷σ

that results from applying the ordering algorithm to Γσ. Let fσ be a total ordering of theagents, where fσ(1) = i is the first agent, fσ(2) = j is the second agent, etc. We say thattotal ordering fσ is consistent with ▷σ if, for all j, j′: j ▷σ j′ implies fσ(j) < fσ(j

′). In otherwords, given some partial ordering ▷σ, total order fσ is consistent if there is some possibleway to break the ties in ▷σ that delivers fσ.

Lemma 6. For any total order fσ consistent with ▷σ, a serial dictatorship under agentordering fσ results in the same final allocation as Γσ.

Let hkA be the history that initiates step k of the ordering algorithm when it is applied to

game ΓA. For instance, h1A = (h∅, a

∗, . . . , a∗) is a history following a sequence of passes suchthat agent ih1

Amoves at h1

A and is the first agent to clinch in the game. This induces a chainof assignments of the agents in L(h1

A) ∪ {ih1A}, plus possibly one other active non-lurker at

h1A as described above. History h2

A ⊋ h1A is then the next time in game ΓA that an agent

who was not ordered in step 1 of the ordering algorithm clinches an object, etc.

Lemma 7. Let σA,σB be two role assignment functions, ΓA and ΓB their associated mech-anisms, and ▷A = (▷1A, . . . , ▷

KAA ), ▷B = (▷1B, . . . , ▷

KBB ) the respective partial orderings pro-

duced from the ordering algorithm. For all k, if ▷kA = ▷kB, then hkA = hk

B, and further,σ−1A (i) = σ−1

B (i) for all agents i that are ordered in step k of algorithm.

are left with a subgame where agent j carries over her previous endowment, C⊂j (hk−1). This subgame is

again a Pareto efficient millipede game, and so the same structure as the original game, but among only theunmatched agents and unclinched objects after step k − 1. At the “root node” of this subgame, hk

0 , agent j

is offered to clinch Cj(hk0) ⊇ C⊆

j (hk−1). All of the structure and arguments from the previous steps are thenrepeated.

48

Page 49: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

In particular, Lemma 7 implies the following corollary.

Corollary 2. If ▷A = ▷B, then σA = σB.

Lemma 8. There are no three role assignment functions σA, σB and σC such that the re-sulting partial orders ▷A, ▷B and ▷C take the form:83

i1 ▷A · · · ▷A in ▷A {i, j} · · ·

i1 ▷B · · · ▷B in ▷B i ▷B j · · ·

i1 ▷C · · · ▷C in ▷C j ▷C i · · · .

A.6.5 Preliminary Results for the Proofs of the Key Lemmas

Before proving the core Lemmas 6, 7, and 8, we state several preliminary lemmas that willbe used many times in the proofs. Lemmas 9-13 are shown in BG (though the versionspresented here are slightly simplified to apply to millipede games that satisfy the propertiesof Lemma 3). Lemmas 14 and 15 are new. Throughout, we fix a Pareto efficient millipedegame Γ.

Lemma 9. (BG Lemma E.11) If an agent is not yet matched at a history h, then X L(h) ⊆Pi(h) ∪ C⊂

i (h). If i ∈ L(h), then X L(h) ⊆ C⊂i (h).

Lemma 10. (BG Lemma E.14) If i ∈ L(h) and xℓ ∈ C⊆i (h) for some xℓ ∈ X L(h), then

ih = i, Pi(h) = Gi(h) = Ci(h), and there is no passing action a∗ in A(h).

Lemma 11. (BG Lemma E.16) Let L(h) = {ℓh1 , . . . , ℓhλ(h)} be the set of lurkers at h andX L(h) = {xh

1 , . . . , xhλ(h)}, with ℓh1 lurking xh

1 , ℓh2 lurking xh2 , etc., where m < m′ if and only if

ℓhm became a lurker at a strict subhistory of the history at which ℓhm′ became a lurker. Then,

1. xh1 , . . . , x

hλ(h) are all distinct objects.

2. For all m = 1, . . . λ(h), Pℓhm(h) = X (h) \ {xh

1 , . . . , xhm−1}.

Lemma 12. (BG Lemma E.19) For all h, |L(h)| ≤ 2.83What is meant here is that σA, σB, and σC restricted to the agents {i1, . . . , in, i, j} are all distinct role

assignment functions that produce partial orderings ▷A, ▷B, and ▷C that begin by ordering agents i1, . . . , inin the exact same way (possibly with ties), and continue by ordering agents i, j in the manner specified.

49

Page 50: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

Lemma 13. (BG Lemma E.18, E.20) Let h be a history with lurked houses and let ih′ = t

be the agent who moves at the maximal superhistory of the form h′ = (h, a∗, . . . , a∗). Then:(i) Agent t is not a lurker at h.(ii) If ih = t, then Cih(h) ∩ C⊆

t (h) = ∅.(iii) If xℓ ∈ Pj(h) for some non-lurker j and lurked object xℓ ∈ X L(h), then j = t and

C⊆j (h

′) = X (h).

The agent t who moves at h′ is called the terminator.

We also prove the following additional lemmas.

Lemma 14. Let h be a history such that L(h) = ∅. For any superhistory h′ of the formh′ = (h, a∗, a∗, . . . , a∗), we have ih′ /∈ L(h).

Lemma 14 has the following key implication: let h be a history with lurkers L(h), andh′ = (h, a∗, . . . , a∗, ax) be a superhistory such that x is the next object to be clinched (withpossibly agents passing in the mean time). Then, the agent that clinches x is not a lurker.

Proof. Let L(h) = {ℓh1 , . . . , ℓhλ(h)} be the set of lurkers at h and X L(h) = {xh1 , . . . , x

hλ(h)}

the set of lurked objects. Assume that the statement was false, and let h′ be the smallestsuperhistory of h such that ih′ = ℓhm for a lurker ℓhm (that is, ih′′ /∈ L(h) for all h ⊆ h′′ ⊊ h′).Note first that, for any h′′ such that h ⊆ h′′ ⊊ h′, ih′′ = j ∈ L(h), and if there existssome lurked xh

m ∈ C⊆j (h

′′), by BG Lemma 10, there is no passing action at h′′, which is acontradiction. Therefore, any clinching action ay ∈ A(h′′) clinches some y ∈ X (h) \ X L(h),and for all terminal histories h ⊃ (h′′, ay), each lurker ℓhm ∈ L(h) receives his lurked objectxhm. Finally, consider history h′. By BG Lemma 11, for each ℓhm ∈ L(h), Pℓhm

(h′) = Pℓhm(h) =

X (h) \ {xh1 , . . . , x

hm−1} (note that h′ is reached from h via a series of passes, and so X (h) =

X (h′)), and Top(≻ℓhm, Pℓhm

(h′)) = xhm for all types ≻ℓhm

such that h′ is on the path of play.Therefore, by pruning and greedy strategies, at h′, there is no clinching action ax for anyx ∈ Pℓhm

(h′) \ {xhm}. Thus, the only possibility is that every action a ∈ A(h′) clinches xh

m.84

This then implies that ℓhm gets xhm at all terminal h ⊃ h′. Combining this with the previous

statement that ℓhm gets xhm for all terminal h ⊃ (h′′, ay) for any h ⊆ h′′ ⊊ h′ and clinching

action ay ∈ A(h′′), we conclude that ℓhm gets xhm for all terminal h ⊃ h, which means that ℓhm

was not a lurker at h, which is a contradiction. ■

Lemma 15. Let h be a history such that L(h) = ∅ and A(h) = L(h) ∪ {i, j}, where i and j

are active non-lurkers at h, and let y ∈ X L(h) be an unlurked object at h. Further, assumethat (i) ih = i (ii) y ∈ Ci(h) ∩ C⊊

j (h), and define x = Top(≻j, X L(h)). Then, x ≻j y.

84Note that there cannot be a passing action either: if there were, then, since every history is non-trivial,there must be another action. But, as just argued, there can be no clinching actions for any other x = xh

m,and thus there must be a clinching action for xh

m, and the passing action would be pruned.

50

Page 51: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

Proof. Let h′ ⊊ h be the largest subhistory such that y ∈ Cj(h′), and note that

for this history, Pj(h′) = Pj(h).85 By construction, j passed at h′ when she was offered

to clinch y. If Pj(h) ∩ X L(h) = ∅, then, by BG Lemma 13, j is the terminator (i.e.,j = t), and so by that same lemma, Ci(h) ∩ C⊆

j (h) = ∅, which is a contradiction (sincey ∈ Ci(h) ∩ C⊆

j (h)). Therefore, Pj(h) = X L(h),86 and so Pj(h′) = X L(h) as well. Since j

passes at h′ and y ∈ Cj(h′), Top(≻j, Pj(h

′)) ≻j y. Since Pj(h′) = X L(h), Top(≻j, Pj(h

′)) =

Top(≻j, XL(h)) = x ≻j y, as required. ■

A.6.6 Proofs of the Key Lemmas

In the proofs that follow, we will often make statements referring to generic “roles” in a gameform Γ, to state properties of Γ that are independent of the specific agent that is assigned tothat role. For instance, we previously defined Ci(h) as the set of outcomes that are clinchablefor an agent i at h. Below, we will sometimes write Cr(h) to refer to the set of outcomesthat are clinchable for the role r ∈ R at h, or Pr(h) for the set of outcomes that are possiblefor role r. (If the role assignment function is such that σ(r) = i, then Ci(h) = Cr(h),Pi(h) = Pr(h), etc.) Analogously to the sets A(h) and L(h) for active agents and lurkers ata history h, we write AR(h) for the set of active roles at a history h, and LR(h) for the setof roles that are lurkers at h. When we want to refer to the game form with agents assignedto roles via a specific role assignment function σA, we will write ΓA. In the proofs, we willoften move fluidly between agents and roles; to avoid confusion, we use the notation i, j, k

to refer to specific agents, and the notation r, s, t to refer to generic roles. Finally, note thatwhile the set of lurkers at any h may differ depending on the role assignment function, theset of lurked objects (and the order in which they become lurked) depends only on h, and isindependent of the specific agent assigned to the role that moves at h.

Unless otherwise specified, when we write the phrase “i clinches x at h” (or similarvariants), what is meant is that i moves at h, takes some clinching action ax ∈ A(h), andreceives object x at all terminal histories h ⊇ (h, ax).

Finally, the following remark is simply a restatement of part (iii) of the definition of alurker, but deserves special emphasis, as it will arise frequently in the arguments below.

Remark 3. If an object x has been offered to an active non-lurker at a history h (i.e.,x ∈ C⊆

i (h) for some i ∈ L(h)), then x /∈ X L((h, a)) for any a ∈ A(h).85If Pj(h) ⊊ Pj(h

′) (because some new object became lurked between h′ and h, and so disappeared asa possibility for j), then there must be a more recent subhistory h′′ ⊋ h′ where role j was re-offered theopportunity to clinch y, by definition of a millipede game (or, more primitively, by OSP).

86For any active nonlurker i at any history h, either Pi(h) = X (h) or Pi(h) = XL(h), with the formerholding for at most one of the (possibly) two active non-lurkers; see Remark 7.1 of Bade and Gonczarowski(2017).

51

Page 52: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

Proof of Lemma 6

Let agent i∗ be the first agent to clinch in game Γσ, which induces the ordering of the firstsegment of agents in step 1 of the ordering algorithm. Let X L(h∗) = {x1, . . . , xn} be the setof lurked objects at h∗ (which may be empty).

Case (1): A(h) = L(h) ∪ {i∗}.If i∗ clinches an unlurked object y ∈ X L(h∗), then, in Γσ, all lurkers get their lurked

objects (the oldest lurker ℓ1 gets x1, the second oldest lurker ℓ2 gets x2, etc.), and in theresulting serial dictatorship fσ, the agents are ordered fσ : ℓ1, ℓ2, . . . , ℓn, i

∗. By BG Lemma11, for each lurker ℓm, we have xm = Top(≻ℓm ,X \ {x1, . . . , xm−1}). When it is agent ℓm’sturn in the serial dictatorship, she is offered to choose from X \ {x1, . . . , xm−1}, and thusselects xm. Finally, consider agent i∗. In game Γσ, when she clinches y at h∗, it is unlurked.By BG Lemma 9, XL(h∗) ⊆ Pi(h

∗) ∪ C⊂i (h

∗), which implies that so y = Top(≻i∗ , X L(h∗)).At her turn in the serial dictatorship, the set of objects remaining is precisely X L(h∗), andso i∗ will select y.

If, on the other hand, i∗ clinches some lurked object xm, then all older lurkers ℓ1, . . . , ℓm−1

get their lurked objects in Γσ, and the resulting serial dictatorship begins as fσ : ℓ1, . . . , ℓm−1, i∗.

By an argument equivalent to the previous paragraph, each of these agents once again getsthe same object under the serial dictatorship.87 Then, in Γσ, agent ℓm is offered to clinchanything from X \ {x1, . . . , xm}. If ℓm takes another lurked object xm′ for some m′ > m,then each lurker ℓm+1, . . . , ℓm′−1 is assigned to their lurked object, and we add to the se-rial dictatorship order as fσ : ℓ1, . . . , ℓm−1, i

∗, ℓm+1, . . . , ℓm′−1, ℓm. By the same argument asabove, at their turn in the resulting SD, each agent ℓm+1, . . . , ℓm′−1, ℓm gets the same objectin the SD.88 This process continues until someone eventually takes an unlurked object, allremaining lurkers are ordered, and step 1 is completed.

Case (2): A(h) = L(h) ∪ {i∗, j} for some j ∈ A(h) \ (L(h) ∪ {i}).First consider the case that i∗ clinches an unlurked object y ∈ XL(h∗). If y /∈ C⊆

j (h∗),

then the argument is exactly the same as in Case (1) (note that j is not ordered in step 1in this case). If y ∈ C⊆

j (h∗), then the step 1 partial order is ℓ1▷

1 · · · ▷1ℓλ▷1{i∗, j}. We mustshow that any serial dictatorship run under fσ : ℓ1, . . . , ℓn, i

∗, j, . . . and f ′σ : ℓ1, . . . , ℓn, j, i

∗, . . .

result in the same outcome as Γσ for these agents. For the lurkers, the argument is as abovein either case. For i∗ and j, in game Γσ, by construction, y ∈ Cj(h

′) for some h′ ⊊ h∗. Let

87For agent i∗, since she took a lurked object at h∗ in Γσ, we have xm = Top(≻i,X ), and thus, at herturn in the serial dictatorship, she will once again select xm, since it is still available.

88When it is agent ℓm’s turn in the SD, the set of available objects is a subset of the set of objects thatwere offered to her when she clinched in Γσ : X \{x1, . . . , xm′−1} ⊆ X \ {x1, . . . , xm}. However, xm′ belongsto both sets, and so since ℓm took xm′ in Γσ, she will also to take it at her turn in the SD, when her offerset is smaller.

52

Page 53: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

z = Top(≻j, X L(h∗)), and note that by Lemma 15, z ≻j y. Since i clinched y at h∗, we havey ≻i z. In the serial dictatorship, after all lurkers have picked, the set of remaining objectsis precisely X L(h∗). Thus, it does not matter whether i∗ or j is ordered next in the serialdictatorship, as there is no conflict between them: in both cases, i∗ will take y, and j willtake z, and both fσ and f ′

σ give the same allocation as Γσ. For the case where i∗ beginsby clinching some lurked object xm ∈ X L(h∗), we consider agent j and the lurker who, inthe chain of assignments, eventually takes an unlurked object y; otherwise, the argument isanalogous.

This shows that we get the same allocation for all agents ordered in step 1 of the orderingalgorithm. If all active agents at A(h∗) are processed in step 1 of the ordering algorithm,then we effectively have a smaller subgame on the remaining agents, and we just repeat thesame argument. If not, then there is at most one active agent j ∈ A(h∗) who is not processedin step 1. Agent j has been previously offered some objects in the set C⊆

j (h∗) (note that

C⊆j (h

∗) ⊆ X L(h)). The subgame that begins after all of the agents in step 1 have clinchedcan equivalently be written as a Pareto efficient millipede subgame that begins with agent jbeing offered C⊆

j (h∗) at the “root node”. We then find the first agent to clinch (after a series

of passes) in this subgame, and repeat the same argument as for step 1 above. ■

Proof of Lemma 7

We will first show the result for k = 1, and the proof for remaining steps will follow recur-sively. We first show that h1

A = h1B. Towards a contradiction, assume h1

A = h1B, and, wlog,

h1A ⊊ h1

B. Let r = ρ(h1A) be the role associated with history h1

A, and define iA = σA(r)

and iB = σB(r), where iA = iB. Since there can be at most one passing action at h1A and

h1A ⊊ h1

B, agent iA must clinch some xA at h1A in ΓA, while agent iB must pass at h1

A in ΓB.Let AR(h

1A) be the set of active roles at h1

A. Note that by definition, LR(h1A) ⊆ LR(h

1B) and

X L(h1A) ⊆ X L(h1

B). Also, for the constructed partial order ▷A, let gA(i) = |j : j ▷A i| + 1.89

Define gB similarly. Since we assume ▷1A = ▷1B, we have gA(i) = gB(i) for all i ordered instep 1 of the ordering algorithm applied to ΓA and ΓB, respectively.

Since there is a passing action at h1A, we have xA /∈ X L(h1

A) (by BG Lemma 10). SinceiA clinches an unlurked object xA ∈ X L(h1

A) at h1A, we have xA = Top(≻iA , XL(h1

A)),90 andalso gA(iA) = λ(h1

A) + 1, where λ(h1A) = |LR(h

1A)| is the number of lurkers that are present

at h1A. Therefore, gB(iA) = λ(h1

A) + 1 as well, which implies X L(h1A) = X L(h1

B).91 This also89This is almost the same as i’s picking order in the resulting serial dictatorship, except that this allows

for the fact that two agents may tie under ▷A, i.e., gA(i) = gA(i′) if i ⋫A i′ and i′ ⋫A i

90This follows because XL(h1A) ⊆ PiA(h

1) ∪ C⊂iA(h1

A), by Lemma 9.91To see this, note that if XL(h1

B) ⊋ XL(h1A), then xλ(h1

A)+1 = xA, i.e., the (λ(h1A) + 1)th lurked object

must be xA (because the ordering algorithm puts the agent who receives xλ(h1A)+1 as the (λ(h1

A)+1)th agent

53

Page 54: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

means that LR(h1A) = LR(h

1B). Let xB be the object clinched at h1

B.Case (1): xB /∈ X L(h1

B).Subcase (1).(i): ρ(h1

B) = r. There can be at most one other active non-lurker role ath1B, denoted s ∈ LR(h

1B). We have σ−1

B (r) = iA (or else iA would again clinch xA at h1A),

and σ−1B (iA) = rn for any lurker role rn′ ∈ LR(h

1B) (because then gB(iA) = n′ < λ(h1

A) + 1, acontradiction).92 Thus, it must be that σB(s) = iA, ih1

B= iA, and iA clinches xA at h1

B (i.e.,xB = xA). By construction of ▷1B, xA ∈ C⊆

j (h1B), where j = σB(r), and so gB(iA) = gB(j) =

λ(h1B) + 1 = λ(h1

A) + 1. So, gA(j) = λ(h1A) + 1; in other words, in ΓA, when iA clinches xA

at h1A, j must be an active non-lurker at h1

A, and xA ∈ C⊆s (h

1A), where s = σ−1

A (j) is j’s rolein ΓA. Since σB(s) = iA, in game ΓB, there is some h′ ⊊ h1

A such that xA ∈ CiA(h′) and i

passes at h′. Let x = Top(≻iA , X L(h1A)), and note that by Lemma 15, x ≻iA xA. However,

we saw above that xA = Top(≻iA , X L(h1A)), which is a contradiction.

Subcase (1).(ii): ρ(h1B) = r. In game ΓB, ih1

B= iB, and, at h1

B, iB clinches some xB =xA. Since xB is unlurked at h1

B, we have xB = Top(≻iB , X L(h1B)). Since gB(iA) = λ(h1

A) + 1

(as required by ▷1A = ▷1B), we have iA ∈ A(h1B) and xB ∈ C⊆

iA(h1

B) in ΓB. This again impliesgA(iA) = gA(iB) = gB(iA) = gB(iB) = λ(h1

A) + 1, i.e., iA and iB tie in both ▷1A and ▷1B. Lets = σ−1

B (iA). Since gA(iB) = λ(h1A) + 1, iB must have been an active nonlurker at h1

A in ΓA,which means that σA(iB) = s. Therefore, in game ΓA, iB passes at some history h′ ⊊ h1

A

such that xB ∈ Cs(h′). An argument equivalent to the previous paragraph applied to iB

again reaches a contradiction.Case (2): xB ∈ X L(h1

B). Note that in this case, since a lurked object is clinchable ath1B, there is no passing action at h1

B, by BG Lemma 10. Further, the role/agent who movesat h1

B satisfies the conditions of the terminator t defined in BG Lemma 13; denote ρ(h1B) = t,

and note that C⊆t (h

1B) = X (h1

B) = X . Also, recall from the discussion before Case (1) thatgA(iA) = λ(h1

A) + 1, where λ(h1A) = |LR(h

1A)|, and therefore, gB(iA) = λ(h1

A) + 1 as well.Subcase (2).(i): t = r. In this case, in ΓA, when iA clinches xA at h1

A, we havexA = Top(≻i,X ) (because σ−1

A (iA) = t and iA chose to clinch first). Now, since σ−1B (iA) = t,

the only way for gB(iA) = λ(h1A)+ 1 is for iA to be the active non-lurker at h1

B that does notclinch, and y ∈ C⊂

iA(h1

B), where y is the unlurked object chosen by some lurker iℓ ∈ L(h1B) in

the assignment chain initiated when iB selected xB at h1B. Let s = σ−1

B (iA). Since iℓ ∈ L(h1B),

and chose y at her turn, we have y = Top(≻iℓ , X L(h1B)). Note that gB(iℓ) = λ(h1

A) + 1, andso, since ▷1A = ▷1B, we have gA(iℓ) = λ(h1

A) + 1 as well. This is only possible if σ−1A (iℓ) = s,

and xA ∈ C⊂s (h

1A). But then, in game ΓB, agent iA(= σB(s)) was offered to clinch xA at some

in the ordering, and we know gB(iA) = λ(h1A)+ 1). Thus, let h′ ⊋ h1

A be the history where xA first becomeslurked. Note that xA ∈ C⊆

r (h′). However, role r is still a non-lurker at h′, and so xA cannot become lurked(see the definition of lurker /Remark 3).

92Note that in this case, xB is unlurked, and so all lurkers are immediately assigned to their lurked objects.

54

Page 55: Obvious Dominance and Random Prioritypeople.virginia.edu/~pgt8y/Pycia-Troyan-SOSP.pdf · First presentation: April 2016. First posted draft: June 2016. For their comments, we would

history h′ ⊊ h1B. Since xA = Top(≻i,X ), iA clinches at h′ in ΓB, which is a contradiction.

Subcase (2).(ii): t ≠ r. In this case, in Γ_A, when i_A clinches x_A at h^1_A, since σ_A(t) ≠ i_A, we have x_A ∉ C^⊆_t(h^1_A), by BG Lemma 13. Therefore, g_A(i_A) = λ(h^1_A) + 1, and g_A(i′) ≠ λ(h^1_A) + 1 for all other i′ ≠ i_A ordered in step 1 (in other words, i_A does not tie with another agent in ▷^1_A), and so the same is true for g_B.^{93} Since the first agent to clinch in Γ_B is σ_B(t) = j ≠ i_A, who clinches some lurked object x_B ∈ X^L(h^1_B), and σ_B(r) = i_B ≠ i_A, σ_B^{-1}(i_A) = r_{n′} for some lurker role r_{n′} that lurks object x_{n′} ∈ X^L(h^1_B). Now, when i_A eventually clinches x_A (after someone else has selected x_{n′} in the chain of lurker assignments), x_A ∈ C^⊆_{i_B}(h^1_B), where i_B = σ_B(r) (since h^1_B ⊋ h^1_A and x_A ∈ C_r(h^1_A)), which implies that g_B(i_B) = λ(h^1_A) + 1, i.e., i_B ties with i_A—a contradiction.

Thus far, we have shown that ▷^1_A = ▷^1_B implies h^1_A = h^1_B, or, in other words, if step 1 of the ordering algorithm produces the same ordering, then step 1 must be initiated at the same history in Γ_A and Γ_B. Next, we show that σ_A(r′) = σ_B(r′) for all r′ that are ordered in step 1 of Γ_A and Γ_B.

Define h^1 := h^1_A = h^1_B. Let LR(h^1) = {r_1, . . . , r_{λ(h^1)}} be the set of lurker-roles at h^1, and X^L(h^1) = {x_1, . . . , x_{λ(h^1)}} the set of lurked objects. Notice that, since h^1_A = h^1_B, the lurked objects and active lurker-roles are equivalent in both Γ_A and Γ_B. Towards a contradiction, assume that σ_A(r′) ≠ σ_B(r′) for some r′ that is ordered in step 1. Letting r_0 = ρ(h^1), write

σ_A(r_0) → x_{a_1} → σ_A(r_{a_1}) → x_{a_2} → · · · → σ_A(r_{a_M}) → y_A     (1)

to represent the chain of clinching that is initiated in Γ_A by agent σ_A(r_0) at h^1: agent σ_A(r_0) clinches some (possibly lurked) object x_{a_1}, the agent σ_A(r_{a_1}) who was lurking x_{a_1} clinches lurked object x_{a_2}, etc., until eventually agent σ_A(r_{a_M}) ends the chain by clinching some unlurked object y_A.^{94} Similarly, for Γ_B, write

σ_B(r_0) → x_{b_1} → σ_B(r_{b_1}) → x_{b_2} → · · · → σ_B(r_{b_M}) → y_B.     (2)

We will show that chains (1) and (2) above are in fact equivalent: σ_A(r_0) = σ_B(r_0) and σ_A(r_{a_m}) = σ_B(r_{b_m}) for all m. Since any lurked object x_n ∈ X^L(h^1) that does not appear in the above chain must be assigned to its lurker σ_A(r_n), this will deliver the result.

^{93} Note that since LR(h^1_A) = LR(h^1_B), there cannot be any additional role r′ ∉ LR(h^1_A) ∪ {r, t} that is active at h^1_A.

^{94} Recall that any lurked object that does not appear in the chain is assigned to its lurker. For example, if a_1 < a_2, then x_{a′} is assigned to σ_A(r_{a′}) for all a′ such that a_1 < a′ < a_2.
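To fix ideas, here is a purely illustrative instance of this chain notation (the indices below are hypothetical and not taken from the argument): suppose λ(h^1) = 3, so that X^L(h^1) = {x_1, x_2, x_3} with lurker roles r_1, r_2, r_3, and suppose the agent in role r_0 clinches x_2. A possible chain (1) is then

σ_A(r_0) → x_2 → σ_A(r_2) → x_3 → σ_A(r_3) → y_A,

and the lurked object x_1, which does not appear in the chain, is assigned to its lurker σ_A(r_1), as in footnote 94.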

First, note that if x_{a_1} = x_{b_1} and x_{a_1} ∉ X^L(h^1), then both (1) and (2) begin with the same agent taking the same (unlurked) object. Therefore, all lurkers are immediately assigned to their lurked objects. If there is another active non-lurker role s ∉ LR(h^1), then the agent in role s clinches his favorite remaining object. In either case, it is clear that σ_A(r′) = σ_B(r′) for all roles r′ ordered in step 1 of the ordering algorithm. Thus, assume that x_{a_1} ≠ x_{b_1}, and therefore, σ_A(r_0) ≠ σ_B(r_0).

Claim 2. At least one of x_{a_1} or x_{b_1} is lurked at h^1; i.e., X^L(h^1) ∩ {x_{a_1}, x_{b_1}} ≠ ∅.

Proof of claim. Assume that x_{a_1}, x_{b_1} ∉ X^L(h^1). For shorthand, define σ_A(r_0) = i_A, and σ_B(r_0) = i_B. Then, since ▷^1_A = ▷^1_B, we have g_A(i_A) = g_A(i_B) = g_B(i_A) = g_B(i_B) = λ(h^1) + 1, i.e., i_A and i_B tie under both ▷^1_A and ▷^1_B. This means that there is another active non-lurker role s ∈ AR(h^1) \ (LR(h^1) ∪ {r_0}), and σ_A(r_0) = σ_B(s) = i_A, σ_A(s) = σ_B(r_0) = i_B. Further, x_{a_1}, x_{b_1} ∈ C^⊂_s(h^1).

If L(h^1) = ∅, then X = P_{r_0}(h^1) ∪ C^⊆_{r_0}(h^1), which implies that x_{a_1} = Top(≻_{i_A}, X) and x_{b_1} = Top(≻_{i_B}, X). But, this means that in Γ_A, i_B will clinch x_{b_1} at some h′ ⊊ h^1 (in particular, the earliest h′ such that x_{b_1} ∈ C_s(h′)), a contradiction. Therefore, L(h^1) ≠ ∅. Now, by BG Lemma 9, x_{a_1} = Top(≻_{i_A}, X^L(h^1)) and x_{b_1} = Top(≻_{i_B}, X^L(h^1)). Since x_{b_1} ∈ C^⊂_s(h^1), there is some history h′ ⊊ h^1 such that in game Γ_A, x_{b_1} ∈ C_{i_B}(h′), and i_B passes at h′. By Lemma 15, Top(≻_{i_B}, X^L(h^1)) ≻_{i_B} x_{b_1}, which is a contradiction. ■

By the previous claim, at least one (possibly both) of x_{a_1} and x_{b_1} are lurked objects; wlog, assume that x_{a_1} ∈ X^L(h^1). Since C_{r_0}(h^1) contains a lurked object, BG Lemma 13 implies r_0 = t is the terminator role. Consider agent σ_A(r_{a_M}) = i′, and note that x_{a_M} ≻_{i′} y_A = Top(≻_{i′}, X \ {x_1, . . . , x_{a_M}}), where x_{a_M} is the object i′ lurks at h^1 in Γ_A.

We first claim that σ_B^{-1}(i′) = r_{a_M} and y_A = y_B. To see this, first note that σ_B^{-1}(i′) ≠ r_{n″} for any n″ > a_M. Indeed, if this were the case, this would imply that x_{n″} = Top(≻_{i′}, X \ {x_1, . . . , x_{n″−1}}), where x_{n″} is the object lurked by role r_{n″}. However, this contradicts y_A = Top(≻_{i′}, X \ {x_1, . . . , x_{a_M}}). Next, we show that σ_B^{-1}(i′) ≠ r_{n″} for any n″ < a_M either. In game Γ_B, when i′ becomes a lurker for x_{n″} at some h′, he eventually must get no worse than his second-best choice from the set P_{i′}(h′) = X \ {x_1, . . . , x_{n″−1}}. Since x_{a_M} ∈ P_{i′}(h′), we have x_{n″} ≻_{i′} x_{a_M} ≻_{i′} y_A, and i′ can do no worse than x_{a_M}, which means he cannot end up with y_A—a contradiction. The final case to consider is σ_B^{-1}(i′) = r′ for some r′ ∉ LR(h^1). We cannot have r′ = r_0 (since r_0 is the terminator role, i′ would then be able to clinch her top choice at h^1 in Γ_B, and Top(≻_{i′}, X) ≠ y_A). Thus, r′ must be the (unique) other non-lurker role that is active at h^1: r′ = AR(h^1) \ (LR(h^1) ∪ {r_0}). Recall that, for this role, P_{r′}(h^1) = X^L(h^1). Further, y_A ∉ C^⊆_{r′}(h^1) (or else i′ would have clinched y_A at some strict subhistory of h^1), and y_B ∈ C^⊆_{r′}(h^1) (or else i′ would not be ordered in step 1 of the ordering algorithm under Γ_B). However, the former implies that |{j : g_A(j) = λ(h^1) + 1}| = 1, while the latter implies |{j : g_B(j) = λ(h^1) + 1}| = 2, which contradicts ▷^1_A = ▷^1_B. Therefore, σ_A^{-1}(i′) = σ_B^{-1}(i′) = r_{a_M}. Finally, note that σ_A(r_{a_M}) = σ_B(r_{a_M}) further implies y_A = y_B and r_{a_M} = r_{b_M}. Indeed, if not, then the final person to clinch in chain 2 is some σ_B(r_{b_M}) = j ≠ i′. However, agent i′ is a lurker in Γ_B for x_{a_M} that was not previously taken by any other agent in chain 2, and thus, i′ is assigned to x_{a_M}, which is a contradiction.

Next, consider agent σ_A(r_{a_{M−1}}) = i′ in chain 1, i.e., i′ lurks x_{a_{M−1}} in Γ_A and eventually ends up with (lurked) object x_{a_M}. By construction of the chain, x_{a_M} = Top(≻_{i′}, X \ {x_1, . . . , x_{a_{M−1}}}). Similar to the previous paragraph, σ_B^{-1}(i′) ≠ r_{n″} for any n″ > a_M − 1. Indeed, if this were true, then x_{n″} = Top(≻_{i′}, X \ {x_1, . . . , x_{n″−1}}). If n″ < a_M, x_{n″} ≻_{i′} x_{a_M}, which contradicts x_{a_M} = Top(≻_{i′}, X \ {x_1, . . . , x_{a_{M−1}}}). If n″ > a_M, then x_{a_M} is not possible for i′ (BG Lemma 11), which is also a contradiction. Finally, n″ ≠ a_M, since we already have shown that σ_B(r_{a_M}) = σ_A(r_{a_M}). Similarly, σ_B^{-1}(i′) ≠ r_{n″} for any n″ < a_M − 1, either, since this would imply that x_{n″} ≻_{i′} x_{a_{M−1}} ≻_{i′} x_{a_M}, and x_{a_{M−1}} ∈ P_{i′}(h′) at the history h′ where i′ became a lurker for x_{n″}. Since i′ cannot do any worse than his second-best choice from P_{i′}(h′), we have a contradiction. The last case to consider is σ_B^{-1}(i′) = r′ for r′ = AR(h^1) \ (LR(h^1) ∪ {r_0}).^{95} But, for this role, P_{r′}(h^1) = X^L(h^1), and thus, no lurked objects are possible for i′, which is a contradiction. Therefore, σ_A(r_{a_{M−1}}) = σ_B(r_{a_{M−1}}). As in the previous paragraph, this also implies that r_{a_{M−1}} = r_{b_{M−1}} and x_{a_M} = x_{b_M}.

^{95} The case where σ_B^{-1}(i′) = r_0 can be dispensed with similarly as in the previous paragraph.

The same argument can be repeated to show that x_{a_m} = x_{b_m} and σ_A(r_{a_m}) = σ_B(r_{b_m}) for all m = 1, . . . , M. The final case to consider is role r_0. Let σ_A(r_0) = i′. Since i′ starts the chain of assignments at h^1 by taking some lurked object x_{a_1} ∈ X^L(h^1), we have x_{a_1} = Top(≻_{i′}, X). Once again, we cannot have σ_B^{-1}(i′) = r_{n″} for any n″ < a_1, as this would imply that x_{n″} ≻_{i′} x_{a_1}, which is a contradiction. We also cannot have σ_B^{-1}(i′) = r_{a_1}, since we have already shown that σ_B(r_{a_1}) = σ_A(r_{a_1}). Further, we cannot have σ_B^{-1}(i′) = r_{n″} for any n″ > a_1, since x_{a_1} would not be possible for i′. Last, we cannot have σ_B^{-1}(i′) = r′ for r′ = AR(h^1) \ (LR(h^1) ∪ {r_0}), since no lurked objects are possible for the agent in role r′. Therefore, σ_B^{-1}(i′) = r_0, and chains 1 and 2 are equivalent.

To summarize: We have shown that if we have two role assignment functions σ_A, σ_B such that ▷^1_A = ▷^1_B, then (i) σ_A^{-1}(i) = σ_B^{-1}(i) for all agents i that are ordered in step 1 of the ordering algorithm and (ii) at the conclusion of the chain of clinching initiated by the first agent to start the chain at h^1_A/h^1_B, we will end up at the same history in Γ_A as in Γ_B.^{96} At this point, we have a smaller subgame consisting of the agents and objects that were unmatched in the first round. This subgame is another Pareto efficient millipede game (that may begin with the unique unmatched agent from step 1 carrying over his endowment C^⊆_j(h^1), if such an agent exists), and so we simply repeat the above arguments for each round k = 1, . . . , K. ■

^{96} This follows because the chain of clinching starts at the same history in both games, and all agents who are ordered in step 1 are in the same roles, so will take the same actions. While there may be some other role r′ that is not ordered in step 1 and σ_A(r′) ≠ σ_B(r′), this agent must have passed every time she was called to move at a history h′ ⊊ h^1, and is not called to move in the chain of lurker assignments, and so at the end of the chain, we will still end up at the same history to begin the next round/subgame.

Proof of Lemma 8

Assume there are three permutations σ_A, σ_B, and σ_C that deliver (initial) partial orderings ▷_A, ▷_B, ▷_C as in the statement. We'll show that these three conditions lead to a contradiction.

As with Lemma 7, we show this first for the case that under ▷_A, all agents {i_1, . . . , i_n, i, j} are processed in step 1 of the ordering algorithm, and the argument for later steps will be equivalent. Let Γ_A, Γ_B, and Γ_C denote the specific games under role assignments σ_A, σ_B, and σ_C, respectively. Further, let h^∗_A, h^∗_B, and h^∗_C be the first history at which an object is clinched in the respective games, following a sequence of passes.^{97} In particular, this means that in ▷_A, agents {i_1, . . . , i_n} are getting lurked objects X^L(h^∗_A) = {x_1, . . . , x_n}, while agents i and j are getting some unlurked objects, y, z ∉ X^L(h^∗_A), respectively. By construction, one of i or j must be an active non-lurker at h^∗_A who is not called to move at h^∗_A; without loss of generality, assume that this is j. For notational purposes, denote by i_{h^∗_A} ∈ {i_1, . . . , i_n, i} the agent who moves (and clinches) at h^∗_A, thereby starting the chain of lurker assignments that ends with i clinching y followed by j clinching z (note that i_{h^∗_A} ∉ L(h^∗_A), by Lemma 14, and it is possible that i_{h^∗_A} = i). The structure of ▷_A implies that y ∈ C^⊂_j(h^∗_A).

^{97} That is, h^∗_α = (h_∅, a^∗, . . . , a^∗) for α = A, B, C, though the number of passes (a^∗'s) may vary.

Case 1: L(h^∗_A) = ∅

In this case, by definition of ▷_A, the set of active agents at h^∗_A is A(h^∗_A) = {i, j}, where i_{h^∗_A} = i. For notational purposes, let s = σ_A^{-1}(i) and s′ = σ_A^{-1}(j) be the roles assigned to agents i and j in game Γ_A, and note that y ∈ C^⊆_{s′}(h^∗_A). Further, both i and j are getting their first choice objects, i.e., Top(≻_i, X) = y and Top(≻_j, X) = z.

Now, consider σ_B, where the ordering algorithm produces i ▷_B j. By Remark 3, y cannot be the first lurked object in the game, and thus, for i to be ordered first without ties according to ▷_B, we must have X^L(h^∗_B) = ∅ and i_{h^∗_B} = i. These facts imply that σ_B^{-1}(i) = s′.

Now, consider σ_C, which begins j ▷_C i · · · . There are two subcases, depending on whether j is a lurker at h^∗_C or not.

Subcase 1.(i): j ∈ L(h^∗_C). In this case, by construction, j is the first lurker of the game and z is the first lurked object. Further, h^∗_C ⊇ h^∗_A. Now, in order for i to be the (unique) next agent added to ▷_C, either (i) y ∈ X^L(h^∗_C) and, in particular, y is the second object to become lurked in the game or (ii) X^L(h^∗_C) = {z} and i clinches y at h^∗_C. But, by Remark 3, y cannot be the second lurked object of the game, because y was previously offered to both roles s and s′, and, even after z becomes lurked by j at some history h′, we will still have y ∈ C^⊆_{r′}(h′) for the role r′ ∈ {s, s′} such that σ_C^{-1}(j) ≠ r′. Therefore, i must clinch y at h^∗_C ⊋ h^∗_A. Now, we have σ_C^{-1}(i) ≠ s, s′ (because Top(≻_i, X) = y, and so i would have clinched y earlier along the path to h^∗_C, since it has been previously offered to both roles). Therefore, agent k = σ_C(r′) is an active non-lurker at h^∗_C such that y ∈ C^⊆_k(h^∗_C), and so, by construction of ▷_C, we have j ▷_C {i, k} ▷_C · · · , which is a contradiction.

Subcase 1.(ii): j ∉ L(h^∗_C). In this case, L(h^∗_C) = ∅ and i_{h^∗_C} = j (since j is ordered first without ties). Further, σ_C^{-1}(j) ∈ {s, s′}. If σ_C^{-1}(j) = s′, then σ_C(s) = k ≠ i (or else we are back to σ_A) and h^∗_C ⊃ h^∗_A. Thus, y ∈ C^⊆_k(h^∗_A), which implies that i cannot be the next agent added to ▷_C uniquely—a contradiction.^{98} The last case is σ_C^{-1}(j) = s. Note that z ∉ C^⊂_s(h^∗_B),^{99} and so h^∗_C ⊃ h^∗_B. This further implies that y ∈ C^⊆_{s′}(h^∗_C) and so σ_C(s′) = k ≠ i (since otherwise, i would have clinched y prior to h^∗_C, because Top(≻_i, X) = y). By an argument similar to footnote 98, i cannot be the next agent added to the order uniquely, which is again a contradiction.

This completes the argument for Case 1.

^{98} For agent i to be ordered next in ▷_C without ties, she must be the first agent ordered in step 2 of the ordering algorithm. By Remark 3 again, y cannot be the next lurked object in the game, which means that i must clinch y at some h′ ⊋ h^∗_C such that L(h′) = ∅. But, y ∈ C^⊆_k(h′) and so, by construction of ▷_C, she will tie with agent k.

^{99} If z ∈ C^⊂_s(h^∗_B), then σ_B(s) = k ≠ j (if σ_B(s) = j, then j would clinch z prior to h^∗_B, since z = Top(≻_j, X)). But, by the same argument in footnote 98, j could not be the next agent added to ▷_B uniquely, a contradiction.

Case 2: L(h^∗_A) ≠ ∅.

Now, we consider the case where there are lurkers at h^∗_A (and hence also lurked objects, X^L(h^∗_A) ≠ ∅). By definition of ▷_A, we have AR(h^∗_A) = {r_1, . . . , r_n, s, s′}, where r_1, . . . , r_n ∈ LR(h^∗_A) are lurker roles, and s, s′ ∉ LR(h^∗_A) are non-lurker roles. Let ρ(h^∗_A) = s be the non-lurker role that moves at h^∗_A (and thus clinches in Γ_A).

Claim 3. At h^∗_A, y has been previously offered to both active non-lurker roles: y ∈ C^⊆_s(h^∗_A) and y ∈ C^⊆_{s′}(h^∗_A).

Proof of claim. If σ_A(s) = i, then it is obvious that y ∈ C^⊆_s(h^∗_A) by definition. If σ_A(s) = i_{n′}, then at h^∗_A, agent i_{n′} clinches a lurked object x_{n′} ∈ C_s(h^∗_A), which initiates the chain of lurker assignments.^{100} This implies that there is no passing action at h^∗_A (by Lemma 10) and that s is the terminator role t defined in Lemma 13. Therefore, by Lemma 13, y ∈ C^⊆_s(h^∗_A). That y ∈ C^⊆_{s′}(h^∗_A) for the other non-lurker role s′ follows immediately from the construction of ▷_A. ■

^{100} By definition, σ_A(s) ≠ j, and so this exhausts all possibilities.

Claim 4. At h^∗_B, X^L(h^∗_A) ⊆ X^L(h^∗_B). Similarly, at h^∗_C, X^L(h^∗_A) ⊆ X^L(h^∗_C).

Proof of claim. Assume not, and let X^L(h^∗_B) = {x_1, . . . , x_{n′}} ⊊ X^L(h^∗_A) = X^L(h^∗_B) ∪ {x_{n′+1}, . . . , x_n}. (Recall that following a sequence of passes to start the game, there is a unique order in which objects will become lurked that is independent of the role assignment function. Since h^∗_A and h^∗_B are by definition the first histories at which an object is clinched in their respective games, at least one of X^L(h^∗_A) ⊆ X^L(h^∗_B) or X^L(h^∗_B) ⊆ X^L(h^∗_A) must hold.) Agent i_{h^∗_B} (the agent who clinches at h^∗_B) cannot clinch x_n for any n ≤ n′. To see why, note that this would imply that agent i_{h^∗_B} is offered a previously lurked object at h^∗_B. By Lemma 10, there is no passing action at h^∗_B, which contradicts x_{n′+1} ∈ X^L(h^∗_A). Thus, the only other possibility consistent with ▷_B is that i_{h^∗_B} = i_{n′+1}, who clinches x_{n′+1} (which is unlurked) at h^∗_B, i.e., x_{n′+1} ∈ C_{i_{n′+1}}(h^∗_B). But, x_{n′+1} is the (unique) next object to become lurked in game form Γ following a sequence of passes from h^∗_B, which implies that x_{n′+1} ∉ C_{i_{n′+1}}(h^∗_B), by Remark 3. An analogous argument applies for h^∗_C. ■

Claim 5. X^L(h^∗_A) = X^L(h^∗_B) = X^L(h^∗_C).

Proof of claim. Given Claim 4, it is sufficient to show X^L(h^∗_B), X^L(h^∗_C) ⊆ X^L(h^∗_A). Let X^L(h^∗_A) = {x_1, . . . , x_n}, and note again that the order in which objects become lurked following a series of passes to start the game is unique and independent of the role assignment function. Thus, it is sufficient to consider the next object that can become lurked, x_{n+1}, and show that x_{n+1} ∉ X^L(h^∗_B) (resp. x_{n+1} ∉ X^L(h^∗_C)). Thus, assume that x_{n+1} ∈ X^L(h^∗_B), and note that this implies h^∗_B ⊋ h^∗_A. By construction of ▷_B, we must have x_{n+1} = y (the object received by i); indeed, if x_{n+1} ≠ y, then the agent, say k, who receives x_{n+1} will be such that k ▷_B i, which is a contradiction. However, since h^∗_B ⊋ h^∗_A, y has previously been offered to both active non-lurkers at h^∗_B (from the construction of ▷_A). Thus, by Remark 3, y cannot become the next object lurked, i.e., x_{n+1} ≠ y—a contradiction.

Next consider Γ_C, and assume that x_{n+1} ∈ X^L(h^∗_C). Let r_{n+1} be the role that lurks x_{n+1}, and h_{n+1} be the history at which role r_{n+1} becomes a lurker for x_{n+1} (i.e., role r_{n+1} passes at h_{n+1}, and becomes a lurker at h′ = (h_{n+1}, a^∗)). Note that h′ ⊃ h^∗_A. Further, from what we know about the structure of the game tree Γ from σ_A, there is another active non-lurker role at h_{n+1}, denoted r, and we have y ∈ C^⊆_{r_{n+1}}(h_{n+1}) and y ∈ C^⊆_r(h_{n+1}). Now, since i_1 ▷_C · · · ▷_C i_n ▷_C j ▷_C i · · · , it must be that x_{n+1} = z (the object received by j). Since i is uniquely ordered immediately after j, we have i_{n+1} = j, and i_{h^∗_C} = i, who clinches y at h^∗_C. (The only other possibility is that there is another lurked object at h^∗_C, x_{n+2} = y, but, by Remark 3, this is impossible, since y has been previously offered to both active non-lurkers at h_{n+1}.) Since i is ordered uniquely, σ_C(r) = i, by an argument equivalent to footnote 98. Now, this implies that i was previously offered y at some h″ ⊊ h^∗_C, and chose to pass, which implies Top(≻_i, P_i(h″)) = x ≻_i y. Letting h″ be the most recent subhistory such that y ∈ C_i(h″) and i passes, and noting that i chose to clinch y at h^∗_C, we conclude that x ∉ P_i(h^∗_C) and x = z.^{101} But, z ∉ X^L(h^∗_A), which contradicts that she clinches y in Γ_A.^{102}

^{101} Each time a new object becomes impossible for i (due to becoming lurked by another agent), i must once again be offered the opportunity to clinch y, by definition of a millipede game. Agent i must have passed at all such opportunities (including h″) up to h^∗_C, which implies that x = z.

^{102} There are actually two subcases here: if i_{h^∗_A} ≠ i, then i must be a lurker for some x_{n′}. At some point in the lurker assignment chain, someone (either i_{h^∗_A}, or an earlier lurker) takes x_{n′}; since z is still unlurked at that point, it is possible for i. Similarly, if i_{h^∗_A} = i, then z is still unlurked at h^∗_A, which again contradicts that i clinches y.

Claim 5 thus implies that LR(h^∗_A) = LR(h^∗_B) = LR(h^∗_C). Further, we know from BG Lemma 12 that there can be at most two active non-lurker roles at any history, which we will denote s and s′. By construction, we know that both of these roles are in fact active at h^∗_A, i.e., AR(h^∗_A) = LR(h^∗_A) ∪ {s, s′}. For h^∗_B, it is possible that only one of s or s′ are active (but this can only occur if h^∗_B ⊊ h^∗_A). A similar remark applies to h^∗_C.

Claim 6. h^∗_B, h^∗_C ⊊ h^∗_A.

Proof of claim. First, assume h^∗_A ⊆ h^∗_B. Then, when agent i clinches y in Γ_B (either at h^∗_B, or in the chain of lurker assignments that follows), it has already been offered to both of the agents in roles s and s′, including the (unique) active non-lurker that does not move at h^∗_B, say agent k, and so k is ordered in step 1 (and in particular, k will "tie" with i), which is a contradiction to the definition of ▷_B.

Next, assume that h^∗_A ⊆ h^∗_C. The agents processed in step 1 of the ordering algorithm applied to Γ_C are {i_1, . . . , i_n, j} (a set that does not include i), and the chain ends when j clinches z. Since h^∗_C ⊇ h^∗_A, both s, s′ ∈ AR(h^∗_C), and y ∈ C^⊆_s(h^∗_C) and y ∈ C^⊆_{s′}(h^∗_C). Since i is the next agent ordered in ▷_C without ties (in step 2 of the ordering algorithm), we have σ_C(s′) = i and σ_C(s) = j (by an argument equivalent to footnote 98). Since i clinches y ∈ X^L(h^∗_A) in Γ_A, we have y = Top(≻_i, X^L(h^∗_A)). Since σ_C(s′) = i, there is some h′ ⊊ h^∗_A such that y ∈ C_i(h′) and i passes at h′ in Γ_C. By Lemma 15, Top(≻_i, X(h^∗_A)) = x ≻_i y, which is a contradiction. ■

Claim 7. At h^∗_B, we have X^L(h^∗_B) ∩ C_{r′}(h^∗_B) = ∅, where ρ(h^∗_B) = r′. The same holds at h^∗_C.

Proof of claim. By Claim 6, h^∗_B ⊊ h^∗_A. This implies that there must be a passing action at h^∗_B, which means that x_{n′} ∉ C_{r′}(h^∗_B) for any x_{n′} ∈ X^L(h^∗_B) by Lemma 10. An equivalent argument holds for h^∗_C. ■

In words, Claim 7 says that the object that is clinched at h^∗_B/h^∗_C is not lurked. Note that the claim also implies that in both Γ_B and Γ_C, σ_B(r_{n′}) = σ_C(r_{n′}) = i_{n′} for all n′ = 1, . . . , n, and so i_{h^∗_B} = i, who clinches y first in Γ_B, and i_{h^∗_C} = j, who clinches z first in Γ_C.

We can now finish the proof of Lemma 8. Recall that s = ρ(h^∗_A) is the role of the first agent to clinch in Γ_A, and s′ is the role of the other active non-lurker at h^∗_A (for Γ_A, we know that σ_A(s′) = j). There are two cases.

Case (1): σ_A(s) = i. This is the case where the two non-lurkers at h^∗_A are {i, j}, so that s = σ_A^{-1}(i) and s′ = σ_A^{-1}(j). Note that at h^∗_A, both s, s′ ∉ LR(h^∗_A), and y ∈ C^⊆_{s′}(h^∗_A). Now, consider σ_B/▷_B. By the discussion following Claim 7, i_{h^∗_B} = i. This implies that σ_B^{-1}(i) = s′.^{103}

Now consider σ_C/▷_C. Again, by the discussion following Claim 7, i_{h^∗_C} = j; further, σ_C^{-1}(j) = s.^{104} If z ∈ C^⊂_s(h^∗_B), then, by an argument equivalent to footnote 98, it must be that σ_B(s) = j. Further, h^∗_B ⊊ h^∗_C (otherwise, j would clinch at h^∗_C ⊊ h^∗_B in Γ_B, since she has the same role in both games). Therefore, y ∈ C^⊂_{s′}(h^∗_C) (in particular, y ∈ C_{s′}(h^∗_B)). Again by the same argument as footnote 98, σ_C(s′) = i. But, σ_C(s′) = i implies that i clinches at h^∗_B ⊊ h^∗_C in Γ_C (since i has the same role as in Γ_B), which is a contradiction. Finally, if z ∉ C^⊂_s(h^∗_B), we once again have h^∗_B ⊊ h^∗_C and so y ∈ C^⊂_{s′}(h^∗_C), and we reach the same contradiction as in the previous case.

Case (2): σ_A(s) ≠ i. In this case, in Γ_A, i must be a lurker for some x_{n′}, and the first agent to clinch is some i_{h^∗_A} = i_{n_1} who clinches some lurked object x_{n_1}. This causes a chain of assignments of lurked objects, that ends with some other agent i_{n′} taking x_{n′}, after which i clinches y and all lurked objects x_{n′+1}, . . . , x_n are immediately assigned to their lurkers. Note that y ∈ C_{s′}(h^∗_A) here, by construction of Γ_A. So, we have σ_A(s) = i_{h^∗_A} ≠ i, and σ_A(s′) = j. Since there is a lurked object x_{n′} ∈ C_s(h^∗_A), there is no passing action at h^∗_A, by BG Lemma 10. Further, this implies that role s is the terminator role defined in BG Lemma 13.

In game Γ_B, the discussion following Claim 7 again gives i_{h^∗_B} = i. If σ_B^{-1}(i) = s, then, since i is the first agent to clinch and is the terminator, we have Top(≻_i, X) = y. But, in game Γ_A, i was a lurker for some x_{n′} ≠ y (since y is not a lurked object at h^∗_B), which implies Top(≻_i, X) ≠ y, a contradiction. Therefore, σ_B^{-1}(i) = s′.

In game Γ_C, the discussion following Claim 7 gives i_{h^∗_C} = j. Just as in Case (1) above, we can show that σ_C^{-1}(j) = s. Since s is the terminator role of BG Lemma 13, and j clinches z at h^∗_C, we conclude that Top(≻_j, X) = z. If z ∈ C^⊂_s(h^∗_B), then let h′ ⊊ h^∗_B be a history such that z ∈ C_s(h′). By an argument equivalent to footnote 98, we have σ_B(s) = j. However, this implies that j must clinch z at h′ in Γ_B, which contradicts that i_{h^∗_B} = i. Thus, z ∉ C^⊂_s(h^∗_B), which implies that h^∗_C ⊋ h^∗_B, and so y ∈ C^⊂_{s′}(h^∗_C) (in particular, y ∈ C_{s′}(h^∗_B)). By an argument equivalent to footnote 98, σ_C(s′) = i. However, if σ_C(s′) = i, then i clinches at h^∗_B ⊊ h^∗_C in Γ_C (since σ_C(s′) = i = σ_B(s′)), which contradicts that the first agent to clinch in Γ_C is i_{h^∗_C} = j at h^∗_C.

^{103} In particular, we cannot have σ_B^{-1}(i) = s again, because this would imply h^∗_B = h^∗_A, contradicting Claim 6.

^{104} If σ_C^{-1}(j) = s′, then j must pass at all h′ ⊊ h^∗_A at which she is called to play (since σ_A(s′) = j, and j passed at all such h′ in Γ_A). Since i_{h^∗_C} = j, this implies h^∗_C ⊋ h^∗_A, which contradicts Claim 6.

We have thus shown that there cannot be three (initial) partial orderings ▷_A, ▷_B, ▷_C of the form given in the statement of Lemma 8 for the first step of the ordering algorithm. For the remaining steps, notice that the subgame after clearing all of the agents in step 1 is simply another millipede game with lurkers (possibly with one agent from the first stage carrying over her "endowment" to the second). We then simply repeat the same arguments as above, step-by-step, until we reach the end of the game. This completes the proof of Lemma 8.

A.6.7 Proof of Theorem 11

By Corollary 2 of Lemma 7, the mapping from role assignments σ to partial orders ▷_σ generated by the ordering algorithm is an injection. Lemma 8 shows that we can break all the ties—recursively, coding step by coding step—creating a complete order ▷^C_σ in a way that preserves the injectivity. In this way we obtain an injection from role assignments σ to serial dictatorships with orders ▷^C_σ. Because in this injection the domain of role assignments σ and the range of serial dictatorship orderings ▷^C_σ are finite and have equal size, this injection is actually a bijection.
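To make the counting explicit (this elaboration is ours, and it assumes, as in the allocation environment studied here, that there are as many roles as agents, say N of each): the role assignments σ are bijections from roles to agents, of which there are N!, and the complete serial-dictatorship orderings ▷^C_σ of the N agents also number N!; an injection between two finite sets of equal cardinality is onto, hence a bijection.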

It remains to check that the millipede Γ_σ generates the same allocation as the serial dictatorship with ordering ▷^C_σ. This is implied by Lemma 6 because, by definition, each complete order ▷^C_σ generated by the tie-breaking in partial order ▷_σ is consistent with ▷_σ. QED

A.7 Proof of Theorem 12

Suppose a game Γ is strongly obviously strategy-proof, efficient, and satisfies equal treatment of equals. Our characterization of SOSP and efficient mechanisms tells us that we can assume that Γ is equivalent to an almost-sequential dictatorship Γ′, which can be run as follows: at each even-size history, including the empty history, Nature chooses an agent from among those agents who have yet to move, and this agent moves at the following odd-size history.^{105} If there are three or more objects or exactly one object still unallocated, then this agent selects his most preferred still available object and sends an additional message. If, for the first time, there are exactly two unallocated objects (and thus two agents yet to move), then the agent who moves either (i) selects his most preferred object and sends a message, or (ii) has a choice of clinching one of the two objects (and sending a message) or passing. In the latter case, Nature selects the other agent who has yet to move, who chooses his best object, and the agent who passed obtains the remaining object.

^{105} We include this rephrasing as it originally helped us think about this problem, and hence we think that it might help the reader. It actually is not needed and essentially the same proof works for the standard almost-sequential dictatorship (millipede) construction.

The reminder of the proof is by induction on the number k of agents that have beencalled to move. We will show that each time Nature calls an agent to move, it must selectuniformly at random from all agents who have yet to be called. Consider k = 1 (i.e., thefirst agent called), and suppose there are more than three agents and objects. Consider apreference profile where all agents rank objects in the same order, o1 ≻ o2 ≻ · · · ≻ oN . Byequal treatment of equals, all agents must receive object o1 with equal probability. Since, atthis profile, o1 is always taken by the first mover, we conclude that Nature must select eachagent to be the first mover with equal probability.
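In symbols (this restatement is ours): at this common-ranking profile, Pr[i receives o_1] = Pr[Nature selects i to move first] for every agent i, and equal treatment of equals requires Pr[i receives o_1] = 1/N for all i; hence Pr[Nature selects i to move first] = 1/N for every agent i.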

Now, consider any k-th move, where k < N, and assume that for all moves 1, . . . , k − 1, each remaining agent at that point was called on by Nature with equal probability. Label the history under consideration as h, and name the agents so that along the path to h, the first mover is i_1 who chooses o_1, then i_2 chooses o_2, etc., until i_{k−1} chooses o_{k−1}.

Suppose first that there are at least three objects left. We claim that each agent who has not moved yet has an equal chance to be called by Nature at history h. To see why, consider some object o ≠ o_1, . . . , o_{k−1} and the preference profile in which each agent i_ℓ, for ℓ = 1, . . . , k − 1, ranks objects so that

o_1 ≻_{i_ℓ} o_2 ≻_{i_ℓ} · · · ≻_{i_ℓ} o_ℓ ≻_{i_ℓ} o ≻_{i_ℓ} · · ·

and other objects are ranked below o. Let Y(h) = I − {i_1, . . . , i_{k−1}} be the set of agents who have yet to move at h, and assume that all i ∈ Y(h) have the same preferences and rank o first. The total probability that i receives o is equal to the sum of (1) the probability that Nature chooses i at h and (2) the probability that Nature chooses i to move at any other h′ ≠ h where o is still available. The crux of the argument is to note that for the preferences specified, for any other branch of the tree (that does not contain history h), object o will be claimed by someone at the (k − 1)-th move or earlier. By the inductive hypothesis, each time Nature picks an agent to move at any of these histories, all agents in Y(h) have an equal probability of being picked, and, if they are, they will immediately claim object o. This implies that (2) is the same for all agents i ∈ Y(h). Since the sum (1)+(2) must also be the same for all agents (by equal treatment of equals), we conclude that (1) is also the same for all i, j ∈ Y(h), i.e., the probability that i is chosen to move at h is equal to the probability that j is chosen to move at h, as desired.
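Written out (our restatement of the decomposition just used): for each i ∈ Y(h),

Pr[i receives o] = Pr[Nature selects i at h] + Pr[Nature selects i at some h′ ≠ h at which o is still available],

where the second term is identical across all i ∈ Y(h) by the inductive hypothesis and the left-hand side is identical across all i ∈ Y(h) by equal treatment of equals; subtracting, the first term is identical across all i ∈ Y(h).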


Last, suppose that there are exactly two objects o and o′ left at history h. With two objects, Bogomolnaia and Moulin (2001) show that Random Priority is the only strategy-proof and efficient mechanism that satisfies equal treatment of equals, and the claim follows from their work.^{106}

^{106} Bogomolnaia and Moulin actually prove that Random Priority is the only strategy-proof and ordinally efficient mechanism that satisfies equal treatment of equals when there are three objects and three agents, and with two objects Pareto efficiency and ordinal efficiency become equivalent. The two-object case is much simpler than the three-object problem that Bogomolnaia and Moulin study, and one can easily verify the above claim without reliance on Bogomolnaia and Moulin's seminal analysis.
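For concreteness, here is a minimal Python sketch of the Random Priority (random serial dictatorship) mechanism whose characterization Theorem 12 establishes; the function name, the example agents and objects, and the small simulation at the end are our own illustration and are not part of the formal argument.

import random

def random_priority(preferences, rng):
    """Random Priority: draw a uniformly random order of agents, then let each
    agent in turn clinch her most preferred object that is still available.
    preferences: dict mapping each agent to a list of objects, best first."""
    agents = list(preferences)
    rng.shuffle(agents)  # Nature's uniform draw over orderings
    available = {obj for ranking in preferences.values() for obj in ranking}
    assignment = {}
    for agent in agents:
        for obj in preferences[agent]:
            if obj in available:
                assignment[agent] = obj
                available.remove(obj)
                break
    return assignment

# Illustration of equal treatment of equals: with identical preferences,
# each of the three agents receives o1 in roughly one third of the draws.
prefs = {name: ["o1", "o2", "o3"] for name in ["Ann", "Bob", "Cal"]}
rng = random.Random(0)
counts = {name: 0 for name in prefs}
for _ in range(3000):
    outcome = random_priority(prefs, rng)
    winner = next(agent for agent, obj in outcome.items() if obj == "o1")
    counts[winner] += 1
print(counts)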

A.8 Proof of Theorem 13

Given any SOSP and efficient game Γ, the proof of Theorem 9 shows how to construct an equivalent curated dictatorship Γ′. By efficiency and the richness of the preference domain, all objects are possible for the first mover in Γ′. Therefore, by definition of a curated dictatorship, the first mover is offered the opportunity to clinch any object immediately, so long as |O| ≥ 3. Similarly, efficiency and richness imply that the second mover is offered the opportunity to clinch any still-remaining object, so long as there are at least three objects remaining to be allocated. The same applies to the third, fourth, etc. movers until exactly two objects remain. When two objects, say {x, y}, remain, efficiency still implies that both objects are possible for the penultimate mover. If this agent is offered the opportunity to clinch either, the game is a serial dictatorship, which is a special case of an almost-sequential dictatorship. Alternatively, he may be given a choice between clinching one of x or y, say x, or passing and allowing the final mover to allocate x and y between them. In either case, it is easy to see that the overall game Γ′ is an almost-sequential dictatorship.
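To illustrate the two-object endgame just described, here is a minimal Python sketch (our own stylization, with hypothetical agent names; the messages of the formal construction are omitted): the mover may clinch the designated object x or pass, and after a pass the other agent picks her preferred object while the mover receives what is left.

def clinch_or_pass_endgame(mover, other, mover_passes, prefs, clinchable="x"):
    """Final stage of an almost-sequential dictatorship with objects {"x", "y"} left.
    If the mover clinches, she takes the designated clinchable object; if she
    passes, the other agent takes her preferred object and the mover gets the rest."""
    objects = {"x", "y"}
    if not mover_passes:
        return {mover: clinchable, other: (objects - {clinchable}).pop()}
    best_for_other = prefs[other][0]  # the other agent picks her best object
    return {other: best_for_other, mover: (objects - {best_for_other}).pop()}

# Example: Ann passes, Bob prefers y, so Bob gets y and Ann gets x.
print(clinch_or_pass_endgame("Ann", "Bob", mover_passes=True, prefs={"Bob": ["y", "x"]}))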

References

Abdulkadiroğlu, A. and T. Sönmez (1998): "Random Serial Dictatorship and the Core from Random Endowments in House Allocation Problems," Econometrica, 66, 689–701.

——— (2003): "School Choice: A Mechanism Design Approach," American Economic Review, 93, 729–747.

Armstrong, M. (1996): "Multiproduct Nonlinear Pricing," Econometrica, 64, 51–76.

Arribillaga, R. P., J. Massó, and A. Neme (2017): "Not All Majority-based Social Choice Functions Are Obviously Strategy-proof."



Arrow, K. J. (1963): Social Choice and Individual Values, New York: Wiley, 2nd edition.

Ashlagi, I. and Y. A. Gonczarowski (2016): "No Stable Matching Mechanism is Obviously Strategy-Proof." Working Paper.

Bade, S. and Y. Gonczarowski (2017): "Gibbard-Satterthwaite Success Stories and Obvious Strategyproofness."

Bogomolnaia, A. and H. Moulin (2001): "A New Solution to the Random Assignment Problem," Journal of Economic Theory, 100, 295–328.

Budish, E. and E. Cantillon (2012): "The Multi-unit Assignment Problem: Theory and Evidence from Course Allocation at Harvard," American Economic Review, 102, 2237–71.

Carroll, G. (2014): "A general equivalence theorem for allocation of indivisible objects," Journal of Mathematical Economics, 51, 163–177.

Chawla, S., J. D. Hartline, D. L. Malec, and B. Sivan (2010): "Multi-parameter mechanism design and sequential posted pricing," in Proceedings of the forty-second ACM symposium on Theory of computing, ACM, 311–320.

Che, Y.-K. and F. Kojima (2010): "Asymptotic Equivalence of Random Priority and Probabilistic Serial Mechanisms," Econometrica, 78, 1625–1672.

Clarke, E. H. (1971): "Multipart Pricing of Public Goods," Public Choice, 11, 17–33.

Dasgupta, P., P. Hammond, and E. Maskin (1979): "The Implementation of Social Choice Rules: Some General Results on Incentive Compatibility," Review of Economic Studies, 46, 185–216.

Dur, U. M. and M. U. Ünver (2015): "Two-Sided Matching via Balanced Exchange: Tuition and Worker Exchanges," NC State University and Boston College, Working Paper.

Ehlers, L. (2002): "Coalitional Strategy-Proof House Allocation," Journal of Economic Theory, 105, 298–317.

Ehlers, L. and T. Morrill (2017): "(Il)legal Assignments in School Choice."

Einav, L., C. Farronato, J. Levin, and N. Sundaresan (2018): "Auctions versus posted prices in online markets," Journal of Political Economy, 126, 178–215.

Feldman, M., N. Gravin, and B. Lucier (2014): "Combinatorial auctions via posted prices," in Proceedings of the twenty-sixth annual ACM-SIAM symposium on Discrete algorithms, SIAM, 123–135.

Gibbard, A. (1973): "Manipulation of Voting Schemes: A General Result," Econometrica, 41, 587–601.

——— (1977): "Manipulation of Schemes That Mix Voting with Chance," Econometrica, 45, 665–681.

Green, J. and J.-J. Laffont (1977): "Characterization of Satisfactory Mechanisms for Revelation of Preferences for Public Goods," Econometrica, 45, 427–438.

Groves, T. (1973): "Incentives in Teams," Econometrica, 41, 617–631.

Gul, F. and W. Pesendorfer (2001): "Temptation and Self-Control," Econometrica, 69, 1403–1435.

——— (2004): "Self-control and the theory of consumption," Econometrica, 72, 119–158.

Hagerty, K. M. and W. P. Rogerson (1987): "Robust Trading Mechanisms," Journal of Economic Theory, 42, 94–107.

Hakimov, R. and O. Kesten (2014): "The Equitable Top Trading Cycles Mechanism for School Choice," WZB and Carnegie Mellon University, Working Paper.

Hatfield, J. W. (2009): "Strategy-Proof, Efficient, and Nonbossy Quota Allocations," Social Choice and Welfare, 33, 505–515.

Holmstrom, B. (1979): "Groves' Scheme on Restricted Domains," Econometrica, 47, 1137–1144.

Jehiel, P. (1995): "Limited Horizon Forecast in Repeated Alternate Games," Journal of Economic Theory, 67, 497–519.

——— (2001): "Limited Foresight May Force Cooperation," The Review of Economic Studies, 68, 369–391.

Jones, W. and A. Teytelboym (2016): "Choices, preferences and priorities in a matching system for refugees," Forced Migration Review.

Ke, S. (2015): "Boundedly rational backward induction," Tech. rep., Mimeo, Princeton University.

Kohlberg, E. and J.-F. Mertens (1986): "On the Strategic Stability of Equilibria," Econometrica, 54, 1003–1037.

Laibson, D. (1997): "Golden eggs and hyperbolic discounting," The Quarterly Journal of Economics, 112, 443–478.

Lee, T. and J. Sethuraman (2011): "Equivalence results in the allocation of indivisible objects: A unified view," Working Paper.

Li, S. (2017a): "Obvious Ex Post Equilibrium," American Economic Review Papers and Proceedings, 107, 230–34.

——— (2017b): "Obviously Strategy-Proof Mechanisms," American Economic Review, 107, 3257–87.

Liu, Q. and M. Pycia (2011): "Ordinal Efficiency, Fairness, and Incentives in Large Markets."

Loertscher, S. and L. M. Marx (2015): "Prior-Free Bayesian Optimal Double-Clock Auctions," Working Paper.

Mackenzie, A. (2017): "A Revelation Principle for Obviously Strategy-proof Implementation," Working Paper.

Morrill, T. (2014): "Making Just School Assignments," Games and Economic Behavior, forthcoming.

Pápai, S. (2000): "Strategyproof Assignment by Hierarchical Exchange," Econometrica, 68, 1403–1433.

——— (2001): "Strategyproof and Nonbossy Multiple Assignments," Journal of Public Economic Theory, 3, 257–271.

Pathak, P. A. and J. Sethuraman (2011): "Lotteries in student assignment: An equivalence result," Theoretical Economics, 6, 1–17.

Pycia, M. (2011): "Ordinal Efficiency, Fairness, and Incentives in Large Multi-Unit-Demand Assignments."

——— (2016): "Invariance and Matching Market Outcomes," Working Paper.

Pycia, M. and M. U. Ünver (2016): "Arrovian Efficiency in Allocation of Discrete Resources," UCLA and Boston College, Working Paper.

——— (2017): "Incentive Compatible Allocation and Exchange of Discrete Resources," Theoretical Economics, 12, 287–329.

Rosenthal, R. W. (1981): "Games of perfect information, predatory pricing and the chain-store paradox," Journal of Economic Theory, 25, 92–100.

Roth, A. (2015): "Migrants Aren't Widgets," Politico Europe, http://www.politico.eu/article/migrants-arent-widgets-europe-eu-migrant-refugee-crisis/.

Roth, A. E., T. Sönmez, and M. U. Ünver (2004): "Kidney Exchange," Quarterly Journal of Economics, 119, 457–488.

Satterthwaite, M. (1975): "Strategy-proofness and Arrow's Conditions: Existence and Correspondence Theorems for Voting Procedures and Social Welfare Functions," Journal of Economic Theory, 10, 187–216.

Shapley, L. and H. Scarf (1974): "On Cores and Indivisibility," Journal of Mathematical Economics, 1, 23–37.

Sönmez, T. and M. U. Ünver (2010): "Course Bidding at Business Schools," International Economic Review, 51, 99–123.

Troyan, P. (2016): "Obviously Strategyproof Implementation of Top Trading Cycles," Working Paper, University of Virginia.

Troyan, P., D. Delacretaz, and A. Kloosterman (2018): "Essentially Stable Matchings," Working Paper.

Vickrey, W. (1961): "Counterspeculation, Auctions and Competitive Sealed Tenders," Journal of Finance, 16, 8–37.

Zhang, L. and D. Levin (2017a): "Bounded Rationality and Robust Mechanism Design: An Axiomatic Approach," American Economic Review Papers and Proceedings, 107, 235–39.

——— (2017b): "Partition Obvious Preference and Mechanism Design: Theory and Experiment."