Mechanism Design with Partial Revelation (Ph.D. Thesis Proposal)

Nathanaël Hyafil

1 Introduction

Mechanism design studies protocols through which self-interested agents can interact to achieve some (e.g., socially desirable) objective. In the standard setting, the type (utility function) of an agent is not known by other agents, nor is it known by the mechanism designer. The game-theoretic concept of equilibrium allows agents to deal with the uncertainty over the others' types and actions. In classical mechanism design, the designer's uncertainty is removed "simply" by having agents reveal their types directly and fully. However, as agents increasingly interact in powerful computational settings, outcome spaces are becoming more complex. Combinatorial and multi-attribute auctions are common examples. Eliciting complete type information would require determining the value each agent places on every possible outcome. This can impose significant communication as well as cognitive costs on the agent side, due to the complexity of assessing utility information precisely. Although utility functions often have compact, structured representations, in many settings full revelation is unlikely to be successful.

The aim of the proposed thesis is to investigate how to design mechanisms that only require the revelation of partial type information. With only partial revelation, the designer has some remaining uncertainty over the agents' types, even after the mechanism has been executed. Thus, in general, the outcome chosen will not be optimal with respect to the designer's objective function. This alone raises interesting questions about the type of (partial) information that should be elicited in order to minimize the degree of sub-optimality incurred by the mechanism. But this sub-optimality of the mechanism's outcome choice function has additional important consequences: most of the results in classical mechanism design which guarantee that agents will reveal their types truthfully to the mechanism rely on the fact that the optimal outcome is chosen. We must therefore also investigate if and how appropriate incentives can be maintained in partial revelation mechanisms.

In our study of partial revelation mechanisms, we will be led to consider elicitation protocols that only provide strict (i.e., non-probabilistic) uncertainty over agents' types (such as bounds on the utility of an outcome). The standard Bayes-Nash equilibrium concept requires a probabilistic prior and thus does not apply in this context. We therefore investigate how agents might behave under strict uncertainty, and introduce a new equilibrium concept for such settings.

The purpose of this document is to provide a detailed outline of the proposed thesis as well as a high-level description of its contents. More details are given, when appropriate, in the appendix sections. Each section of this document corresponds to a chapter of the thesis. After briefly going over the necessary game theory and mechanism design background (Section 2), Section 3 describes our partial revelation frameworks and the relevant theoretical results. Sections 4 and 5 present our approaches to the design of one-shot and incremental partial revelation (approximately) efficient mechanisms, respectively. Section 6 extends our one-shot approach to more general objective functions, and Section 7 presents our work on minimax regret equilibria.


2 Chapter 2: Background and Related Work

2.1 Purpose of Chapter

This chapter presents all the game theory background material that is used throughout the proposed thesis, defines and motivates the minimax regret decision criterion, and surveys related work relevant to partial revelation mechanism design and implementation with approximate incentives. (This chapter partly corresponds to Sections 2, 4.2 and 5 of my depth paper [7].)

2.2 Game Theory Background

We assume that agent preferences over outcomes in O are represented by a utility function u : O → R. In order to capture the structure that a utility function may have when the outcome space is factored, we will often use a Generalized Additive Independence (GAI) model [4]. In such a model, the utility of an outcome u(o = o1 . . . om) is decomposed into a sum of factors uk, each factor involving only a subset of the outcome variables. Because the subsets are not necessarily disjoint, any utility function can be represented using a GAI model. For example, if an outcome is defined by three variables, we might have u(o1, o2, o3) = u1(o1, o2) + u2(o2, o3) + u3(o1, o3).
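To make the decomposition concrete, here is a minimal Python sketch of the three-variable example above; the factor tables are invented purely for illustration:

```python
# Illustrative GAI decomposition over three binary outcome variables:
# u(o1, o2, o3) = u1(o1, o2) + u2(o2, o3) + u3(o1, o3).
# The factor values below are made-up numbers, not from the proposal.

u1 = {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 4.0}
u2 = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 3.0, (1, 1): 3.5}
u3 = {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 2.0, (1, 1): 1.0}

def gai_utility(o1, o2, o3):
    """Sum of local factors; each factor touches only a subset of the variables."""
    return u1[(o1, o2)] + u2[(o2, o3)] + u3[(o1, o3)]

print(gai_utility(1, 1, 0))  # u1(1,1) + u2(1,0) + u3(1,0) = 4.0 + 3.0 + 2.0 = 9.0
```

Note that each factor's table grows only with the number of variables it touches, which is the source of the representational savings when the factors are small.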

2.2.1 Bayesian Games

Game theory studies many forms of games, but the types of games relevant to mechanism design are those where there is uncertainty about the agents' preferences: Bayesian games. The following definitions and notation have been adapted from [14, 12].

Definition 1. A Bayesian Game consists of:

• a set of N agents

• a product set of actions A = A1 × . . . × AN , one set for each agent

• a product set of types T = T1 × . . . × TN , one set for each agent

• a commonly known joint prior on the agents’ types: Pr(T )

• a utility function ui : A × Ti → R, for each agent

The utility is defined not only over joint actions of the agents but also over types. This is just a difference in representation: the utility mapping itself is assumed to be commonly known, and the type of an agent represents the aspects of the utility function that are privately known by that agent.

In a Bayesian game, a strategy for an agent is a mapping from possible types to (possibly mixed) actions: σi : Ti → ∆(Ai). Let Σi be the set of agent i's strategies.

2.2.2 Equilibrium Concepts

We will be interested in outcomes where the agents' strategies are in some form of equilibrium. There are three main equilibrium concepts: dominant strategy, ex-post, and Bayes-Nash equilibria.

A dominant strategy maximizes an agent’s utility regardless of the other agents’ actions.

Definition 2. σ∗i is a dominant strategy for agent i if and only if:

∀a−i ∈ A−i, ∀σ′i ∈ Σi, ∀ti ∈ Ti : ui(σ∗i(ti), a−i, ti) ≥ ui(σ′i(ti), a−i, ti)


A dominant strategy equilibrium (DSE) σ∗ = (σ∗1, . . . , σ∗N) is a strategy profile where, for each i, σ∗i is a dominant strategy.

In an ex-post equilibrium (EPE), each agent's strategy maximizes his utility against the fixed strategies of the other agents, but regardless of their types.

Definition 3. σ∗ = (σ∗1, . . . , σ∗N) is an ex-post equilibrium if and only if:

∀t−i ∈ T−i, ∀σ′i ∈ Σi, ∀ti ∈ Ti : ui(σ∗i(ti), σ∗−i(t−i), ti) ≥ ui(σ′i(ti), σ∗−i(t−i), ti)

In a Bayes-Nash equilibrium (BNE), an agent's strategy maximizes his expected utility against the fixed strategies of the other agents, with the expectation taken with respect to his beliefs about the others' types.

Definition 4. σ∗ = (σ∗1, . . . , σ∗N) is a BNE if and only if:

∀σ′i ∈ Σi, ∀ti ∈ Ti : Et−i[ui(σ∗i(ti), σ∗−i(t−i), ti) | ti] ≥ Et−i[ui(σ′i(ti), σ∗−i(t−i), ti) | ti]

A dominant strategy equilibrium is an ex-post equilibrium, which is itself a Bayes-Nash equilibrium. The converses are obviously not true. A Bayes-Nash equilibrium, however, always exists, whereas there is no such guarantee for the others.
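As a toy numeric illustration of dominance (our own, not from the proposal), the following sketch checks exhaustively that, in a discretized second-price auction where ties go against the bidder, bidding one's true value is never worse than any deviation, whatever the opposing bid:

```python
# Brute-force check of a dominant strategy in a discretized second-price
# auction. All numbers are illustrative.

def vickrey_utility(my_bid, other_bid, my_value):
    """The higher bidder wins and pays the second price; ties lose here."""
    if my_bid > other_bid:
        return my_value - other_bid
    return 0

bids = range(11)
my_value = 6

# Truthful bidding is dominant iff it is optimal against EVERY opposing bid.
truthful_dominant = all(
    vickrey_utility(my_value, other, my_value) >= vickrey_utility(b, other, my_value)
    for other in bids
    for b in bids
)
print(truthful_dominant)  # True
```

This quantifies over all opposing actions, matching the "regardless of the other agents' actions" clause of Definition 2 (restricted to a finite bid grid).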

2.3 Classical Mechanism Design

Mechanism design deals with the design of incentives for a distributed population of agents to behave in a way that will lead to an optimal global outcome. The meaning of "optimal" depends on the mechanism designer's interests. These are represented by an objective function fob : O × T → R, specifying how much the designer values each outcome given the types of the agents. The function f : T → O such that ∀t, f(t) = arg maxo fob(o, t) is the corresponding social choice function.

We restrict our attention to quasi-linear settings where the outcome space is decomposed into allocations in X and payments in R, such that an agent's utility for an outcome is its valuation for the allocation minus its payment: if o = (x, p), u(o) = v(x) − p.

Intuitively, a mechanism defines the rules of a game by specifying which outcome (allocation and payments) will occur for every possible joint action of the agents. In fact, a mechanism, along with a utility function for each agent and a prior over these utilities, induces a Bayesian game.

Definition 5. A mechanism M = (A1, . . . , AN, m, p1, . . . , pN) consists of N sets of actions, one for each agent, an allocation function m : A1 × . . . × AN → X, and N payment functions pi : A1 × . . . × AN → R.

A mechanism is said to implement a social choice function f in DSE (resp., EPE, BNE) if there is a DSE (resp., EPE, BNE) σ of the induced Bayesian game such that f(t) = m(σ(t)); that is, if the outcome chosen by the mechanism in equilibrium coincides with the one recommended by the social choice function.

A direct revelation mechanism is one where agents directly reveal their types and the mechanism maps type vectors to outcomes.

Definition 6. M = (A1, . . . , AN , m, p1, . . . , pN ) is a direct revelation mechanism if Ai = Ti, ∀i.


2.3.1 Properties of Mechanisms

A mechanism (or its allocation function) is said to be efficient if it chooses an allocation that maximizes social welfare, i.e., the sum of the agents' utilities. That is, ∀t ∈ T, m(t) = arg maxx SW(x; t), where SW(x; t) = Σi vi(x; ti).

A mechanism M is said to be Incentive Compatible (IC) in DSE (respectively, EPE, BNE) if reporting truthfully (i.e., σi(ti) = ti, ∀i) is a DSE (resp., EPE, BNE).

A major result in mechanism design, the Revelation Principle, says that we can restrict ourselves to direct revelation incentive compatible mechanisms without loss of generality.

Theorem 1. Revelation Principle: If M = (A1, . . . , AN, m, p1, . . . , pN) implements social choice function f in dominant strategies (resp., EPE, BNE), then there exists a direct, incentive compatible mechanism M′ = (T1, . . . , TN, m′, p′1, . . . , p′N) that implements f in dominant strategies (resp., EPE, BNE).

Intuitively, this is because the mechanism can be modified so as to simulate the non-truthful strategy of the agents, therefore making it in their best interest to reveal truthfully, since the mechanism will "lie for them."

Another relevant property of a mechanism is individual rationality (IR), which requires that an agent always prefers to participate in the mechanism. Ex-post IR requires that agents choose to participate even if they knew the types of the others, whereas for ex-interim IR agents are only assumed to know their own type, and participation is optimal in expectation over the types of the others.

2.3.2 Vickrey-Clarke-Groves (VCG) Mechanisms

The class of Groves mechanisms (and its subclass, Vickrey-Clarke-Groves mechanisms) is of particular interest. A Groves mechanism has (by definition) an efficient allocation function. The payment functions are of a specific form: each agent receives an amount equivalent to the social welfare enjoyed by the other agents in the chosen allocation, and pays an amount that is independent of his own report. In a VCG mechanism, that amount represents the social welfare of the other agents in the allocation that would be efficient had agent i not participated. Thus, in a Groves-Clarke mechanism, agents pay what their presence costs the others. Formally:

Definition 7. A Groves mechanism is a direct revelation mechanism M = (T1, . . . , TN, m∗, p1, . . . , pN) where:

• m∗(t) = arg maxx SW(x; t)

• ∀i ∈ N, pi(t) = −SW−i(m∗(t); t−i) + hi(t−i), for an arbitrary function hi,

where SW−i(x; t−i) = Σj≠i vj(x; tj).

A Groves-Clarke mechanism, also called a VCG mechanism, is a Groves mechanism with hi(t−i) = maxx SW−i(x; t−i).

Giving each agent a credit equivalent to the social welfare of the others aligns that agent's interest with that of the designer (maximizing global social welfare). Making the "debit" part of the payments independent of the agent's own report shields it from manipulation. We therefore have:

Proposition 1. A Groves mechanism M = (T1, . . . , TN, m∗, p1, . . . , pN) is efficient and incentive compatible in dominant strategies. A VCG mechanism is, additionally, ex-post individually rational.
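The Clarke payment rule of Definition 7 can be sketched in code for the simplest special case, a single-item allocation where each reported type is just a valuation for winning (our own toy instance; the numbers are illustrative):

```python
# Groves-Clarke (VCG) payments for allocating one item to the highest
# reporter. For this special case the rule reduces to the Vickrey auction:
# the winner pays the second-highest report, losers pay nothing.

def vcg_single_item(values):
    """Return (winner index, list of Clarke payments), one payment per agent."""
    n = len(values)
    winner = max(range(n), key=lambda i: values[i])
    payments = []
    for i in range(n):
        # h_i(t_-i): the others' social welfare in the allocation that would
        # be efficient had agent i not participated
        h_i = max(values[j] for j in range(n) if j != i)
        # SW_-i(m*(t); t_-i): the others' welfare in the allocation actually chosen
        sw_others = 0.0 if i == winner else values[winner]
        payments.append(h_i - sw_others)
    return winner, payments

winner, payments = vcg_single_item([4.0, 7.0, 5.0])
print(winner, payments)  # 1 [0.0, 5.0, 0.0] -- the winner pays the second-highest bid
```

Each loser's payment is zero because removing a loser does not change the efficient allocation; the winner's payment is exactly what his presence costs the others.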


In non-trivial settings, the Groves scheme is the only class of mechanisms that can implement any social choice function in dominant strategies. This follows from two famous results, by Roberts [16] and Green and Laffont [5]. Roberts showed that, if X contains at least 3 outcomes and all valuations are possible (i.e., Ti = R^|X|), then a social choice function is implementable in dominant strategies if and only if it is an affine welfare maximizer.¹ Green and Laffont proved that to implement an affine welfare maximizer it is necessary to use Groves payments.

Given these results, to implement a social welfare maximizer in dominant strategies, one must not only elicit enough information to determine the efficient allocation, but generally also elicit enough further information to determine the Groves payments.

2.4 Minimax Regret

Minimax regret can be used as a decision criterion in any decision-making setting under uncertainty. We define it here in the context of choosing allocations x ∈ X with social welfare as the decision maker's objective, given that the agents' types lie in the partial type vector θ.

Definition 8. The pairwise regret of decision x with respect to decision x′ over feasible type set θ is:

R(x, x′, θ) = maxt∈θ [SW(x′; t) − SW(x; t)]. (1)

This is the most one could regret choosing x instead of x′ (e.g., if an adversary could impose any type in θ). The maximum regret of decision x and the minimax regret of feasible type set θ are, respectively:

MR(x, θ) = maxx′ R(x, x′, θ) (2)

MMR(θ) = minx MR(x, θ) (3)

A minimax-optimal decision for θ, denoted x∗(θ), is any allocation that minimizes Eq. 3. Choosing a minimax-optimal decision x∗ minimizes the worst-case loss in efficiency with respect to possible realizations of the types t ∈ θ. Apart from being applicable to settings without distributional information over the set of possible utility functions, minimax regret has been shown (empirically) to be an excellent guide for the elicitation of utility information [2].

Minimax regret optimization can be difficult in all but the most trivial settings, but recent approaches have shown how it can be made quite practical when utility models are factored into a convenient functional form such as GAI, and utility uncertainty is expressed in the form of linear constraints on such factored models. In this setting, minimax regret optimization can be formulated as a linear, mixed-integer program (MIP) with exponentially many constraints, but can be solved using an iterative constraint generation procedure that, in practice, enumerates only a small number of (active) constraints [2].
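For intuition, Eqs. (1)-(3) can be computed by brute-force enumeration when the decision and type sets are small and finite (a sketch of ours; the practical approach above uses a MIP with constraint generation, and the toy welfare table below is invented):

```python
# Brute-force minimax regret over finite decision and type sets.

def pairwise_regret(x, x_alt, types, sw):
    """R(x, x', theta) = max over t in theta of SW(x'; t) - SW(x; t)."""
    return max(sw(x_alt, t) - sw(x, t) for t in types)

def max_regret(x, decisions, types, sw):
    """MR(x, theta) = max over x' of R(x, x', theta)."""
    return max(pairwise_regret(x, x_alt, types, sw) for x_alt in decisions)

def minimax_regret(decisions, types, sw):
    """MMR(theta) = min over x of MR(x, theta); returns (x*, MMR)."""
    return min(((x, max_regret(x, decisions, types, sw)) for x in decisions),
               key=lambda pair: pair[1])

# Toy instance: two allocations, three possible joint types.
decisions = ["a", "b"]
types = [0, 1, 2]
table = {0: {"a": 5, "b": 3}, 1: {"a": 2, "b": 6}, 2: {"a": 4, "b": 4}}
sw = lambda x, t: table[t][x]

print(minimax_regret(decisions, types, sw))  # ('b', 2)
```

Here allocation "b" is minimax optimal: an adversary choosing the type can make "a" lose up to 4 units of welfare, but "b" at most 2.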

2.5 Related Work

Partial Revelation Mechanism Design
We review the existing literature on mechanisms with partial/limited/bounded communication of type information.

Approximate Incentives
We discuss the various notions of approximate incentives, including our approach of bounding worst-case manipulability: i.e., a mechanism is ε-incentive compatible if no agent can gain more than ε by reporting untruthfully. If manipulating the mechanism has a cost higher than ε, agents will reveal their types truthfully.

¹An affine welfare maximizer is a function that maximizes some affine transformation of social welfare.


Utility Representations
We describe possible compact, structured representations for utility functions (linear additive, generalized linear additive, bidding languages, etc.) and review results on the complexity of revealing utility information.

3 Chapter 3: Partial Revelation Framework

3.1 Purpose of Chapter

In this chapter, we emphasize the motivations behind partial revelation mechanisms (PRMs), present formal partial revelation models for both the one-shot and sequential frameworks, and present negative results on the implementation of one-shot partial revelation mechanisms.

3.2 General Model

Given our interest in partial revelation, we define a partial type θi ⊆ Ti for agent i to be any subset of i's types. A partial type vector θ includes a partial type for each agent.

A (direct) partial revelation mechanism (PRM) is any mechanism in which the action set Ai is a set of partial types Θi (i.e., the agent is asked to declare a partial type in which its true type lies). The allocation and payment functions therefore map partial type vectors to allocations and payments: m : Θ → X, pi : Θ → R.

Since agents only reveal partial types, the notion of truth telling must be relaxed somewhat:

Definition 9. A PRM is incentive compatible (IC), under the relevant equilibrium concept, if it induces equilibrium strategies σi for each agent i such that ti ∈ σi(ti).

In other words, an IC PRM will induce each agent to report a partial type that contains its true type.

Partial types may or may not be overlapping or exhaustive. If they are not exhaustive, incentive compatibility is not generally possible. If they are overlapping, more than one truthful report may be possible, and an agent can reason strategically to choose among them while maintaining truthfulness, something that is not possible if partial types do not overlap.

3.3 Negative Results

In general, given a partial type vector θ, a mechanism will not be able to determine an efficient allocation: the allocation x∗(t) that maximizes social welfare for type t may be different for various t ∈ θ. A corollary of Roberts' result is that a one-shot PRM cannot be used for dominant strategy implementation: unless the partitioning of each agent's type space is such that each joint partial type θ determines a unique maximizing x∗ for all t ∈ θ, dominant strategy implementation will not (in general) be realized. One of our contributions is to show that relaxing the equilibrium concept to ex-post or Bayes-Nash does not help.

In the Bayes-Nash context, each agent has a probabilistic prior over the types of the others, Pr(t−i|ti). If truth-telling is a Bayes-Nash equilibrium in a PRM, this defines a distribution over the reports (partial types) of the other agents; hence, for each of its reports θi, agent i has a distribution x_i^θi over the allocations selected by the mechanism. For each x define:

x_i^θi(x) = Pr(x | θi) = Σθ−i Pr(θ−i | θi) · Pr[m(θi, θ−i) = x]


In this section, we restrict ourselves to partitions of the type space that are "grid-based": each parameter's space of values is split into a finite number of intervals of the form [lb, ub), with lb < ub. We can derive the following results:

Theorem 2. If all valuation functions are possible, then in a Bayes-Nash IC grid-based PRM we have:

∀i, ∀θi, θ′i : x_i^θi = x_i^θ′i =def x_i (4)

∀i, j : x_i = x_j (5)

Property 4 states that, regardless of its report, agent i has the same posterior xi over outcomes. Property 5 states that this distribution is also the same for all agents. If the allocation function is deterministic, then it selects the same allocation for each report vector. We call such a mechanism trivial. Triviality may be avoided if allocations are probabilistic, but even then, these properties are very restrictive:

Proposition 2. No Bayes-Nash IC grid-based PRM has higher expected social welfare than the trivial mechanism that always picks the allocation with highest ex ante social welfare.

Proposition 3. If ex-interim (or, a fortiori, ex-post) individual rationality (IR) is required, the expected sum of payments of any Bayes-Nash IC grid-based PRM is zero.

In a sense, though partial revelation Bayes-Nash implementation is not strictly trivial, it is useless, since it achieves the same result as a mechanism with no revelation.

Given that an ex-post equilibrium is a vector of strategies that are in Bayes-Nash equilibrium for all possible probabilistic priors, the above results imply that any ex-post IC PRM is trivial.

We propose two (non-exclusive) approaches to overcoming these negative results, the frameworks of which are detailed in the next sub-section. The first is to consider approximate incentive properties. The second is to use a sequential model of partial revelation. With sequential elicitation, the consequences of Roberts' result do not apply, and ex-post implementation is possible. However, even in the sequential case, we argue in favor of using approximate incentives in order to allow for a trade-off between efficiency and revelation costs.

3.4 Consequences of Negative Results

3.4.1 Approximate Incentives

Approximate incentive compatibility has been considered before from several perspectives. Nisan and Ronen [13] show that computational approximation of VCG can destroy truth-telling, but manipulation of approximate VCG can be made computationally difficult [18], thus inducing "practical" incentive compatibility. IC in expectation or with high probability can also be demanded [1].

Our approach to approximate incentives is to bound the gain an agent can achieve by lying. A strategy σi of agent i is ε-dominant if, for every action of the other agents and each of agent i's possible types ti, the utility of reporting σi(ti) is within ε of the utility of agent i's optimal report. Thus, if truth-telling is ε-dominant, an agent's best-case gain from lying is less than ε. In most settings, the computational cost of finding a good lie, especially given the considerable uncertainty in the value of any lie (due to uncertainty about the others' types), will be substantial (see, e.g., [18]).
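For small finite settings, the bound on the gain from lying can be measured directly by enumeration (a sketch of ours, with an illustrative two-agent, second-price instance; in that exactly incentive compatible case the worst-case gain is 0):

```python
# Worst-case manipulability of a finite direct mechanism, from one agent's
# perspective: the maximum, over true types, lies, and opponent reports, of
# the utility gained by misreporting. Truth-telling is eps-dominant iff this
# quantity is at most eps.

def worst_case_gain(types, alloc, payment, value):
    """max over (true type t, lie, other's report) of u(lie) - u(truth)."""
    def util(report, other, t):
        return value(alloc(report, other), t) - payment(report, other)
    return max(
        util(lie, other, t) - util(t, other, t)
        for t in types for lie in types for other in types
    )

# Second-price rules: win iff you report strictly more than the other agent.
alloc = lambda r, other: 1 if r > other else 0
payment = lambda r, other: other if r > other else 0
value = lambda a, t: a * t

print(worst_case_gain(range(6), alloc, payment, value))  # 0
```

Replacing the payment rule with, say, a first-price rule would make this quantity strictly positive, exhibiting the gap that ε must cover.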

To develop a sense of the difficulty associated with manipulating such a mechanism, consider that an agent must be able to compute an untruthful strategy (or lie) with greater utility than truth-telling in order to exploit our approximate incentive guarantee. To do this, one must first determine the true value of a lie (incurring valuation or cognitive costs similar to those of revealing truthfully). However, evaluating


a lie also requires considerable (and accurate) information about the types and strategies of the others; even with decent priors, the costliness of such computations (e.g., in time, cognitive, or computational resources) implies that manipulation is not worthwhile unless the bound ε is quite loose. Thus, if ε is small enough, our formal, approximate incentive compatibility is sufficient to ensure practical, exact incentive compatibility.

A similar argument can be made regarding approximate IR: determining whether one gains from not participating will be very difficult. Thus, a potentially small loss will be worthwhile for an agent if the savings provided by our mechanism in revelation and computational costs (relative to the full revelation alternative) are significant enough.

3.4.2 Incremental Model

In this section, we elaborate on the definition of a mechanism to draw out the structure of the iterative querying process involved in incremental elicitation.

Let Qi be the set of queries the mechanism can pose to agent i, and let Ri(qi) be the responses i can offer to qi ∈ Qi. We interpret each query as asking i about its type; thus each response r ∈ Ri(qi) is equated with a partial type θi(r) ⊂ Ti. For example, the simple query "Is your valuation for outcome x greater than v?" admits two responses (yes and no) corresponding to the obvious subsets of Ti. Standard queries (e.g., value, rank, demand queries) can all be represented in this way. Let Q = ∪i Qi.
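The bound-query example can be sketched concretely for a one-dimensional type (a valuation in a known range); the representation and names below are our own illustration, with interval endpoints glossed over:

```python
# A bound query "is your valuation for outcome x greater than v?" maps each
# response to a partial type, here an interval (lb, ub) of possible valuations.

def bound_query(v, lb=0.0, ub=100.0):
    """Each response r is equated with a partial type theta_i(r), a subset of T_i."""
    return {"yes": (v, ub), "no": (lb, v)}

def revealed_partial_type(response_types):
    """theta_i(h_i): the intersection of the partial types of all responses so far."""
    lbs = [lo for lo, hi in response_types]
    ubs = [hi for lo, hi in response_types]
    return (max(lbs), min(ubs))

# Two bound queries, each halving the remaining uncertainty:
q1 = bound_query(50.0)   # agent answers "yes": valuation above 50
q2 = bound_query(75.0)   # agent answers "no":  valuation below 75
theta = revealed_partial_type([q1["yes"], q2["no"]])
print(theta)  # (50.0, 75.0)
```

A truthful agent's responses always keep its true valuation inside the intersected interval, which is exactly the revealed partial type θi(hi) defined below.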

A nonterminal history is any finite sequence of queries and responses (including the empty sequence), and a terminal history is any nonterminal history followed by an outcome x ∈ X. Let H = Ht ∪ Hn be the set of (terminal or nonterminal) histories, and for any h ∈ H, let hi denote the restriction of h to queries/responses for agent i. For any h, let h≤k denote the initial k-step history. We use hk to denote the kth query-response pair or outcome in sequence h, and a(hk) refers to the "action" (i.e., query asked or outcome chosen) at stage k of this history. An incremental mechanism M = 〈m, (pi)i≤n〉 consists of: (a) a mapping m : Hn → Q ∪ X that, for each nonterminal history, chooses a query for some agent or selects an outcome; and (b) a collection of payment functions pi : Ht → R that associates a payment for agent i with each terminal history. The set of realizable histories induced by m is simply that subset of histories h for which a(hk+1) = m(h≤k).²

An agent strategy σi associates a response σi(hi, qi; ti) ∈ Ri(qi) with every query, conditioned on its local history and its type.³ Strategies σi and types ti, together with m, induce a specific (possibly unbounded) history h(m, σ, t). Since each response is associated with a partial type, for any length-k local history hi we say that θi(hi) = ∩j≤k θi(rj) is the revealed partial type of agent i (that is, i has represented his type to lie within the partial types associated with each response). We say σi is truthful iff ti ∈ θi(σi(hi, qi; ti)) for all ti ∈ Ti, qi ∈ Qi and hi ∈ Hi. A truthful strategy is necessarily history independent if responses correspond to disjoint partial types.

We say an incremental mechanism M is direct iff m and pi depend only on the partial types that are revealed and not on the precise history: that is, m(h) = m(h′) if θi(h) = θi(h′) for each i ≤ n, and similarly for payments pi. We restrict attention to direct mechanisms, and write m(θ) and pi(θ) to emphasize the dependence of the mechanism's decisions only on the partial type vector revealed so far. If m(θ) ∈ X, we write x∗(θ) to denote the outcome chosen. An incremental mechanism is a partial revelation mechanism if there exists a realizable terminal history h and agent i such that θi(hi) admits more than one possible type ti. In other words, it is possible for the mechanism to terminate without full knowledge of the types of all agents.

²M need only specify queries, etc., for realizable histories.
³We assume i knows only its own history; this can be relaxed to admit (partial) revelation of other agents' queries/responses.

Given a mechanism M and response policies σ−i for the agents other than i, the utility of agent i of type ti for using strategy σi is defined as:

ui(σi, σ−i; ti) = vi(x∗(θ(h)); ti) − pi(θ(h))

if the history h induced by 〈σi, σ−i〉 is terminal. Otherwise, we set ui = 0.

Classic direct mechanisms can be viewed as a special case of incremental mechanisms in which each agent is asked directly "What is your type?" and the mechanism then terminates with the appropriate outcome and payment functions.

We continue by describing some general properties of direct, incremental, partial revelation mechanisms.

A mechanism M is δ-allocation certain if, whenever it terminates, it has enough information about agent types to determine a δ-efficient allocation.

Definition 10. A direct mechanism M satisfies δ-allocation certainty iff for all realizable terminal histories h, x∗(θ(h)) is such that

∀t ∈ θ(h), ∀x ∈ X, SW(x∗(θ(h)); t) ≥ SW(x; t) − δ.

M satisfies allocation certainty if this holds for δ = 0.

Definition 11. A mechanism M is δ-efficient iff it is terminating (i.e., all realizable histories are terminal) and δ-allocation certain. M is efficient iff it is terminating and allocation certain.

Allocation certainty (or its approximation) does not imply that the mechanism "knows" the social welfare of the chosen outcome, only that it is (within δ of) optimal.
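As a concrete illustration (not from the proposal), the δ-allocation certainty condition of Definition 10 can be checked by brute force when the remaining type uncertainty and outcome space are finite. All names and valuations below are hypothetical.

```python
from itertools import product

def delta_allocation_certain(chosen, outcomes, partial_type_sets, value, delta):
    """Definition 10 on a finite example: the chosen outcome must be within
    delta of optimal social welfare for EVERY joint type still consistent
    with the revealed partial types."""
    for t in product(*partial_type_sets):
        sw = lambda y: sum(value(i, y, ti) for i, ti in enumerate(t))
        if any(sw(chosen) < sw(x) - delta for x in outcomes):
            return False
    return True

# Hypothetical single-agent example: the agent's revealed partial type
# still admits two possible types, "lo" and "hi".
value = lambda i, y, t: {"lo": {"a": 1.0, "b": 0.0},
                         "hi": {"a": 0.0, "b": 2.0}}[t][y]
print(delta_allocation_certain("a", ["a", "b"], [["lo", "hi"]], value, delta=2.0))  # True
print(delta_allocation_certain("a", ["a", "b"], [["lo", "hi"]], value, delta=1.0))  # False
```

Note that the check quantifies over every consistent joint type, mirroring the universal quantifier in the definition.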

3.5 Remaining Work

Relaxations of the grid-based assumption made in the negative results above should be explored. We suspect that assuming the partition is convex is sufficient.

4 Chapter 4: One-Shot ε-efficient PRMs

4.1 Purpose of Chapter

In this chapter, we study partial revelation within the framework of one-shot (i.e., non-sequential) mechanisms. In the one-shot setting, decisions made under remaining uncertainty cannot, in general, be optimal. Roberts' result [16] implies that, in this case, we cannot generally induce dominant strategies in such mechanisms. We therefore consider approximate dominant-strategy incentive compatibility. (This chapter corresponds to the second part of our paper on one-shot mechanism design with partial revelation [10].)

4.2 Outline of Results

Regret-Minimizing Mechanisms

A partial revelation mechanism must choose an allocation m(θ) for each reported partial type θ ∈ Θ, as well as payments for each agent.

• We propose the use of minimax regret to choose the allocation associated with each partial type vector. The minimax-optimal decision for θ is denoted x∗(θ), and a PRM with allocation function m = x∗ is called a regret-based partial revelation mechanism.

By construction, if this worst-case loss is bounded by ε for each declaration, then the mechanism is ε-efficient whenever all agents declare their partial types truthfully.
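To make the regret computation concrete, here is a minimal Python sketch (not from the proposal) that computes max regret and the minimax-optimal outcome x∗(θ) by brute force over small finite sets of outcomes and consistent types; the valuations are hypothetical.

```python
from itertools import product

def max_regret(x, outcomes, type_sets, value):
    """MR(x, theta): worst-case social welfare loss of choosing x,
    over all joint types consistent with the revealed partial types."""
    worst = 0.0
    for t in product(*type_sets):
        sw = lambda y: sum(value(i, y, ti) for i, ti in enumerate(t))
        worst = max(worst, max(sw(y) for y in outcomes) - sw(x))
    return worst

def minimax_regret_outcome(outcomes, type_sets, value):
    """x*(theta): the outcome minimizing max regret, with its MMR level."""
    return min(((x, max_regret(x, outcomes, type_sets, value))
                for x in outcomes), key=lambda p: p[1])

# Hypothetical two-agent, two-outcome example: each agent's partial type
# still admits a "lo" and a "hi" valuation profile.
vals = {0: {"lo": {"a": 1.0, "b": 0.0}, "hi": {"a": 2.0, "b": 0.5}},
        1: {"lo": {"a": 0.0, "b": 1.0}, "hi": {"a": 0.2, "b": 1.5}}}
value = lambda i, y, ti: vals[i][ti][y]
x_star, mmr = minimax_regret_outcome(["a", "b"], [["lo", "hi"], ["lo", "hi"]], value)
print(x_star, mmr)
```

Here the MMR level plays the role of ε: if it is below the target bound for every partial type vector, the mechanism is ε-efficient under truthful declarations.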


• We then introduce payment schemes that generalize full revelation Groves and Clarke payments to the case of partial type information and show that our mechanisms are also ε-dominant incentive compatible for both schemes (i.e., the incentive to lie is bounded) and ε-individually rational for partial Clarke payments. Since determining a useful lie is generally quite difficult and costly, such approximate incentive compatibility will be sufficient to induce exact truth-telling if ε is small enough. A similar argument applies to individual rationality. See appendix 8.1.1 for more details.

• Finally, we show that various further local optimizations can be gained by choosing outcomes and payments in specific ways to improve participation constraints and revenue over and above the Clarke-Groves scheme. See appendix 8.1.2.

Partitioning Algorithm

While our regret-minimizing partial revelation mechanisms have desirable properties, the key mechanism design question we must face is how to choose the set of partial types that must be revealed (i.e., the partitioning of type space). This partitioning can be optimized based on the particular setting at hand and any prior information one might have about agents' types. Although the mechanism is fixed and it is the partition that is being optimized, this can be viewed as automated partial revelation mechanism design.

We propose an algorithm to optimize the choice of partial types so as to allow one to trade off the amount of elicitation and communication against the degree of efficiency and incentive compatibility induced by the mechanism. The key results are all driven by the level of regret induced within any region of (joint) type space. A detailed description of the algorithm is available in the appendix, section 8.2.

Empirical Results

We illustrate our concepts using the setting of a negotiation between a buyer and a seller over a finite set of multi-attribute products. Computational experiments confirm the efficacy of our approach to one-shot, partial revelation mechanism design by comparison with a uniform partitioning of type space. See section 8.3 for a precise description of the comparisons.

Conclusion

Taken together, regret-based mechanisms and the optimization algorithm provide a framework for the automated design of partial revelation mechanisms in which one can explicitly address the tradeoff between the "quality" of the mechanism and the costs of revelation.

4.3 Remaining Work

• The local optimization of secondary objectives (such as payments) needs further investigation and empirical evaluation.

• There are potentially other interesting ways of evaluating our approach empirically, either by applying it to different domains or by using different measures of "quality".

• Future work, described in Section 6, could provide intuitions about potential new theoretical results on one-shot partial revelation mechanisms.

5 Chapter 5: Incremental ε-efficient PRMs

5.1 Purpose of Chapter

(This chapter corresponds to our paper on Regret-based Incremental Partial Revelation Mechanisms [9].)


Another way of circumventing the negative results of Chapter 3 is to make the elicitation process sequential. In this case, it is possible to elicit enough information to make optimal decisions and thus obtain exact incentive properties. We argue, however, that even in this context, using approximate incentives can be desirable in order to allow for a trade-off between decision quality and revelation costs. In this chapter, we propose to apply regret-based incremental elicitation to partial revelation sequential mechanisms. While we draw on single-agent regret-based elicitation frameworks, our key contribution is the investigation of incentive properties when we adopt these models for the design of incremental, partial revelation mechanisms.

5.2 Outline of Results

5.2.1 Regret-Minimizing Incremental Mechanisms

Applying regret-based elicitation to the design of incremental mechanisms works as follows: after each query, the mechanism (assuming truthful revelation) knows that agent types lie within some θ and computes MMR(θ) and the minimax optimal x∗. If MMR(θ) ≤ δ, the mechanism terminates with x∗; otherwise, an elicitation strategy is used to determine the next query. The termination condition ensures such a mechanism satisfies δ-allocation certainty.

The Current Solution elicitation strategy (CSS, defined in the appendix, section 9.1.1) satisfies (exact or approximate) allocation certainty and therefore provides an (exact or approximate) efficient allocation function. To fully define a mechanism, we require a payment scheme that induces reasonable incentive properties. To this end, we propose a partial revelation analog of VCG payments.
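The elicitation loop just described can be sketched as follows; this is an illustrative skeleton (not the proposal's implementation), with all function names hypothetical.

```python
def incremental_mechanism(theta, delta, mmr_solver, next_query, ask):
    """Regret-based incremental elicitation loop (a sketch).  theta is the
    current vector of partial types; mmr_solver returns (x_star, MMR(theta));
    next_query picks the next query (e.g., via a strategy such as CSS); ask
    poses it and returns the refined partial types."""
    x_star, mmr = mmr_solver(theta)
    while mmr > delta:            # terminating here gives delta-allocation certainty
        q = next_query(theta, x_star)
        theta = ask(q, theta)     # a truthful response shrinks some theta_i
        x_star, mmr = mmr_solver(theta)
    return x_star, theta

# Toy single-parameter illustration: a partial type is an interval, MMR is
# proxied by its width, and each "query" halves the interval.
solver = lambda th: ("x", th[0][1] - th[0][0])
halve = lambda q, th: [(th[0][0], (th[0][0] + th[0][1]) / 2)]
x, th = incremental_mechanism([(0.0, 1.0)], 0.3, solver, lambda th, x: "split?", halve)
print(th)  # interval narrowed until its width is at most 0.3
```

The key structural point is that termination is driven entirely by the MMR level dropping below δ, not by any fixed query budget.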

Let M have a δ-efficient allocation function m that terminates with δ-allocation certainty and a partial type vector θ. An agent's payment is simply the maximum VCG payment over all the possible types of the other agents. More precisely, let M = 〈m, (p>i)i≤n〉 where:

• m is δ-efficient

• p>i(θ) = max_{t−i∈θ−i} pv_i(x∗(θ), t−i)

where pv_i is the classic VCG payment scheme. We refer to this payment scheme under partial types as the partial VCG payment.

Theorem 3. Let M have a δ-efficient allocation function and use partial VCG payments. Then M is a δ-efficient, δ-ex post individually rational, (δ + ε(x∗(θ)))-ex post incentive compatible mechanism, where ε(x) = max_i εi(x), and:

εi(x) = max_{t′−i∈θ−i} pv_i(x, t′−i) − min_{t−i∈θ−i} pv_i(x, t−i)    (6)

Such a partial revelation mechanism will obviously determine an allocation whose social welfare is within δ of optimal if all agents reveal their partial types truthfully. Partial VCG payments induce γ-ex post incentive compatibility (where γ = δ + ε(x∗(θ))). This means that the gain an agent can attain by revealing its partial type incorrectly is bounded by γ when all others reveal truthfully, even if the agent knows the types of the others. Finally, the mechanism is approximately individually rational.
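For a finite example, the partial VCG payment and the manipulability term εi(x) of equation (6) can both be computed by enumerating the consistent types of the other agents. The following sketch (with hypothetical valuations, not from the proposal) does exactly that.

```python
from itertools import product

# Illustrative valuations: vals[agent][type][outcome] (hypothetical numbers).
vals = {0: {"lo": {"a": 1.0, "b": 0.0}, "hi": {"a": 2.0, "b": 0.5}},
        1: {"lo": {"a": 0.0, "b": 1.0}, "hi": {"a": 0.2, "b": 1.5}}}
value = lambda j, y, tj: vals[j][tj][y]
agents, outcomes = [0, 1], ["a", "b"]
type_sets = {0: ["lo", "hi"], 1: ["lo", "hi"]}

def vcg_payment(i, x, t_others):
    """Classic VCG (Clarke) payment for agent i when others' types are known:
    the others' best welfare without i, minus their welfare under x."""
    others = [j for j in agents if j != i]
    sw_minus_i = lambda y: sum(value(j, y, t_others[j]) for j in others)
    return max(sw_minus_i(y) for y in outcomes) - sw_minus_i(x)

def partial_vcg(i, x):
    """p_i(theta): worst-case VCG payment over all consistent t_-i, together
    with the manipulability term eps_i(x) of eq. (6): max minus min payment."""
    others = [j for j in agents if j != i]
    pays = [vcg_payment(i, x, dict(zip(others, combo)))
            for combo in product(*(type_sets[j] for j in others))]
    return max(pays), max(pays) - min(pays)

p, eps = partial_vcg(0, "a")
print(p, eps)  # approximately 1.3 and 0.3
```

Note that εi(x) vanishes as the revealed partial types of the other agents shrink, recovering exact VCG incentives in the full revelation limit.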

Two-Phase Approach

The bounds on the incentive compatibility approximation are, however, not known a priori, due to uncertainty over payments. We thus improve incentives by adding a second "payment elicitation" phase, itself a common mechanism design approach, that exploits the notion of regret and allows a priori bounds on manipulability to be provided. Details of payment elicitation are in section 9.1.2.


Single-Phase Approach

Finally, we argue that if one is going to allow for both allocation and payment uncertainty (i.e., some bounded loss in efficiency and some bounded manipulability), one should not break elicitation into two phases. Instead, we define a "global regret" over both measures simultaneously and describe a regret-based elicitation process that quickly reduces both loss in efficiency and manipulability. A precise description of the single-phase approach is in section 9.2.

Empirical Results

We illustrate our concepts using the setting of a buyer searching for a multi-attribute product from one of several sellers. The mechanism acts as a facilitator, querying both the buyer for her preferences and the sellers for their costs. Assuming the preference and cost models each have a compact, factored structure (though they need not share the same structure), we show how regret computation and elicitation can be effected relatively practically. See appendix 9.3 for the detailed empirical results.

Conclusion

As in the one-shot case of the previous chapter, the key feature of our sequential elicitation framework is that, with both the single- and two-phase approaches, a priori bounds on manipulability can be provided, allowing a tradeoff between the quality and "incentive properties" of the mechanism and the amount of elicitation required.

5.3 Remaining Work

• There are potentially other interesting ways of evaluating our approach empirically, either by applying it to different domains or by using different measures of "quality".

6 Chapter 6: Partial Revelation Automated Mechanism Design

6.1 Purpose of Chapter

The classes of mechanisms described in Chapters 4 and 5 both attempt to be as efficient as possible. The various proofs of the incentive properties of these mechanisms relied on the assumption that the designer's objective was to minimize worst-case social welfare loss. Thus these mechanisms cannot directly be applied to more general objective functions, such as maximizing expected revenue or objectives that include the various costs incurred when running the mechanism.

Mechanism design can be seen as "optimizing" the rules of a game of incomplete information so as to maximize the designer's objective. Automated mechanism design (AMD, [17]) takes this approach quite literally by formulating the design problem as a linear optimization of the designer's expected objective, for any given probabilistic prior over types.

However, AMD currently only applies to settings with finite type spaces, which is rarely realistic. Automatically designing partial revelation mechanisms would allow for more realistic infinite type spaces while keeping the optimization tractable (in addition to the usual advantages of partial revelation mechanisms). Furthermore, considering general objective functions allows one to take into account characteristics of the mechanism such as communication and valuation costs, manipulability level, etc., when designing the mechanisms.

In this chapter, we consider various frameworks for partial revelation AMD with general objective functions.


6.2 Outline of Expected Results

High-level Optimization Framework

With general objective functions, manipulability is not directly linked to the mechanism's objective value. There are thus three separate criteria to optimize: revelation costs, manipulability, and the designer's objective. Since exact optimization is not computationally possible, we propose to iteratively optimize each criterion sequentially. First, given a fixed partition and manipulability level, we find a mechanism that optimizes the designer's objective. Second, given the same fixed partition, we find the mechanism with the lowest manipulability level among those that achieve the objective level reached in the first step. Finally, if the current results are not satisfactory, we attempt to refine the partition of the agents' type spaces in such a way as to allow for further optimization of (hopefully both) the objective and the manipulability level. We then iterate back to the first step.

Bayesian vs. Regret-based Partial Revelation AMD

Given a full revelation objective function, there are many ways to define a partial revelation analog. We consider two approaches: maximizing the expected full revelation objective for some probabilistic prior over agents' types, and minimizing the designer's regret, with respect to his objective function, of picking one mechanism versus another. The regret approach, while computationally slightly more complex, has the double advantage of providing additional guidance in the partition optimization phase and of serving as a more explicit stopping criterion.

Cost Models

It would be interesting to formally model the various costs involved in manipulating a mechanism and incorporate these models into the optimization problem. This would allow for the design of partial revelation mechanisms with formal, exact incentive properties. There are several existing models of bounded rationality in the literature that could potentially be used to model the cost incurred by an agent when computing or refining its valuation function (see [15]). Communication costs can also be modeled and incorporated in the optimization.

Empirical Results

Empirical evaluation is needed to demonstrate how partial revelation AMD can effectively trade off revelation costs against general objective functions while maintaining acceptable incentives. The regret-based and Bayesian frameworks should be empirically compared to each other, and possibly to the class of mechanisms described in Chapter 4.

6.3 Remaining Work

All of this chapter is either current or future work.

7 Chapter 7: Minimax Regret Equilibrium

7.1 Purpose of Chapter

As mentioned, uncertainty over utility functions often cannot easily be quantified, and obtaining accurate probabilistic priors over agent types can thus be costly, if not impossible. Furthermore, in our framework for partial revelation mechanism design, the various elicitation protocols involved provide only strict (i.e., non-probabilistic) uncertainty over agents' types. Thus, the standard Bayes-Nash equilibrium concept is not suitable for such settings. Yet stronger concepts such as dominant strategies or ex post equilibrium do not necessarily exist in all games.

In this chapter, we consider games of incomplete information in which type uncertainty is strict. We propose the use of minimax regret as a decision criterion in such games and define a new equilibrium concept for regret-minimizing agents. Such an equilibrium is especially suitable in a partial revelation setting, but can be applied to any game of incomplete information. (This chapter corresponds to our paper on Regret-Minimizing Equilibria and Mechanisms for Games with Strict Type Uncertainty [8] and its extensions.)

7.2 Outline of Results

Definition

In this setting, an agent's uncertainty is over the types of the others. Thus the regret of an action ai of agent i, given a specific type vector t−i for the others, is the difference between the utility of i's optimal action given t−i and that of ai. The worst-case regret of ai is the maximum such difference over all possible types t−i of the others. A regret-minimizing agent will choose the action with the least worst-case regret.

We define a minimax-regret equilibrium as a profile of agent strategies in which each agent's strategy minimizes his max regret given the others' strategies.
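The best-response computation underlying this concept can be illustrated with a tiny example. The sketch below (not from the paper) fixes the opponent's strategy and folds it into his type, so agent i's uncertainty is just a set of opponent types; the action names and payoffs are hypothetical.

```python
def minimax_regret_action(actions, opponent_types, utility):
    """Pick agent i's action minimizing worst-case regret over the
    opponent's possible types (pure strategies only; a sketch)."""
    def max_regret(a):
        # regret of a against type t = best achievable utility given t, minus u(a, t)
        return max(max(utility(b, t) for b in actions) - utility(a, t)
                   for t in opponent_types)
    return min(actions, key=max_regret)

# Hypothetical payoffs: agent i's utility depends on his own action and
# the opponent's type, which may be T1 or T2.
payoff = {("safe", "T1"): 2, ("safe", "T2"): 2,
          ("risky", "T1"): 5, ("risky", "T2"): 0}
u = lambda a, t: payoff[(a, t)]
print(minimax_regret_action(["safe", "risky"], ["T1", "T2"], u))  # -> risky
```

Here "safe" has max regret 3 (against T1) while "risky" has max regret 2 (against T2), so the regret minimizer picks "risky"; a maximin agent would have picked "safe", which illustrates how the two criteria differ.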

Existence

We prove that such equilibria exist in mixed strategies in very general games:

Theorem 4. Any game of incomplete (private) information with a finite number of players, a compact set of actions, arbitrary type sets, and utility functions that are bilinear and continuous has at least one minimax-regret equilibrium.

Application

The automated mechanism design (AMD) frameworks (Bayesian or regret-based; full or partial revelation) can be applied with any given equilibrium concept. We therefore consider the automated design of mechanisms that are incentive compatible in minimax-regret equilibrium. We present an algorithm for automatically designing such mechanisms using a sequence of linear optimization problems, for the full revelation (Bayesian or regret-based) cases. The partial revelation cases, however, are much harder (if not impossible) to formulate as linear optimizations.

7.3 Remaining Work

The empirical evaluation of the application described above remains to be done. (Empirical results presented in [8] were ambiguous.)

References

[1] Aaron Archer, Christos Papadimitriou, Kunal Talwar, and Eva Tardos. An approximate truthful mechanism for combinatorial auctions with single parameter agents. In SODA-03, 2003.

[2] Craig Boutilier, Relu Patrascu, Pascal Poupart, and Dale Schuurmans. Constraint-based optimization and utility elicitation using the minimax decision criterion. Artificial Intelligence, 2006.

[3] Craig Boutilier, Tuomas Sandholm, and Rob Shields. Eliciting bid taker non-price preferences in (combinatorial) auctions. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, pages 204–211, San Jose, CA, 2004.

[4] Peter C. Fishburn. Interdependence and additivity in multivariate, unidimensional expected utility theory. International Economic Review, 8:335–342, 1967.

[5] Jerry Green and Jean-Jacques Laffont. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica, 45:427–438, 1977.

[6] Benoit Hudson and Tuomas Sandholm. Generalizing preference elicitation in combinatorial auctions. In International Conference on Autonomous Agents and Multi-Agent Systems, 2003.

[7] Nathanael Hyafil. Computational mechanism design – depth report. Technical report, University of Toronto, 2005.

[8] Nathanael Hyafil and Craig Boutilier. Regret minimizing equilibria and mechanisms for games with strict type uncertainty. In Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence, Banff, Alberta, Canada, 2004.

[9] Nathanael Hyafil and Craig Boutilier. Regret-based incremental partial revelation mechanisms. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, 2006.

[10] Nathanael Hyafil and Craig Boutilier. One-shot mechanism design with partial revelation. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, 2007.

[11] Sebastien Lahaie and David Parkes. Applying learning algorithms to preference elicitation. In ACM Conference on Electronic Commerce, pages 180–188, New York, 2004.

[12] Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.

[13] Noam Nisan and Amir Ronen. Computationally feasible VCG mechanisms. In ACM Conference on Electronic Commerce (EC-2000), pages 242–252, Minneapolis, MN, 2000.

[14] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, Cambridge, 1994.

[15] David C. Parkes. Bounded rationality. Technical report, University of Pennsylvania, 1997.

[16] Kevin Roberts. The characterization of implementable choice rules. In Jean-Jacques Laffont, editor, Aggregation and Revelation of Preferences, pages 321–349. North-Holland, Amsterdam, 1979.

[17] Tuomas Sandholm. Automated mechanism design: A new application area for search algorithms. In Proceedings of the International Conference on Principles and Practice of Constraint Programming (CP-03), Kinsale, Ireland, 2003.

[18] Saurabh Sanghvi and David C. Parkes. Hard-to-manipulate combinatorial auctions. Technical report, Harvard University, 2004.


8 Appendix 1

8.1 Regret-based PRMs

8.1.1 Partial Revelation Payment Schemes

Consider the following generalization of Groves payments. Given the joint report θ = (θi, θ−i) of all agents, and the corresponding choice x∗, agent i's payment is:

pi(θ) = pi(θ−i, x∗) = hi(θ−i) − SW−i(x∗; fi(θ−i))

where hi : Θ−i → R is an arbitrary function and fi : Θ−i → T−i is any function that, given partial type vector θ−i, selects a type vector t−i from that set (i.e., fi(θ−i) ∈ θ−i).

Together with regret-based allocation, partial Groves payments give:

Theorem 5. Let m be a regret-based partial revelation mechanism with partial type space Θ and partial Groves payment functions pi. If MR(x∗(θ), θ) ≤ ε for each θ ∈ Θ, then m is ε-efficient and ε-dominant IC.

We can specialize partial Groves payments to partial Clarke payments:

pi(θ−i, x∗) = SW−i(x∗−i(θ−i); fi(θ−i)) − SW−i(x∗; fi(θ−i))

where x∗−i : Θ−i → X is an arbitrary function that chooses an allocation based only on the reports of agents other than i. This restriction allows the following IR guarantee:

Theorem 6. Let m be a regret-based partial revelation mechanism with partial type space Θ and partial Clarke payments pi. If MR(x∗(θ), θ) ≤ ε for each θ ∈ Θ, then m is ε-efficient, ε-dominant IC, and ex post ε-IR.

In other words, no agent has incentive greater than ε not to participate in the mechanism.

8.1.2 Local Payment Optimization

Recall that under full revelation, fi(θ−i) is the complete type vector t−i reported by the other agents, and hi must take that particular t−i as an argument. Our partial Groves payment scheme, however, can select an arbitrary type for each agent consistent with their declared partial types and apply standard Groves payments to these. The selected types can differ for each payment function pi, and the arbitrary hi functions also depend only on the partial types revealed. Partial Groves payments can thus require significantly less revelation.

Even with the "Clarke-style" restriction above, our payment scheme is quite general: x∗−i and fi are arbitrary functions. The choice of these will not affect the worst-case properties above, but it can be used to: (a) reduce the likelihood (if any) of an individual rationality violation; and/or (b) maximize the revenue of the mechanism. If reducing or removing a rationality violation implies revenue loss, then a trade-off can be made between the two criteria. An attractive feature of our PRMs is the considerable scope for optimization of the payment scheme due to the nature of partial type revelation.

8.2 Construction of Partial Types

We assume in what follows that the type space Ti is given by upper and lower bounds over the parameters of agent i's utility model and focus on partial types specified similarly.


[Figure: a mechanism tree with root node 1 = (θ1, . . . , θi, . . . , θn); its children node 2 = (θ′1, . . . , θi, . . . , θn) and node 3 = (θ′′1, . . . , θi, . . . , θn); node 2's children node 4 = (θ′1, . . . , θ′i, . . . , θn) and node 5 = (θ′1, . . . , θ′′i, . . . , θn); and node 3's children node 6 = (θ′′1, . . . , θ′i, . . . , θn) and node 7 = (θ′′1, . . . , θ′′i, . . . , θn).]

Figure 1: Example of a mechanism tree.

8.2.1 Partial Type Optimization Algorithm

We describe an offline, iterative, myopic approach to the optimization of agent type space partitions. It is myopic in the following two senses: (a) at each step, it focuses on reducing the minimax regret of the joint partial type with greatest regret by refining (or splitting) it, without considering the impact on other partial types; and (b) it only considers the immediate effects of this refinement, with no look-ahead to future splits.

To simplify the presentation, we first describe a naive, computationally intensive method, formulated in terms of decision tree construction, and then show how it can be modified to be made much more tractable. The algorithm uses a heuristic function which, given a partial type vector, selects an agent whose partial type will be split. It is important to realize that these splits are not "queries" to the agent; the mechanism is not sequential. Rather, splitting a partial type further will increase the number of partial types from which an agent must choose when the mechanism is actually executed. Once all splits to all agents are determined, the mechanism will ask agents to select from the partial types induced by this refinement process. In other words, we construct offline the partial types used by the one-shot mechanism. We discuss the heuristic function further below.

Figure 1 illustrates the creation of partial types for a PRM in terms of decision tree construction. At the outset, the only information available to the mechanism is the set of possible types for each agent given by our prior, defining the initial partial type vector (θ1, . . . , θn) with θi = Ti. This labels the initial (root) node of our tree (node 1). We call the heuristic function on this vector, which selects an agent, say agent 1, and a split of that agent's partial type θ1 into two more refined partial types θ′1 and θ′′1. The reasons for choosing a particular split are elaborated below, but intuitively, such a split should have a positive impact with respect to max regret reduction of the mechanism. This creates two child nodes corresponding to partial type vectors (θ′1, . . . , θi, . . . , θn) and (θ′′1, . . . , θi, . . . , θn) (see nodes 2 and 3 in Figure 1). These two new leaves in the tree correspond to the partial type vectors to be used by the mechanism should the splitting process terminate at this point. Thus, we update the partial type space Θ1 for agent 1 by removing θ1 and adding θ′1 and θ′′1. We then compute the minimax regret level, optimal allocation, and witness (in a single optimization) for these two new leaf nodes given their partial type vectors.

With multiple leaves, the heuristic function must first select a leaf node for splitting before selecting a split. It does this by selecting the partial type vector (leaf node) with greatest minimax regret. The algorithm iterates in this fashion, repeatedly selecting the leaf node with greatest minimax regret and using the heuristic to decide which agent's partial type (within that node) to split (and how to split it), until some termination criterion is met (e.g., the worst-case max regret is reduced to some threshold, or some maximum number of partial types, per agent or overall, is reached).

Unfortunately, unlike standard decision tree construction, a split at one leaf has implications for all other leaves as well. For example, after the initial split above, a split may be recommended at node 2 in the tree, corresponding to (θ′1, . . . , θi, . . . , θn). Suppose a split of agent i's partial type θi into θ′i and θ′′i is suggested for some i ≠ 1. Since θi is included in the partial type vectors of both nodes 2 and 3, this split affects both child nodes, since agent i will have to distinguish (at the very least) θ′i from θ′′i. We therefore must replace nodes 2 and 3 with four new leaf nodes (nodes 4–7), corresponding to the combinations of θ′1, θ′′1 and θ′i, θ′′i.

This naive approach has two obvious problems. First, there is an exponential blow-up in the number of leaves of the "mechanism tree", since any reasonable heuristic (including the one described below) will often (roughly) alternate splits between the agents whose type uncertainty is still relevant. The algorithm is therefore computationally demanding. Second, consider the example above. The split of θi at the second iteration of the algorithm was recommended by the heuristic based on the partial type vector at node 2 (which includes θ′1), because of its ability to reduce the minimax regret level of that specific partial type vector. However, this split may have little or no effect on minimax regret when applied to node 3 (in which agent 1's partial type, θ′′1, is different). This split may indeed be "useless" when considered at node 3. These problems can be avoided by modifying the algorithm as follows.

When a split is made at some node k, even though the partition has been updated in a way that might affect a partial type at another node k′, we can choose to ignore the effect on k′. Thus node k′ corresponds to a partial type vector that is no longer in our partition but includes a collection of partial type vectors that are. We call k′ a joint node, and for each such node we record the splits that have been ignored. In our example, the split of node 2 on θi may be ignored at node 3, leaving node 3 as a joint node. In the tree, node 2 would generate two descendants (nodes 4 and 5), while node 3 would remain a leaf node (nodes 6 and 7 would not be generated). While the split of i's type into θ′i and θ′′i will eventually need to be considered at node 3 (or its descendants), we defer this decision (as discussed below), and can consider making additional splits of node 3 first should this split of θi be of little value. This saves having to consider the splits of multiple descendants of node 3 independently.

Note that multiple ignored splits may be "nested". For example, perhaps the first split cuts the partial type into one where some utility parameter p1 is greater than .5 (call this θ_i^{>.5}) and one where it is lower (θ_i^{<.5}); the second splits θ_i^{>.5} on p1 at .7; the third splits θ_i^{<.5} along parameter p2 at .4; and so on. When such a joint node is the leaf with greatest regret level (i.e., the one to be considered for expansion), we first check whether there is a useful ignored split before considering the split recommended by the heuristic. If so, we "un-ignore" it, thus creating another leaf without increasing the complexity of the partition. The precise way in which we select splits is described in Tables 1 and 2. A split is called "good" if both resulting nodes have lower regret than the node that was split, and "OK" if only one of them does. With this improved algorithm, new mechanism nodes are only added if they are helpful. Since the naive approach results in useless splits, computational requirements can be greatly reduced by adopting this more sophisticated approach.

8.2.2 Heuristic Function

The role of the heuristic function is to split a partial type vector in two, creating one additional node in the mechanism. This must be done by splitting the partial type of one of the agents. However, in the case of a joint node, the partial type of an agent passed to the heuristic function may correspond to several "actual" partial types in the partition. For example, the input partial type may ignore the fact that it has been split along parameter µ1 at the value .5. If the heuristic recommends splitting that partial type along parameter µ2, this corresponds to splitting two actual partial types (those with values greater and less than .5 for µ1). But the partial type vector is only split in two, each successor node inheriting the ignored splits of its parent.

In a flat utility model, each utility parameter is simply the valuation for an allocation x. In this case, our heuristic can be described as follows. At each node, given its partial type vector, we have the corresponding minimax regret solution x∗ and witness x̄. Intuitively, regret can be reduced by raising the lower bound of x∗ or lowering the upper bound on x̄. However, since the optimization is offline, the split "Is utility for x greater than .5?" must account for both possibilities, yes and no. When splitting, say, x∗, the partial type corresponding to the no answer (i.e., lowering the upper bound) is unlikely to have lower regret unless x∗ also turns out to be the witness of the second best regret minimizing allocation. In that case, lowering


Find leaf node N with highest regret
Call heuristic on N's partial type vector; output: hSplit
If there exists an ignored split iSplit on the same parameter as hSplit:
    splitNode(N, iSplit)
Else search for a "good" ignored split gSplit
    If there exists one: splitNode(N, gSplit)
    Else test hSplit
        If hSplit is "good": splitNode(N, hSplit)
        Else if hSplit is "OK": search for an "OK" ignored split okSplit
            If there exists one: splitNode(N, okSplit)
            Else: splitNode(N, hSplit)

Table 1: Split selection algorithm

splitNode(N, split):
    θ = partial type vector of N; i = agent involved in split
    Split θi into θ′i and θ′′i according to split
    Update i's partition
    Compute new MMR for both (θ′i, θ−i) and (θ′′i, θ−i)
    Create corresponding new nodes N′ and N′′
    Separate N's ignored split list into those for N′ and N′′
    Replace leaf N in tree with leaf nodes N′ and N′′
    For all leaf nodes with θi in their partial type vector:
        Add split to their list of ignored splits

Table 2: splitNode function
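Tables 1 and 2 can be sketched in code. The following is an illustrative Python rendering of Table 1's selection rule only, under stated assumptions: the `Split` and `Node` structures, their field names, and the `quality` labels are hypothetical stand-ins for the real data structures, and the actual regret computations of `splitNode` are not modeled.

```python
from dataclasses import dataclass, field

@dataclass
class Split:
    agent: int
    param: str
    value: float
    quality: str = "bad"   # "good", "OK", or "bad", in the sense defined above

@dataclass
class Node:
    regret: float
    ignored: list = field(default_factory=list)   # ignored splits inherited from ancestors

def select_split(node: Node, h_split: Split) -> Split:
    """Table 1's selection rule, given the heuristic's recommendation h_split."""
    # An ignored split on the same parameter as the heuristic's choice wins outright.
    for s in node.ignored:
        if s.param == h_split.param:
            return s
    # Otherwise, prefer any "good" ignored split.
    for s in node.ignored:
        if s.quality == "good":
            return s
    # Otherwise, test the heuristic split itself.
    if h_split.quality == "good":
        return h_split
    if h_split.quality == "OK":
        # An "OK" heuristic split yields to an "OK" ignored split, if any.
        for s in node.ignored:
            if s.quality == "OK":
                return s
    return h_split
```

Note that the fallback order mirrors the table exactly: ignored splits are consulted before the heuristic's recommendation whenever they are at least as useful, which is what keeps the partition's complexity from growing unnecessarily.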

its upper bound will help reduce the second lowest regret and raising the lower bound will help with the lowest. We therefore also compute the second lowest max regret solution x∗_2 and its witness x̄_2. If both x̄ = x∗_2 and x∗ = x̄_2 are true, or neither is true, we split the one with the largest gap (the difference between its upper and lower bounds). If only one is true, we split that parameter, unless its gap is below some threshold and the other gap is not.
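The tie-breaking rule just described can be summarized as a small decision function. This is only a sketch of the rule's case analysis: `gap_a`/`flag_a` refer to the parameter of x∗ (and whether x̄ = x∗_2 holds), `gap_b`/`flag_b` to the parameter of x̄ (and whether x∗ = x̄_2 holds); these names, and the return labels, are hypothetical.

```python
def choose_split(gap_a, gap_b, flag_a, flag_b, threshold):
    """If both flags hold or neither does, split the parameter with the
    largest gap; if exactly one holds, split that one, unless its gap is
    below `threshold` while the other parameter's gap is not."""
    if flag_a == flag_b:                 # both true, or neither true
        return "a" if gap_a >= gap_b else "b"
    chosen, c_gap, o_gap = ("a", gap_a, gap_b) if flag_a else ("b", gap_b, gap_a)
    if c_gap < threshold <= o_gap:       # preferred gap too small to matter
        return "b" if chosen == "a" else "a"
    return chosen
```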

This seemingly trivial strategy has interesting properties when utility models are factored using GAI.

8.3 Empirical Results

We report on experiments in a multi-attribute negotiation domain. A buyer and a seller are bargaining over a multi-attribute good to trade. The set of 16 possible goods is specified by four boolean variables, X1, X2, X3, X4, denoting the presence or absence of four specific item features. The buyer's valuation and the seller's cost are represented using GAI structure. Each agent's utility function is decomposed into two factors, each factor with two different variables: vb(x) = vb^1(x1, x2) + vb^2(x3, x4) and vs(x) = vs^1(x1, x3) + vs^2(x2, x4). Each subutility function is specified using four parameters, indicating the local value of the four possible combinations of features. Thus eight utility parameters fully define each agent's utility function.
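The two-factor GAI decomposition above can be written out directly. The factor tables below are illustrative stand-ins for the elicited parameters (the real values are drawn from the priors described next); only the structure matches the text.

```python
def gai_value(factors, x):
    """factors: list of ((i, j), table) pairs, where table maps a pair of
    boolean feature values to a local value. x: tuple (x1, x2, x3, x4) of 0/1."""
    return sum(table[(x[i], x[j])] for (i, j), table in factors)

# Buyer: v_b(x) = v1_b(x1, x2) + v2_b(x3, x4); four parameters per factor,
# so eight parameters in all. Values below are made up for illustration.
v1_b = {(0, 0): 100, (0, 1): 120, (1, 0): 110, (1, 1): 130}
v2_b = {(0, 0): 100, (0, 1): 105, (1, 0): 130, (1, 1): 135}
buyer_factors = [((0, 1), v1_b), ((2, 3), v2_b)]

print(gai_value(buyer_factors, (1, 0, 1, 1)))  # v1_b(1,0) + v2_b(1,1) = 110 + 135 = 245
```

The seller's utility has the same form with factor scopes (x1, x3) and (x2, x4), so the surplus of a good is just the difference of two such sums.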

Initial prior bounds on utility parameters (i.e., the initial type space) are drawn uniformly between 0 and 100 (seller cost) or 100 and 200 (buyer value), ensuring a positive transaction exists. Though 16 goods may seem small, it is much larger than problems solved by existing automated approaches to (one-shot) mechanism design (all of which are restricted to a small, finite type space). The social welfare maximizing


allocation is the good that maximizes surplus (the difference between buyer value and seller cost).

We assess the performance of our mechanisms by showing how the worst case minimax regret level (over all possible agent types) decreases with the number of partial types created by our approach (expressed in number of bits). Since PRMs are designed to work for agents with any true types, there is no single true type or optimal allocation to compare to. We could simulate specific agents, but our worst-case regret bounds any such "true loss" results, and these bounds are tight in the sense that at least one set of agent type profiles will incur this worst case regret. Of course, the regret of any specific collection of agents (or type profile) will generally be lower, so we also show expected regret in our results, assuming a uniform distribution over type parameters. We compare our myopic approach to type construction (i.e., where regret reduction is used to determine splits of partial types) to a simple approach for partial type construction that splits each partial type evenly across all parameters.4

Figure 2 shows the worst case minimax regret level of our regret-based PRMs (averaged over thirty runs using different priors) when partial types are constructed using our myopic algorithm and when uniform splitting is used. We also show the expected minimax regret level (assuming a uniform distribution over types). Results are reported as a function of the number of bits of communication necessary for an agent to report some partial type in the proposed partition. Bounds on manipulability, efficiency loss, and rationality violation are all dictated by this worst-case regret (depending on whether partial Groves or Clarke payments are used).5

It is clear that using regret reduction to decide how to "refine" the partial type space offers a significant improvement over a uniform partitioning. Our regret-based approach provides good anytime behavior, while uniform partitioning reduces ε as a step function (averaging smooths the results in the graph). Naturally, average manipulability is lower than worst-case manipulability, further justifying the use of approximate incentives. We note that the initial regret level (assuming only one partial type, i.e., Ti, for each agent i) corresponds to an error between 50% and 146% of the optimal social welfare, depending on the actual agent types, and is reduced, using 11 bits of communication (to communicate one's partial type), to 20–56% error by our regret-based approach versus only 30–86% by the uniform approach. With only 11 bits, the regret-based approach reduces efficiency loss and manipulability by about 60% while a uniform partition reduces them by about 38%. To reach an average manipulability level of around 70, a uniform approach requires 11 bits compared to 6.5 for regret-based splitting, a 40% savings in communication. To reach a manipulability level of roughly 90, the regret-based approach provides a 50% savings in communication (5.5 bits vs. 11 bits). Note that this savings will be realized repeatedly if the mechanism is used to support, say, multiple bargaining instances. Finally, it is worth remarking that 11 bits corresponds to about 1.4 bits per parameter and 0.7 bits per allocation, which is quite small in an example of this size.
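The per-parameter and per-allocation figures follow directly from the problem dimensions (eight utility parameters per agent, sixteen goods); a quick check:

```python
# Each agent's utility has 8 parameters; the outcome space has 16 goods.
bits, params, goods = 11, 8, 16
print(round(bits / params, 1))   # bits per parameter: 11/8 = 1.375, i.e. about 1.4
print(round(bits / goods, 1))    # bits per allocation: 11/16 = 0.6875, i.e. about 0.7
```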

9 Appendix 2

9.1 Regret-based Elicitation

9.1.1 Allocation Elicitation

Several elicitation strategies have been proposed that attempt to reduce minimax regret quickly. We describe one strategy here, called the current solution strategy (CSS), which has proven quite effective in both constraint-optimization problems with GAI models [2] and winner determination in CAs with linear utility models [3]. CSS works as follows: given the current feasible type region θ, let x∗ and x̄ be the minimax optimal and witness allocations, respectively. Each of these allocations involves a specific instantiation of the GAI factors of the agents' valuations, hence regret can be reduced only by imposing additional constraints on θ that tighten our knowledge of at least some of these parameters. Direct bound queries can be posed that ask the user to tighten the bounds on one of these parameters. CSS queries the parameter with the loosest bounds, among those instantiated in x∗ and x̄, and has been shown to be extremely effective in practice in reducing regret.

Figure 2: Worst case and expected ε, as a function of the number of bits used per agent. Averaged over 30 runs. (Plot omitted; curves: Uniform worst/average and Regret-based worst/average.)

4 This is simulated so that at each step only one partial type is split, in order to allow a more accurate comparison to our approach.

5 For agents with a specific type, we can usually derive much tighter bounds on gain from manipulation than the worst case ε; hence expected manipulability is less.
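The CSS selection rule is simple enough to state as code. This is a minimal sketch, assuming a dictionary of per-parameter bounds; the names `bounds`, `params_xstar`, and `params_witness` are hypothetical.

```python
def css_query(bounds, params_xstar, params_witness):
    """bounds: param -> (lower, upper). Among the parameters instantiated
    in x* or the witness, return the one with the loosest bounds
    (i.e., the largest gap between upper and lower bound)."""
    candidates = set(params_xstar) | set(params_witness)
    return max(candidates, key=lambda p: bounds[p][1] - bounds[p][0])
```

Parameters instantiated in neither allocation are never queried, which is precisely why CSS avoids eliciting information that cannot affect the current regret level.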

9.1.2 Payment Elicitation

One might not be satisfied with the guarantees provided by the expected value of ε defined in Thm. 3; if it is too large, our bound γ = δ + ε(x∗(θ)) on manipulability may not induce truthful partial type revelation. In this case, we would like to continue eliciting in a second phase, after reaching allocation certainty, until we can guarantee that manipulation is bounded by a pre-specified, type-independent ε. As discussed above, for a suitably small ε, we can expect to induce truthful revelation for purely practical reasons.

The elicitation strategies above will not provide useful queries for payment elicitation, since allocation certainty has already been achieved. So we directly elicit information to determine payments that reduce the worst-case bounds on manipulability. This two-phase approach is similar in spirit to other elicitation schemes that first determine an efficient allocation and then elicit further information to determine VCG payments [6, 11]. Our model differs slightly in that we do not require allocation or payment certainty.

Once allocation certainty has been reached, i's payment uncertainty depends only on other agents' valuations for the chosen allocation, as well as the optimal allocation in the sub-economy with agent i removed. In practice, one can compute the types t^>_{−i} and t^⊥_{−i} that define i's max and min payments in x∗ in Eq. 6, respectively, as well as the allocations x^>_{−i} and x^⊥_{−i} that are optimal, under those types, in the sub-economy. Given these, we have:

    ε_i(x∗) = SW_{−i}(x^>_{−i}; t^>_{−i}) − SW_{−i}(x∗; t^>_{−i}) − SW_{−i}(x^⊥_{−i}; t^⊥_{−i}) + SW_{−i}(x∗; t^⊥_{−i})
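The expression for ε_i(x∗) is just a signed sum of four social-welfare evaluations. The sketch below computes it term by term; `sw` plays the role of SW_{−i}(·;·), and the toy table exists only to exercise the formula.

```python
def eps_i(sw, x_star, x_top, x_bot, t_top, t_bot):
    """x_top/t_top define i's maximum payment in x_star; x_bot/t_bot the minimum.
    sw(allocation, types) stands in for SW_{-i}(allocation; types)."""
    return (sw(x_top, t_top) - sw(x_star, t_top)
            - sw(x_bot, t_bot) + sw(x_star, t_bot))

# Toy social-welfare values, purely illustrative.
table = {("x_top", "t_top"): 10.0, ("x*", "t_top"): 7.0,
         ("x_bot", "t_bot"): 3.0, ("x*", "t_bot"): 2.0}
sw = lambda x, t: table[(x, t)]
print(eps_i(sw, "x*", "x_top", "x_bot", "t_top", "t_bot"))  # 10 - 7 - 3 + 2 = 2.0
```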

In the spirit of the current solution strategy, we query the GAI parameter, among those involving these three allocations, with the most uncertainty. Note that if ε is required to be very close to zero, it is possible that we will have to elicit enough information to determine, with certainty, the efficient allocation of the sub-economy x∗_{−i} (so that x^>_{−i} = x^⊥_{−i}). This is not necessarily the case, however, and, unlike other two-phase approaches, we might terminate without knowing either the x∗_{−i}'s, their social welfare, or the social welfare of x∗ in the sub-economies.

9.2 Direct Optimization

The design approach presented so far follows that of most other work in the literature by decomposing the mechanism into two phases: one to reduce allocation uncertainty to a satisfactory level and choose an allocation x∗; and another that independently tackles payment uncertainty in x∗. However, in a partial revelation setting, the allocation determines both the incurred loss in efficiency and a large part of the payment uncertainty. The choice of x∗ should therefore account for both criteria. Moreover, the true type of the agents is unique and, along with x∗, determines both efficiency and payment uncertainty. The optimization of the two criteria should therefore not be independent. Finally, when designing approximately incentive compatible mechanisms, the main objective is to reduce manipulability below a given threshold. The sum of the efficiency and payment uncertainty bounds is only an upper bound: actual manipulability may be significantly lower, which can allow us to terminate with fewer queries.

If the true type profile is t, the manipulability of our mechanism, when choosing x∗ and applying our payments p^>_i, is the maximum over all agents of the difference between the agent's best-case utility and its actual utility. So the manipulability of agent i is expressed as:

    α_i(x∗, t) = max_x [ v_i(x; t_i) − p^v_i(x; t_{−i}) ] − v_i(x∗; t_i) + p^>_i(x∗; θ_{−i})

The manipulability of the mechanism is max_i { α_i(x∗, t) }, and the worst-case manipulability is α = max_t max_i { α_i(x∗, t) }. We say M is α-manipulable when its worst-case manipulability is α.
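Over a finite set of candidate allocations, α_i(x∗, t) can be evaluated directly. In this sketch, `v`, `p_v`, and `p_top` stand in for v_i(·; t_i), p^v_i(·; t_{−i}), and p^>_i(·; θ_{−i}); the numeric values are made up.

```python
def alpha_i(allocs, v, p_v, p_top, x_star):
    """Agent i's manipulability: best-case utility over all allocations,
    minus the utility actually obtained at x_star under the upper-bound payment."""
    best_deviation = max(v(x) - p_v(x) for x in allocs)
    return best_deviation - v(x_star) + p_top(x_star)

# Toy numbers: two allocations "a" and "b".
v = lambda x: {"a": 10.0, "b": 6.0}[x]
p_v = lambda x: {"a": 4.0, "b": 1.0}[x]
p_top = lambda x: {"a": 5.0, "b": 2.0}[x]
print(alpha_i(["a", "b"], v, p_v, p_top, "a"))  # max(6, 5) - 10 + 5 = 1.0
```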

Theorem 7. Let M be an α-manipulable mechanism using partial VCG payments. Then M is α-efficient, α-ex post individually rational, and α-ex post incentive compatible.

Finding the allocation that minimizes worst-case manipulability is equivalent to solving the following optimization:

    x∗ = arg min_x max_{i, x̄} R_i(x, x̄)

where:

    R_i(x, x̄) = max_t [ v_i(x̄; t_i) − v_i(x; t_i) + p^>_i(x; θ_{−i}) − p^v_i(x̄; t_{−i}) ]
               = max_{t_i} ( v_i(x̄; t_i) − v_i(x; t_i) ) + max_{t′_{−i}} p^v_i(x; t′_{−i}) − min_{t_{−i}} p^v_i(x̄; t_{−i})

This is also a regret minimization problem, where regret is with respect to the global utility of an agent. The high-level idea of regret-based elicitation naturally applies: given an a priori partial type of the agents, we compute the minimax-optimal allocation x∗ and, in the process, the witness corresponding to the adversary's choice x̄_i for each i. If the regret of x∗ (i.e., manipulability) is not small enough, we choose a query that attempts to reduce the regret of our current solution and iterate until we reach the given threshold.

The regret minimization problem can be reformulated as:

    min_{x, δ} δ   subject to, for all x̄ and all i:
    δ ≥ max_{t_i} ( v_i(x̄; t_i) − v_i(x; t_i) ) + max_{t′_{−i}} p^v_i(x; t′_{−i}) − min_{t_{−i}} p^v_i(x̄; t_{−i})


The maximum payment of i in x is not linear in x, but can be linearized by generating (or enumerating) allocations that are potentially optimal when agent i is removed (the allocations used to define VCG payments).

For each i, generating the witness x̄_i that most violates the constraints given a current solution x∗ involves solving:

    x̄_i = arg max_{x̄} { max_{t_i} [ v_i(x̄; t_i) − v_i(x∗; t_i) ] − min_{t_{−i}} p^v_i(x̄; t_{−i}) }

Since the minimum payment is not linear either, this optimization requires its own round of constraint generation. Witness generation is equivalent to a minimax regret problem. We leave the details for a longer version of the paper.
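The alternation between master problem and witness generation has the shape of a standard constraint-generation loop. The sketch below shows only that outer loop; `solve_master` and `find_witness` are placeholders for the actual optimizations (MIPs in the GAI setting), and the toy instance at the bottom just exercises the loop on a small regret matrix.

```python
def constraint_generation(solve_master, find_witness, tol=1e-6, max_iters=1000):
    """Solve min_x max_witness R(x, witness) by alternately solving a master
    problem over the witnesses generated so far and adding the most violated
    witness constraint for the current solution."""
    witnesses = []
    x, delta = solve_master(witnesses)
    for _ in range(max_iters):
        x_bar, violation = find_witness(x)
        if violation <= delta + tol:        # no witness constraint is violated
            return x, delta
        witnesses.append(x_bar)
        x, delta = solve_master(witnesses)
    return x, delta

# Toy instance: three candidate allocations, regret matrix R[x][x_bar].
R = [[0, 3, 1], [2, 0, 2], [4, 1, 0]]
def solve_master(ws):
    x = min(range(3), key=lambda a: max((R[a][w] for w in ws), default=0))
    return x, max((R[x][w] for w in ws), default=0)
def find_witness(x):
    w = max(range(3), key=lambda b: R[x][b])
    return w, R[x][w]

print(constraint_generation(solve_master, find_witness))  # (1, 2): minimax row of R
```

On the toy matrix, the loop converges to the row whose worst-case regret is smallest, after generating only two witness constraints rather than enumerating all of them up front.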

9.3 Elicitation Strategies and Empirical Results

We consider three elicitation strategies with the aim of reducing manipulability (i.e., α) and SW-regret (i.e., maximum loss in efficiency δ) of the chosen outcome with as few queries as possible.

To set the stage, recall that we have defined CSSs (current solution elicitation strategies) w.r.t. both social welfare (call this SW-CSS) and payment uncertainty (P-CSS) in previous sections. Though the details have been omitted, computing the α-minimizing x∗ provides us with three witnesses: x̄, corresponding to the adversary's choice, and x^>_{−i} and x^⊥_{−i}, corresponding to the optimal allocations in the sub-economies that define the payments of agent i in x∗ and x̄, respectively. This leads to a third CSS (M-CSS, for manipulability) that queries the parameter among these four allocations that has the largest gap. While M-CSS seems appealing on the surface, it in fact performs quite poorly, since it tends to ask queries that reduce payment uncertainty early on for allocations that won't in fact be realized. So instead, we use these as sub-strategies in the three methods we explore.

The first strategy we test is two-phase (2P), in which we first run SW-CSS until SW-regret δ reaches zero (or some small threshold), finding an efficient allocation, then run P-CSS to determine appropriate payments until δ + ε is less than some threshold. This is much like standard two-phase approaches. The α-two-phase (α2P) strategy works exactly like 2P, but terminates when the manipulability bound α is below some threshold. Intuitively, this more accurately reflects the quality of the current decision. The third strategy is called common-hybrid (CH) and proceeds as follows. (a) Let A be the set of GAI parameters instantiated in the two allocations (the SW-regret minimizing allocation and its witness) that determine SW-regret; let B be the analogous set of parameters among the four allocations that determine worst-case manipulability. If these sets have any parameter in common, we query the common parameter with the largest gap. (b) If no parameters are in common, then we use a hybrid method that chooses between SW-CSS and M-CSS, with a bias towards SW-CSS early on (for reasons explained above).6
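Steps (a) and (b) of CH can be sketched as a single selection function. The bounds representation and the names `A`, `B`, `sw_best`, and `m_best` are hypothetical; the bias parameter `b` plays the role described in footnote 6.

```python
def ch_select(bounds, A, B, sw_best, m_best, b):
    """bounds: param -> (lower, upper). Query a parameter common to A and B
    (largest gap first) if one exists; otherwise choose M-CSS's best query
    only if its gap is at least b times the gap of SW-CSS's best query."""
    gap = lambda p: bounds[p][1] - bounds[p][0]
    common = set(A) & set(B)
    if common:
        return max(common, key=gap)
    return m_best if gap(m_best) >= b * gap(sw_best) else sw_best
```

With a large bias `b` early on, the fallback almost always picks the SW-CSS query; as `b` decays towards zero, M-CSS queries gradually take over, matching the schedule in footnote 6.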

We compared 2P, α2P, and CH on a car rental problem of moderate size (based on [2]), where a buyer wants to rent a car from one of two dealers, and the buyer's valuation and dealer costs exhibit GAI structure. A car is defined by eight attributes (e.g., engine size, seating, etc.) with domain sizes ranging from two to nine. Each of the three agents' utilities has 13 factors, with factor sizes ranging from one to four variables (giving a total of 825 utility parameters). We also compared these strategies with a myopically optimal strategy (MY) on small, randomly generated supplier-selection problems, where a buyer negotiates with several sellers over a multi-attribute item to trade. These problems have 81 utility parameters. MY considers querying the midpoint of each parameter, computes the two new global regret levels that could result from each response, and asks the query with the best average regret reduction. Clearly this strategy is only computationally feasible on small problems, but it provides an interesting comparison.

6 The best M-CSS query is selected only if its utility gap is at least b times that of the best SW-CSS query. We use b = max(0, 10 − 0.005 Σ_{i=1}^{n} i) at query n in our experiments. With this setting, after 60 queries (total over all agents) M-CSS will always be selected.


Figure 3: Car Rental Problems. Average of 40 runs. 2 sellers, 1 buyer; 13 factors/agent; 1-4 variables/factor; 2-9 values/variable. 825 parameters total. (Plot omitted; manipulability vs. number of queries per agent, for 2P, α2P, and CH.)

Figure 3 shows how manipulability α is reduced as a function of the number of queries (per agent) for 2P, α2P and CH on the car rental problem. α2P and CH, the two strategies that exploit manipulability, exhibit better anytime behavior than 2P, with a slight advantage for CH. 2P and α2P reach near-zero manipulability in roughly the same number of queries (around 110 per agent), while CH reaches the same level in about 95 queries. Independent of the specific strategy, our results make a strong case for regret-based elicitation in mechanism design, as it effectively minimizes elicitation effort. On average, only 8% of the utility parameters are ever queried by CH. Furthermore, these are not completely determined, since we only tighten the initial bounds. Regret-based elicitation terminates with 92% of the initial utility uncertainty remaining on average (as measured by the "perimeter" of the partial type space), whereas halving the gap of the most uncertain parameter (a theoretically motivated method uninformed by regret [2]) leaves only 64% of the uncertainty remaining after the same number of queries, and is still very far from reaching zero manipulability. This indicates that regret-based strategies focus on relevant information, rather than reducing uncertainty for its own sake, thus reducing revelation and improving decision quality. Note that initial regret prior to elicitation is on average 99% of optimal social welfare. Despite this, and despite the complexity of the problem (825 parameters), zero regret (true efficiency) is attained in only 71 and 77 queries for α2P and CH, respectively.

Figure 4 compares CH and MY on small random problems. The large amount of additional computation required by MY allows for reasonable anytime behavior, but it is still outperformed by CH except in the earliest stages of elicitation. CH reaches near-zero manipulability in about 15 fewer queries (45 vs. 60). While counter-intuitive, this behavior is plausibly explained by the fact that, since CH focuses on "relevant" parameters, it implicitly provides some sequential guidance: the parameters instantiated in the various regret allocations are likely to remain relevant throughout a large part of the elicitation process. However, further investigation of this phenomenon is needed.


Figure 4: Small Problems. Average of 40 runs. 2 sellers, 1 buyer; 3 factors/agent; 2 variables/factor; 3 values/variable. 81 parameters total. (Plot omitted; manipulability vs. number of queries per agent, for CH and MY.)
