Complex collective choices
Luigi Marengo
Dept. of Management, LUISS University, Roma, [email protected]
Based on joint work with G. Amendola, G. Dosi, C. Pasquali and S. Settepanella
Complex collective decisions
- we consider "complex" multidimensional decisions, in the sense that:
  - they involve several items (features)
  - there are non-separabilities and non-monotonicities (interdependencies) among such items
A simple example: “What shall we do tonight?”
- C = {movie, theater, restaurant, stay home, . . . }
- the object "going to the movies" is defined by:
  - with whom
  - which movie
  - which theater
  - what time
  - . . .
- the object "stay home" is defined by:
  - with whom
  - to do what
    - e.g. watch TV, or have a drink, put on a nice record and see what happens . . .
  - which show
  - which movie
  - what we eat
  - . . .
Some obvious non-standard properties
1. objects typically do not partition the set of traits/features
2. in general there are obvious non-separabilities and non-monotonicities (interdependencies) among traits
   - e.g. I might prefer Françoise to Corrado as an instance of "with whom" if associated to "staying at home" and "tête-à-tête dinner", but Corrado to Françoise as an instance of "with whom" if associated to "going to the football match" and "with ten more male friends"
The general question
- how does the aggregation of items/features into objects determine collective outcomes?
Two families of models
1. a committee where a group of people choose (e.g. by pairwise majority voting) a value for all items according only to their preferences
2. an organization where decision rights are divided and delegated to individual agents, and outcomes have some "objective" value
notions of authority and power:
- in the committee model: power of object construction (putting items together to form an object of choice) and power of agenda
- in the organization model: power of allocating decisions (delegation) and power of vetoing and overruling decisions
The Committee Model I
- Choices are made over a set of N elements or features F = {f1, f2, . . . , fN}, each of which takes on a value out of a finite set of possibilities.
- Simplifying assumption: such a set is the same for all elements and contains two values, labelled 0 and 1: fi ∈ {0, 1}.
- The space of possibilities is given by the 2^N possible choice configurations: X = {x1, x2, . . . , x_{2^N}}.
The Model II
- There exist h individual agents A = {a1, a2, . . . , ah}, each characterized by a (weak) ordering on the set of choice configurations.
- We call this ranking agent k's individual decision surface Ω_k.
The Model III
- Given a status quo x_i and an alternative x_j, agents sincerely vote according to their preferences.
- A majority rule R is used to aggregate their preferences: R : (Ω1, Ω2, . . . , Ωh) ↦ Ω.
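Sincere pairwise majority voting between two configurations can be sketched as follows (a minimal illustration, not the authors' code; each agent's ordering is represented as a ranked list of configurations, best first):

```python
def majority_prefers(x_new, x_old, rankings):
    """True if a strict majority of agents ranks x_new above x_old."""
    votes = sum(r.index(x_new) < r.index(x_old) for r in rankings)
    return votes > len(rankings) / 2

# three agents voting on two-feature configurations
rankings = [['01', '10', '00', '11'],
            ['01', '00', '10', '11'],
            ['10', '01', '11', '00']]
# agents 1 and 2 rank '01' above '10', agent 3 does not: majority for '01'
```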
The Model IV
- Given an initial configuration and a social decision rule R, this process defines a walk on the social decision surface which can either:
  1. end up on a social optimum, or
  2. cycle forever among a subset of alternatives.
Objects (Modules)
Let I = {1, 2, . . . , N} be the set of indexes.
An object (decision module) is a subset Ci ⊆ I.
The size of object Ci is its cardinality |Ci|.
An object scheme is a set of modules:
C = {C1, C2, . . . , Ck}
such that
⋃_{i=1}^{k} Ci = I
(. . . but not necessarily a partition)
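As a quick illustration (a hypothetical helper, not part of the model), the defining property of an object scheme is that the modules jointly cover the index set, overlaps allowed:

```python
def is_object_scheme(modules, n):
    """Modules must cover I = {1..n}; they may overlap (a cover, not a partition)."""
    return set().union(*modules) == set(range(1, n + 1))

# {f1,f2} and {f2,f3} overlap on f2 but still form a valid scheme for N = 3
assert is_object_scheme([{1, 2}, {2, 3}], 3)
# {f1,f2} alone leaves f3 undecided, so it is not a scheme
assert not is_object_scheme([{1, 2}], 3)
```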
Agendas
An agenda α = Cα1 Cα2 . . . Cαk over the object set C is a permutation of the set of objects which states the order according to which objects are examined.
Voting procedure
We use the following algorithmic implementation of majority voting:
1. repeat for all initial conditions x = x1, x2, . . . , x_{2^N}
2. repeat for all objects Cαi = Cα1, Cα2, . . . , Cαk until a cycle or a local optimum is found;
3. repeat for j = 1 to 2^{|Cαi|}:
   - generate an object-configuration C_j^{αi} of object Cαi
   - vote between x and x′ = C_j^{αi} ∨ x(C−αi)
   - if x′ ≽_R x then x′ becomes the new current configuration
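The loop above can be sketched in a few lines (an illustrative reimplementation under simplifying assumptions: a strict majority is required to move, agent orderings are ranked lists of bit-strings, and objects are lists of bit positions):

```python
from itertools import product

def beats(x_new, x_old, rankings):
    """Strict majority of agents ranks x_new above x_old."""
    return sum(r.index(x_new) < r.index(x_old) for r in rankings) > len(rankings) / 2

def vote_walk(x, objects, rankings, max_passes=50):
    """Walk on the social decision surface; return the local optimum reached,
    or None if no rest point is found within max_passes (suggesting a cycle)."""
    for _ in range(max_passes):
        changed = False
        for obj in objects:                       # follow the agenda
            for bits in product('01', repeat=len(obj)):
                cand = list(x)
                for pos, b in zip(obj, bits):     # overwrite only the object's bits
                    cand[pos] = b
                cand = ''.join(cand)
                if cand != x and beats(cand, x, rankings):
                    x, changed = cand, True       # x' becomes the new status quo
        if not changed:
            return x                              # local optimum
    return None

# three agents, two single-feature objects: the walk climbs to '11'
rankings = [['11', '10', '01', '00'],
            ['11', '01', '10', '00'],
            ['00', '11', '01', '10']]
```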
Stopping rule
We consider two possibilities:
1. objects which have already been settled cannot be re-examined
2. objects which have already been settled can be re-examined if new social improvements have become possible
Walking on social decision surfaces
Given an object scheme C = {C1, C2, . . . , Ck}, we say that a configuration x^i is a preferred neighbor of configuration x^j with respect to an object Ch ∈ C if the following three conditions hold:
1. x^i ≽_R x^j
2. x^i_ν = x^j_ν ∀ν ∉ Ch
3. x^i ≠ x^j
We call H(x, Ci) the set of preferred neighbors of a configuration x for object Ci.
A path P(x^i, C) from a configuration x^i and for an object scheme C is a sequence, starting from x^i, of preferred neighbors:
P(x^i, C) = x^i, x^{i+1}, x^{i+2}, . . . with x^{i+m+1} ∈ H(x^{i+m}, C)
A configuration x^j is reachable from another configuration x^i and for decomposition C if there exists a path P(x^i, C) such that x^j ∈ P(x^i, C).
Social outcomes
- A configuration x is a local optimum for the decomposition scheme C if there does not exist a configuration y such that y ∈ H(x, C) and y ≻_R x.
- A cycle is a set X0 = {x^1_0, x^2_0, . . . , x^j_0} of configurations such that x^1_0 ≻_R x^2_0 ≻_R . . . ≻_R x^j_0 ≻_R x^1_0 and such that, for all x ∈ X0, if x has a preferred neighbor y ∈ H(x, C) then necessarily y ∈ X0.
The relevance of objects I
- object construction mechanisms forego and constrain choices.
- Influence of the generative mechanism:
  1. it defines the sequence of voting;
  2. it defines which subset of alternatives undergoes examination.
The relevance of objects II
- Different sets of objects may generate different social outcomes.
- Social optima do, in general, change when objects are different, both because:
  1. the subset of generated alternatives is different (and some social optima may not belong to many of these subsets), and
  2. the agenda is different (and this may determine different outcomes).
- Framing power therefore appears as a more general phenomenon than agenda power.
Results in a nutshell
- Under general conditions (notably, if preferences are not fully separable) the answer to the previous question is entirely dependent upon decision modules.
- We show algorithmically that, given a set of individual preferences:
  1. by appropriate modifications of the decision modules it is possible to obtain different social outcomes;
  2. cycles à la Condorcet-Arrow may also appear and disappear by appropriately modifying the decision modules;
  3. the median voter theorem is also dependent upon the set of alternatives (the median voter may be transformed into an outright loser).
- trade-off between decidability and manipulability: "finer" objects make cycles disappear and simplify the pairwise voting process, but generate many local optima (the social outcome will depend on the initial status quo)
Results I
- Social outcomes are, in general, dependent upon the objects scheme.
- Consider a very simple example in which 5 agents have a common most preferred choice.
- By appropriately modifying the objects scheme one can obtain different social outcomes, or even the appearance/disappearance of intransitive limit cycles.
Results II
Rank   Agent1   Agent2   Agent3   Agent4   Agent5
1st    011      011      011      011      011
2nd    111      000      010      101      111
3rd    000      001      001      111      000
4th    010      110      101      110      010
5th    100      010      000      100      001
6th    110      111      110      001      101
7th    101      101      111      010      110
8th    001      100      100      000      100
Results III
- With C = {{f1, f2, f3}} the only local optimum is the global one, 011, whose basin of attraction is the entire set X.
- With C = {{f1}, {f2}, {f3}} we have the appearance of multiple local optima and agenda-dependence.
- With C = {{f1, f2}, {f3}} there are multiple local optima but agenda-independence.
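The first claim can be checked by brute force. Under the single module {f1, f2, f3} every configuration competes against every other, so a local optimum is a configuration that no alternative beats by strict majority. A sketch (not the authors' code) using the Results II preference table:

```python
from itertools import product

# preference profile from the Results II table (one ranked list per agent, best first)
profile = [
    ['011', '111', '000', '010', '100', '110', '101', '001'],  # Agent 1
    ['011', '000', '001', '110', '010', '111', '101', '100'],  # Agent 2
    ['011', '010', '001', '101', '000', '110', '111', '100'],  # Agent 3
    ['011', '101', '111', '110', '100', '001', '010', '000'],  # Agent 4
    ['011', '111', '000', '010', '001', '101', '110', '100'],  # Agent 5
]

def strictly_beats(y, x):
    """True if a strict majority of agents ranks y above x."""
    return sum(r.index(y) < r.index(x) for r in profile) > len(profile) / 2

configs = [''.join(b) for b in product('01', repeat=3)]
local_optima = [x for x in configs if not any(strictly_beats(y, x) for y in configs)]
# only the common top choice survives: local_optima == ['011']
```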
Object-dependent cycles I
Redefining modules can make path dependence disappear.
- Consider the case of three agents and three objects with individual preferences expressed by:

Order   Agent 1   Agent 2   Agent 3
1st     x         y         z
2nd     y         z         x
3rd     z         x         y
Object-dependent cycles II
- Social preferences expressed through majority rule are intransitive and cycle among the three objects: x ≻_R y and y ≻_R z, but z ≻_R x.
- Imagine that x, y, z are three-feature objects which we encode according to the following mapping:
  x ↦ 000, y ↦ 100, z ↦ 010
Object-dependent cycles III
- Suppose that individual preferences are given by:

Order   Agent 1   Agent 2   Agent 3
1st     000       100       010
2nd     100       010       000
3rd     010       000       100
4th     110       110       110
5th     001       001       001
6th     101       101       101
7th     011       011       011
8th     111       111       111
Object-dependent cycles IV
1. With C = {{f1, f2, f3}} the voting process always ends up in the limit cycle among x, y and z.
2. The same happens if each feature is a separate object: C = {{f1}, {f2}, {f3}}.
3. However, with C = {{f1}, {f2, f3}} or with C = {{f1, f3}, {f2}}, voting always produces the unique global social optimum 010.
Median voter I
Order   Ag1   Ag2   Ag3   Ag4   Ag5   Ag6   Ag7
1st     1     2     3     4     5     6     7
2nd     2     3     4     5     6     7     6
3rd     0     1     2     3     4     5     5
4th     3     4     5     6     7     4     4
5th     4     0     1     2     3     3     3
6th     5     5     6     7     2     2     2
7th     6     6     0     1     1     1     1
8th     7     7     7     0     0     0     0

Median voter theorem: an example
Median voter II
Order   Ag1   Ag2   Ag3   Ag4   Ag5   Ag6   Ag7
1st     001   010   011   100   101   110   111
2nd     010   011   100   101   110   111   110
3rd     000   001   010   011   100   101   101
4th     011   100   101   110   111   100   100
5th     100   000   001   010   011   011   011
6th     101   101   110   111   010   010   010
7th     110   110   000   001   001   001   001
8th     111   111   111   000   000   000   000
If C = {{f1, f2, f3}} there is a unique social optimum, 100 (the median voter's most preferred choice).
If C = {{f1}, {f2}, {f3}} there are two local optima: 100 and 011 (the opposite of the median voter's most preferred choice).
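The second claim can be verified directly: under the fine scheme a configuration is locally optimal when no single-bit flip is preferred by a strict majority of the seven agents. A sketch (not the authors' code) using the table above:

```python
from itertools import product

# preference profile from the Median voter II table (one ranked list per agent)
profile = [
    ['001', '010', '000', '011', '100', '101', '110', '111'],  # Ag1
    ['010', '011', '001', '100', '000', '101', '110', '111'],  # Ag2
    ['011', '100', '010', '101', '001', '110', '000', '111'],  # Ag3
    ['100', '101', '011', '110', '010', '111', '001', '000'],  # Ag4
    ['101', '110', '100', '111', '011', '010', '001', '000'],  # Ag5
    ['110', '111', '101', '100', '011', '010', '001', '000'],  # Ag6
    ['111', '110', '101', '100', '011', '010', '001', '000'],  # Ag7
]

def beats(y, x):
    return sum(r.index(y) < r.index(x) for r in profile) > len(profile) / 2

def flips(x):
    """All configurations reachable by changing a single feature."""
    return [x[:i] + ('1' if x[i] == '0' else '0') + x[i + 1:] for i in range(len(x))]

local_optima = sorted(x for x in (''.join(b) for b in product('01', repeat=3))
                      if not any(beats(y, x) for y in flips(x)))
# both the median voter's top and its opposite survive: ['011', '100']
```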
Simulation Results with random agents I
- For the objects scheme C1, i.e. a single decision module containing all the features, we almost always observe intransitive cycles, and these cycles are rather long (on average almost 40 different choice configurations for N=8, and 120 for N=12).
- At the other extreme, i.e. the set of finest objects, in most cases we do not observe cycles, but choice ends in a local optimum.
- the number of local optima increases exponentially: with N = 8 about 16 local optima, with N = 12 over 300 local optima
Simulation Results II
- There is a very clear trade-off between the presence of cycles and the number of local optima.
- When large objects are employed, cycles almost certainly occur.
- The likelihood rapidly drops when finer and finer objects are employed, but in parallel the number of local optima increases.
- This implies that a social outcome becomes well defined, but which social outcome strongly depends upon the specific objects employed and the sequence in which they are examined.
The organization model
- decisions are allocated to different agents by a principal
- there are good and bad decisions (i.e. social outcomes are ranked by some objective performance evaluation) and the principal wants to get the best decisions
- however, neither the principal nor the agents know what the best decisions are
Background
- when knowledge is distributed in organizations, how should decisions be allocated?
- co-location of knowledge and decision rights (Hayek 1945, Jensen-Meckling 1992)
- but delegation generates agency problems, to be solved by incentives and/or authority
- additional complication: delegation, incentives and authority may interact in unexpected ways (the problem of motivation)
- (. . . maybe agency problems have been overrated in the literature?)
Our contribution
1. not only self-interest but incommensurable beliefs, i.e. ". . . the problem that arises when different individuals or groups hold sincere but differing beliefs about the nature of the problem and its solutions" (Rumelt, 1995)
2. if knowledge is distributed, delegation is limited not only by agency problems but also by complexity and uncertainty:
   - interdependencies (externalities) among pieces of knowledge
   - the principal may not know where knowledge is actually located
Incommensurable beliefs
- agency models assume that conflict in organizations arises because individuals have diverging objectives and information is asymmetric
- but agents may have different cognitions, views, ideas, visions on how to achieve a common objective (especially when facing non-routine situations)
- this is a source of cognitive conflict: diverging ideas about the appropriate course of action
- and a source of political conflict: the actions of one agent produce externalities on the principal and on the other agents
Some likely properties
- conflicting views and conflicting interests are often intertwined
- conflicting views may be harder to reconcile, and symmetric information may not help
- one may not want to fully reconcile them if there is uncertainty about what should be done
- mis-aligned views may be a fundamental driver of learning
- thus the principal faces a trade-off between:
  - having her views implemented as closely as possible
  - using the agents' different views to learn and discover better policies
The Model: policy landscape
- a set of n (binary) features (or policies) F = {f1, f2, . . . , fn}
- X is the set of 2^n policy vectors and x_i = [f_1^i, f_2^i, . . . , f_n^i] one generic element
- an objective and exogenously determined ranking of policy vectors according to performance (a complete and transitive order): x ≻_N y
The Model: principal, agents and organization
- a principal Π and h agents A = {a1, a2, . . . , ah} with 1 ≤ h ≤ n
- all of them with a complete and transitive preference ordering over policy vectors: ≽_Π and ≽_{a_i}
- a decomposition of decision rights D = {d1, d2, . . . , dk} such that:
  ⋃_{i=1}^{k} d_i = P and d_i ∩ d_j = ∅ ∀ i ≠ j
  (for simplicity the principal does not directly take any decision)
- the organizational structure is a mapping of the set D onto the set A of agents, plus an agenda (a permutation of the set of agents) giving the sequence of decision (if any)
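Unlike the committee model's object schemes, a decomposition of decision rights must be a proper partition of the policy set: full coverage with no overlaps. A hypothetical check (names are illustrative, not from the paper):

```python
def is_decomposition(modules, policies):
    """Modules must cover the policy set, each policy assigned exactly once."""
    assigned = [p for d in modules for p in d]
    return set(assigned) == set(policies) and len(assigned) == len(set(assigned))

policies = ['p1', 'p2', 'p3', 'p4']
# one agent holding p1 and another p2..p4 is a valid decomposition
assert is_decomposition([{'p1'}, {'p2', 'p3', 'p4'}], policies)
# overlapping decision rights on p2 are not allowed
assert not is_decomposition([{'p1', 'p2'}, {'p2', 'p3', 'p4'}], policies)
```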
Examples of organizational structure
assuming four policy items:
- {a1 ← {p1, p2, p3, p4}}, i.e. one agent has control over all four policies
- {a1 ← {p1}, a2 ← {p2}, a3 ← {p3}, a4 ← {p4}}, i.e. four agents each have control over one policy
- {a1 ← {p1, p2}, a2 ← {p3, p4}}, i.e. two agents each have control over two policies
- {a1 ← {p1}, a2 ← {p2, p3, p4}}, i.e. two agents with "asymmetric" responsibilities: one has control over the first policy item and the other over the remaining three
Agents’ decisions
- when asked to choose between x_i and x_j, an agent selects the vector which ranks higher in his preference ordering
- unless the principal uses authority:
  1. veto: the principal can impose the status quo if she prefers it to the agent's choice
  2. fiat: the principal can impose her preferred substring on the agent
Organizational decisions
- an initial status quo policy is (randomly) given
- following the agenda, one agent chooses the values of the policy items assigned to him that, given the current value of the policies not under his control, determine the policy vector that ranks highest in his ordering
- unless the principal forces him to choose a different vector
- the process is repeated for all agents (according to the agenda) until an organizational equilibrium or a cycle is reached, with or without agenda repetition
Two models
1. getting what the principal wants when she knows what she wants: control
2. getting what the principal wants when she does not know what she wants: the principal does not only want to control agents, but also to learn from them and to discover through experimentation whether their rankings (or parts of them) are better (closer to the "true" one) than her own
Summary of results
Both problems are better solved by a finer delegation structure: decisions must be partitioned as much as possible.
- a finer delegation structure generates control: the principal can get very close to her preferred decision even without exercising power (divide and conquer)
- if the principal does not know what she wants, a finer delegation structure induces more experimentation and learning (divide and learn)
- the use of authority of course increases control, and has an inverted U-shaped effect on learning (with veto more effective than fiat)
Getting what you want when you know what you want
Rank   Agent1   Agent2   Agent3   Principal
1st    011      011      011      000
2nd    111      000      010      101
3rd    000      001      100      111
4th    010      110      101      110
5th    100      010      000      100
6th    110      111      110      001
7th    101      101      111      010
8th    001      100      001      011
Example I: how to get different equilibria
With the organizational structure {a1 ← {p1}, a2 ← {p2}, a3 ← {p3}}, the agenda (a1, a2, a3) and the initial status quo [0, 1, 1], [0, 0, 0] is an equilibrium.
Different Global Optima
Order   Agent1   Agent2   Agent3
1st     001      000      001
2nd     110      111      110
3rd     000      001      000
4th     010      010      010
5th     100      100      100
6th     011      011      011
7th     111      101      111
8th     101      110      101
Example II: cycles or different unique equilibria
- Structure {a1 ← {p1, p2}, a2 ← {p3}} always generates the cycle [001] → [000] → [110] → [111] → [001]. It is therefore a structure in which intra-organizational conflict never settles into an equilibrium.
- Structure {a1 ← {p1}, a2 ← {p2}, a3 ← {p3}} has the unique equilibrium [001], which is reached from every initial condition.
- Structure {a1 ← {p1}, a2 ← {p2, p3}} also produces a unique equilibrium, but a different one, i.e. vector [000].
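The sequential best-response dynamics behind Example II can be sketched as follows (an illustrative reimplementation, not the authors' code; rankings are taken from the table above, Agent 3 sharing Agent 1's ordering):

```python
from itertools import product

A1 = ['001', '110', '000', '010', '100', '011', '111', '101']  # Agent 1
A2 = ['000', '111', '001', '010', '100', '011', '101', '110']  # Agent 2
A3 = A1[:]                                                     # Agent 3

def best_response(x, positions, ranking):
    """Best vector the agent can reach by resetting the bits he controls."""
    options = []
    for bits in product('01', repeat=len(positions)):
        cand = list(x)
        for pos, b in zip(positions, bits):
            cand[pos] = b
        options.append(''.join(cand))
    return min(options, key=ranking.index)

def run(x, structure, rankings, max_rounds=30):
    """structure: one list of controlled bit positions per agent, in agenda order."""
    seen = [x]
    for _ in range(max_rounds):
        for positions, ranking in zip(structure, rankings):
            x = best_response(x, positions, ranking)
        if x == seen[-1]:
            return ('equilibrium', x)    # nobody moved in a full round
        if x in seen:
            return ('cycle', x)          # an earlier state was revisited
        seen.append(x)
    return ('no rest point', x)
```

With the fine structure the process settles on [001], e.g. starting from [1, 1, 1], while {a1 ← {p1, p2}, a2 ← {p3}} revisits earlier states, reproducing the cycle described above.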
The role of organizational structure
We simulate random problems with 8 policies, random principals and agents, and the following organizational structures:
- O1: a1 ← {1, 2, 3, 4, 5, 6, 7, 8}
- O2: a1 ← {1, 2, 3, 4}, a2 ← {5, 6, 7, 8}
- O4: a1 ← {1, 2}, a2 ← {3, 4}, a3 ← {5, 6}, a4 ← {7, 8}
- O8: a1 ← {1}, a2 ← {2}, a3 ← {3}, a4 ← {4}, a5 ← {5}, a6 ← {6}, a7 ← {7}, a8 ← {8}
Organizational structure, equilibria and cycles I
With agenda repetition. Average number of equilibria and cycles over 1000 randomly generated problems:

Org. Structure   No. of equilibria   Share of cycles
O8               2.78 (1.22)         0.78
O4               1.89 (0.98)         0.74
O2               1.03 (0.45)         0.58
O1               1.00 (0.00)         0.00

Organizational equilibria and cycles for different organizations
(n=8, 1000 repetitions, standard deviation in brackets)
Organizational structure, equilibria and cycles II
Without agenda repetition:

Org. Structure   No. of different final policy vectors
O8               41.93 (3.14)
O4               27.73 (2.45)
O2               10.30 (1.22)
O1               1 (0.0)

Table 5: Number of different outcome vectors without agenda reiteration and without overruling
(n=8, 1000 repetitions, standard deviation in brackets)
Divide and conquer!!
Veto power
It increases decidability by sharply reducing the number of cycles:

P(veto)   N. optima   N. cycles   Control loss   Perform. loss
0.0       0.94        202.94      -161.20        -159.51
0.3       13.88       146.99      -71.88         -14.45
0.5       27.60       86.45       -65.82         -6.90
0.8       46.67       14.46       -65.74         -3.93
1.0       56.65       0.00        -64.61         -3.16
The effect of veto in O8
Fiat power
Similar to veto, but fewer local optima, therefore more control but less performance:

P(fiat)   N. optima   N. cycles   Control loss   Perform. loss
0.0       0.94        202.94      -161.20        -159.51
0.3       15.48       192.81      -2.36          -13.91
0.5       29.63       138.20      -0.59          -7.27
0.8       35.13       36.55       -0.03          -6.04
1.0       28.82       0.00        0.00           -7.78
The effect of fiat in O8
Fiat power with coarser partitions
In O2 fiat produces worse results than in O8:

P(fiat)   N. optima   N. cycles   Control loss   Perform. loss
0.0       0.99        154.08      -156.86        -161.69
0.3       8.32        164.39      -0.20          -28.65
0.5       9.84        119.49      -0.01          -24.99
0.8       9.47        25.64       0.00           -26.16
1.0       8.29        0.00        0.00           -31.14
The effect of fiat in O2
Learning
- principal and agents may adaptively learn by trial and error
- when a new organizational equilibrium is tried against a status quo, principal and agents observe which of the two ranks better
- and modify their rankings if they differ from the observed ones
- agents may also learn and adapt to the principal's preferences (persuasion, docility)
The fundamental trade-off
- increased use of veto, fiat and more docile agents increases control
- up to a certain point this also increases experimentation and learning (because of fewer cycles and more local optima)
- but above that level experimentation and learning also get curbed by control
Veto power and principal’s learning
Figure: The effect of veto power on principal’s learning in O8 and O2
Fiat power and principal’s learning
Figure: The effect of fiat power on principal’s learning in O8 and O2
The effect of fiat and principal's learning on performance and control
Figure: The effect of fiat power and principal's learning on performance and control in O8
The effect of veto power on control with agents’ docility
Figure: The effect of veto power on control with agents’ docility in O8
Principal’s learning with high or low agents’ docility
Figure: Principal's learning with high or low agents' docility in O8 for different probabilities of veto
Performance with high or low agents’ docility
Figure: Average and best performance with high or low agents' docility in O8 for different probabilities of veto