Rational Cryptography: Some Recent Results
Jonathan Katz, University of Maryland
Rational cryptography
“Applying cryptography to game theory”: When can a cryptographic protocol be used to implement a game involving a trusted party? [B92, DHR00, LMS05, ILM05, ADGH06, …]
“Applying game theory to cryptography”: How should we deal with rational, computationally bounded parties in cryptographic protocols? [HT04, GK06, LT06, KN08, ACH11, …]
The dream?
We want protocols that are resilient to malicious behavior
We believe that (most) parties act rationally, i.e., in their own self-interest
Can we get “better” cryptographic protocols by focusing on rational adversaries rather than arbitrary adversaries?
The dream?
Can we construct more efficient protocols if we assume a rational adversary (with known utilities)?
Can we circumvent impossibility results if we assume a rational adversary (with known utilities)?
YES!
Two examples
Fairness in the two-party setting [Groce-K (Eurocrypt ’12)]
The multi-party setting, and other extensions [Beimel-Groce-K-Orlov ’12]
Byzantine agreement / broadcast [Groce-Thiruvengadam-K-Zikas (ICALP ’12)]
Fairness
Fairness
Two parties computing a function f using some protocol
(Intuitively) the protocol is fair if either both parties learn the output, or neither party does
Note: fairness is non-trivial even without privacy, and even in the fail-stop setting
The challenge?
[Diagram: P0 holds input x, P1 holds input y; one party learns f(x, y) while the other learns nothing]
Impossibility of fairness
[Cleve ’86]: Fair computation of boolean XOR is impossible
Dealing with impossibility
Fairness for specific functions [GHKL08]: limited positive results are known
Partial fairness [BG89, GL90, GMPY06, MNS09, GK10, …]
Physical assumptions [LMPS04, LMS05, IML05, ILM08]
Here: what can be done if we assume rational behavior?
Rational fairness
Fairness in a rational setting [ACH11]: looks at a specific function, utilities, and setting
Main goal is to explore and compare various definitions of rational fairness
Main result is “pessimistic”: boolean XOR can be computed in a rationally fair way, but only with probability of correctness at most ½
Consider the following game…
Parties run a protocol to compute some function f:
1. Receive inputs x0, x1 from a known distribution D
2. Run the protocol…
3. Output an answer
4. Utilities depend on both parties’ outputs and the true answer f(x0, x1)
Utilities
Each party prefers to learn the correct answer, and otherwise prefers that the other party output an incorrect answer
This generalizes the setting of rational secret sharing [HT04, GK06, LT06, ADGH06, KN08, FKN10, …]
                        Player 1
                Correct       Incorrect
Player 0
    Correct     (a0, a1)      (b0, c1)
    Incorrect   (c0, b1)      (d0, d1)

where b0 > a0 ≥ d0 ≥ c0 and b1 > a1 ≥ d1 ≥ c1
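As a sanity check, the game matrix above can be encoded directly. A minimal sketch, where the numeric utility values and the name `UTIL` are hypothetical, chosen only to satisfy the stated orderings:

```python
# 2x2 utility matrix for the fairness game, indexed by
# (P0 correct?, P1 correct?). The numeric values are hypothetical,
# chosen only to satisfy b_i > a_i >= d_i >= c_i.
a0, b0, c0, d0 = 1, 2, -1, 0
a1, b1, c1, d1 = 1, 2, -1, 0

UTIL = {
    (True,  True):  (a0, a1),   # both output the correct answer
    (True,  False): (b0, c1),   # only P0 correct: P0's best, P1's worst case
    (False, True):  (c0, b1),   # only P1 correct
    (False, False): (d0, d1),   # both output incorrect answers
}

# The preference orderings assumed in the slides:
assert b0 > a0 >= d0 >= c0
assert b1 > a1 >= d1 >= c1
```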
Deviations?
Two settings:
• Fail-stop: parties can (only) choose to abort the protocol, at any point
• Byzantine: parties can arbitrarily deviate from the protocol (including changing their input)
Parties are computationally bounded
Definition
Fix f, a distribution D, and utilities for the parties. A protocol π computing f is rationally fair (for f, D, and these utilities) if running π is a (Bayesian) computational Nash equilibrium
Note: stronger equilibrium notions have been considered in other work; we leave these for future work
Question
For which settings of f, D, and the parties’ utilities do rationally fair protocols exist?
Consider the following game…
Parties have access to a trusted party computing f:
1. Receive inputs x0, x1 from a known distribution D
2. Send input xi (or ⊥) to the trusted party; get back the result f(x0, x1) (or ⊥)
3. Output an answer
4. Utilities depend on both parties’ outputs and the true answer f(x0, x1)
Revisiting [ACH11]
The setting of [ACH11]: f = boolean XOR; D = independent, uniform inputs; utilities:

                Correct      Incorrect
    Correct     (0, 0)       (1, -1)
    Incorrect   (-1, 1)      (0, 0)

Evaluating f with a trusted party gives both parties utility 0
They can get the same expected utility by random guessing! The parties have no incentive to run any protocol computing f
Running the ideal-world protocol is a Nash equilibrium, but not strict Nash
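The “random guessing gives the same expected utility” claim can be checked by exhaustive enumeration. A sketch (the name `UTIL` is hypothetical; the matrix entries are the [ACH11] values above):

```python
from itertools import product

# [ACH11] utility matrix for boolean XOR, indexed by (P0 correct?, P1 correct?)
UTIL = {
    (True,  True):  (0, 0),    # both correct
    (True,  False): (1, -1),   # only P0 correct
    (False, True):  (-1, 1),   # only P1 correct
    (False, False): (0, 0),    # both incorrect
}

# Expected utilities when both parties skip the protocol entirely and
# output uniform random guesses, over uniform independent inputs x0, x1.
u0 = u1 = 0.0
n = 0
for x0, x1, g0, g1 in product([0, 1], repeat=4):
    v0, v1 = UTIL[(g0 == x0 ^ x1, g1 == x0 ^ x1)]
    u0 += v0
    u1 += v1
    n += 1
print(u0 / n, u1 / n)   # 0.0 0.0 -- same as evaluating f with the trusted party
```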
Back to the ideal world
To fully define a protocol for the ideal world, we need to specify what a party should output when it receives ⊥ from the trusted party
(cooperate, W0): if P0 receives ⊥, it generates its output according to the distribution W0(x0)
Definition
Fix f, a distribution D, and utilities for the parties. These are incentive compatible if there exist W0, W1 such that ((cooperate, W0), (cooperate, W1)) is a Bayesian strict Nash equilibrium (actually, strictness is only needed for one party)
Main result
If computing f in the ideal world is a strict Nash equilibrium, then there is a real-world protocol π computing f such that following the protocol is a computational Nash equilibrium
If f, a distribution D, and the utilities are incentive compatible, then there is a protocol π computing f that is rationally fair (for f, D, and the same utilities)
The protocol I
Use ideas from [GHKL08, MNS09, GK10]
ShareGen
• Choose i* from a geometric distribution with parameter p
• For each i ≤ n, create values ri,0 and ri,1:
   • If i ≥ i*, set ri,0 = ri,1 = f(x0, x1)
   • If i < i*, choose ri,0 and ri,1 according to the distributions W0(x0) and W1(x1), respectively
• Secret-share each ri,j value between P0 and P1
The protocol II
Compute ShareGen (unfairly)
In each round i, the parties exchange shares: P0 learns ri,0, and then P1 learns ri,1
If the other party aborts, output the last value learned; if the protocol finishes, output rn,0 and rn,1, respectively
Note: correctness holds with all but negligible probability; the protocol can be modified so it holds with probability 1
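ShareGen and the exchange phase can be sketched end to end as a plain, insecure simulation. This elides the secret sharing and the secure computation of ShareGen entirely (the dealer just hands out the r values round by round), and the names `share_gen`, `run_protocol`, and the guessing distribution `W` are all hypothetical:

```python
import random

def share_gen(x0, x1, f, W0, W1, n, p):
    """Dealer logic: pick i* from a geometric distribution with
    parameter p, then produce per-round values (r_{i,0}, r_{i,1})."""
    i_star = 1
    while random.random() > p:                   # geometric sampling
        i_star += 1
    rounds = []
    for i in range(1, n + 1):
        if i >= i_star:
            rounds.append((f(x0, x1), f(x0, x1)))  # the real output
        else:
            rounds.append((W0(x0), W1(x1)))        # independent dummy values
    return rounds

def run_protocol(x0, x1, f, W0, W1, n=50, p=0.25, abort_round=None):
    """Round i: P0 learns r_{i,0}, then P1 learns r_{i,1}. On abort,
    each party outputs the last value it learned."""
    out0, out1 = W0(x0), W1(x1)                  # fallbacks before round 1
    for i, (r0, r1) in enumerate(share_gen(x0, x1, f, W0, W1, n, p), 1):
        out0 = r0                                # P0 learns its value first...
        if abort_round == i:                     # ...and may abort before P1 learns
            return out0, out1
        out1 = r1
    return out0, out1

random.seed(1)
xor = lambda a, b: a ^ b
W = lambda _x: random.randint(0, 1)              # full-support guessing distribution
correct = sum(run_protocol(0, 1, xor, W, W) == (1, 1) for _ in range(1000))
assert correct >= 990   # correct unless i* > n, which happens w.p. (1-p)^n
```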
Will P0 abort early?
Assume P0 is told when i* has passed; aborting afterward cannot increase its utility
Consider round i ≤ i*:
• If P0 does not abort: utility a0
• If P0 aborts and i = i*: utility b0
• If P0 aborts and i < i*: utility strictly less than a0 (because the ideal world is strict Nash)
Will P0 abort early?
Use W0, W1 with full support (always possible)
Expected utility if P0 aborts in round i ≤ i*:
   Pr[i = i*] · b0 + Pr[i < i*] · (a0 − δ) = p · b0 + (1 − p) · (a0 − δ), for some δ > 0
Set p to a small enough constant so that this is strictly less than a0
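As a numeric sanity check, with hypothetical utilities b0 = 2, a0 = 1, and a strict-Nash gap δ = 0.5 for aborting before i*, the abort utility p·b0 + (1−p)·(a0 − δ) drops below a0 exactly when p < 1/3 (the name `abort_utility` is hypothetical):

```python
# Hypothetical utilities: b0 > a0, and aborting before i* yields at most a0 - delta
b0, a0, delta = 2.0, 1.0, 0.5

def abort_utility(p):
    # Conditioned on reaching round i with i <= i*, Pr[i = i*] = p for a
    # geometric i* (memorylessness); aborting then yields b0, and
    # otherwise yields at most a0 - delta.
    return p * b0 + (1 - p) * (a0 - delta)

assert abort_utility(0.25) < a0   # small enough p: following the protocol is better
assert abort_utility(0.5) > a0    # too-large p: aborting would pay off
```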
Summary
By setting p=O(1) small enough, we get a protocol π computing f for which following π is a computational Nash equilibrium
Everything also extends to the Byzantine case, with suitable changes to the protocol
Recent extensions [BGKO]
More general classes of utility functions:
• Arbitrary functions over the parties’ inputs and outputs
• Randomized functions
Extension to the multi-party setting, with coalitions of arbitrary size
Open questions
Does a converse hold? I.e., in any non-trivial setting*, does the existence of a rationally fair protocol imply that the ideal-world computation is strict Nash for one party?
Stronger equilibrium notions
More efficient protocols; handling f with exponential-size range
* You get to define “non-trivial”
Byzantine agreement / broadcast
Definitions
Byzantine agreement: n parties with inputs x1, …, xn run a protocol giving outputs y1, …, yn
• Agreement: all honest parties output the same value y
• Correctness: if all honest parties hold the same input, then that will be the honest parties’ output
Broadcast: a dealer holds input x; the parties run a protocol giving outputs y1, …, yn
• Agreement: all honest parties output the same value
• Correctness: if the dealer is honest, all honest parties output x
Rational BA/broadcast?
Definitions require the security properties to hold against arbitrary actions of an adversary controlling up to t parties
What if the adversary has some (known) preference over outcomes?
E.g., Byzantine generals: the adversary
• prefers that only some parties attack (disagreement)
• else prefers that no parties attack (agree on 0)
• least prefers that they all attack (agree on 1)
Rational BA/broadcast
Consider preferences over {agree on 0, agree on 1, disagreement}
(Informally:) a protocol achieves rational BA / broadcast (for a given ordering of the adversary’s preferences) if:
• When all parties (including the adversary) follow the protocol, agreement and correctness hold
• The adversary never has any incentive to deviate from the protocol
Note
A different “rational” setting from what we have seen before:
• Previously: each party is rational
• Here: some parties are honest; the adversary is rational
Though one could also model honest parties as rational parties with a specific utility function
A surprise(?)
Assuming the adversary’s complete preference order is known, rational BA is possible for any t < n(!) with no setup
• Classical BA is impossible for t ≥ n/3 without setup
• (Classical BA is undefined for t ≥ n/2)
Protocol 1
Assume the adversary’s preferences are: agree on b > agree on 1-b > disagreement
Protocol:
• Every party sends its input to all other parties
• If a party receives the same value from everyone, output it; otherwise output 1-b
Analysis:
• If the honest parties all hold b, the adversary has no reason to deviate
• In any other case, deviation doesn’t change the outcome
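Protocol 1’s decision rule can be sketched directly (the name `protocol1_output` is hypothetical; `received` is the list of values a party got, including its own input):

```python
def protocol1_output(received, b):
    """Decision rule of Protocol 1: output the common value if all
    received values agree; otherwise fall back to 1 - b."""
    return received[0] if len(set(received)) == 1 else 1 - b

b = 1   # adversary's preference: agree on 1 > agree on 0 > disagreement

# All-honest run with common input 1: agreement and correctness hold.
assert protocol1_output([1, 1, 1, 1], b) == 1

# If a corrupted party sends inconsistent values, any honest party that
# sees the inconsistency outputs 1 - b, the adversary's less-preferred
# agreement value, so the adversary gains nothing by deviating.
assert protocol1_output([1, 0, 1, 1], b) == 1 - b
```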
Protocol 2
Assume the adversary’s preferences are: disagreement > agree on b > agree on 1-b
Protocol:
• All parties broadcast their input using detectable broadcast
• If a party receives the same value from everyone, output it; otherwise output 1-b
Analysis:
• The adversary has no incentive to interfere with any of the detectable broadcasts
• Agreement/correctness hold in every case
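Protocol 2 can be sketched by modeling detectable broadcast as an idealized atomic primitive (an assumption: either the value is delivered consistently to everyone, or everyone detects the failure); all names here are hypothetical:

```python
def detectable_broadcast(value, cheat=False):
    """Idealized detectable broadcast (assumed primitive): either the
    same value is delivered to everyone, or everyone detects failure."""
    return (None, True) if cheat else (value, False)

def protocol2_output(inputs, b, cheaters=()):
    """Each party broadcasts its input; any detected failure, or any
    disagreement among delivered values, makes everyone output 1 - b."""
    delivered = []
    for i, x in enumerate(inputs):
        v, failed = detectable_broadcast(x, cheat=(i in cheaters))
        if failed:
            return [1 - b] * len(inputs)    # all parties agree on 1 - b
        delivered.append(v)
    common = delivered[0] if len(set(delivered)) == 1 else 1 - b
    return [common] * len(inputs)

b = 0   # adversary's preference: disagreement > agree on 0 > agree on 1

# Honest run with common input 0: agreement and correctness hold.
assert protocol2_output([0, 0, 0, 0], b) == [0, 0, 0, 0]

# Cheating in a broadcast is detected by everyone, so the adversary can
# only force agreement on 1 - b -- never the disagreement it prefers.
assert protocol2_output([0, 0, 0, 0], b, cheaters={2}) == [1, 1, 1, 1]
```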
Other results
We also show conditions under which partial knowledge of the adversary’s preferences is sufficient for achieving BA/broadcast for t < n; see the paper for details
Other surprises(?)
(Sequential) composition is tricky in the rational setting
E.g., the classical reduction of BA to broadcast fails
• Main problem: incentives in the sub-protocol may not match incentives in the larger protocol
We have some ideas for handling this via different modeling of rational protocols
Summary
Two settings where game-theoretic analysis allows us to circumvent cryptographic impossibility results:
• Fairness
• Byzantine agreement/broadcast
Other examples?
Realistic settings where such game-theoretic modeling makes sense? Auctions? (cf. [MNT09])
Thank you!