On necessary and sufficient cryptographic assumptions: the case of memory checking
Lecture 4: Lower bound on Memory Checking
Lecturer: Moni Naor, Weizmann Institute of Science
Web site of lectures: www.wisdom.weizmann.ac.il/~naor/COURSE/ens.html


Page 1:

On necessary and sufficient cryptographic assumptions: the case of memory checking

Lecture 4: Lower bound on Memory Checking

Lecturer: Moni Naor
Weizmann Institute of Science
Web site of lectures: www.wisdom.weizmann.ac.il/~naor/COURSE/ens.html

Page 2:

Recap of Lecture 3

• The memory checking problem
  – The online vs. offline versions
  – The relationship to the sub-linear authentication problem
  – A good offline protocol
    • Based on a hash function that can be computed on the fly
    • Small-bias probability spaces
    • Hash functions for set equality
  – A good computational solution for the online problem, assuming one-way functions
    • Two solutions, both tree based
    • Using pseudo-random tags
    • Using families of UOWHFs
  – The small memory need only be reliable (not secret)
• The Consecutive Message (CM) protocol model
  – Tight Θ(√n) bound for equality: t(n) · s(n) is Ω(n)
  – Similar to the simultaneous messages model
  – But: sublinear protocols exist iff one-way functions exist

Page 3:

The lecture

• Learning distributions
  – Static and adaptive case
• Lower bounds on memory checking
  – Existence of sublinear protocols implies one-way functions

Page 4:

Learning Distributions

We are given many samples w1, w2, …, wm from a distribution D (each wi distributed ~ D)
• Would like to `learn' D
  – What does that mean?
  – Large parts of statistics are devoted to this question…
• In computational learning theory, two notions exist:
  – Learn by generation
    • Should come up with a probabilistic circuit
    • The output has distribution D, provided the inputs are random
    • Approximation is possible
  – Learn by evaluation
    • Given x, can compute (approximately) PrD[x]

Page 5:

Learning Distributions

• Suppose D is determined by a string k of s `secret' bits
• Everything else is known

If one-way functions exist, there are circuits C for which it is computationally hard to learn the output distribution:
let Fk be a pseudo-random function; C's output is x ∘ Fk(x) for a random x.

[Diagram: the secret k is fed into the circuit C.]
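As a concrete toy illustration of such a circuit, here is a minimal Python sketch; HMAC-SHA256 stands in for the pseudo-random function Fk (an illustrative assumption of this sketch, not something the slides specify — any PRF would do):

```python
# Toy instantiation of a circuit C whose output distribution is hard to
# learn without the secret k: output x || F_k(x) for a fresh random x.
# HMAC-SHA256 plays the role of the PRF F_k (illustrative assumption).
import hashlib
import hmac
import os

def make_circuit(k: bytes):
    """Return C: on each call, output x composed with F_k(x) for random x."""
    def C():
        x = os.urandom(16)                                   # random input x
        return x + hmac.new(k, x, hashlib.sha256).digest()   # x || F_k(x)
    return C

k = os.urandom(32)   # the s secret bits determining the distribution D
C = make_circuit(k)
```

Without k, distinguishing these samples from x composed with random bits would amount to breaking the PRF.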

Page 6:

Learning Adaptively Changing Distributions

Learning to predict and imitate the distribution of probabilistic, adaptively changing processes.

E.g.: the T-1000 Robot can "imitate anything it touches … anything it samples"

Page 7:

Examples of adaptively changing distributions

• Impersonation
  – Alice and Bob agree on a secret and engage in a sequence of identification sessions
  – Eve wants to learn to imitate one (or both) of the parties
  – How long should she wait?
• Classification of a changing bacterium
  – How long must the scientist observe before making the classification?
• Catching up: Sam and Sally are listening to a visiting professor's lecture, and Sam falls asleep for a while
  – How long would it take Sam to catch up with Sally?

Page 8:

Learning Adaptively Changing Distributions

What happens if the generating circuit C changes over time, reacting to events and the environment?
• Secret state
• Public state
• Transition function D: Sp × Ss × R → Sp × Ss

The sizes of the secret and public states are not restricted, but the size of the initial secret is restricted to s bits.

How long would it take us to learn the distribution of the next public state, given the sequence of past public states?

First answer: it may be impossible to learn. Example: the next public state may be the current secret state, with the current secret state chosen at random.

So we want to be competitive with a party that knows the initial secret state.

Page 9:

Definition of Learning an Adaptively Changing Distribution

Let D be an adaptively changing distribution (ACD), D: Sp × Ss × R → Sp × Ss.

Given public states p0, p1, …, pk and the initial secret s0, there is an induced distribution Dk on the next public state.

Definition: A learning algorithm (ε,δ)-learns the ACD if
• it always halts and outputs a hypothesis h
• with probability at least 1−δ we have Δ(Dk, h) ≤ ε
The probability is over the random secret and the randomness in the evolution of the state.

Page 10:

Algorithm for learning ACDs

Theorem: For any ε, δ > 0, for any ACD there is an algorithm that activates the system for O(s) rounds (the constant depends on ε and δ) and (ε,δ)-learns the ACD.

Repeat until success (or give up):
  If there is a subset A of initial secret states of very high weight whose distributions are close
    (close = statistical distance less than ε; high weight = 1 − δ/2)
  then pick any h ∈ A
  else activate the ACD and obtain the next public state

Claim: if the algorithm terminates in the loop, then with probability at least 1 − δ/2,
  Δ(Dk, h) ≤ ε
conditioned on the public states seen so far.
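The loop's stopping test can be sketched in Python for a toy ACD with a tiny finite secret space, where the posterior over initial secrets and each secret's next-public-state distribution are explicit dictionaries (all names and the brute-force subset search are illustrative assumptions; the real algorithm never enumerates subsets explicitly):

```python
# Toy sketch of one round of the ACD learning loop: look for a subset of
# candidate initial secrets of weight >= 1 - delta/2 whose induced
# distributions on the next public state are pairwise eps-close.
from itertools import combinations

def stat_dist(p, q):
    """Statistical (total variation) distance between finite distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def learn_acd_round(posterior, dist_of, eps, delta):
    """posterior: secret -> current weight; dist_of: secret -> distribution.
    Returns a hypothesis h if a high-weight, pairwise-close subset exists;
    returns None to signal "activate the ACD, observe the next public state,
    update the posterior, and retry"."""
    secrets = list(posterior)
    for r in range(len(secrets), 0, -1):     # brute force, largest sets first
        for subset in combinations(secrets, r):
            if sum(posterior[s] for s in subset) < 1 - delta / 2:
                continue
            if all(stat_dist(dist_of[a], dist_of[b]) < eps
                   for a, b in combinations(subset, 2)):
                return dist_of[subset[0]]    # any h in the close set works
    return None
```

For example, with posterior {'k0': 0.6, 'k1': 0.4} and two distributions at statistical distance 0.05, the round already halts for ε = 0.1, δ = 0.5.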

Page 11:

Analysis

• Main parameter for showing that the algorithm advances: the entropy of the initial secret

• Key lemma: if the high-weight condition does not hold, then the expected entropy drop of the initial secret is high
  – at least Ω(ε²δ²)

• After O(s) iterations not much entropy is left (the constant depends on ε and δ)

The (Shannon) entropy of X is H(X) = −∑x PX(x) log PX(x)
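Since the analysis tracks the Shannon entropy of the initial secret, a minimal sketch of the formula in Python (in bits):

```python
# Minimal sketch: Shannon entropy of a finite distribution, in bits.
import math

def shannon_entropy(p):
    """H(X) = -sum_x P_X(x) * log2(P_X(x)); terms with P_X(x) = 0 contribute 0."""
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

# A uniform s-bit initial secret starts with exactly s bits of entropy:
uniform4 = {x: 1 / 16 for x in range(16)}
```

Each round that fails the high-weight test knocks the posterior's entropy down in expectation, so it can fail only O(s) times.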

Page 12:

Efficiency of the Algorithm

• Would like to be able to learn every ACD where D is an efficiently computable function

Theorem: One-way functions exist iff there is an efficiently computable ACD D and some ε, δ > 0 for which it is (ε, δ)-hard to learn D

Page 13:

Connection to memory checking and authentication: learning the access pattern distribution

Corollary from the ACD Learning Theorem: for any ε, δ > 0, for any x, when
– E is activated on x, with secret output sx
– the adversary activates V at most O(s(n)/ε²δ²) times
– the adversary learns a secret encoding sL

px: the final public encoding reached
Dp(s): the access pattern distribution in the next activation on public encoding p, with initial secret s

Randomness is over the activations of E and V.

Guarantee: With probability at least 1−δ, the distributions Dpx(sx) and Dpx(sL) are ε-close (statistically).

Page 14:

Memory Checkers
How to check a large and unreliable memory

• Store and retrieve requests to a large adversarial memory – a vector in {0,1}^n
• Detects whether the answer to any retrieve differs from the last store to that location
• Uses a small, secret, reliable memory: space complexity s(n)
• Makes its own store and retrieve requests: query complexity t(n)

[Diagram: the user U sends store/retrieve requests to the memory checker C, which keeps a secret memory S of s(n) bits and issues t(n) of its own requests to the public memory P.]
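For intuition, here is a toy sketch of a tree-based online checker in the spirit of the Lecture 3 constructions, with SHA-256 standing in for the pseudo-random tags / UOWHFs (an assumption of this sketch; the number of leaves is assumed to be a power of two):

```python
# Toy tree-based online memory checker: the public (adversarial) memory
# holds the data plus a hash tree over it; the checker's small reliable
# memory holds only the root, giving s(n) = O(1) hashes and t(n) = O(log n).
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

class TreeChecker:
    def __init__(self, data):              # data: list of bytes, len a power of 2
        self.n = len(data)
        self.pub = data[:]                 # public leaves (adversarial)
        self.tree = [b""] * self.n + [h(x) for x in data]   # public hash nodes
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = h(self.tree[2 * i], self.tree[2 * i + 1])
        self.root = self.tree[1]           # the only secret, reliable state

    def _path_ok(self, i):                 # O(log n) public reads per request
        node, cur = self.n + i, h(self.pub[i])
        while node > 1:
            sib = self.tree[node ^ 1]      # sibling hash from public memory
            cur = h(cur, sib) if node % 2 == 0 else h(sib, cur)
            node //= 2
        return cur == self.root

    def retrieve(self, i):
        if not self._path_ok(i):
            raise RuntimeError("BUGGY: public memory was modified")
        return self.pub[i]

    def store(self, i, value):
        if not self._path_ok(i):           # authenticate before overwriting
            raise RuntimeError("BUGGY: public memory was modified")
        self.pub[i] = value
        node = self.n + i
        self.tree[node] = h(value)
        while node > 1:                    # refresh the path to the root
            node //= 2
            self.tree[node] = h(self.tree[2 * node], self.tree[2 * node + 1])
        self.root = self.tree[1]           # update the reliable root
```

Any change to the public data or tree nodes makes the recomputed root disagree with the stored one, so the checker cries BUGGY.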

Page 15:

Computational assumptions and memory checkers

• For offline memory checkers no computational assumptions are needed:
  Probability of not detecting an error: ε
  Query complexity: t(n) = O(1) (amortized)
  Space complexity: s(n) = O(log n + log 1/ε)

• For online memory checkers, under computational assumptions, good schemes are known:
  Query complexity: t(n) = O(log n)
  Space complexity: s(n) = n^ε (for any ε > 0)
  Probability of not detecting errors: negligible

Main result: computational assumptions are necessary for sublinear schemes
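The offline construction above can be illustrated with the classic timestamp-plus-multiset-hash idea (Lecture 3's hash functions for set equality). The polynomial hash over a prime field below is an illustrative stand-in for the small-bias construction, and the bit-packing of (address, value, time) is an assumption of this sketch:

```python
# Toy offline checker: log every write and every read into incremental
# multiset hashes; after a final scan, the multiset of reads must equal the
# multiset of writes, or some answer was tampered with.
import random

P = (1 << 61) - 1                     # Mersenne prime field for the hash

class OfflineChecker:
    def __init__(self, n):
        self.mem = [(0, 0)] * n       # public adversarial memory: (value, time)
        self.r = random.randrange(1, P)   # secret random evaluation point
        self.now = 0
        self.W = 1                    # hash of the multiset of writes
        self.R = 1                    # hash of the multiset of reads
        for a in range(n):            # initial contents count as writes
            self.W = self.W * self._enc(a, 0, 0) % P

    def _enc(self, a, v, t):          # assumes a, v, t each fit in 20 bits
        return (self.r - (a << 40 | v << 20 | t)) % P

    def read(self, a):
        v, t = self.mem[a]
        self.R = self.R * self._enc(a, v, t) % P
        self.now = max(self.now, t) + 1
        self.mem[a] = (v, self.now)   # write back with a fresh timestamp
        self.W = self.W * self._enc(a, v, self.now) % P
        return v

    def write(self, a, v):
        old, t = self.mem[a]
        self.R = self.R * self._enc(a, old, t) % P
        self.now = max(self.now, t) + 1
        self.mem[a] = (v, self.now)
        self.W = self.W * self._enc(a, v, self.now) % P

    def check(self):
        """Final scan: read every cell once more, then compare the hashes."""
        for a in range(len(self.mem)):
            v, t = self.mem[a]
            self.R = self.R * self._enc(a, v, t) % P
        return self.R == self.W
```

If the memory behaved, every written triple is read back exactly once, so the multisets (and hence the hashes) match; a tampered value breaks the match except with probability about 1/P over the secret point r.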

Page 16:

Recall: Memory Checker → Authenticator

If there exists an online memory checker with
– space complexity s(n)
– query complexity t(n)
then there exists an authenticator with
– space complexity O(s(n))
– query complexity O(t(n))

Strategy for showing the lower bound for memory checking: show it for authenticators

Page 17:

The Lower Bound

Theorem 1 [Tight lower bound]: For any online memory checker (authenticator) secure against a computationally unbounded adversary,
  s(n) × t(n) is Ω(n)

Page 18:

Memory Checkers and One-Way Functions
Breaking the lower bound implies one-way functions

Theorem 2:If there exists an online memory checker (authenticator)

– Working in polynomial time – Secure against polynomial time adversaries – With query and space complexity:

s(n) x t(n) < c · n (for a specific c > 0)

then there exist functions that are hard to invert for infinitely many input lengths (“almost one-way” functions)

Page 19:

Program for showing the lower bound:

• Prove the lower bound:
  – First a simple case, by a reduction to the consecutive message model
  – Then the generalized reduction

Page 20:

Simultaneous Messages Protocols

• For the equality function: |mA| × |mB| = Ω(n) [Babai Kimmel 1997]

[Diagram: Alice holds x ∈ {0,1}^n and sends mA to Carol; Bob holds y ∈ {0,1}^n and sends mB; Carol outputs f(x,y).]

Page 21:

Consecutive Messages Protocols

Theorem: For any CM protocol that computes the equality function, if |mP| ≤ n/100 then |mA| × |mB| = Ω(n)

[Diagram: Alice holds x ∈ {0,1}^n and first publishes a public message mP and public randomness rP; Bob holds y ∈ {0,1}^n and sends mB; Alice sends mA; Carol outputs f(x,y). In the reduction, |mA| corresponds to s(n) and |mB| to t(n).]

Page 22:

The Reduction

Idea:

Use an authenticator to construct a CM protocol for equality testing

Page 23:

Recall: Authenticators
How to authenticate a large and unreliable memory with a small and secret memory

[Diagram: E encodes x ∈ {0,1}^n into a public encoding px = Epublic(x, r) and a secret encoding sx = Esecret(x, r) of s(n) bits; the adversary may replace px by py; the verifier V reads t(n) bits of the public encoding, uses the secret encoding, and accepts or rejects; D decodes.]

Page 24:

A Simple(r) Construction

Simplifying assumption: V chooses which indices of the public encoding to access independently of the secret encoding.
In particular, the adversary knows the access pattern distribution.

Page 25:

[Diagram: the simple reduction. Alice runs E on x, publishing the public encoding px and sending the secret encoding sx to Carol; Bob, holding y, reads the bits of the (possibly different) public encoding that V would access and sends them; Carol runs V on sx and Bob's bits, accepting iff V accepts.]

Page 26:

To show it works

Must show:
• When x = y the CM protocol accepts
  – Otherwise the authenticator would reject even when no changes were made
• How to translate an adversary for the CM protocol that makes it accept when x ≠ y into an adversary that cheats the verifier

Page 27:

Why It Works (1)

Claim: If (E, D, V) is an authenticator then the CM protocol is good.

Correctness when x = y: Alice and Bob should use the same public encoding of x. To do this, use rpub as the randomness for the encoding.

Page 28:

Why It Works (2): Rerandomizing

Security: suppose an adversary for the CM protocol breaks it, i.e. makes Carol accept when x ≠ y.
Want to show: it can break the authenticator as well.
• Tricky: the "CM adversary" sees rpub!
  – This might leak information, since sx is chosen as Esecret(x, rpub)
• Solution: for sx, Alice selects different real randomness giving the same public encoding!
  – Choose r' at random from {r : Epublic(x, r) = Epublic(x, rpub)}
  – Let sx = Esecret(x, r')
• Exactly the same information is available to the authenticator adversary as in a regular execution
  – The public encoding px = Epublic(x, r)
  – Hence the probability of cheating is the same

Conclusion: s(n) × t(n) is Ω(n)

Page 29:

The Rerandomizing Technique

Always choose `at random’ the random coins consistent with the information you have
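A brute-force toy sketch of this technique: given the published public encoding, sample fresh coins uniformly among all those consistent with it. The encodings e_public / e_secret below are invented purely for illustration:

```python
# Brute-force rerandomizing over a tiny coin space: pick r' uniformly from
# the coins that give the same public encoding, so the secret encoding is
# freshly distributed. The toy encodings are illustrative assumptions.
import random

def e_public(x: int, r: int) -> int:
    return x ^ (r & 0xF)        # toy public encoding: depends on 4 coin bits

def e_secret(x: int, r: int) -> int:
    return (x + r) % 256        # toy secret encoding: depends on all coin bits

def rerandomize(x: int, p: int, coin_bits: int = 8) -> int:
    """Sample r' uniformly from the coins giving public encoding p on input x."""
    consistent = [r for r in range(2 ** coin_bits) if e_public(x, r) == p]
    return random.choice(consistent)

x, r = 5, 0b10110011
p = e_public(x, r)
r2 = rerandomize(x, p)          # same public encoding, fresh secret encoding
```

For the real reduction this sampling is not efficient in general, which is exactly why the one-way-function version of the argument needs random inversion.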

Page 30:

Why it doesn't always work

• What if the verifier uses the secret encoding to determine its access pattern distribution?
• The simple lower bound applies to "one-time" authenticators, where the adversary sees only a single verification.
• Is the bound true without the simplifying assumption?

Page 31:

"One-Time Authenticators"

• One-time authenticators can achieve space complexity O(log n) and query complexity O(1)
• Lesson: the lower bound must use the fact that V is secure when run many times

[Diagram: E encodes x ∈ {0,1}^n into a public encoding E(x) and a short secret encoding; V reads a few bits of the unmodified encoding and accepts; D decodes.]

Page 32:

Progress:

• Prove lower bounds:
  – First a simple case
  – Then the generalized reduction
• A discussion of one-way functions

Page 33:

Authenticators: Access Pattern

Access pattern: the indices accessed by V and the bit values read.
Access pattern distribution: the distribution of the access pattern in V's next activation, given
– E's initial secret string
– past access patterns
Randomness: over V's coin tosses in all its activations.

Want to be able to state: the adversary knows the access pattern distribution, even though it can't see E's secret output.

Page 34:

Access Pattern Distribution

[Diagram: E encodes x ∈ {0,1}^n into secret sx and a public encoding; V is activated repeatedly, each activation reading public bits with fresh randomness; given the past, the next activation induces the distribution Dx, and on a modified public encoding py the distribution Dx→y.]

Page 35:

Learning the Access Pattern Distribution

• Important lesson: if the adversary doesn't know the access pattern distribution, then V is "home free".
• In the "one-time" example, V exposes the secret indices!
• Lesson: activate V many times and "learn" its distribution!

Recall: learning adaptively changing distributions.

Page 36:

Connection to memory checking and authentication: learning the access pattern distribution

Corollary from the ACD Learning Theorem: for any ε, δ > 0, for any x, when
– E is activated on x, with secret output sx
– the adversary activates V at most O(s(n)/ε²δ²) times
– the adversary learns a secret encoding sL

px: the final public encoding reached
Dp(s): the access pattern distribution in the next activation on public encoding p, with initial secret s

Randomness is over the activations of E and V.

Guarantee: With probability at least 1−δ, the distributions Dpx(sx) and Dpx(sL) are ε-close (statistically).

Page 37:

[Diagram: the reduction with learning. Alice runs E on x, publishes px and rpub, and sends sx to Carol; Bob activates V repeatedly to learn sL and sends Carol the access pattern a together with sL; on the unmodified encoding, Carol accepts.]

Page 38:

Sampling by sL, simulating by sx

The access pattern distributions under sL and sx are ε-close:
• Bob generates an access pattern a using sL
• Carol selects a random string r from those that give a on secret input sx
  – Rerandomization
• Carol simulates V using the random string r

Claim: the distribution of r is ε-close to uniform

Page 39:

Does it Work?

Security? The adversary sees sL!
Not a problem: it could have learned this on its own.

What about y ≠ x?

Page 40:

Recap (1)

Adversary wants to know the access pattern distribution

1. Can learn the access pattern distribution
2. Saw a protocol that accepts when y = x
3. What about y ≠ x?

Page 41:

[Diagram: the problematic case. The public encoding is modified to py for y ≠ x; Bob samples an access pattern a using sL on the modified encoding and sends (a, sL); it is unclear whether Carol should accept or reject.]

Page 42:

Does it Work? (2)

• Will this also work when y ≠ x?
• No! A big problem for the adversary:
  – It can learn the access pattern distribution on the correct, unmodified public encoding…
  – But it really wants the distribution on a different, modified encoding!
• The distributions under sx and sL may be:
  – very close on the unmodified encoding (px)
  – very far on any other (e.g. py)
• Can't hope to learn the distribution on a modified public encoding
  – Not enough information/iterations

Page 43:

Back to The Terminator:

TERMINATOR: What's the dog's name?JOHN: Max.

TERMINATOR: Hey, Janelle, what's wrong with Wolfy? I can hear him barking. Is he okay?

T-1000 impersonating Janelle:Wolfy's fine, honey. Where are you?

Terminator hangs up: Your foster parents are dead. Let's go.

Page 44:

Recap (2)

Adversary wants to know the access pattern distribution

1. Can learn the access pattern distribution
2. Saw a protocol that accepts when y = x
3. What about y ≠ x?
4. Big problem: can't "learn" the access pattern distribution in this case!

Page 45:

Bait and Switch (1)

• Intuition: if Carol knows both sx and sL, and they give different distributions, then she can reject.
• Concrete idea: Bob always uses sL to determine the access pattern; Carol will check whether the distributions are close or far.
• This is a "weakening" of the verifier. We need to show it is still secure.

Page 46:

Bait and Switch (2)

• Give Carol access to sx and to sL, and also the previous access patterns (a)
• Bob got public encoding p
• Recall Dp(sx) and Dp(sL):
  – the access pattern distributions on public encoding p with sx and sL as initial private encodings

Page 47:

Access Pattern Distribution

[Diagram: the access pattern distribution Dx on a (possibly modified) public encoding p, given the randomness and the past activations.]

Page 48:

Bait and Switch (3)

• If only Carol could compute Dp(sx) and Dp(sL)… she would check whether they are ε-close:
  – If they are far, then p cannot be the "real" public encoding: reject!
  – If they are close, then:
    • use sL to determine the access pattern
    • simulate V with sx and that access pattern

Page 49:

Bait and Switch (4)

• Last problem:
  – V' cannot compute the distributions Dp(sx) and Dp(sL) without reading all of p (V may be adaptive).
• Observation:
  – V' can compute the probability of any access pattern for which all the bits read from p are known.
• Solution:
  – Sample O(1) access patterns by Dp(sL), and use them to approximate the distance between the distributions.

(This is the only type of operation we use that is not a random inverse.)

Page 50:

[Diagram: the full reduction. Alice publishes px and rpub and sends sx; Bob samples an access pattern a using sL on the (possibly modified) public encoding py and sends (a, sL); Carol rejects if the sampled distance between Dp(sx) and Dp(sL) is far; if close, she simulates V with sx and accepts iff V accepts.]

Page 51:

Analysis of Protocol

• If the public encoding does not change, the distributions will be ε-close w.h.p.
  When Carol simulates V, she accepts w.h.p.
• If decoding x from the public encoding is impossible, there are two cases:
  – If the distributions are far: Carol runs the approximate distance test and rejects w.h.p.
  – If the distributions are close: when Carol simulates V, she rejects w.h.p.

Page 52:

Recap (3)

Adversary wants to know the access pattern distribution

1. Can learn the access pattern distribution
2. Saw a protocol that accepts when y = x
3. What about y ≠ x?
4. Big problem: can't "learn" the access pattern distribution in this case!
5. Solution: bait and switch

Page 53:

Program for This Talk:

• Define authenticators and online memory checkers
• Review some past results
• Define communication complexity model(s)
• Prove lower bounds:
  – First a simple case
  – Then the generalized reduction
• A discussion of one-way functions

Page 54:

Recall: One-Way Functions

• A function f is one-way if:
  – it is computable in poly-time
  – the probability of successfully finding an inverse in poly-time (on a random input) is negligible
• A function f is distributionally one-way if:
  – it is computable in poly-time
  – no poly-time algorithm can successfully find a random inverse (on a random input)

Theorem [Impagliazzo Luby 89]: Distributionally one-way functions exist ⇔ one-way functions exist
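What "finding a random inverse" means can be shown by brute force over a tiny domain (feasible only because the domain is tiny; the toy function f below is illustrative, and for a distributionally one-way function no poly-time algorithm achieves this):

```python
# Brute-force sketch of a *distributional* inverter: given y, output a
# uniformly random preimage of y under f.
import random

def f(x: int) -> int:
    return (x * x) % 16          # toy many-to-one function

def random_inverse(y: int, domain=range(16)):
    """Sample uniformly from f^{-1}(y); None if y has no preimage."""
    preimages = [x for x in domain if f(x) == y]
    return random.choice(preimages) if preimages else None
```

The distinction matters for the reduction: finding *some* preimage is not enough; the rerandomizing steps need a preimage distributed uniformly over the consistent coins.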

Page 55:

Authenticator → One-Way Function

Recall Theorem 2. Two steps:
• If there are no one-way functions: build an explicit efficient adversary "fooling" any CM protocol (with poly-time Alice and Bob) that breaks the lower bound
• If there are no one-way functions: modify the reduction so that Alice and Bob run in poly-time

Together: a contradiction!

Page 56:

Recall: Sublinear CM protocols imply one-way functions

Theorem: A CM protocol for equality where
– all parties are polynomial time
– t(n) · s(n) ∈ o(n) and |mp| ∈ o(n)
exists iff one-way functions exist.

Proof: Consider the function
  f(x, rA, rp, rB1, rB2, …, rBk) = (rp, mp, rB1, rB2, …, rBk, mB1, mB2, …, mBk)
where
• mp = Mp(x, rA, rp)
• mBi = MB(x, rBi, mp, rp)

Main Lemma: the function f is distributionally one-way.

Mp is the function that maps Alice's input to the public message mp; MB is the function that maps Bob's input to the private message mB.

Page 57:

CM protocol implies one-way functions

• The adversary selects a random x for Alice
• Alice sends the public information mp, rpub
• The adversary generates a multiset Tx of s(n) Bob-messages
  Claim: w.h.p., for every Alice message, Tx approximates Carol's output
• The adversary randomly inverts the function f and w.h.p. finds x' ≠ x s.t. Tx characterizes Carol when Bob's input is either x or x'
  Why? Tx is much shorter than n, since s(n) · t(n) + |mp| is not too large!
• Since Carol's behavior on x and on x' ≠ x is similar, the protocol cannot have high success probability

Page 58:

Running Alice and Bob in Poly-Time

If we can randomly invert any efficiently computable function, then we can run Alice and Bob in poly-time.

We need the tight ACD learning result.
Theorem: If one-way functions don't exist, then ACDs can be learned efficiently with few samples.

Interesting point: we don't make Carol efficient (nor do we need to).

Page 59:

Conclusion

Settled the complexity of online memory checking.
Characterized the computational assumptions required for good online memory checkers.

Open questions:
• Do we need logarithmic query complexity for online memory checking?
• Showing one-way functions are essential for cryptographic tasks
• Equivalence of approximating the distance between distributions and one-way functions