Complexity Theory Lecture 11 Lecturer: Moni Naor


Page 1

Complexity Theory

Lecture 11

Lecturer: Moni Naor

Page 2

Recap

Last week:

Statistical zero-knowledge

AM protocol for VC dimension

Hardness and Randomness

This week:

Hardness and Randomness

Semi-Random Sources

Extractors

Page 3

Derandomization

A major research question: how to make the construction of
– Small sample spaces `resembling' the large one
– Hitting sets
efficient.

Successful approach: randomness from hardness
– (Cryptographic) pseudo-random generators
– Complexity oriented pseudo-random generators

Page 4

Recall: Derandomization I
Theorem: any f ∈ BPP has a polynomial size circuit.

Simulating large sample spaces:
• Want to find a small collection of strings on which the PTM behaves similarly to the large collection
  – If the PTM errs with probability at most ε, then it should err on at most ε + δ of the small collection
• Choose m random strings
• For input x, the event A_x is: more than (ε + δ) of the m strings fail the PTM

Chernoff bound: Pr[A_x] ≤ e^(−2δ²m) < 2^(−2n)

Union bound: Pr[∪_x A_x] ≤ ∑_x Pr[A_x] < 2^n · 2^(−2n) = 2^(−n) < 1

So some collection exists that errs on at most ε + δ of its strings for ALL inputs simultaneously, i.e. it resembles the probability of success on all inputs.
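As a sanity check, here is a small calculation of how large m must be for the union bound above to go through (a sketch; the values of n and δ are illustrative, not from the lecture):

```python
import math

# Sanity check of the counting argument above: with m sample strings, the
# probability that a fixed input x sees more than an (eps + delta) fraction
# of failures is at most e^(-2*delta^2*m) (Chernoff). Choosing
# m > n*ln(2)/delta^2 makes this < 2^(-2n), so the union bound over all
# 2^n inputs stays below 1.

def min_sample_size(n: int, delta: float) -> int:
    """Smallest m with e^(-2*delta^2*m) < 2^(-2n)."""
    return math.floor(n * math.log(2) / delta ** 2) + 1

n, delta = 100, 0.1                        # illustrative values
m = min_sample_size(n, delta)
per_input = math.exp(-2 * delta ** 2 * m)  # Chernoff bound for one fixed input
print(m)                                   # 6932
print(per_input < 2 ** (-2 * n))           # True: beats 2^(-2n)
print(per_input * 2 ** n < 1)              # True: union bound over 2^n inputs
```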

Page 5

Pseudo-random generators

• Would like to stretch a short secret (seed) into a long one
• The resulting long string should be usable in any case where a long string is needed
  – In particular: cryptographic applications such as a one-time pad
• Important notion: indistinguishability
  – Two probability distributions that cannot be told apart
  – Statistical indistinguishability: distance between probability distributions
  – New notion: computational indistinguishability

Page 6

Computational Indistinguishability

Definition: two sequences of distributions {D_n} and {D'_n} on {0,1}^n are computationally indistinguishable if
for every polynomial p(n), for every probabilistic polynomial time adversary A, and for sufficiently large n:
if A receives input y ∈ {0,1}^n and tries to decide whether y was generated by D_n or D'_n, then

|Prob[A=‘0’ | D_n] − Prob[A=‘0’ | D'_n]| < 1/p(n)

(the left-hand side is called the advantage of A)

Without the restriction to probabilistic polynomial time tests, this is equivalent to the variation distance being negligible:

∑_{β ∈ {0,1}^n} |Prob[D_n = β] − Prob[D'_n = β]| < 1/p(n)

Page 7

Pseudo-random generators
Definition: a function g:{0,1}* → {0,1}* is said to be a (cryptographic) pseudo-random generator if
• It is polynomial time computable
• It stretches the input: |g(x)| > |x|
  – denote by ℓ(n) the length of the output on inputs of length n
• If the input (seed) is random, then the output is indistinguishable from random:
  for any probabilistic polynomial time adversary A that receives input y of length ℓ(n) and tries to decide whether y = g(x) or y is a random string from {0,1}^ℓ(n), for any polynomial p(n) and sufficiently large n

|Prob[A=`rand' | y = g(x)] − Prob[A=`rand' | y ∈_R {0,1}^ℓ(n)]| < 1/p(n)

We want to use the output of a pseudo-random generator whenever long random strings are needed.

“Anyone who considers arithmetical methods of producing random numbers is, of course, in a state of sin.” (J. von Neumann)

Page 8

Pseudo-random generators (definition repeated from the previous slide)

Important issues:
• Why is the adversary bounded by polynomial time?
• Why is the indistinguishability not perfect?

Page 9

Pseudo-Random Generators and Derandomization

All possible strings of length k

A pseudo-random generator maps k-bit seeds into n-bit strings

Any input should see roughly the same fraction of accepts and rejects

The result is a derandomization of a BPP algorithm by taking the majority answer

Page 10

Complexity of Derandomization

• Need to go over all 2^k possible seeds
• Need to compute the pseudo-random generator on those points
• The generator has to be secure against non-uniform distinguishers:
  – The actual distinguisher is the combination of the algorithm and its input
  – If we want the generator to work for all inputs, we get non-uniformity

Page 11

Construction of pseudo-random generators: randomness from hardness

• Idea: given any one-way function, there must be a hard decision problem hidden in it
• If it is balanced enough: it looks random
• Such a problem is a hardcore predicate
• Possibilities:
  – Last bit
  – First bit
  – Inner product
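As an illustration, here is a sketch of the inner-product predicate (the Goldreich-Levin predicate); toy_f below is a hypothetical placeholder, not a real one-way function:

```python
# Sketch of the inner-product predicate h(x, r) = <x, r> mod 2. For a one-way
# function f, Goldreich-Levin shows h is hardcore for f'(x, r) = (f(x), r).
# toy_f is only here to make the sketch runnable; it is NOT one-way.

def inner_product_bit(x: int, r: int) -> int:
    """<x, r> mod 2: the parity of the bitwise AND of x and r."""
    return bin(x & r).count("1") % 2

def toy_f(x: int, n: int) -> int:
    return (x * 0x9E3779B1 + 12345) % (1 << n)   # placeholder, easy to invert

n, x, r = 16, 0b1011001110001111, 0b0101111000110010
# The adversary sees (toy_f(x, n), r) and must guess the bit below:
print(inner_product_bit(x, r))
```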

Page 12

Hardcore Predicate
Definition: let f:{0,1}* → {0,1}* be a function. We say that h:{0,1}* → {0,1} is a hardcore predicate for f if
• It is polynomial time computable
• For any probabilistic polynomial time adversary A that receives input y = f(x) and tries to compute h(x), for any polynomial p(n) and sufficiently large n

|Prob[A(y) = h(x)] − 1/2| < 1/p(n)

where the probability is over the choice of y and the random coins of A.

• Sources of hardcoreness:
  – not enough information about x
    • not of interest for generating pseudo-randomness
  – enough information about x, but it is hard to compute

Page 13

Single bit expansion

• Let f:{0,1}^n → {0,1}^n be a one-way permutation

• Let h:{0,1}^n → {0,1} be a hardcore predicate for f

Consider g:{0,1}^n → {0,1}^(n+1) where g(x) = (f(x), h(x))

Claim: g is a pseudo-random generator
Proof idea: a distinguisher for g can be used to guess h(x), since the only two candidates for an output beginning with f(x) are (f(x), h(x)) and (f(x), 1−h(x)).

Page 14

From single bit expansion to many bit expansion

• Can make r and f^(m)(x) public
  – But not any other internal state

• Can make m as large as needed

[Diagram: input x and r; internal configuration x, f(x), f^(2)(x), f^(3)(x), …; output bits h(x,r), h(f(x),r), h(f^(2)(x),r), …, h(f^(m−1)(x),r), and finally the public value f^(m)(x).]
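A minimal sketch of this iterated construction, with a hypothetical toy permutation standing in for the one-way permutation f and the inner product standing in for h:

```python
# Sketch of the iterated expansion pictured above: emit one hardcore bit per
# internal state, then reveal f^(m)(x) and r. toy_perm (an affine map with an
# odd multiplier, a bijection mod 2^n) is a stand-in and is NOT one-way.

def h(x: int, r: int) -> int:
    return bin(x & r).count("1") % 2          # inner-product hardcore bit

def toy_perm(x: int, n: int) -> int:
    return (1664525 * x + 1013904223) % (1 << n)

def expand(x: int, r: int, n: int, m: int):
    bits = []
    for _ in range(m):
        bits.append(h(x, r))                  # output h(f^(i)(x), r)
        x = toy_perm(x, n)                    # advance the secret state
    return bits, x, r                         # f^(m)(x) and r may be public

print(expand(0xBEEF, 0xCAFE, n=16, m=8))
```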

Page 15

Two important techniques for showing pseudo-randomness

• Hybrid argument

• Next-bit prediction and pseudo-randomness

Page 16

Hybrid argument

To prove that two distributions D and D' are indistinguishable:
• suggest a collection of intermediate distributions D = D_0, D_1, …, D_k = D'
• If D and D' can be distinguished, then there is a pair D_i and D_{i+1} that can be distinguished
• Advantage ε in distinguishing between D and D' means advantage ε/k between some D_i and D_{i+1}
• Use a distinguisher for the pair D_i and D_{i+1} to derive a contradiction
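Written out, the ε/k step is just the triangle inequality applied to a telescoping sum:

|Pr_D[A=1] − Pr_{D'}[A=1]| = |∑_{i=0}^{k−1} (Pr_{D_i}[A=1] − Pr_{D_{i+1}}[A=1])| ≤ ∑_{i=0}^{k−1} |Pr_{D_i}[A=1] − Pr_{D_{i+1}}[A=1]|

If the left-hand side is at least ε, then at least one of the k terms on the right is at least ε/k.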

Page 17

Next-bit Test
Definition: a function g:{0,1}* → {0,1}* is said to pass the next-bit test if:
• It is polynomial time computable
• It stretches the input: |g(x)| > |x|
  – denote by ℓ(n) the length of the output on inputs of length n
• If the input (seed) is random, then the output passes the next-bit test:
  for any prefix length 0 ≤ i < ℓ(n), for any probabilistic polynomial time adversary A acting as a predictor (A receives the first i bits of y = g(x) and tries to guess the next bit), for any polynomial p(n) and sufficiently large n

|Prob[A(y_1, y_2, …, y_i) = y_{i+1}] − 1/2| < 1/p(n)

Theorem: a function g:{0,1}* → {0,1}* passes the next-bit test if and only if it is a pseudo-random generator.

Page 18

Landmark results in the theory of cryptographic pseudo-randomness

Theorem: if pseudo-random generators stretching by a single bit exist, then pseudo-random generators stretching by any polynomial factor exist

Theorem: if one-way permutations exist, then pseudo-random generators exist

A more difficult theorem to prove:
Theorem [HILL]: one-way functions exist iff pseudo-random generators exist.

Page 19

Complexity Oriented Pseudo-Random Generators

Cryptography:
• Only a crude upper bound on the running time of the `user' (the distinguisher)
• The generator has less computational power than the distinguisher

Derandomization:
• When derandomizing an algorithm you have a much better idea about the resources, in particular the running time
• The generator may have more computational power, possibly from a higher complexity class

Note the switch in the order of quantifiers.

Page 20

Ideas for getting better pseudo-random generators for derandomization

• The generator need not be so efficient
• For derandomizing a parallel algorithm, the generator may be more sequential
  – Example: to derandomize AC0 circuits, the generator can compute parities
• For a low-memory algorithm, the generator may use more space
• In particular, we can depart from the one-way function assumption (easy in one direction, hard in the other)
• The (in)distinguishing probability need not be so small
  – We are going to take a majority at the end

Page 21

Parameters of a complexity oriented pseudo-random generator

All functions of n:
• Seed length t
• Output length m
• Running time n^c
• Fooling circuits of size s
• Error ε

Any circuit family {C_n} of size s(n) that tries to distinguish outputs of the generator from random strings in {0,1}^m(n) has advantage at most ε(n).

Page 22

Hardness Assumption: Unapproximable Functions
Definition: E = ∪_k DTIME(2^(kn))

Definition: a family of functions f = {f_n}, f_n:{0,1}^n → {0,1}, is said to be s(n)-unapproximable if for every family of circuits {C_n} of size s(n):

Pr_x[C_n(x) = f_n(x)] ≤ ½ + 1/s(n)

Note that s(n) is both the circuit size and the bound on the advantage; this is an average-case hardness notion.

Example: if g is a one-way permutation and h is a hardcore function strong against s(n)-adversaries, then f(y) = h(g^(−1)(y)) is s(n)-unapproximable.

Page 23

One bit expansion
Assumption: f = {f_n} is
– s(n)-unapproximable, for s(n) = 2^Ω(n)
– in E

Claim: G = {G_n}, where G_n(y) = y ∘ f_{log n}(y), is a single-bit-expansion generator family.

Proof: suppose not; then there exists a predictor that computes f_{log n} with probability better than ½ + 1/s(log n) on a random input.

Parameters:
• seed length t = log n
• output length m = log n + 1
• fooling circuits of size s = n^δ
• running time n^c
• error ε = 1/n^δ < 1/m

Page 24

Getting Many Bits Simultaneously
Try outputting many evaluations of f on various parts of the seed: let b_i(y) be a projection of y (a subset of its bits) and consider

G(y) = f(b_1(y)) f(b_2(y)) … f(b_m(y))

• It seems that a predictor must evaluate f(b_i(y)) to predict the i-th bit

• But: the predictor might use correlations, without having to compute f

• Turns out: it is sufficient to decorrelate the b_i(y)'s in a pairwise manner

Notation: if |y| = t and S ⊆ {1…t}, we denote by y|_S the sequence of bits of y whose indices are in S.

Page 25

Nearly-Disjoint Subsets
Definition: a collection of subsets S_1, S_2, …, S_m ⊆ {1…t} is an (h, a)-design if:
– Fixed size: for all i, |S_i| = h
– Small intersection: for all i ≠ j, |S_i ∩ S_j| ≤ a

[Diagram: subsets S_1, S_2, S_3 of {1…t}, each of size h, with every pairwise intersection of size ≤ a. Parameters: (m, t, h, a).]

Page 26

Nearly-Disjoint Subsets

Lemma: for every ε > 0 and m < n, one can construct in poly(n) time a collection of subsets S_1, S_2, …, S_m ⊆ {1…t} which is an (h, a)-design with the following parameters:
• h = log n
• a = ε log n
• t = O(log n), where the constant in the big O depends on ε

The proof gives both existence and a sequential construction (method of conditional probabilities).

Page 27

Nearly-Disjoint Subsets
Proof: construct greedily; repeat m times:
– pick a random (h = log n)-subset of {1…t}
– set t = O(log n) so that:
  • the expected overlap with a fixed S_i is ½ ε log n
  • the probability that the overlap with S_i is larger than ε log n is at most 1/m
    – this can be arranged by picking a single element independently from each of t' buckets; for S_i, the event A_i is that the intersection exceeds its expectation by δt', and by a Chernoff bound

      Pr[A_i] ≤ e^(−2δ²t') < 2^(−log m) = 1/m

– Union bound: with positive probability, some h-subset has the desired small overlap with all the S_i picked so far
– find the good h-subset by exhaustive search
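A runnable sketch of this greedy construction (with small illustrative parameters, and random search standing in for the exhaustive search):

```python
import random

# Greedy construction of an (h, a)-design, as on this slide: repeatedly look
# for an h-subset of {0,...,t-1} that intersects every previously chosen set
# in at most a elements. The parameters are small illustrative values, not
# the asymptotic ones from the lemma.

def greedy_design(m, t, h, a, tries=100_000):
    sets = []
    for _ in range(m):
        for _ in range(tries):
            s = frozenset(random.sample(range(t), h))
            if all(len(s & prev) <= a for prev in sets):
                sets.append(s)            # good subset found, move on
                break
        else:
            raise RuntimeError("no good h-subset found; increase t")
    return sets

design = greedy_design(m=10, t=40, h=8, a=3)
print([sorted(s) for s in design])
```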

Page 28

Other constructions of designs

• Based on error correcting codes

• Simple construction: based on polynomials

Page 29

The NW Generator (Nisan-Wigderson)

Need:
• f ∈ E that is s(n)-unapproximable, for s(n) = 2^(δn)
• A collection S_1, …, S_m ⊆ {1…t} which is an (h, a)-design with h = log n, a = δ log n/3 and t = O(log n)

G_n(y) = f_{log n}(y|_S_1) f_{log n}(y|_S_2) … f_{log n}(y|_S_m)

[Diagram: the seed y as a bit string; one subset S_i of its positions is selected and fed to f_{log n}.]
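A sketch of the construction in code, with a toy stand-in for the hard function and a hand-made small design (assumed illustrative parameters):

```python
import random

# Sketch of the NW generator G(y) = f(y|S1) f(y|S2) ... f(y|Sm): each output
# bit applies the hard function f to the seed positions named by one design
# set. toy_f (parity) is a stand-in for an unapproximable function, and the
# hand-made (h=4, a=1)-design below replaces the greedy construction.

def restrict(y_bits, s):
    """y|S: the seed bits indexed by S, packed into an integer."""
    out = 0
    for i in sorted(s):
        out = (out << 1) | y_bits[i]
    return out

def nw_generator(y_bits, design, f):
    return [f(restrict(y_bits, s)) for s in design]

toy_f = lambda z: bin(z).count("1") % 2
design = [{0, 1, 2, 3}, {0, 4, 5, 6}, {1, 4, 7, 8}, {2, 5, 7, 9}]
y = [random.randrange(2) for _ in range(10)]   # t = 10 truly random seed bits
print(nw_generator(y, design, toy_f))          # m = 4 output bits
```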

Page 30

The NW generator

Theorem: G = {G_n} is a complexity oriented pseudo-random generator with:
– seed length t = O(log n)
– output length m = n^(δ/3)
– running time n^c for some constant c
– fooling circuits of size s = m
– error ε = 1/m

Page 31

The NW generator

• Proof:
– assume G = {G_n} does not ε-pass a statistical test C = {C_m} of size s:

|Pr_x[C(x) = 1] − Pr_y[C(G_n(y)) = 1]| > ε

– we can transform this distinguisher into a predictor A of size s' = s + O(m):

Pr_y[A(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m

• just as for the next-bit test, using a hybrid argument

Page 32

Proof of the NW generator

Pr_y[A(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m

– fix the bits outside of S_i to an assignment α, preserving the advantage:

Pr_{y'}[A(G_n(y')_{1…i−1}) = G_n(y')_i] > ½ + ε/m

where α is the assignment to {1…t}\S_i maximizing the advantage of A, and y' ranges over the bits in S_i.


Page 33

Proof of the NW generator

– G_n(y')_i is exactly f_{log n}(y')
– for j ≠ i, as y' varies, y'|_S_j varies over only 2^a values!
  • this follows from the small intersection property
  • so to compute G_n(y')_j we need only a lookup table of size 2^a

Hard-wire (up to) m−1 tables of 2^a values each to provide all of G_n(y')_{1…i−1}.


Page 34

The Circuit for Computing f
There is a small circuit for approximating f, built from A:

[Diagram: hardwired lookup tables for output bits 1, 2, 3, …, i−1 feed the predictor A, which on input y' outputs a guess for f_{log n}(y').]

Properties of the circuit:
• size: s + O(m) + (m−1)·2^a < s(log n) = n^δ
• advantage: ε/m = 1/m² > 1/s(log n) = n^(−δ)

Page 35

Extending the result

Theorem: if E contains 2^Ω(n)-unapproximable functions, then BPP = P.

• The assumption is an average-case one
• Based on non-uniformity

Improvement:
Theorem: if E contains functions that require circuits of size 2^Ω(n) (in the worst case), then E contains 2^Ω(n)-unapproximable functions.

Corollary: if E requires exponential size circuits, then BPP = P.

Page 36

Extracting Randomness from Defective Sources

• Suppose that we have an imperfect source of randomness:
  – a physical source (biased, correlated)
  – a collection of events in a computer (/dev/rand, information leaks)
• Can we:
  – extract good random bits from the source?
  – use the source for various tasks requiring randomness, e.g. probabilistic algorithms?

Page 37

Imperfect Sources
• Biased coins: X_1, X_2, …, X_n, each bit independently chosen so that Pr[X_i = 1] = p

How do we get unbiased coins?

Von Neumann's procedure: flip the coin twice.
• If it comes up ‘0’ followed by ‘1’, call the outcome ‘0’.
• If it comes up ‘1’ followed by ‘0’, call the outcome ‘1’.
• Otherwise (two ‘0’s or two ‘1’s occurred), repeat the process.

Claim: the procedure generates an unbiased result, no matter how the coin was biased; it works for all p simultaneously.

Two questions:
• Can we get a better rate of generating bits?
• What about more involved or insidious models?
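The procedure fits in a few lines of code (a sketch; the 0.9-biased coin is an arbitrary illustrative choice):

```python
import random

# Von Neumann's procedure: flip the biased coin in pairs; "01" -> 0,
# "10" -> 1, "00"/"11" -> flip again. Both emitted outcomes have probability
# p*(1-p), so each output bit is unbiased for every p.

def von_neumann(biased_coin):
    while True:
        a, b = biased_coin(), biased_coin()
        if a != b:
            return a          # a = 0 on "01", a = 1 on "10"

coin = lambda: 1 if random.random() < 0.9 else 0   # p = 0.9, heavily biased
sample = [von_neumann(coin) for _ in range(10_000)]
print(sum(sample) / len(sample))                   # close to 0.5
```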

Page 38

Shannon Entropy

Let X be a random variable over alphabet Γ with distribution P. The Shannon entropy of X is

H(X) = − ∑_x P(x) log P(x)

where we take 0 log 0 to be 0.

Interpretation: H(X) represents how much we can compress X under the best encoding.
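In code (a small sketch; logs are base 2, so entropy is measured in bits):

```python
import math

# Shannon entropy H(X) = -sum_x P(x) * log2(P(x)), with 0*log(0) taken as 0.

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0
print(shannon_entropy([1.0]))        # constant random variable: 0.0
print(shannon_entropy([1/8] * 8))    # uniform on 8 values: 3.0
```

These values match the examples on the next slide.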

Page 39

Examples
• If X = 0 (constant), then H(X) = 0
  – this is the only case: H(X) = 0 iff X is constant, and in all other cases H(X) > 0
• If X ∈ {0,1} with Prob[X=0] = p and Prob[X=1] = 1−p, then

H(X) = −p log p − (1−p) log (1−p) ≡ H(p)

• If X ∈ {0,1}^n is uniformly distributed, then

H(X) = − ∑_{x ∈ {0,1}^n} (1/2^n) log (1/2^n) = 2^n · (1/2^n) · n = n

Page 40

Properties of Entropy

• Entropy is bounded: H(X) ≤ log |Γ|
  – equality occurs only if X is uniform over Γ

• For any function f:{0,1}* → {0,1}*: H(f(X)) ≤ H(X)

• H(X) is an upper bound on the number of bits we can deterministically extract from X

Page 41

Does High Entropy Suffice for Extraction?

• If we have a source X on {0,1}^n where X has high entropy (say H(X) ≥ n/2), how many bits can we guarantee to extract?

• Consider:
  – Pr[X = 0^n] = 1/2
  – for any x ∈ 1{0,1}^(n−1): Pr[X = x] = 1/2^n

Then H(X) = n/2 + 1/2, but we cannot guarantee more than a single bit in the extraction.

Page 42

Another Notion: Min Entropy
Let X be a random variable over alphabet Γ with distribution P. The min entropy of X is

H_min(X) = − log max_x P(x)

The min entropy is determined by the most likely value of X.
• Min-entropy k implies: no string has weight more than 2^(−k)

Property: H_min(X) ≤ H(X). Why?

We would like to extract k bits from a min-entropy k source. This is possible, approximately, if:
• we know the source
• we have unlimited computational power
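A sketch computing both quantities on the skewed source from the earlier "high entropy" slide (n = 8 as an illustrative size):

```python
import math

# Min entropy H_min(X) = -log2(max_x P(x)). It never exceeds H(X), since
# -log2(max_x P(x)) <= -log2(P(x)) for every x, hence also for the average.

def min_entropy(probs):
    return -math.log2(max(probs))

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
probs = [0.5] + [2**-n] * 2**(n - 1)    # Pr[0^n] = 1/2, rest uniform
print(shannon_entropy(probs))            # n/2 + 1/2 = 4.5
print(min_entropy(probs))                # only 1.0: one extractable bit
```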

Page 43

The Semi-Random Model (Santha-Vazirani)

Definition: a source emitting a sequence of bits X_1, X_2, …, X_n is an SV source with bias δ if for all 1 ≤ i ≤ n and all b_1, b_2, …, b_i ∈ {0,1}^i:

½ − δ ≤ Pr[X_i = b_i | X_1 = b_1, X_2 = b_2, …, X_{i−1} = b_{i−1}] ≤ ½ + δ

So the next bit has bias at most δ; a clear generalization of a biased coin.

Motivation:
• physical measurements where there are correlations with the history
• distributed imperfect coin generation

An SV source has high min entropy: for any string b_1, b_2, …, b_n

Pr[X_1 = b_1, X_2 = b_2, …, X_n = b_n] ≤ (½ + δ)^n

Page 44

Impossibility of extracting a single bit from SV sources

• We would like a procedure similar to von Neumann's for SV sources:
  – a function f:{0,1}^n → {0,1} such that for any SV source with bias δ, the bit f(X_1, X_2, …, X_n) is more or less balanced.

Theorem: for all δ ∈ (0,1], all n, and all functions f:{0,1}^n → {0,1}: there is an SV source of bias δ such that f(X_1, X_2, …, X_n) has bias at least δ/2.

Page 45

Proof of impossibility of extraction: strong SV sources

Definition: a source emitting a sequence of bits X_1, X_2, …, X_n is a strong SV source with bias δ if for all 1 ≤ i ≤ n and all b_1, …, b_{i−1}, b_{i+1}, …, b_n ∈ {0,1}^(n−1), the bias of X_i given

X_1 = b_1, …, X_{i−1} = b_{i−1}, X_{i+1} = b_{i+1}, …, X_n = b_n

is at most δ.

This is a restriction: every strong SV source is also an SV source. Even knowing the future does not help you bias X_i too much.

Page 46

Proof of impossibility of extraction: δ-imbalanced sources

Definition: a source emitting a sequence of n bits X = X_1, X_2, …, X_n is δ-imbalanced if for all x, y ∈ {0,1}^n:

Pr[X = x] / Pr[X = y] ≤ (1+δ)/(1−δ)

Lemma: every δ-imbalanced source is a strong SV source with bias δ.
Proof: for all 1 ≤ i ≤ n and b_1, …, b_{i−1}, b_{i+1}, …, b_n ∈ {0,1}^(n−1), the δ-imbalanced property implies that

(1−δ)/(1+δ) ≤ Pr[X_1 = b_1, …, X_i = 0, …, X_n = b_n] / Pr[X_1 = b_1, …, X_i = 1, …, X_n = b_n] ≤ (1+δ)/(1−δ)

which implies that the bias of X_i is at most δ.

Page 47

Proof of impossibility of extraction

Lemma: for every function f:{0,1}^n → {0,1} there is a δ-imbalanced source such that f(X_1, X_2, …, X_n) has bias at least δ/2.

Proof: there exists a set S ⊆ {0,1}^n of size 2^(n−1) on which f is constant (let b be the majority value of f and take S ⊆ f^(−1)(b) of size 2^(n−1)). Consider the source X:
– with probability ½ + δ/2, pick a random element of S
– with probability ½ − δ/2, pick a random element of {0,1}^n \ S

Recall: a source is δ-imbalanced if for all x, y ∈ {0,1}^n: Pr[X = x] / Pr[X = y] ≤ (1+δ)/(1−δ).

Page 48

Extractors

• So if extraction from SV sources is impossible, should we simply give up?

• No: use randomness!
  – just make sure you use much less randomness than you get out

Page 49

Extractor

• Extractor: a universal procedure for purifying an imperfect source:
  – the function Ext(x, y) should be efficiently computable
  – a truly random seed serves as a “catalyst”
  – parameters: (n, k, m, t, ε)

[Diagram: a source string x ∈ {0,1}^n, drawn from a source with 2^k strings (min-entropy k), and a truly random t-bit seed y enter Ext; the output is a near-uniform m-bit string.]

Page 50

Extractor: Definition

A (k, ε)-extractor: for all random variables X with min-entropy k:
– the output fools all tests T:

|Pr_z[T(z) = 1] − Pr_{y ∈_R {0,1}^t, x ∼ X}[T(Ext(x, y)) = 1]| ≤ ε

– equivalently, the distributions Ext(X, U_t) and U_m are ε-close (L1 distance ≤ 2ε), where U_m is the uniform distribution on {0,1}^m

• Comparison to pseudo-random generators:
  – the output of a PRG should fool all efficient tests
  – the output of an extractor should fool all tests

Page 51

Extractors: Applications

• Using extractors:
  – use the output in place of randomness in any application
  – this alters the probability of any outcome by at most ε

• Main motivation:
  – use the output in place of randomness in an algorithm
  – but how do we get the truly random seed?
  – enumerate all seeds and take a majority

Page 52

Extractor as a Graph

[Diagram: a bipartite graph with left vertex set {0,1}^n, right vertex set {0,1}^m, and left degree 2^t; a subset of 2^k left vertices is highlighted.]

Want: every subset of size 2^k on the left should see almost all of the right-hand side, with roughly equal probability.

For each subset of size at least 2^k, the degrees on the right-hand side should be roughly the same.

Page 53

Extractors: desired parameters

• Goals:
                    good:          optimal:
  short seed        O(log n)       log n + O(1)
  long output       m = k^Ω(1)     m = k + t − O(1)
  many k's          k = n^Ω(1)     any k = k(n)

A seed of length O(log n) allows going over all seeds.


Page 54

Extractors

• A random construction for Ext achieves the optimal parameters!
  – but we need explicit constructions
    • otherwise we cannot derandomize BPP
  – an optimal explicit construction is still open

• Trevisan's extractor:
  – idea: any string defines a function
    • a string C of length ℓ over alphabet Σ defines a function f_C:{1…ℓ} → Σ by f_C(i) = C[i]
  – use the NW generator with the source string in place of the hard function

From complexity to combinatorics!

Page 55

Error-correcting codes

• Error Correcting Code (ECC): C:Σ^n → Σ^ℓ
• message m ∈ Σ^n, codeword C(m) ∈ Σ^ℓ
• received word R: C(m) with some positions corrupted
• if there are not too many errors, we want to decode: D(R) = m
• parameters of interest:
  – rate: n/ℓ
  – distance: d = min_{m ≠ m'} Δ(C(m), C(m'))

Page 56

Distance and error correction

An error correcting code C with minimum distance d can be uniquely decoded from up to d/2 errors.

[Diagram: codewords in Σ^ℓ with disjoint balls of radius d/2 around them.]

Page 57

Distance, error correction and list decoding

• Alternative to unique decoding: find a short list of messages (including the correct one)
  – the hope is to handle close to d errors, instead of the d/2 errors of unique decoding

Johnson Bound:
Theorem: a binary code with minimum distance (½ − δ²)ℓ has at most O(1/δ²) codewords in any ball of radius (½ − δ)ℓ.

Page 58

Trevisan's Extractor
• Tools:
  – An error-correcting code C:{0,1}^n → {0,1}^ℓ with
    • distance between codewords: (½ − ¼m^(−4))ℓ
      – important: in any ball of radius ½ − δ there are at most 1/δ² codewords, where δ = ½m^(−2)
    • block length ℓ = poly(n)
    • polynomial time encoding (decoding time does not matter)
  – An (h, a)-design S_1, S_2, …, S_m ⊆ {1…t} with
    • h = log ℓ
    • a = δ log n/3
    • t = O(log ℓ)
• Construction:

Ext(x, y) = C(x)[y|_S_1] C(x)[y|_S_2] … C(x)[y|_S_m]
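A sketch of the construction in code, with the Hadamard code as a stand-in for C (its block length 2^n is exponential, so it does not meet the parameters above; it only makes the indexing concrete), and an assumed small design:

```python
import random

# Sketch of Trevisan's extractor Ext(x, y) = C(x)[y|S1] ... C(x)[y|Sm]:
# encode the weak-source sample x with an error-correcting code, then let the
# seed bits selected by each design set index one codeword position. The
# Hadamard code C(x)[r] = <x, r> mod 2 is a stand-in; the real construction
# needs a code of polynomial block length.

def hadamard_bit(x, r):
    """Position r of the Hadamard encoding of x."""
    return bin(x & r).count("1") % 2

def restrict(y_bits, s):
    """y|S: the seed bits indexed by S, packed into an integer."""
    out = 0
    for i in sorted(s):
        out = (out << 1) | y_bits[i]
    return out

def trevisan_ext(x, y_bits, design):
    return [hadamard_bit(x, restrict(y_bits, s)) for s in design]

design = [{0, 1, 2, 3}, {0, 4, 5, 6}, {1, 4, 7, 8}, {2, 5, 7, 9}]  # (4,1)-design
x = 0b1011                                     # n = 4 source bits, ell = 16
y = [random.randrange(2) for _ in range(10)]   # t = 10 truly random seed bits
print(trevisan_ext(x, y, design))              # m = 4 output bits
```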

Page 59

Trevisan's Extractor

Ext(x, y) = C(x)[y|_S_1] C(x)[y|_S_2] … C(x)[y|_S_m]

Theorem: Ext is an extractor for min-entropy k = n^δ, with
– output length m = k^(1/3)
– seed length t = O(log ℓ) = O(log n)
– error ε ≤ 1/m

[Diagram: the codeword C(x) as a bit string; the seed y selects positions via the design.]

Page 60

Proof of Trevisan's Extractor
Assume X ⊆ {0,1}^n is a min-entropy k random variable failing to ε-pass a statistical test T:

|Pr_z[T(z) = 1] − Pr_{x ∼ X, y ∈ {0,1}^t}[T(Ext(x, y)) = 1]| > ε

By the usual hybrid argument, there is a predictor A and an index 1 ≤ i ≤ m such that:

Pr_{x ∼ X, y ∈ {0,1}^t}[A(Ext(x, y)_{1…i−1}) = Ext(x, y)_i] > ½ + ε/m

Page 61

The set on which A predicts well
Consider the set B of x's such that

Pr_{y ∈ {0,1}^t}[A(Ext(x, y)_{1…i−1}) = Ext(x, y)_i] > ½ + ε/2m

By averaging, Pr_x[x ∈ B] ≥ ε/2m.

Since X has min-entropy k, there must be at least (ε/2m)·2^k different x in B.

The contradiction will come from exhibiting a succinct encoding of every x ∈ B.

Page 62

…Proof of Trevisan's Extractor (i, A and B are now fixed)

If we fix the bits outside of S_i to an assignment α and let y' vary over all possible assignments to the bits in S_i, then Ext(x, y)_i = C(x)[y'|_S_i] = C(x)[y'], and y' ranges over all the bit positions of C(x).

For every x ∈ B we get a short description of a string z close to C(x):
– fix the bits outside of S_i to α, preserving the advantage:

Pr_{y'}[A(Ext(x, y')_{1…i−1}) = C(x)[y']] > ½ + ε/(2m)

where α is the assignment to {1…t}\S_i maximizing the advantage of A
– for j ≠ i, as y' varies, y'|_S_j varies over only 2^a values!
– so (i−1) lookup tables of 2^a values each suffice to supply Ext(x, y')_{1…i−1}

Page 63

Trevisan's Extractor: a short description of a string z agreeing with C(x)

[Diagram: the predictor A, with the lookup tables hardwired, on input y' ∈ {0,1}^(log ℓ) outputs C(x)[y'] with probability ½ + ε/(2m) over y'.]

Page 64

…Proof of Trevisan's Extractor
Up to (m−1) tables of size 2^a describe a string z that agrees with C(x) on a ½ + ε/(2m) fraction of positions.

• Number of codewords of C agreeing with z on ½ + ε/(2m) of the places: since C has minimum distance (½ − ¼m^(−4))ℓ, the Johnson bound (a binary code with distance (½ − δ²)ℓ has at most O(1/δ²) codewords in any ball of radius (½ − δ)ℓ) gives O(1/δ²) = O(m⁴). So for a given z there are at most O(m⁴) corresponding x's.
• Number of strings z with such a description: 2^((m−1)·2^a) = 2^(n^(2δ/3)) = 2^(k^(2/3))
• So the total number of x ∈ B is at most O(m⁴) · 2^(k^(2/3)) << 2^k · (ε/2m), a contradiction.

Page 65

Conclusion

• Given a source of n random bits with min-entropy k = n^Ω(1), it is possible to run any BPP algorithm using the source and obtain the correct answer with high probability.

This holds even though extracting even a single bit deterministically may be impossible!

Page 66

Application: strong error reduction

• L ∈ BPP if there is a p.p.t. TM M:
  x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 2/3
  x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 2/3

• Want:
  x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 1 − 2^(−k)
  x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 1 − 2^(−k)

• Already known: repeat O(k) times and take the majority
  – this uses n = O(k)·|y| random bits, of which 2^(n−k) strings can be bad

Page 67

Strong error reduction

Better: let Ext be an extractor for min-entropy k = |y|³ = n^δ with ε < 1/6:
– pick a random w ∈_R {0,1}^n
– run M(x, Ext(w, z)) for all z ∈ {0,1}^t and take the majority of the answers
– call w “bad” if maj_z M(x, Ext(w, z)) is incorrect, which requires

|Pr_z[M(x, Ext(w,z)) = b] − Pr_y[M(x,y) = b]| ≥ 1/6

– extractor property: at most 2^k of the w's are bad
– so we use n random bits, of which only 2^(n^δ) are bad strings
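A sketch of the recipe in code (M, Ext and the parameters below are placeholder stand-ins, only there to make the control flow concrete):

```python
import random
from itertools import product

# Strong error reduction: draw ONE n-bit string w, run M on Ext(w, z) for
# every t-bit seed z, and answer with the majority vote over all 2^t runs.
# dummy_M and dummy_ext are placeholders; a real instantiation would plug in
# a BPP machine and an explicit extractor such as Trevisan's.

def amplify(M, x, w, ext, t):
    votes = [M(x, ext(w, z)) for z in product((0, 1), repeat=t)]
    return 2 * sum(votes) > len(votes)      # majority over all seeds

dummy_M = lambda x, rbits: random.random() < 2/3  # "errs" with probability 1/3
dummy_ext = lambda w, z: list(z)                  # identity stand-in, NOT an extractor
print(amplify(dummy_M, "some input", 0b1010, dummy_ext, t=10))  # True w.h.p.
```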

Page 68

References

• The theory of cryptographic pseudo-randomness was developed by:
  – Blum and Micali (the next-bit test), 1982
  – Yao (computational indistinguishability), 1982
• The NW generator:
  – Nisan and Wigderson, Hardness vs. Randomness, JCSS, 1994
• Some of the slides on this topic follow Chris Umans' course, Lecture 9