How NP got a new definition:
Probabilistically Checkable Proofs (PCPs) & Approximation Properties of
NP-hard problems
SANJEEV ARORA
PRINCETON UNIVERSITY
Talk Overview
• Recap of NP-completeness and its philosophical importance.
• Definition of approximation.
• How to prove approximation is NP-hard (new definition of NP; the PCP Theorem).
• Survey of approximation algorithms.
A central theme in modern TCS: Computational Complexity
How much time (i.e., # of basic operations) is needed to solve an instance of the problem?
Example: Traveling Salesperson Problem on n cities
Number of all possible salesman tours = n! (> # of atoms in the universe for n = 49)
One key distinction: Polynomial time (n^3, n^7, etc.) versus
Exponential time (2^n, n!, etc.)
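To make the polynomial/exponential contrast concrete, a small Python sketch (the sample values of n are illustrative):

```python
import math

# Growth comparison from the slide: polynomial (n^3) stays manageable,
# while exponential (2^n) and factorial (n!) blow up quickly.
for n in (10, 20, 49):
    print(f"n={n:2d}  n^3={n**3:<7d}  2^n={2**n:<16d}  n!~{math.factorial(n):.2e}")
```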
Is there an inherent difference between
being creative / brilliant
and
being able to appreciate creativity / brilliance?
• Writing the Moonlight Sonata
• Proving Fermat’s Last Theorem
• Coming up with a low-cost salesman tour
• Appreciating/verifying any of the above
When formulated as “computational effort”, this is just the P vs NP Question.
P vs. NP
[Diagram: P contained in NP; NPC the “hardest” problems in NP]
NP: “YES” answer has a certificate of O(n^c) size, verifiable in O(n^{c'}) time.
P: Solvable in O(n^c) time.
NP-complete: Every NP problem is reducible to it in O(n^c) time. (“Hardest”)
e.g., 3SAT: Decide satisfiability of a boolean formula like (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x4)
Pragmatic Researcher
Practical Importance of P vs NP: 1000s of optimization problems are NP-complete/NP-hard (Traveling Salesman, CLIQUE, COLORING, Scheduling, etc.).
“Why the fuss? I am perfectly content with approximately optimal solutions.” (e.g., cost within 10% of optimum)
Bad News: NP-hard for many problems.
Good news: Possible for quite a few problems.
Approximation Algorithms
MAX-3SAT: Given a 3-CNF formula φ, find an assignment maximizing the number of satisfied clauses.
A ρ-approximation algorithm is one that, for every formula, produces in polynomial time an assignment that satisfies at least OPT/ρ clauses (ρ ≥ 1).
Good News: [KZ ’97] An 8/7-approximation algorithm exists.
Bad News: [Håstad ’97] If P ≠ NP then for every ε > 0, an (8/7 − ε)-approximation algorithm does not exist.
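The 7/8 behind these numbers is easy to see experimentally: a uniformly random assignment satisfies each 3-literal clause with probability 7/8. A minimal sketch (the toy formula and trial count are made up; the actual [KZ ’97] algorithm uses semidefinite programming):

```python
import random

def count_satisfied(clauses, assignment):
    """Clauses are tuples of nonzero ints: +i means x_i, -i means NOT x_i."""
    return sum(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

def random_assignment_max3sat(clauses, n, trials=200, seed=0):
    """A uniformly random assignment satisfies each 3-literal clause with
    probability 7/8, so it satisfies >= (7/8)m >= OPT/(8/7) clauses in
    expectation; keeping the best of several trials concentrates this."""
    rng = random.Random(seed)
    best, best_val = None, -1
    for _ in range(trials):
        a = {i: rng.random() < 0.5 for i in range(1, n + 1)}
        v = count_satisfied(clauses, a)
        if v > best_val:
            best, best_val = a, v
    return best, best_val

# Toy formula: (x1 v x2 v x3) & (~x1 v x2 v ~x3) & (x1 v ~x2 v x3)
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
a, v = random_assignment_max3sat(clauses, 3)
print(v, "of", len(clauses), "clauses satisfied")
```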
Observation (1960’s thru ~1990)
NP-hard problems differ with respect to approximability
[Johnson’74]: Provide explanation? Classification?
Last 15 years: Avalanche of Good and Bad news
Next few slides: How to rule out the existence of good approximation algorithms
(New definition of NP via PCP Theorems, and why it was needed)
Recall: “Reduction”
“If you give me a place to stand, I will move the earth.” – Archimedes (~ 250BC)
“If you give me a polynomial-time algorithm for 3SAT, I will give you a polynomial-time algorithm for every NP problem.” --- Cook, Levin (1971)
“Every instance of an NP problem can be disguised as an instance of 3SAT.”
[A., Safra] [A., Lund, Motwani, Sudan, Szegedy] 1992: computing a 1.01-approximation for MAX-3SAT is NP-hard.
Desired: a way to disguise instances of any NP problem as instances of MAX-3SAT s.t.
“Yes” instances turn into satisfiable formulae
“No” instances turn into formulae in which < 0.99 fraction of clauses can be simultaneously satisfied (“Gap”)
Cook-Levin reduction doesn’t produce instances where approximation is hard.
Transcript of computation
Transcript is “correct” if it satisfies all “local” constraints.
Main point: Express these as a boolean formula.
But there always exists a transcript that satisfies almost all local constraints! (No “Gap”)
New definition of NP….
Recall: Usual definition of NP
INPUT x (n bits)   CERTIFICATE π (n^c bits)
M
x is a “YES” input: there is a π s.t. M accepts (x, π)
x is a “NO” input: M rejects (x, π) for every π
NP = PCP(log n, 1) [AS ’92] [ALMSS ’92]; inspired by [BFL ’90], [BFLS ’91], [FGLSS ’91]
INPUT x (n bits)   CERTIFICATE π (n^c bits)
M reads a fixed number of bits of π (chosen in randomized fashion), using O(log n) random bits. (Only 3 bits! (Håstad ’97))
x is a “YES” input: there is a π s.t. Pr[M accepts (x, π)] = 1
x is a “NO” input: for every π, Pr[M rejects (x, π)] > 1/2
Many other “PCP Theorems” known now.
Disguising an NP problem as MAX-3SAT
INPUT x → M, using O(lg n) random bits
Note: 2^{O(lg n)} = n^{O(1)}.
⇒ M ≡ n^{O(1)} constraints, each on O(1) bits
x is a YES instance ⇒ all are satisfiable
x is a NO instance ⇒ ≤ ½ fraction satisfiable (“gap”)
Of related interest….
Do you need to read a math proof completely to check it?
Recall: Math can be axiomatized (e.g., Peano Arithmetic)
Proof = Formal sequence of derivations from axioms
Verification of math proofs
[Figure: M receives the Theorem and a claimed Proof (n bits); M runs in poly(n) time and reads only O(1) bits (spot-checking)]
PCP Theorem:
• Theorem correct ⇒ there is a proof that M accepts w. prob. 1
• Theorem incorrect ⇒ M rejects every claimed proof w. prob. ≥ 1/2
Known Inapproximability Results: the tree of reductions [AL ’96]
[Figure: reductions from MAX-3SAT to MAX-3SAT(3), CLIQUE, and LABEL COVER; LABEL COVER to SET COVER and COLORING; via [PY ’88], [LY ’93], [ABSS ’93], [FGLSS ’91, BS ’89], and others]
Hardness classes:
Class I (1 + ε): Metric TSP, Vertex Cover, MAX-CUT, Steiner, ...
Class II (O(lg n)): SET COVER, HITTING SET, DOMINATING SET, HYPERGRAPH-TRAVERSAL, ...
Class III (2^{(lg n)^{1−ε}}): LABEL COVER, NEAREST VECTOR, MIN-UNSATISFY, QUADRATIC-PROGRAMMING, LONGEST PATH, ...
Class IV (n^ε): CLIQUE, INDEPENDENT SET, COLORING, BICLIQUE COVER, FRACTIONAL COLORING, MAX-PLANAR SUBGRAPH, MAX-SET PACKING, MAX-SATISFY, ...
Proof of PCP Theorems (some ideas)
Need for “robust” representation
Verifier: O(lg n) random bits, reads 3 bits
1 0 0 1 0 1 1 1 0 1 0 0 0 1 0 1
Randomly corrupt 1% of the bits: a correct proof is still accepted with probability ≥ 0.97!
The original proof of the PCP Thm used polynomial representations, local “testing” algorithms for polynomials, etc. (~30-40 pages)
New Proof (Dinur’06); ~15-20 pages
Repeated applications of two operations on the clauses:
Globalize: Create new constraints using “walks” in the adjacency graph of the old constraints.
Domain reduction: Change constraints so variables take values in a smaller domain (e.g., {0,1}) (uses ideas from the old proof)
Unique game conjecture and why it is useful
Problem: Given a system of linear equations modulo p (p prime), 2 variables per equation:
7x2 + 2x4 = 6
5x1 + 3x5 = 2
7x5 + x2 = 21
UGC (Khot ’03): It is computationally intractable to distinguish between the cases:
• 0.99 fraction of the equations are simultaneously satisfiable
• no more than 0.001 fraction of the equations are simultaneously satisfiable.
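The quantity UGC talks about can be brute-forced on toy systems (a sketch; variable indices are taken 0-based here, and real instances are far too large for exhaustive search):

```python
from itertools import product

def max_satisfiable_fraction(equations, n, p):
    """Brute-force the best assignment for a system of 2-variable linear
    equations mod p. Each equation (i, a, j, b, c) means a*x_i + b*x_j = c
    (mod p). Exponential in n, so toy-sized only: UGC asserts that even
    telling 99%-satisfiable from 0.1%-satisfiable instances apart is
    computationally intractable."""
    best = 0
    for xs in product(range(p), repeat=n):
        sat = sum((a * xs[i] + b * xs[j]) % p == c % p
                  for i, a, j, b, c in equations)
        best = max(best, sat)
    return best / len(equations)

# The slide's three equations, mod 5, with x1..x5 mapped to x[0]..x[4]:
eqs = [(1, 7, 3, 2, 6), (0, 5, 4, 3, 2), (4, 7, 1, 1, 21)]
print(max_satisfiable_fraction(eqs, n=5, p=5))
```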
Implies hardness of approximating vertex cover, max-cut, etc.
[K ’04], [KR ’05], [KKMO ’05]
Applications of PCP Techniques: Tour d’Horizon
• Locally checkable / decodable codes
• List decoding of error-correcting codes [Sudan ’96, Guruswami-Sudan]
• Private Info Retrieval [Katz, Trevisan 2000]
• Zero Knowledge arguments / CS proofs [Kilian ’94], [Micali]
• Amplification of hardness / derandomization [Lipton ’88], [A., Sudan ’97], [Sudan, Trevisan, Vadhan]
• Constructions of Extractors [Safra, Ta-Shma, Zuckerman], [Shaltiel, Umans]
• Property testing [Goldreich, Goldwasser, Ron ’97]
Approximation algorithms: Some major ideas
• Relax, solve, and round: Represent the problem using a linear or semidefinite program, solve to get a fractional solution, and round to get an integer solution. (e.g., MAX-CUT, MAX-3SAT, SPARSEST CUT)
• Primal-dual: “Grow” a solution edge by edge; prove its near-optimality using LP duality. (Usually gives faster algorithms.) e.g., NETWORK DESIGN, SET COVER
How can you prove that the solution you found has cost at most 1.5 times (say) the optimum cost?
• Show existence of “easy to find” near-optimal solutions: e.g., Euclidean TSP and Steiner Tree
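As one concrete instance of these algorithmic ideas, here is the classic greedy heuristic for SET COVER (a sketch with a made-up instance; this is the greedy analysis rather than the full LP primal-dual machinery):

```python
def greedy_set_cover(universe, sets):
    """Greedy heuristic for SET COVER: repeatedly pick the set covering the
    most still-uncovered elements. Gives a ~ln(n) approximation ratio,
    which by PCP-based hardness results is essentially best possible
    unless P = NP."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 4, 6}]
print(greedy_set_cover(range(1, 7), sets))  # [0, 2]
```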
What is semidefinite programming?
Ans. A generalization of linear programming; variables are vectors instead of fractions. “Nonlinear optimization.”
[Groetschel, Lovasz, Schrijver ’81]; first used in approximation algorithms by [Goemans-Williamson’94]
Next few slides: The semidefinite programming approach
G = (V, E), n vertices → n vectors v1, v2, ..., vn in R^n whose distances d(vi, vj) satisfy some constraints.
Ex: 1.13 ratio for MAX-CUT, MAX-2SAT [GW ’93]
O(lg n) ratio for min-multicut, sparsest cut [LLR ’94, AR ’94]
n^{1/4}-coloring of 3-colorable graphs [KMS ’94]
(lg n)^{O(1)} ratio for min-bandwidth and related problems [F ’98, BKRV ’98]
8/7 ratio for MAX-3SAT [KZ ’97]
√(log n)-approximation for graph partitioning problems [ARV ’04]
Main Idea:
“Round”
How do you analyze these vector programs?
Ans. Geometric arguments; sometimes very complicated
Ratio 1.13.. for MAX-CUT [GW ’93]
G = (V, E). Find a cut (S, S^c) that maximizes the capacity |E(S, S^c)|.
Quadratic Programming Formulation
Semidefinite Relaxation [DP ’91, GW ’93]
Randomized Rounding [GW ’93]
[Figure: unit vectors v1, v2, ..., v6 on the sphere in R^n]
Form a cut by partitioning v1, v2, ..., vn around a random hyperplane.
[Figure: vectors vi, vj at angle θ_ij; SDPOPT = value of the semidefinite relaxation]
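The rounding step fits in a few lines of Python (a sketch; the input vectors here are the known optimal SDP solution for a 5-cycle, standing in for an SDP solver's output):

```python
import math
import random

def hyperplane_round(vectors, seed=0):
    """Goemans-Williamson rounding: pick a random hyperplane through the
    origin (via a Gaussian normal vector r) and label vertex i by the sign
    of <r, v_i>. The vectors would come from solving the MAX-CUT SDP."""
    rng = random.Random(seed)
    r = [rng.gauss(0, 1) for _ in vectors[0]]
    return [1 if sum(ri * vi for ri, vi in zip(r, v)) >= 0 else -1
            for v in vectors]

def cut_value(edges, labels):
    return sum(1 for i, j in edges if labels[i] != labels[j])

# Optimal SDP vectors for the 5-cycle: adjacent vectors are 144 degrees
# apart, so each edge is cut with probability 144/180 = 0.8.
vecs = [(math.cos(4 * math.pi * k / 5), math.sin(4 * math.pi * k / 5))
        for k in range(5)]
edges = [(k, (k + 1) % 5) for k in range(5)]
labels = hyperplane_round(vecs)
print(labels, cut_value(edges, labels))
```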
Old math rides to the rescue...
sparsest cut: edge expansion
Input: A graph G=(V,E).
For a cut (S, S^c), let E(S, S^c) denote the edges crossing the cut.
The sparsity of S (|S| ≤ |V|/2) is the value Φ(S) = |E(S, S^c)| / |S|.
The SPARSEST CUT problem is to find the cut which minimizes Φ(S).
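For intuition, the definition can be checked exhaustively on a toy graph (exponential-time, for illustration only):

```python
from itertools import combinations

def sparsest_cut(n, edges):
    """Exhaustively minimize |E(S, S^c)| / |S| over all S with |S| <= n/2.
    Exponential in n (toy graphs only), which is exactly why approximation
    algorithms for SPARSEST CUT matter."""
    best_ratio, best_S = float("inf"), None
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            Sset = set(S)
            crossing = sum(1 for u, v in edges if (u in Sset) != (v in Sset))
            if crossing / k < best_ratio:
                best_ratio, best_S = crossing / k, Sset
    return best_ratio, best_S

# Two triangles joined by a single bridge edge: the sparsest cut is the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
ratio, S = sparsest_cut(6, edges)
print(ratio, sorted(S))  # ~0.333, [0, 1, 2]
```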
SDPs are used to give a √(log n)-approximation; this involves proving a nontrivial fact about high-dimensional geometry [ARV ’04].
ARV structure theorem
Arora, Rao, and Vazirani (2004) showed how the SDP can be rounded to obtain an approximation to Sparsest Cut.
ARV structure theorem: If the points x_u ∈ R^n are well-spread, e.g. Σ_{u,v} (x_u − x_v)² ≥ 0.1 and |x_u|² ≤ 10 for u ∈ V, and d(u,v) = (x_u − x_v)² is a metric, then there exist two large, well-separated sets A, B ⊆ {x_1, x_2, ..., x_n} with |A|, |B| ≥ 0.1n and d(u,v) ≥ Ω(1/√(log n)) for all u ∈ A, v ∈ B.
After we have such A and B, it is easy to extend them to a good sparse cut in G.
Unexpected progress inother disciplines…
The ARV structure theorem led to a new understanding of the interrelationship between the l1 and l2 norms (resolved an open question in math): l1 distances among n points can be realized as l2 distances among some other set of n points, and the distortion incurred is only √(log n).
[A., Lee, Naor ’05], building upon [Chawla, Gupta, Räcke ’05]
Theory of Approximability? Desired Ingredients:
1. Definition of approximation-preserving reduction.
2. Use reductions and algorithms to show:
[Figure: all interesting problems partitioned by approximability threshold: approximable up to factor ρ1(n), factor ρ2(n), ...]
Partial Progress
Max-SNP: Problems similar to MAX-3SAT. [PY ’88]
RMAX(2): Problems similar to CLIQUE. [PR ’90]
F+2(1): Problems similar to SET COVER. [KT ’91]
MAX-ONES CSP, MIN-CSP, etc. [KST ’97], [KSM ’96]
Further Directions
1. Investigate alternatives to approximation:
• Average case analysis
• Slightly subexponential algorithms (e.g., a 2^{o(n)} algorithm for CLIQUE??)
2. Resolve the approximability of graph partitioning problems (BISECTION, SPARSEST CUT: √(log n) vs log log n??) and Graph Coloring.
3. Complete the classification of problems w.r.t. approximability.
4. Simplify proofs of PCP Thms even further.
5. Resolve the “unique games” conjecture.
6. Fast solutions to SDPs? Limitations of SDPs?
Attributions
Definition of PCP: [Fortnow, Rompel, Sipser ’88], [Feige, Goldwasser, Lovász, Safra, Szegedy ’91], [Arora, Safra ’92]
Polynomial Encoding Method: [Lund, Fortnow, Karloff, Nisan ’90], [Shamir ’90], [Babai, Fortnow ’90], [Babai, Fortnow, Levin, Szegedy ’91]
Verifier Composition: [Arora, Safra ’92]
PCP → Hardness of Approx.: [FGLSS ’91], [ALMSS ’92]
Fourier Transform Technique: [Håstad ’96, ’97]
Constraint Satisfaction Problems
Let F = a finite family of boolean constraints.
An instance of CSP(F): variables x1, x2, ..., xn; constraints g1, g2, ..., gm, each a function from F applied to some of the variables.
Dichotomy Thm [Schaefer ’78]: every problem in {CSP(F) : F is finite} is either in P or NP-complete; it is in P iff F is 0-valid, 1-valid, weakly positive or negative, affine, or 2CNF.
MAX-CSP [Creignou ’96], [Khanna, Sudan, Williamson ’97] (supersedes the MAX-SNP work): in P iff F is 0-valid, 1-valid, or 2-monotone; otherwise MAX-SNP-hard, i.e., a (1+ε) ratio is NP-hard.
MAX-ONES-CSP [KSW ’97]: each CSP(F) is classified into one of: P / (1+ε) ratio NP-hard / n^ε / feasibility NP-hard / feasibility undecidable.
MIN-ONES-CSP [KST ’97]: each CSP(F) is classified into one of: P / (1+ε) ratio NP-hard / NEAREST-CODEWORD-complete / MIN-HORN-DELETION-complete / feasibility NP-hard.
Geometric Embeddings of Graphs
G = (V, E), n vertices → n vectors v1, v2, ..., vn in R^n whose distances d(vi, vj) satisfy some constraints.
Ex: 1.13 ratio for MAX-CUT, MAX-2SAT [GW ’93]
O(lg n) ratio for min-multicut, sparsest cut [LLR ’94, AR ’94]
n^{1/4}-coloring of 3-colorable graphs [KMS ’94]
(lg n)^{O(1)} ratio for min-bandwidth and related problems [F ’98, BKRV ’98]
8/7 ratio for MAX-3SAT [KZ ’97]
√(log n)-approximation for graph partitioning problems [ARV ’04]
Example: Low Degree Test
F = GF(q), f : F^m → F.
Is f a degree-d polynomial?
Easy: f is a degree-d polynomial iff its restriction to every line is a univariate degree-d polynomial.
[Line ≡ 1-dimensional affine subspace ≡ q points.]
Does f agree with a degree-d polynomial in 90% of the points?
Theorem: Iff on ~90% of lines, f has ~90% agreement with a univariate degree-d polynomial.
Weaker results: [Babai, Fortnow, Lund ’90], [Rubinfeld, Sudan ’92], [Feige, Goldwasser, Lovász, Szegedy ’91]
Stronger results: [A., Sudan ’96]; [Raz, Safra ’96]
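A small Python sketch of the "restriction to a line" fact the test is built on (the specific polynomial, line, and field size are made up for illustration):

```python
def restrict_to_line(coeffs, a, b, q):
    """Values of the bivariate polynomial f(x, y) = sum coeffs[i][j] x^i y^j
    over GF(q) along the line t -> (a0 + t*b0, a1 + t*b1), for t = 0..q-1.
    If f has total degree <= d, this restriction is a univariate polynomial
    of degree <= d in t: the fact the low degree test exploits."""
    def f(x, y):
        return sum(c * pow(x, i, q) * pow(y, j, q)
                   for i, row in enumerate(coeffs)
                   for j, c in enumerate(row)) % q
    return [f((a[0] + t * b[0]) % q, (a[1] + t * b[1]) % q) for t in range(q)]

def fits_degree(values, d, q):
    """Check that q values are a univariate degree-<=d polynomial over GF(q)
    by Lagrange-interpolating through t = 0..d and testing all other t."""
    pts = range(d + 1)
    def interp(t):
        total = 0
        for i in pts:
            num = den = 1
            for j in pts:
                if j != i:
                    num = num * (t - j) % q
                    den = den * (i - j) % q
            total = (total + values[i] * num * pow(den, q - 2, q)) % q
        return total
    return all(interp(t) == v for t, v in enumerate(values))

# f(x, y) = 3x^2 + xy + 5 has total degree 2 over GF(7).
coeffs = [[5, 0, 0], [0, 1, 0], [3, 0, 0]]
vals = restrict_to_line(coeffs, a=(1, 2), b=(3, 4), q=7)
print(fits_degree(vals, d=2, q=7))  # True
```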
The results described in this paper indicate a possible classification of optimization problems as to the behavior of their approximation algorithms. Such a classification must remain tentative, at least until the existence of polynomial-time algorithms for finding optimal solutions has been proved or disproved. Are there indeed O(log n) coloring algorithms? Are there any clique finding algorithms better than O(n^ε) for all ε > 0? Where do other optimization problems fit into the scheme of things? What is it that makes algorithms for different problems behave the same way? Is there some stronger kind of reducibility than the simple polynomial reducibility that will explain these results, or are they due to some structural similarity between the problems as we define them? And what other types of behavior and ways of analyzing and measuring it are possible?
David Johnson, 1974
NP-hard Optimization Problems
MAX-3SAT: Given a 3-CNF formula φ, find an assignment maximizing the number of satisfied clauses.
MAX-LIN(3): Given a linear system over GF(2) (each equation involving 3 variables), find its largest feasible subsystem.
Approximation Algorithms
Defn: A ρ-approximation for MAX-LIN(3) is a polynomial-time algorithm that computes, for each system, a feasible subsystem of size ≥ OPT/ρ (ρ ≥ 1).
Easy Fact: A 2-approximation exists.
Theorem: If P ≠ NP, a (2 − ε)-approximation does not exist.
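The "Easy Fact" can be sketched directly: a uniformly random assignment satisfies each GF(2) equation with probability 1/2 (a toy instance; the equation encoding is an assumption of this sketch):

```python
import random

def two_approx_maxlin3(equations, n, trials=100, seed=1):
    """Easy 2-approximation for MAX-LIN(3) over GF(2): a uniformly random
    assignment satisfies each equation x_i + x_j + x_k = b (mod 2) with
    probability exactly 1/2, so the best of several random assignments
    satisfies >= m/2 >= OPT/2 equations (in expectation, and with high
    probability after enough trials). Equations are ((i, j, k), b)."""
    rng = random.Random(seed)
    best, best_val = None, -1
    for _ in range(trials):
        x = [rng.randrange(2) for _ in range(n)]
        val = sum((x[i] ^ x[j] ^ x[k]) == b for (i, j, k), b in equations)
        if val > best_val:
            best, best_val = x, val
    return best, best_val

eqs = [((0, 1, 2), 1), ((1, 2, 3), 0), ((0, 2, 3), 1), ((0, 1, 3), 0)]
x, v = two_approx_maxlin3(eqs, n=4)
print(v, "of", len(eqs), "equations satisfied")
```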
Common Approx. Ratios
Early History
1966 Graham’s algorithm for multiprocessor scheduling [approx. ratio = 2]
1971,72 NP-completeness
1974 Sahni and Gonzalez: Approximating TSP is NP-hard
1975 FPTAS for Knapsack [IK]
1976 Christofides heuristic for metric TSP
1977 Karp’s probabilistic analysis of Euclidean TSP
1980 PTAS for Bin Packing [FL; KK]
1980-82 PTAS’s for planar graph problems [LT, B]
Subsequent Developments
1988 MAX-SNP: MAX-3SAT is a complete problem [PY]
1990 IP=PSPACE, MIP=NEXPTIME
1991 First results on PCPs [BFLS, FGLSS]
1992 NP=PCP(log n,1) [AS,ALMSS]
1992-95 Better algorithms for scheduling, MAX-CUT [GW], MAX-3SAT, ...
1995-98 Tight lower bounds [Håstad ’97]; (1+ε)-approximation for Euclidean TSP, Steiner Tree, ...
1999-now Many new algorithms and hardness results.
2005 New simpler proof of NP=PCP(log n,1) (Dinur)
3SAT: Given a 3-CNF formula, like (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x4), decide if it has a satisfying assignment.
THEOREMS: Given a mathematical statement T and an integer n, decide if T has a proof of length ≤ n in Axiomatic Mathematics.
Philosophical meaning of P vs NP: Is there an inherent difference between
being creative / brilliant and being able to appreciate creativity / brilliance?
SOME NP-COMPLETE PROBLEMS
“Feasible” computations: those that run in polynomial (i.e., O(n^c)) time
(central tenet of theoretical computer science)
e.g., exponential (2^n) time is “infeasible”
NP = PCP(log n, 1) [A., Safra ’92] [A., Lund, Motwani, Sudan, Szegedy ’92]
INPUT x (n bits)   CERTIFICATE π (n^c bits)
M reads O(1) bits of π, chosen using O(lg n) random bits; Accept / Reject
x is a “YES” input: there is a π s.t. Pr[M accepts] = 1
x is a “NO” input: for every π, Pr[M accepts] ≤ 1/2
Håstad’s 3-bit PCP Theorem (1997): the verifier reads only 3 bits and computes their sum mod 2, with
x is a “YES” input: there is a π s.t. Pr[M accepts] > 1 − ε
x is a “NO” input: for every π, Pr[M accepts] < ½ + ε
(2 − ε)-approx. to MAX-LIN(3) ⇒ P = NP
INPUT x → M, using O(lg n) random bits
Note: 2^{O(lg n)} = n^{O(1)}.
⇒ M ≡ n^{O(1)} linear constraints
x is a YES instance ⇒ > 1 − ε fraction satisfiable
x is a NO instance ⇒ ≤ ½ + ε fraction satisfiable
Gap: 1 − ε vs. ½ + ε (a ratio approaching 2)
Polynomial Encoding: Idea 1 [LFKN ’90], [BFL ’90]
Sequence of bits / numbers: 2 4 5 7
Represent as an m-variate degree-d polynomial: 2x1x2 + 4x1(1−x2) + 5x2(1−x1) + 7(1−x1)(1−x2)
Evaluate it at all points of GF(q)^m.
Note: 2 different polynomials differ in ≥ (1 − d/q) fraction of the points.
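A sketch of this encoding in Python (the indexing convention for the 2^m table entries is an assumption of this sketch; the slide's example orders the monomials differently):

```python
from itertools import product

def multilinear_extension(values, q):
    """Encode a table of 2^m numbers as an m-variate multilinear polynomial
    over GF(q): p(x) = sum over cube points b of values[b] * prod_i
    (x_i if b_i = 1 else 1 - x_i). Returns a function evaluating p anywhere
    in GF(q)^m; evaluating it everywhere gives the redundant encoding."""
    m = len(values).bit_length() - 1
    assert len(values) == 2 ** m
    def p(point):
        total = 0
        for idx, val in enumerate(values):
            bits = [(idx >> (m - 1 - i)) & 1 for i in range(m)]
            term = val
            for xi, bi in zip(point, bits):
                term = term * (xi if bi else 1 - xi) % q
            total = (total + term) % q
        return total
    return p

p = multilinear_extension([2, 4, 5, 7], q=17)
# On the boolean cube the polynomial reproduces the data; its values on
# all of GF(17)^2 form the redundant encoding.
print([p(b) for b in product((0, 1), repeat=2)])  # [2, 4, 5, 7]
```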
2nd Idea: Many properties of polynomials are locally checkable.
Program Checking [Blum Kannan ‘88]
Program Testing / Correcting [Blum, Luby, Rubinfeld ‘90]
MIP = NEXPTIME [Babai, Fortnow, Lund ’90]: the 1st “PCP Theorem”
Dinur’s [’05] proof uses random walks on expander graphs instead of polynomials.
Håstad’s 3-bit Theorem (and the “Fourier method”)
Verifier V0 (NP = PCP(lg n, 1)): proof split into tables T1, T2; reads c bits of T1 and 1 bit of T2.
YES instances ⇒ ∃ T1, T2 s.t. Pr[Accept] = 1
NO instances ⇒ ∀ T1, T2, Pr[Accept] < 1 − ε
Verifier V1 (Raz’s Thm): tables S1, S2 of ck bits and k bits; Pr[Accept] = 1 vs. Pr[Accept] < 2^{−k/10}
Verifier V2 (Verifier Composition): tables of 2^{2^{ck}} bits and 2^{2^k} bits, via LONG CODING [BGS ’95]
Suppose Pr[Accept] > ½ + ε. Then (a few pages of Fourier analysis) ∃ S1, S2 which V1 accepts w/ prob ≥ 2^{−k/10} ⇒ x is a YES instance.
Sparsest Cut / Edge Expansion
G = (V, E)
Edge expansion: Φ(G) = min over S ⊆ V, |S| < |V|/2 of |E(S, S^c)| / |S|
c-balanced separator: Φ_c(G) = min over S ⊆ V, c|V| < |S| < |V|/2 of |E(S, S^c)| / |S|
Both NP-hard.
Quadratic programming formulation (c-balanced separator):
Assign {+1, −1} to v1, v2, ..., vn to minimize Σ_{(i,j) ∈ E} |vi − vj|²/4
subject to Σ_{i<j} |vi − vj|²/4 ≥ c(1−c)n²
(|vi − vj|²/4 = 1 when vi, vj get opposite signs and 0 when they agree: a “cut semimetric”)
Semidefinite relaxation for the c-balanced separator:
Find unit vectors in R^n with the same objective and constraint, adding the triangle inequality |vi − vj|² + |vj − vk|² ≥ |vi − vk|² ∀ i, j, k.