
Computational Complexity Theory A
From Wikipedia, the free encyclopedia


Contents

1 Aanderaa–Karp–Rosenberg conjecture
  1.1 Example
  1.2 Definitions
  1.3 Query complexity
  1.4 Deterministic query complexity
  1.5 Randomized query complexity
  1.6 Quantum query complexity
  1.7 Notes
  1.8 References
  1.9 Further reading

2 Advice (complexity)
  2.1 References

3 Analysis of algorithms
  3.1 Cost models
  3.2 Run-time analysis
    3.2.1 Shortcomings of empirical metrics
    3.2.2 Orders of growth
    3.2.3 Empirical orders of growth
    3.2.4 Evaluating run-time complexity
    3.2.5 Growth rate analysis of other resources
  3.3 Relevance
  3.4 Constant factors
  3.5 See also
  3.6 Notes
  3.7 References

4 Approximation algorithm
  4.1 Performance guarantees
  4.2 Algorithm design techniques
  4.3 Epsilon terms
  4.4 See also
  4.5 Citations
  4.6 References
  4.7 External links

5 Approximation-preserving reduction
  5.1 Definition
  5.2 Types of approximation-preserving reductions
    5.2.1 Strict reduction
    5.2.2 L-reduction
    5.2.3 PTAS-reduction
    5.2.4 A-reduction and P-reduction
    5.2.5 E-reduction
    5.2.6 AP-reduction
    5.2.7 Gap reduction
  5.3 See also
  5.4 References

6 Asymptotic computational complexity
  6.1 Scope
  6.2 Types of algorithms considered
  6.3 See also
  6.4 References

7 Averaging argument
  7.1 Example
  7.2 Formalized definition of averaging argument
  7.3 Application
  7.4 References

8 Complexity class
  8.1 Important complexity classes
  8.2 Reduction
  8.3 Closure properties of classes
  8.4 Relationships between complexity classes
    8.4.1 Hierarchy theorems
  8.5 See also
  8.6 References
  8.7 Further reading

9 Computational complexity theory
  9.1 Computational problems
    9.1.1 Problem instances
    9.1.2 Representing problem instances
    9.1.3 Decision problems as formal languages
    9.1.4 Function problems
    9.1.5 Measuring the size of an instance
  9.2 Machine models and complexity measures
    9.2.1 Turing machine
    9.2.2 Other machine models
    9.2.3 Complexity measures
    9.2.4 Best, worst and average case complexity
    9.2.5 Upper and lower bounds on the complexity of problems
  9.3 Complexity classes
    9.3.1 Defining complexity classes
    9.3.2 Important complexity classes
    9.3.3 Hierarchy theorems
    9.3.4 Reduction
  9.4 Important open problems
    9.4.1 P versus NP problem
    9.4.2 Problems in NP not known to be in P or NP-complete
    9.4.3 Separations between other complexity classes
  9.5 Intractability
  9.6 History
  9.7 See also
  9.8 References
    9.8.1 Textbooks
    9.8.2 Surveys
  9.9 External links

10 Configuration graph
  10.1 Definition
  10.2 Useful property
  10.3 Size of the graph
  10.4 Use of this object
  10.5 References

11 Padding argument
  11.1 Example
  11.2 References
  11.3 Text and image sources, contributors, and licenses
    11.3.1 Text
    11.3.2 Images
    11.3.3 Content license


Chapter 1

Aanderaa–Karp–Rosenberg conjecture

In theoretical computer science, the Aanderaa–Karp–Rosenberg conjecture (also known as the Aanderaa–Rosenberg conjecture or the evasiveness conjecture) is a group of related conjectures about the number of questions of the form “Is there an edge between vertex u and vertex v?" that have to be answered to determine whether or not an undirected graph has a particular property such as planarity or bipartiteness. They are named after Stål Aanderaa, Richard M. Karp, and Arnold L. Rosenberg. According to the conjecture, for a wide class of properties, no algorithm can guarantee that it will be able to skip any questions: any algorithm for determining whether the graph has the property, no matter how clever, might need to examine every pair of vertices before it can give its answer. A property satisfying this conjecture is called evasive.

More precisely, the Aanderaa–Rosenberg conjecture states that any deterministic algorithm must test at least a constant fraction of all possible pairs of vertices, in the worst case, to determine any non-trivial monotone graph property; in this context, a property is monotone if it remains true when edges are added (so planarity is not monotone, but non-planarity is monotone). A stronger version of this conjecture, called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture, states that exactly n(n − 1)/2 tests are needed. Versions of the problem for randomized algorithms and quantum algorithms have also been formulated and studied.

The deterministic Aanderaa–Rosenberg conjecture was proven by Rivest & Vuillemin (1975), but the stronger Aanderaa–Karp–Rosenberg conjecture remains unproven. Additionally, there is a large gap between the conjectured lower bound and the best proven lower bound for randomized and quantum query complexity.

1.1 Example

The property of being non-empty (that is, having at least one edge) is monotone, because adding another edge to a non-empty graph produces another non-empty graph. There is a simple algorithm for testing whether a graph is non-empty: loop through all of the pairs of vertices, testing whether each pair is connected by an edge. If an edge is ever found in this way, break out of the loop and report that the graph is non-empty; if the loop completes without finding any edges, then report that the graph is empty. On some graphs (for instance the complete graphs) this algorithm will terminate quickly, without testing every pair of vertices, but on the empty graph it tests all possible pairs before terminating. Therefore, the query complexity of this algorithm is n(n − 1)/2: in the worst case, the algorithm performs n(n − 1)/2 tests.

The algorithm described above is not the only possible method of testing for non-emptiness, but the Aanderaa–Karp–Rosenberg conjecture implies that every deterministic algorithm for testing non-emptiness has the same query complexity, n(n − 1)/2. That is, the property of being non-empty is evasive. For this property, the result is easy to prove directly: if an algorithm does not perform n(n − 1)/2 tests, it cannot distinguish the empty graph from a graph that has one edge connecting one of the untested pairs of vertices, and must give an incorrect answer on one of these two graphs.
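The pair-by-pair test described above is short enough to write out. The following sketch is only an illustration (not part of the source text); the adjacency oracle has_edge is an assumed stand-in for a single edge query, and the counter shows that the empty graph forces all n(n − 1)/2 queries.

from itertools import combinations

def is_non_empty(n, has_edge):
    """Test graph non-emptiness by querying vertex pairs one at a time.

    has_edge(u, v) answers a single adjacency query; queries counts them.
    """
    queries = 0
    for u, v in combinations(range(n), 2):
        queries += 1
        if has_edge(u, v):
            return True, queries   # stop at the first edge found
    return False, queries          # all n(n-1)/2 pairs were tested

# On the empty graph the algorithm is forced to ask every pair:
print(is_non_empty(5, lambda u, v: False))   # (False, 10), and 10 = 5*4/2
# On a complete graph it can stop after the very first query:
print(is_non_empty(5, lambda u, v: True))    # (True, 1)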


1.2 Definitions

In the context of this article, all graphs will be simple and undirected, unless stated otherwise. This means that the edges of the graph form a set (and not a multiset) and each edge is a pair of distinct vertices. Graphs are assumed to have an implicit representation in which each vertex has a unique identifier or label and in which it is possible to test the adjacency of any two vertices, but for which adjacency testing is the only allowed primitive operation.

Informally, a graph property is a property of a graph that is independent of labeling. More formally, a graph property is a mapping from the set of all graphs to {0, 1} such that isomorphic graphs are mapped to the same value. For example, the property of containing at least 1 vertex of degree 2 is a graph property, but the property that the first vertex has degree 2 is not, because it depends on the labeling of the graph (in particular, it depends on which vertex is the “first” vertex). A graph property is called non-trivial if it doesn't assign the same value to all graphs. For instance, the property of being a graph is a trivial property, since all graphs possess this property. On the other hand, the property of being empty is non-trivial, because the empty graph possesses this property, but non-empty graphs do not. A graph property is said to be monotone if the addition of edges does not destroy the property. Alternately, if a graph possesses a monotone property, then every supergraph of this graph on the same vertex set also possesses it. For instance, the property of being nonplanar is monotone: a supergraph of a nonplanar graph is itself nonplanar. However, the property of being regular is not monotone.

The big O notation is often used for query complexity. In short, f(n) is O(g(n)) if, for large enough n, f(n) ≤ c·g(n) for some positive constant c. Similarly, f(n) is Ω(g(n)) if, for large enough n, f(n) ≥ c·g(n) for some positive constant c. Finally, f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)).

1.3 Query complexity

The deterministic query complexity of evaluating a function on n bits (x1, x2, ..., xn) is the number of bits xᵢ that have to be read in the worst case by a deterministic algorithm to determine the value of the function. For instance, if the function takes value 0 when all bits are 0 and takes value 1 otherwise (this is the OR function), then the deterministic query complexity is exactly n. In the worst case, the first n − 1 bits read could all be 0, and the value of the function now depends on the last bit. If an algorithm doesn't read this bit, it might output an incorrect answer. (Such arguments are known as adversary arguments.) The number of bits read is also called the number of queries made to the input. One can imagine that the algorithm asks (or queries) the input for a particular bit and the input responds to this query.

The randomized query complexity of evaluating a function is defined similarly, except the algorithm is allowed to be randomized, i.e., it can flip coins and use the outcome of these coin flips to decide which bits to query. However, the randomized algorithm must still output the correct answer for all inputs: it is not allowed to make errors. Such algorithms are called Las Vegas algorithms, which distinguishes them from Monte Carlo algorithms, which are allowed to make some error. Randomized query complexity can also be defined in the Monte Carlo sense, but the Aanderaa–Karp–Rosenberg conjecture is about the Las Vegas query complexity of graph properties.

Quantum query complexity is the natural generalization of randomized query complexity, of course allowing quantum queries and responses. Quantum query complexity can also be defined with respect to Monte Carlo algorithms or Las Vegas algorithms, but it is usually taken to mean Monte Carlo quantum algorithms.

In the context of this conjecture, the function to be evaluated is the graph property, and the input is a string of size n(n − 1)/2, which gives the locations of the edges on an n-vertex graph, since a graph can have at most n(n − 1)/2 possible edges. The query complexity of any function is upper bounded by n(n − 1)/2, since the whole input is read after making n(n − 1)/2 queries, thus determining the input graph completely.
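The adversary argument for the OR function can itself be phrased as a small program. The sketch below is an illustration, not part of the source text: the adversary answers 0 to every query, so as long as at least one bit remains unread, both the all-zero input and an input with a 1 in an unread position stay consistent with the answers given.

def or_adversary(n):
    """Adversary for the n-bit OR function.

    Every queried bit is answered 0.  Until all n bits have been queried,
    the OR value is still undetermined, so any algorithm that stops early
    can be handed an input on which its answer is wrong.
    """
    answered = set()

    def answer(i):
        answered.add(i)
        return 0                      # always claim the queried bit is 0

    def value_is_determined():
        return len(answered) == n     # only then is the OR value forced

    return answer, value_is_determined

answer, determined = or_adversary(4)
for i in range(3):                    # query only 3 of the 4 bits
    answer(i)
print(determined())                   # False: bit 3 could still be 0 or 1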

1.4 Deterministic query complexity

For deterministic algorithms, Rosenberg (1973) originally conjectured that for all nontrivial graph properties on n vertices, deciding whether a graph possesses this property requires Ω(n²) queries. The non-triviality condition is clearly required because there are trivial properties like “is this a graph?" which can be answered with no queries at all.

The conjecture was disproved by Aanderaa, who exhibited a directed graph property (the property of containing a “sink”) which required only O(n) queries to test. A sink, in a directed graph, is a vertex of indegree n − 1 and outdegree 0. This property can be tested with less than 3n queries (Best, van Emde Boas & Lenstra 1974). An undirected graph property which can also be tested with O(n) queries is the property of being a scorpion graph, first described in Best, van Emde Boas & Lenstra (1974). A scorpion graph is a graph containing a three-vertex path, such that one endpoint of the path is connected to all remaining vertices, while the other two path vertices have no incident edges other than the ones in the path.

Then Aanderaa and Rosenberg formulated a new conjecture (the Aanderaa–Rosenberg conjecture) which says that deciding whether a graph possesses a non-trivial monotone graph property requires Ω(n²) queries.[1] This conjecture was resolved by Rivest & Vuillemin (1975) by showing that at least n²/16 queries are needed to test for any nontrivial monotone graph property. The bound was further improved to n²/9 by Kleitman & Kwiatkowski (1980), then to n²/4 − o(n²) by Kahn, Saks & Sturtevant (1983), then to (8/25)n² − o(n²) by Korneffel & Triesch (2010), and then to n²/3 − o(n²) by Scheidweiler & Triesch (2013).

Richard Karp conjectured the stronger statement (which is now called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture) that “every nontrivial monotone graph property for graphs on n vertices is evasive.”[2]

A property is called evasive if determining whether a given graph has this property sometimes requires all n(n − 1)/2 queries.[3] This conjecture says that the best algorithm for testing any nontrivial monotone property must (in the worst case) query all possible edges. This conjecture is still open, although several special graph properties have been shown to be evasive for all n. The conjecture has been resolved for the case where n is a prime power by Kahn, Saks & Sturtevant (1983) using a topological approach. The conjecture has also been resolved for all non-trivial monotone properties on bipartite graphs by Yao (1988). Minor-closed properties have also been shown to be evasive for large n (Chakrabarti, Khot & Shi 2001).
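The O(n)-query sink test mentioned above, for the directed-graph property that disproved Rosenberg's original conjecture, can be written as a short elimination procedure. The sketch below is an illustration under stated assumptions, not text from the article; edge(u, v) is an assumed oracle answering "is there an edge from u to v?", and each query in the first loop rules out one of two candidate vertices.

def has_sink(n, edge):
    """Decide whether a directed graph on vertices 0..n-1 has a sink
    (indegree n-1, outdegree 0) using fewer than 3n adjacency queries."""
    # Elimination round: one query eliminates one of two candidates.
    candidate = 0
    for v in range(1, n):
        if edge(candidate, v):
            candidate = v   # candidate has an outgoing edge, so it is not a sink
        # else: v lacks an incoming edge from candidate, so v is not a sink
    # Verification round: check the surviving candidate explicitly.
    for v in range(n):
        if v == candidate:
            continue
        if edge(candidate, v) or not edge(v, candidate):
            return False
    return True

# Example: every vertex points to vertex 3, so vertex 3 is a sink.
print(has_sink(5, lambda u, v: v == 3 and u != 3))   # True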

1.5 Randomized query complexity

Richard Karp also conjectured that Ω(n²) queries are required for testing nontrivial monotone properties even if randomized algorithms are permitted. No nontrivial monotone property is known which requires less than n²/4 queries to test. A linear lower bound (i.e., Ω(n)) follows from a very general relationship between randomized and deterministic query complexities. The first superlinear lower bound for this problem was given by Yao (1991), who showed that Ω(n log^(1/12) n) queries are required. This was further improved by King (1988) to Ω(n^(5/4)), and then by Hajnal (1991) to Ω(n^(4/3)). This was subsequently improved to the current best known lower bound of Ω(n^(4/3) log^(1/3) n) by Chakrabarti & Khot (2001).

Some recent results give lower bounds which are determined by the critical probability p of the monotone graph property under consideration. The critical probability p is defined as the unique p such that a random graph G(n, p) possesses this property with probability equal to 1/2. A random graph G(n, p) is a graph on n vertices where each edge is chosen to be present with probability p, independently of all the other edges. Friedgut, Kahn & Wigderson (2002) showed that any monotone property with critical probability p requires

Ω(min(n / min(p, 1 − p), n² / log n))

queries. For the same problem, O'Donnell et al. (2005) showed a lower bound of Ω(n^(4/3) / p^(1/3)).

As in the deterministic case, there are many special properties for which an Ω(n²) lower bound is known. Moreover, better lower bounds are known for several classes of graph properties. For instance, for testing whether the graph has a subgraph isomorphic to any given graph (the so-called subgraph isomorphism problem), the best known lower bound is Ω(n^(3/2)), due to Gröger (1992).

1.6 Quantum query complexity

For bounded-error quantum query complexity, the best known lower bound is Ω(n^(2/3) log^(1/6) n), as observed by Andrew Yao.[4] It is obtained by combining the randomized lower bound with the quantum adversary method. The best possible lower bound one could hope to achieve is Ω(n), unlike the classical case, due to Grover’s algorithm, which gives an O(n) query algorithm for testing the monotone property of non-emptiness. Similar to the deterministic and randomized case, there are some properties which are known to have an Ω(n) lower bound, for example non-emptiness (which follows from the optimality of Grover’s algorithm) and the property of containing a triangle. More interestingly, there are some graph properties which are known to have an Ω(n^(3/2)) lower bound, and even some properties with an Ω(n²) lower bound. For example, the monotone property of nonplanarity requires Θ(n^(3/2)) queries (Ambainis et al. 2008) and the monotone property of containing more than half the possible number of edges (also called the majority function) requires Θ(n²) queries (Beals et al. 2001).


1.7 Notes

[1] Triesch (1996)

[2] Lutz (2001)

[3] Kozlov (2008, pp. 226–228)

[4] The result is unpublished, but mentioned in Magniez, Santha & Szegedy (2005).

1.8 References

• Ambainis, Andris; Iwama, Kazuo; Nakanishi, Masaki; Nishimura, Harumichi; Raymond, Rudy; Tani, Seiichiro; Yamashita, Shigeru (2008), “Quantum query complexity of Boolean functions with small on-sets”, Proceedings of the 19th International Symposium on Algorithms and Computation, Lecture Notes in Computer Science 5369, Gold Coast, Australia: Springer-Verlag, pp. 907–918, doi:10.1007/978-3-540-92182-0_79, ISBN 978-3-540-92181-3.

• Beals, Robert; Buhrman, Harry; Cleve, Richard; Mosca, Michele; de Wolf, Ronald (2001), “Quantum lower bounds by polynomials”, Journal of the ACM 48 (4): 778–797, doi:10.1145/502090.502097.

• Best, M.R.; van Emde Boas, P.; Lenstra, H.W. (1974), A sharpened version of the Aanderaa-Rosenberg conjecture, Report ZW 30/74, Mathematisch Centrum Amsterdam, hdl:1887/3792.

• Chakrabarti, Amit; Khot, Subhash (2001), “Improved Lower Bounds on the Randomized Complexity of Graph Properties”, Proceedings of the 28th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science 2076, Springer-Verlag, pp. 285–296, doi:10.1007/3-540-48224-5_24, ISBN 978-3-540-42287-7.

• Chakrabarti, Amit; Khot, Subhash; Shi, Yaoyun (2001), “Evasiveness of subgraph containment and related properties”, SIAM Journal on Computing 31 (3): 866–875, doi:10.1137/S0097539700382005.

• Friedgut, Ehud; Kahn, Jeff; Wigderson, Avi (2002), “Computing Graph Properties by Randomized Subcube Partitions”, Randomization and Approximation Techniques in Computer Science, Lecture Notes in Computer Science 2483, Springer-Verlag, p. 953, doi:10.1007/3-540-45726-7_9, ISBN 978-3-540-44147-2.

• Gröger, Hans Dietmar (1992), “On the randomized complexity of monotone graph properties” (PDF), Acta Cybernetica 10 (3): 119–127.

• Hajnal, Péter (1991), “An Ω(n^(4/3)) lower bound on the randomized complexity of graph properties”, Combinatorica 11 (2): 131–143, doi:10.1007/BF01206357.

• Kahn, Jeff; Saks, Michael; Sturtevant, Dean (1983), “A topological approach to evasiveness”, 24th Annual Symposium on Foundations of Computer Science (SFCS 1983), Los Alamitos, CA, USA: IEEE Computer Society, pp. 31–33, doi:10.1109/SFCS.1983.4, ISBN 0-8186-0508-1.

• King, Valerie (1988), “Lower bounds on the complexity of graph properties”, Proc. 20th ACM Symposium on Theory of Computing, Chicago, Illinois, United States, pp. 468–476, doi:10.1145/62212.62258, ISBN 0-89791-264-0.

• Kleitman, D.J.; Kwiatkowski, D.J. (1980), “Further results on the Aanderaa-Rosenberg conjecture”, Journal of Combinatorial Theory, Series B 28: 85–95, doi:10.1016/0095-8956(80)90057-X.

• Kozlov, Dmitry (2008), Combinatorial Algebraic Topology, Springer-Verlag, ISBN 978-3-540-73051-4.

• Lutz, Frank H. (2001), “Some results related to the evasiveness conjecture”, Journal of Combinatorial Theory, Series B 81 (1): 110–124, doi:10.1006/jctb.2000.2000.

• Korneffel, Torsten; Triesch, Eberhard (2010), “An asymptotic bound for the complexity of monotone graph properties”, Combinatorica (Springer-Verlag) 30 (6): 735–743, doi:10.1007/s00493-010-2485-3, ISSN 0209-9683.


• Magniez, Frédéric; Santha, Miklos; Szegedy, Mario (2005), “Quantum algorithms for the triangle problem”, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, Vancouver, British Columbia: Society for Industrial and Applied Mathematics, pp. 1109–1117, arXiv:quant-ph/0310134.

• O'Donnell, Ryan; Saks, Michael; Schramm, Oded; Servedio, Rocco A. (2005), “Every decision tree has an influential variable”, Proc. 46th IEEE Symposium on Foundations of Computer Science, pp. 31–39, doi:10.1109/SFCS.2005.34, ISBN 0-7695-2468-0.

• Rivest, Ronald L.; Vuillemin, Jean (1975), “A generalization and proof of the Aanderaa-Rosenberg conjecture”, Proc. 7th ACM Symposium on Theory of Computing, Albuquerque, New Mexico, United States, pp. 6–11, doi:10.1145/800116.803747.

• Rosenberg, Arnold L. (1973), “On the time required to recognize properties of graphs: a problem”, SIGACT News 5 (4): 15–16, doi:10.1145/1008299.1008302.

• Scheidweiler, Robert; Triesch, Eberhard (2013), “A Lower Bound for the Complexity of Monotone Graph Properties”, SIAM Journal on Discrete Mathematics 27 (1): 257–265, doi:10.1137/120888703.

• Triesch, Eberhard (1996), “On the recognition complexity of some graph properties”, Combinatorica 16 (2): 259–268, doi:10.1007/BF01844851.

• Yao, Andrew Chi-Chih (1988), “Monotone bipartite graph properties are evasive”, SIAM Journal on Computing 17 (3): 517–520, doi:10.1137/0217031.

• Yao, Andrew Chi-Chih (1991), “Lower bounds to randomized algorithms for graph properties”, Journal of Computer and System Sciences 42 (3): 267–287, doi:10.1016/0022-0000(91)90003-N.

1.9 Further reading

• Bollobás, Béla (2004), “Chapter VIII. Complexity and packing”, Extremal Graph Theory, New York: Dover Publications, pp. 401–437, ISBN 978-0-486-43596-1.

• László Lovász; Young, Neal E. (2002). “Lecture Notes on Evasiveness of Graph Properties”. arXiv:cs/0205031v1 [cs.CC].

• Chronaki, Catherine E (1990), A survey of Evasiveness: Lower Bounds on the Decision-Tree Complexity of Boolean Functions, CiteSeerX: 10.1.1.37.1041.

• Michael Saks. “Decision Trees: Problems and Results, Old and New” (PDF).


Chapter 2

Advice (complexity)

In computational complexity theory, an advice string is an extra input to a Turing machine which is allowed to depend on the length n of the input, but not on the input itself. A decision problem is in the complexity class P/f(n) if there is a polynomial time Turing machine M with the following property: for any n, there is an advice string A of length f(n) such that, for any input x of length n, the machine M correctly decides the problem on the input x, given x and A.

The most common complexity class involving advice is P/poly, where the advice length f(n) can be any polynomial in n. P/poly is equal to the class of decision problems such that, for every n, there exists a polynomial size Boolean circuit correctly deciding the problem on all inputs of length n. One direction of the equivalence is easy to see. If, for every n, there is a polynomial size Boolean circuit A(n) deciding the problem, we can use a Turing machine that interprets the advice string as a description of the circuit. Then, given the description of A(n) as the advice, the machine will correctly decide the problem on all inputs of length n. The other direction uses a simulation of a polynomial-time Turing machine by a polynomial-size circuit as in one proof of Cook’s Theorem. Simulating a Turing machine with advice is no more complicated than simulating an ordinary machine, since the advice string can be incorporated into the circuit.[1]
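To make the defining constraint concrete (the extra input may depend only on the input length), here is a minimal sketch, not from the source text. It shows a P/1-style decision procedure for a unary language: the advice table, one bit per length, is a hypothetical object that may encode information no algorithm could compute, which is how non-uniform classes end up containing undecidable problems, as noted below.

def decide_with_advice(x, advice_for_length):
    """Decide a unary language with one advice bit per input length.

    advice_for_length maps a length n to the bit saying whether the string
    1^n belongs to the language; the table is fixed once per length and is
    the same for every input of that length.
    """
    if set(x) - {"1"}:              # reject anything not of the form 1^n
        return False
    return bool(advice_for_length[len(x)])

# Hypothetical advice table for lengths 0..5; any 0/1 pattern is allowed,
# including one describing an undecidable unary language.
advice = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1, 5: 0}
print(decide_with_advice("111", advice))   # True: the bit for length 3 is 1
print(decide_with_advice("11", advice))    # False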

Because of this equivalence, P/poly is sometimes defined as the class of decision problems solvable by polynomial size Boolean circuits, or by polynomial-size non-uniform Boolean circuits.

P/poly contains both P and BPP (Adleman’s theorem). It also contains some undecidable problems, such as the unary version of every undecidable problem, including the halting problem. Because of that, it is not contained in DTIME(f(n)) or NTIME(f(n)) for any f.

Advice classes can be defined for other resource bounds instead of P. For example, taking a non-deterministic polynomial time Turing machine with an advice of length f(n) gives the complexity class NP/f(n). If we are allowed an advice of length 2^n, we can use it to encode whether each input of length n is contained in the language. Therefore, any Boolean function is computable with an advice of length 2^n, and advice of more than exponential length is not meaningful.

Similarly, the class L/poly can be defined as deterministic logspace with a polynomial amount of advice.

Known results include:

• The classes NL/poly and UL/poly are the same, i.e. nondeterministic logarithmic space computation with advice can be made unambiguous.[2] This may be proved using an isolation lemma.[3]

• It is known that coNEXP is contained in NEXP/poly.[4]

• If NP is contained in P/poly, then the polynomial time hierarchy collapses (Karp-Lipton theorem).

2.1 References

[1] Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge University Press, p. 113, ISBN 9780521424264, Zbl 1193.68112.


[2] Reinhardt, Klaus; Allender, Eric (2000). “Making nondeterminism unambiguous”. SIAM J. Comput. 29 (4): 1118–1131. doi:10.1137/S0097539798339041. Zbl 0947.68063.

[3] Hemaspaandra, Lane A.; Ogihara, Mitsunori (2002). The complexity theory companion. Texts in Theoretical Computer Science. An EATCS Series. Berlin: Springer-Verlag. ISBN 3-540-67419-5. Zbl 0993.68042.

[4] Lance Fortnow, A Little Theorem


Chapter 3

Analysis of algorithms

In computer science, the analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).

Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially “in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two “reasonable” implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.

Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g., Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log₂ n + 1 time units are needed to return an answer.

3.1 Cost models

Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual execution time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.

Two cost models are generally used:[1][2][3][4][5]

• the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved

• the logarithmic cost model, also called logarithmic-cost measurement (and variations thereof), assigns a cost to every machine operation proportional to the number of bits involved

The latter is more cumbersome to use, so it’s only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.


A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice, and therefore there are algorithms that are faster than what would naively be thought possible.[6]

3.2 Run-time analysis

Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: a program can take seconds, hours or even years to finish executing, depending on which algorithm it implements (see also performance analysis, which is the analysis of an algorithm’s run-time in practice).

3.2.1 Shortcomings of empirical metrics

Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.

Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:

Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:

Computer A, running the linear search program, exhibits a linear growth rate. The program’s run-time is directly proportional to its input size. Doubling the input size doubles the run time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it’s running an algorithm with a much slower growth rate.
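The benchmark tables from the original article are not reproduced here, but the same effect can be seen by counting worst-case comparisons directly. The sketch below is an illustration under stated assumptions (the 1000x speed ratio between the machines is invented for the example, not taken from the source): the faster machine running linear search wins only for small lists, and falls far behind as n grows.

import math

def linear_search_steps(n):
    return n                        # worst case: scan the whole list

def binary_search_steps(n):
    return int(math.log2(n)) + 1    # worst case: halve the range each step

# Assume Computer A executes each step 1000 times faster than Computer B.
for n in (1_000, 1_000_000, 1_000_000_000):
    time_a = linear_search_steps(n) * 1        # fast machine, linear search
    time_b = binary_search_steps(n) * 1000     # slow machine, binary search
    print(n, time_a, time_b)
# The fast machine wins only at the smallest size; as n grows, the slow
# machine running binary search pulls far ahead, because its algorithm's
# growth rate is so much smaller.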

3.2.2 Orders of growth

Main article: Big O notation

Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the running time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n²).

Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average case; for example, the worst-case scenario for quicksort is O(n²), but the average-case run-time is O(n log n).

3.2.3 Empirical orders of growth

Assuming the execution time follows a power rule, t ≈ k·n^a, the coefficient a can be found [7] by taking empirical measurements of run time t1, t2 at some problem-size points n1, n2, and calculating t2/t1 = (n2/n1)^a, so that a = log(t2/t1) / log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of execution time vs. problem size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line), but it could still serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:

It is clearly seen that the first algorithm exhibits a linear order of growth, indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.
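The power-rule estimate above is straightforward to compute in code. The sketch below is an illustration, not part of the article: it times a deliberately quadratic workload at two sizes and recovers an exponent close to 2 from the two measurements.

import math
import time

def measure(f, n):
    start = time.perf_counter()
    f(n)
    return time.perf_counter() - start

def empirical_order(f, n1, n2):
    """Estimate the local order of growth a, assuming t ≈ k * n**a."""
    t1 = measure(f, n1)
    t2 = measure(f, n2)
    return math.log(t2 / t1) / math.log(n2 / n1)

def quadratic(n):                   # a deliberately Θ(n²) workload
    s = 0
    for i in range(n):
        for j in range(n):
            s += i * j
    return s

print(empirical_order(quadratic, 2_000, 4_000))   # typically prints ≈ 2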

3.2.4 Evaluating run-time complexity

The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:

1  get a positive integer n from input
2  if n > 10
3      print “This might take a while...”
4  for i = 1 to n
5      for j = 1 to i
6          print i * j
7  print “Done!"

A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. The specific amount of time to carry out a given instruction will vary depending on which instruction is being executed and which computer is executing it, but on a conventional computer, this amount will be deterministic.[8] Say that the actions carried out in step 1 are considered to consume time T1, step 2 uses time T2, and so forth.

In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is:

T1 + T2 + T3 + T7.

The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times (note that an extra step is required to terminate the for loop, hence n + 1 and not n executions), which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of i: on each pass through the outer loop, j iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: the inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.

Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:

T6 + 2T6 + 3T6 + ··· + (n − 1)T6 + nT6

which can be factored[9] as

T6 [1 + 2 + 3 + ··· + (n − 1) + n] = T6 [½(n² + n)]

The total time required to run the outer loop test can be evaluated similarly:

2T5 + 3T5 + 4T5 + ··· + (n − 1)T5 + nT5 + (n + 1)T5
= T5 + 2T5 + 3T5 + 4T5 + ··· + (n − 1)T5 + nT5 + (n + 1)T5 − T5

which can be factored as

T5 [1 + 2 + 3 + ··· + (n − 1) + n + (n + 1)] − T5
= [½(n² + n)]T5 + (n + 1)T5 − T5
= [½(n² + n)]T5 + nT5
= [½(n² + 3n)]T5

Therefore the total running time for this algorithm is:

f(n) = T1 + T2 + T3 + T7 + (n + 1)T4 + [½(n² + n)]T6 + [½(n² + 3n)]T5


which reduces to

f(n) = [½(n² + n)]T6 + [½(n² + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7

As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n² is the highest-order term, so one can conclude that f(n) = O(n²). Formally this can be proven as follows:

Prove that [½(n² + n)]T6 + [½(n² + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn², for n ≥ n0.

[½(n² + n)]T6 + [½(n² + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7
≤ (n² + n)T6 + (n² + 3n)T5 + (n + 1)T4 + T1 + T2 + T3 + T7   (for n ≥ 0)

Let k be a constant greater than or equal to [T1..T7]. Then

T6(n² + n) + T5(n² + 3n) + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ k(n² + n) + k(n² + 3n) + kn + 5k
= 2kn² + 5kn + 5k ≤ 2kn² + 5kn² + 5kn²   (for n ≥ 1) = 12kn²

Therefore [½(n² + n)]T6 + [½(n² + 3n)]T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn² for n ≥ n0, with c = 12k and n0 = 1.

A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm’s running time breaks down as follows:[10]

4 + ∑_{i=1}^{n} i ≤ 4 + ∑_{i=1}^{n} n = 4 + n² ≤ 5n² (for n ≥ 1) = O(n²).
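The counting argument can also be checked empirically. The sketch below is an illustration, not part of the article: it translates the pseudocode into Python with an explicit step counter (like the simplified count in footnote [10], it neglects the terminating loop tests), and the counts roughly quadruple when n doubles, as expected for an O(n²) algorithm.

def run(n):
    """Python version of the pseudocode above, counting executed steps."""
    steps = 1                      # step 1: read n
    steps += 1                     # step 2: the test n > 10
    if n > 10:
        steps += 1                 # step 3: the warning message
    for i in range(1, n + 1):      # step 4: outer loop
        steps += 1
        for j in range(1, i + 1):  # steps 5 and 6: inner test and body
            steps += 2
    steps += 1                     # step 7: "Done!"
    return steps

for n in (100, 200, 400):
    print(n, run(n))               # counts roughly quadruple as n doubles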

3.2.5 Growth rate analysis of other resources

The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode, which manages and reallocates memory usage by a program based on the size of a file which that program manages:

while (file still open)
    let n = size of file
    for every 100,000 kilobytes of increase in file size
        double the amount of memory reserved

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.
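A minimal sketch of that growth pattern, as an illustration only (the base reservation size is an assumed value, not from the article): doubling the reservation for every 100,000 kilobytes of growth makes the reserved memory proportional to 2 raised to n/100,000, i.e. exponential in the file size.

def memory_reserved(file_size_kb, base_kb=1024):
    """Memory reserved after doubling once per 100,000 KB of file growth."""
    doublings = file_size_kb // 100_000
    return base_kb * (2 ** doublings)

for size in (100_000, 500_000, 1_000_000):
    print(size, memory_reserved(size))   # grows exponentially with file size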

3.3 Relevance

Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.

3.4 Constant factors

Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.

This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor, e.g., one may have K > k log log n so long as K/k > 6 and n < 2^(2^6) = 2^64.

For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity n log n), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity n²) for small data, as the simpler algorithm is faster on small data.

3.5 See also

• Amortized analysis

• Analysis of parallel algorithms

• Asymptotic computational complexity

• Best, worst and average case

• Big O notation

• Computational complexity theory

• Master theorem

• NP-Complete

• Numerical analysis

• Polynomial time

• Program optimization

• Profiling (computer programming)

• Scalability

• Smoothed analysis

• Termination analysis — the subproblem of checking whether a program will terminate at all

• Time complexity — includes table of orders of growth for common algorithms

3.6 Notes

[1] Alfred V. Aho; John E. Hopcroft; Jeffrey D. Ullman (1974). The design and analysis of computer algorithms. Addison-Wesley Pub. Co., section 1.3

[2] Juraj Hromkovič (2004). Theoretical computer science: introduction to Automata, computability, complexity, algorithmics, randomization, communication, and cryptography. Springer. pp. 177–178. ISBN 978-3-540-14015-3.

[3] Giorgio Ausiello (1999). Complexity and approximation: combinatorial optimization problems and their approximability properties. Springer. pp. 3–8. ISBN 978-3-540-65431-5.

[4] Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag,p. 20, ISBN 978-3-540-21045-0

[5] Robert Endre Tarjan (1983). Data structures and network algorithms. SIAM. pp. 3–7. ISBN 978-0-89871-187-5.

[6] Examples of the price of abstraction?, cstheory.stackexchange.com

[7] How To Avoid O-Abuse and Bribes, at the blog “Gödel’s Lost Letter and P=NP” by R. J. Lipton, professor of Computer Science at Georgia Tech, recounting an idea by Robert Sedgewick


[8] However, this is not the case with a quantum computer

[9] It can be proven by induction that 1 + 2 + 3 + ··· + (n − 1) + n = n(n + 1)/2

[10] This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it is trivial to prove that such omission does not affect the final result

3.7 References

• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to Algorithms. Chapter 1: Foundations (Second ed.). Cambridge, MA: MIT Press and McGraw-Hill. pp. 3–122. ISBN 0-262-03293-7.

• Sedgewick, Robert (1998). Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching (3rded.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-31452-6.

• Knuth, Donald. The Art of Computer Programming. Addison-Wesley.

• Greene, Daniel A.; Knuth, Donald E. (1982). Mathematics for the Analysis of Algorithms (Second ed.).Birkhäuser. ISBN 3-7643-3102-X.

• Goldreich, Oded (2010). Computational Complexity: A Conceptual Perspective. Cambridge University Press.ISBN 978-0-521-88473-0.


Chapter 4

Approximation algorithm

In computer science and operations research, approximation algorithms are algorithms used to find approximate solutions to optimization problems. Approximation algorithms are often associated with NP-hard problems; since it is unlikely that there can ever be efficient polynomial-time exact algorithms solving NP-hard problems, one settles for polynomial-time sub-optimal solutions. Unlike heuristics, which usually only find reasonably good solutions reasonably fast, one wants provable solution quality and provable run-time bounds. Ideally, the approximation is optimal up to a small constant factor (for instance within 5% of the optimal solution). Approximation algorithms are increasingly being used for problems where exact polynomial-time algorithms are known but are too expensive due to the input size. A typical example of an approximation algorithm is the one for vertex cover in graphs: find an uncovered edge and add both endpoints to the vertex cover, until none remain. It is clear that the resulting cover is at most twice as large as the optimal one. This is a constant-factor approximation algorithm with a factor of 2.

NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial-time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, such as the maximum clique problem.

NP-hard problems can often be expressed as integer programs (IP) and solved exactly in exponential time. Many approximation algorithms emerge from the linear programming relaxation of the integer program.

Not all approximation algorithms are suitable for all practical applications. They often use IP/LP/semidefinite solvers, complex data structures or sophisticated algorithmic techniques which lead to difficult implementation problems. Also, some approximation algorithms have impractical running times even though they are polynomial time, for example O(n^(2^156)).[1] Yet the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights. A classic example is the initial PTAS for Euclidean TSP due to Sanjeev Arora, which had prohibitive running time, yet within a year Arora refined the ideas into a linear-time algorithm. Such algorithms are also worthwhile in some applications where the running times and cost can be justified, e.g. computational biology, financial engineering, transportation planning, and inventory management. In such scenarios, they must compete with the corresponding direct IP formulations.

Another limitation of the approach is that it applies only to optimization problems and not to “pure” decision problems like satisfiability, although it is often possible to conceive optimization versions of such problems, such as the maximum satisfiability problem (Max SAT).

Inapproximability has been a fruitful area of research in computational complexity theory since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set. After Arora et al. proved the PCP theorem a year later, it has now been shown that Johnson’s 1974 approximation algorithms for Max SAT, Set Cover, Independent Set and Coloring all achieve the optimal approximation ratio, assuming P != NP.
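The vertex-cover heuristic described above is short enough to write out in full. The following sketch is an illustration, not taken from the article: it repeatedly picks an uncovered edge and adds both endpoints. The chosen edges are pairwise disjoint (a matching), and any cover must contain at least one endpoint of each of them, which is why the result is at most twice the optimal size.

def vertex_cover_2approx(edges):
    """Greedy 2-approximation for vertex cover.

    Take any edge not yet covered and add both of its endpoints.  The picked
    edges form a matching, so an optimal cover needs at least one vertex per
    picked edge; this cover uses two, hence the factor of 2.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example: a path 0-1-2-3-4; an optimal cover is {1, 3}, of size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(vertex_cover_2approx(edges))              # size 4 ≤ 2 * optimal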

4.1 Performance guarantees

For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a ρ-approximation algorithm A is defined to be an algorithm for which it has been proven that the value/cost, f(x), of the approximate solution A(x) to an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution.

OPT ≤ f(x) ≤ ρOPT, if ρ > 1;

ρOPT ≤ f(x) ≤ OPT, if ρ < 1.

The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c, if it has been proven for every instance x that

(OPT − c) ≤ f(x) ≤ (OPT + c).

Similarly, the performance guarantee, R(x,y), of a solution y to an instance x is defined as

R(x, y) = max(OPT/f(y), f(y)/OPT),

where f(y) is the value/cost of the solution y for the instance x. Clearly, the performance guarantee is greater than or equal to 1, and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm and has an approximation ratio of r(n). Likewise, a problem with an r(n)-approximation algorithm is said to be r(n)-approximable or have an approximation ratio of r(n).[2][3]

One may note that for minimization problems, the two different guarantees provide the same result and that for maximization problems, a relative performance guarantee of ρ is equivalent to a performance guarantee of r = ρ^(−1). In the literature, both definitions are common, but it is clear which definition is used since, for maximization problems, ρ ≤ 1 while r ≥ 1.

The absolute performance guarantee PA of some approximation algorithm A, where x refers to an instance of a problem, and where RA(x) is the performance guarantee of A on x (i.e. ρ for problem instance x), is:

PA = inf{ r ≥ 1 | RA(x) ≤ r, ∀x }.

That is to say that PA is the largest bound on the approximation ratio, r, that one sees over all possible instances of the problem. Likewise, the asymptotic performance ratio R∞A is:

R∞A = inf{ r ≥ 1 | ∃n ∈ Z⁺, RA(x) ≤ r, ∀x, |x| ≥ n }.

That is to say that it is the same as the absolute performance ratio, with a lower bound n on the size of problem instances. These two types of ratios are used because there exist algorithms where the difference between these two is significant.

4.2 Algorithm design techniques

By now there are several established techniques used to design approximation algorithms. These include the following.

1. Greedy algorithm

2. Local search

3. Enumeration and dynamic programming

4. Solving a convex programming relaxation to get a fractional solution, then converting this fractional solution into a feasible solution by some appropriate rounding (a sketch of this approach follows the list). The popular relaxations include the following.

(a) Linear programming relaxation

(b) Semidefinite programming relaxation

5. Embedding the problem in some simple metric and then solving the problem on the metric. This is also known as metric embedding.
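As an illustration of technique 4, here is a minimal sketch (ours, not from the article) of the classic LP-relaxation-and-rounding 2-approximation for vertex cover; it assumes SciPy's linprog is available, and the example graph is hypothetical.

from scipy.optimize import linprog

def vertex_cover_lp_rounding(n, edges):
    # LP relaxation: minimize sum_v x_v subject to x_u + x_v >= 1 for every
    # edge (u, v) and 0 <= x_v <= 1. Rounding every x_v >= 1/2 up to 1 gives
    # a feasible cover of size at most twice the fractional (hence integral) optimum.
    c = [1.0] * n
    A_ub, b_ub = [], []
    for u, v in edges:
        row = [0.0] * n
        row[u] = row[v] = -1.0          # x_u + x_v >= 1  <=>  -x_u - x_v <= -1
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * n, method="highs")
    return {v for v, x in enumerate(res.x) if x >= 0.5}

# On a triangle the fractional optimum is 1.5 (x_v = 0.5 everywhere); rounding
# gives a cover of size 3, within the guaranteed factor 2 of the integral optimum 2.
print(vertex_cover_lp_rounding(3, [(0, 1), (1, 2), (0, 2)]))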


4.3 Epsilon terms

In the literature, an approximation ratio for a maximization (minimization) problem of c − ϵ (min: c + ϵ) means that the algorithm has an approximation ratio of c ∓ ϵ for arbitrary ϵ > 0 but that the ratio has not (or cannot) be shown for ϵ = 0. An example of this is the optimal inapproximability — inexistence of approximation — ratio of 7/8 + ϵ for satisfiable MAX-3SAT instances due to Johan Håstad.[4] As mentioned previously, when c = 1, the problem is said to have a polynomial-time approximation scheme.

An ϵ-term may appear when an approximation algorithm introduces a multiplicative error and a constant error while the minimum optimum of instances of size n goes to infinity as n does. In this case, the approximation ratio is c ∓ k/OPT = c ∓ o(1) for some constants c and k. Given arbitrary ϵ > 0, one can choose a large enough N such that the term k/OPT < ϵ for every n ≥ N. For every fixed ϵ, instances of size n < N can be solved by brute force, thereby showing an approximation ratio — existence of approximation algorithms with a guarantee — of c ∓ ϵ for every ϵ > 0.
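The argument in the second paragraph can be written out compactly; the following derivation (ours) is for a maximization algorithm with the additive guarantee f(x) ≥ c · OPT − k:

\frac{f(x)}{\mathrm{OPT}} \;\ge\; c - \frac{k}{\mathrm{OPT}} \;=\; c - o(1),
\qquad\text{and}\qquad
\frac{k}{\mathrm{OPT}} < \epsilon \quad\text{whenever}\quad \mathrm{OPT} \ge k/\epsilon ,

so the ratio c − ϵ holds on all sufficiently large instances; the finitely many remaining sizes are handled by brute force, giving a (c − ϵ)-approximation for every ϵ > 0.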

4.4 See also

• Domination analysis considers guarantees in terms of the rank of the computed solution.

• PTAS - a type of approximation algorithm that takes the approximation ratio as a parameter

• APX is the class of problems with some constant-factor approximation algorithm

• Approximation-preserving reduction

4.5 Citations

[1] Zych, Anna; Bilò, Davide (2011). “New Reoptimization Techniques applied to Steiner Tree Problem”. Electronic Notes in Discrete Mathematics 37: 387–392. doi:10.1016/j.endm.2011.05.066. ISSN 1571-0653.

[2] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi (1999). Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties.

[3] Viggo Kann (1992). On the Approximability of NP-complete Optimization Problems (PDF).

[4] Johan Håstad (1999). “Some Optimal Inapproximability Results”. Journal of the ACM.

4.6 References

• Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. ISBN 3-540-65367-8.

• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 35: Approximation Algorithms, pp. 1022–1056.

• Dorit S. Hochbaum, ed. Approximation Algorithms for NP-Hard Problems, PWS Publishing Company, 1997. ISBN 0-534-94968-1. Chapter 9: Various Notions of Approximations: Good, Better, Best, and More.

• Williamson, David P.; Shmoys, David B. (April 26, 2011), The Design of Approximation Algorithms, Cambridge University Press, ISBN 978-0521195270.

4.7 External links

• Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, A compendium of NP optimization problems.


Chapter 5

Approximation-preserving reduction

In computability theory and computational complexity theory, especially the study of approximation algorithms, an approximation-preserving reduction is an algorithm for transforming one optimization problem into another problem, such that the distance of solutions from optimal is preserved to some degree. Approximation-preserving reductions are a subset of more general reductions in complexity theory; the difference is that approximation-preserving reductions usually make statements on approximation problems or optimization problems, as opposed to decision problems.

Intuitively, problem A is reducible to problem B via an approximation-preserving reduction if, given an instance of problem A and a (possibly approximate) solver for problem B, one can convert the instance of problem A into an instance of problem B, apply the solver for problem B, and recover a solution for problem A that also has some guarantee of approximation.
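Schematically, the reduce–solve–map-back pattern described above looks as follows (a sketch of ours; the functions f, g and the solver are placeholders, not part of the article):

def solve_A_via_B(instance_A, f, g, approximate_solver_for_B):
    # f maps an instance of problem A to an instance of problem B,
    # the (possibly approximate) solver for B is applied, and
    # g maps the returned solution of B back to a solution of A.
    instance_B = f(instance_A)
    solution_B = approximate_solver_for_B(instance_B)
    return g(solution_B)

Whether the recovered solution keeps a useful approximation guarantee depends on the particular kind of reduction, which is exactly what the definitions below pin down.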

5.1 Definition

Unlike reductions on decision problems, an approximation-preserving reduction must preserve more than the truth of the problem instances when reducing from one problem to another. It must also maintain some guarantee on the relationship between the cost of the solution and the cost of the optimum in both problems. To formalize:

Let A and B be optimization problems.

Let x be an instance of problem A, with optimal solution OPT(x). Let cA(x, y) denote the cost of a solution y to an instance x of problem A. This is also the metric used to determine which solution is considered optimal.

An approximation-preserving reduction is a pair of functions (f, g) (which often must be computable in polynomial time), such that:

• f maps an instance x of A to an instance x′ of B .

• g maps a solution y′ of B to a solution y of A .

• g preserves some guarantee of the solution’s performance, or approximation ratio, defined as RA(x, y) = max( cA(x, OPT(x)) / cA(x, y), cA(x, y) / cA(x, OPT(x)) ).

5.2 Types of approximation-preserving reductions

There are many different types of approximation-preserving reductions, all of which have a different guarantee (the third point in the definition above). However, unlike with other reductions, approximation-preserving reductions often overlap in what properties they demonstrate on optimization problems (e.g. complexity class membership or completeness, or inapproximability). The different types of reductions are used instead as varying reduction techniques, in that the applicable reduction which is most easily adapted to the problem is used.

Not all types of approximation-preserving reductions can be used to show membership in all approximability complexity classes, the most notable of which are PTAS and APX. A reduction below preserves membership in a complexity class C if, given a problem A that reduces to problem B via the reduction scheme, and B is in C, then A is in C as well. Some reductions shown below only preserve membership in APX or PTAS, but not the other. Because of this, careful choice must be made when selecting an approximation-preserving reduction, especially for the purpose of proving completeness of a problem within a complexity class.

Crescenzi suggests that the three most ideal styles of reduction, for both ease of use and proving power, are PTAS reduction, AP reduction, and L-reduction.[1] The reduction descriptions that follow are from Crescenzi’s survey of approximation-preserving reductions.

5.2.1 Strict reduction

Strict reduction is the simplest type of approximation-preserving reduction. In a strict reduction, the approximation ratio of a solution y' to an instance x' of a problem B must be at most as good as the approximation ratio of the corresponding solution y to instance x of problem A. In other words:

RA(x, y) ≤ RB(x′, y′) for x′ = f(x), y = g(y′) .

Strict reduction is the most straightforward: if a strict reduction from problem A to problem B exists, then problem A can always be approximated to at least as good a ratio as problem B. Strict reduction preserves membership in both PTAS and APX.

There exists a similar concept of an S-reduction, for which cA(x, y) = cB(x′, y′), and the optima of the two corresponding instances must have the same cost as well. S-reduction is a very special case of strict reduction, and is even more constraining. In effect, the two problems A and B must be in near perfect correspondence with each other. The existence of an S-reduction implies not only the existence of a strict reduction but every other reduction listed here.

5.2.2 L-reduction

Main article: L-reduction

L-reductions preserve membership in PTAS as well as APX (but only for minimization problems in the case of the latter). As a result, they cannot be used in general to prove completeness results about APX, Log-APX, or Poly-APX, but nevertheless they are valued for their natural formulation and ease of use in proofs.[1]

5.2.3 PTAS-reduction

Main article: PTAS reduction

PTAS-reduction is another commonly used reduction scheme. Though it preserves membership in PTAS, it does not do so for APX. Nevertheless, APX-completeness is defined in terms of PTAS reductions.

PTAS-reductions are a generalization of P-reductions, shown below, with the only difference being that the function g is allowed to depend on the approximation ratio r.

5.2.4 A-reduction and P-reduction

A-reduction and P-reduction are similar reduction schemes that can be used to show membership in APX and PTAS respectively. Both introduce a new function c, defined on numbers greater than 1, which must be computable.

In an A-reduction, we have that

RB(x′, y′) ≤ r → RA(x, y) ≤ c(r)

In a P-reduction, we have that


RB(x′, y′) ≤ c(r) → RA(x, y) ≤ r

The existence of a P-reduction implies the existence of a PTAS-reduction.

5.2.5 E-reduction

E-reduction, which is a generalization of strict reduction but implies both A-reduction and P-reduction, is an example of a less restrictive reduction style that preserves membership not only in PTAS and APX, but also the larger classes Log-APX and Poly-APX. E-reduction introduces two new parameters, a polynomial p and a constant β. Its definition is as follows.

In an E-reduction, we have that for some polynomial p and constant β,

• cB(OPTB(x′)) ≤ p(|x|)cA(OPTA(x)) , where |x| denotes the size of the problem instance’s description.

• For any solution y′ to B , we have RA(x, y) ≤ 1 + β · (RB(x′, y′)− 1) .

To obtain an A-reduction from an E-reduction, let c(r) = 1 + β · (r − 1), and to obtain a P-reduction from an E-reduction, let c(r) = 1 + (r − 1)/β.

5.2.6 AP-reduction

AP-reductions are used to define completeness in the classes Log-APX and Poly-APX. They are a special case of PTAS reduction, meeting the following restrictions.

In an AP-reduction, we have that for some constant α,

RB(x′, y′) ≤ r → RA(x, y) ≤ 1 + α · (r − 1)

with the additional generalization that the function g is allowed to depend on the approximation ratio r, as in PTAS-reduction.

AP-reduction is also a generalization of E-reduction. An additional restriction actually needs to be imposed for AP-reduction to preserve Log-APX and Poly-APX membership, as E-reduction does: for fixed problem size, the computation time of f, g must be non-increasing as the approximation ratio increases.

5.2.7 Gap reduction

Main article: Gap reduction

A gap reduction is a type of reduction that, while useful in proving some inapproximability results, does not resemble the other reductions shown here. Gap reductions deal with optimization problems within a decision problem container, generated by changing the problem goal to distinguishing between the optimal solution and solutions some multiplicative factor worse than the optimum.

5.3 See also

• Reduction (complexity)

• PTAS reduction

• L-reduction

• Approximation algorithm


5.4 References

[1] Crescenzi, Pierluigi (1997). “A Short Guide To Approximation Preserving Reductions”. Proceedings of the 12th Annual IEEE Conference on Computational Complexity (Washington, D.C.: IEEE Computer Society): 262–.


Chapter 6

Asymptotic computational complexity

In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation.

6.1 Scope

With respect to computational resources, asymptotic time complexity and asymptotic space complexity are commonly estimated. Other asymptotically estimated behaviors include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.

Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns[1] and the 1979 book by Michael Garey and David S. Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) has come to refer most commonly to asymptotic computational complexity.

Further, unless specified otherwise, the term “computational complexity” usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of the big O notation, e.g. O(n^3). Other types of (asymptotic) computational complexity estimates are lower bounds ("Big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "big Theta" notation; e.g., Θ(n log n)).

A further tacit assumption is that the worst case analysis of computational complexity is in question unless stated otherwise. An alternative approach is probabilistic analysis of algorithms.

6.2 Types of algorithms considered

In most practical cases deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.

6.3 See also

• Asymptotically optimal algorithm

6.4 References

[1] Hartmanis, J.; Stearns, R. E. (1965). “On the computational complexity of algorithms”. Transactions of the American Mathematical Society 117: 285–306. doi:10.1090/S0002-9947-1965-0170805-7.

[2] Michael Garey, and David S. Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman & Co., 1979.


Chapter 7

Averaging argument

In computational complexity theory and cryptography, the averaging argument is a standard argument for proving theorems. It usually allows us to convert probabilistic polynomial-time algorithms into non-uniform polynomial-size circuits.

7.1 Example

To simplify, let’s first consider an example.

Example: If every person likes at least 1/3 of the books in a library, then there exists a book that at least 1/3 of the people like.

Proof: Suppose there are N people and B books. Each person likes at least B/3 of the books. Let people leave a mark on every book they like. Then there will be at least M = (NB)/3 marks. The averaging argument claims that there exists a book with at least N/3 marks on it. Assume, towards a contradiction, that no such book exists. Then every book has fewer than N/3 marks. However, since there are B books, the total number of marks will be fewer than (NB)/3, contradicting the fact that there are at least M marks.
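The counting behind this example is easy to check mechanically; the small script below (ours, with arbitrary sizes) generates random “likes” meeting the hypothesis and verifies the conclusion.

import random

def check_popular_book(num_people=60, num_books=30, seed=0):
    # Every person likes exactly num_books // 3 books (so at least B/3);
    # then some book must be liked by at least num_people / 3 people.
    rng = random.Random(seed)
    marks = [0] * num_books
    for _ in range(num_people):
        for b in rng.sample(range(num_books), num_books // 3):
            marks[b] += 1
    return max(marks) >= num_people / 3

print(check_popular_book())   # True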

7.2 Formalized definition of averaging argument

Consider two sets X and Y, a proposition p : X × Y → {TRUE, FALSE}, and a fraction f (where 0 ≤ f ≤ 1). If for every x ∈ X the proposition p(x, y) holds for at least a fraction f of the y ∈ Y, then there exists a y ∈ Y for which p(x, y) holds for at least a fraction f of the x ∈ X.

Another formal (and more complicated) definition is due to Barak:[1]

Let f be some function. The averaging argument is the following claim: if we have a circuit C such that C(x, y) = f(x) with probability at least ρ, where x is chosen at random and y is chosen independently from some distribution Y over {0, 1}^m (which might not even be efficiently samplable), then there exists a single string y0 ∈ {0, 1}^m such that Prx[C(x, y0) = f(x)] ≥ ρ.

Indeed, for every y define py to be Prx[C(x, y) = f(x)]; then

Prx,y[C(x, y) = f(x)] = Ey[py]

and then this reduces to the claim that for every random variable Z, if E[Z] ≥ ρ then Pr[Z ≥ ρ] > 0 (this holds since E[Z] is the weighted average of Z and clearly if the average of some values is at least ρ then one of the values must be at least ρ).


7.3 Application

This argument has wide use in complexity theory (e.g. proving BPP ⊆ P/poly) and cryptography (e.g. proving that indistinguishable encryption results in semantic security). A plethora of such applications can be found in Goldreich's books.[2][3][4]

7.4 References

[1] Boaz Barak, “Note on the averaging and hybrid arguments and prediction vs. distinguishing.”, COS522, Princeton University, March 2006.

[2] Oded Goldreich, Foundations of Cryptography, Volume 1: Basic Tools, Cambridge University Press, 2001, ISBN 0-521-79172-3

[3] Oded Goldreich, Foundations of Cryptography, Volume 2: Basic Applications, Cambridge University Press, 2004, ISBN 0-521-83084-2

[4] Oded Goldreich, Computational Complexity: A Conceptual Perspective, Cambridge University Press, 2008, ISBN 0-521-88473-X


Chapter 8

Complexity class

In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form:

the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.

For example, the class NP is the set of decision problems whose solutions can be determined by a non-deterministic Turing machine in polynomial time, while the class PSPACE is the set of decision problems that can be solved by a deterministic Turing machine in polynomial space.

The simpler complexity classes are defined by the following factors:

• The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems (an example is FP), counting problems (e.g. #P), optimization problems, promise problems, etc.

• The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on nondeterministic Turing machines, boolean circuits, quantum Turing machines, monotone circuits, etc.

• The resource (or resources) that are being bounded and the bounds: These two properties are usually stated together, such as “polynomial time”, “logarithmic space”, “constant depth”, etc.

Many complexity classes can be characterized in terms of the mathematical logic needed to express them; see descriptive complexity.

Bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that “the time complexities in any two reasonable and general models of computation are polynomially related” (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.

The Blum axioms can be used to define complexity classes without referring to a concrete computational model.
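For concreteness, membership in the language {xx | x is a binary string} mentioned above can be decided by comparing the two halves of the input; the snippet below (ours) does this in time linear in the length of the word on a random-access machine, illustrating that the quadratic cost is an artifact of the single-tape model rather than of the language itself.

def is_xx(w):
    # A word belongs to {xx | x is a binary string} iff it has even length
    # and its first half equals its second half.
    n = len(w)
    return n % 2 == 0 and w[:n // 2] == w[n // 2:]

print(is_xx("0101"))   # True  (x = "01")
print(is_xx("0110"))   # False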

8.1 Important complexity classes

Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:

It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch’s theorem.


Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems.

8.2 Reduction

Main article: Reduction (complexity)

Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.

The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.

This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.

If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C (since there could be many problems which are equally hard, one might say that X is one of the hardest problems in C). Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.
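The squaring-to-multiplication reduction in the second paragraph is short enough to spell out directly (our sketch):

def multiply(a, b):
    # A solver for the target problem: multiplying two integers.
    return a * b

def square(n):
    # The reduction: to square n, feed the same input to both arguments
    # of the multiplication algorithm.
    return multiply(n, n)

print(square(12))   # 144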

8.3 Closure properties of classes

Complexity classes have a variety of closure properties; for example, decision classes may be closed under negation, disjunction, conjunction, or even under all Boolean operations. Moreover, they might also be closed under a variety of quantification schemes. P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. However, it is most likely not closed under quantification over exponentially sized domains.

Each class X that is not closed under negation has a complement class co-X, which consists of the complements of the languages contained in X. Similarly one can define the Boolean closure of a class, and so on; this is however less commonly done.

One possible route to separating two complexity classes is to find some closure property possessed by one and not by the other.

8.4 Relationships between complexity classes

The following table shows some of the classes of problems (or languages, or grammars) that are considered in complexity theory. If class X is a strict subset of Y, then X is shown below Y, with a dark line connecting them. If X is a subset, but it is unknown whether they are equal sets, then the line is lighter and is dotted. Technically, the breakdown into decidable and undecidable pertains more to the study of computability theory, but it is useful for putting the complexity classes in perspective.

8.4.1 Hierarchy theorems

Main articles: time hierarchy theorem and space hierarchy theorem

For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n^2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved.

More precisely, the time hierarchy theorem states that

DTIME(f(n)) ⊊ DTIME(f(n) · log^2(f(n)))

The space hierarchy theorem states that

DSPACE(f(n)) ⊊ DSPACE(f(n) · log(f(n)))

The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.

8.5 See also

• List of complexity classes

8.6 References

8.7 Further reading

• The Complexity Zoo: A huge list of complexity classes, a reference for experts.

• Diagram by Neil Immerman showing the hierarchy of complexity classes and how they fit together.

• Michael Garey, and David S. Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman & Co., 1979. The standard reference on NP-complete problems – an important category of problems whose solutions appear to require an impractically long time to compute.


Chapter 9

Computational complexity theory

Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.

Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically.

9.1 Computational problems

9.1.1 Problem instances

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g. 15) and the solution is “yes” if the number is prime and “no” otherwise (in this case “no”). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.

To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany’s 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.


A traveling salesman tour through Germany’s 15 largest cities.

9.1.2 Representing problem instances

When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary.

Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.

9.1.3 Decision problems as formal languages

Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0.


A decision problem has only two possible outputs, yes or no (or alternately 1 or 0) on any input.

A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.

An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected, or not. The formal language associated with this decision problem is then the set of all connected graphs—of course, to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
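The connectivity example can be made concrete with a short decision procedure (ours); it answers yes exactly when the input graph, given here as an edge list rather than as a binary encoding, is connected.

from collections import deque

def is_connected(num_vertices, edges):
    # Decide the language of connected graphs: breadth-first search from
    # vertex 0 and accept iff every vertex is reached.
    adjacency = {v: [] for v in range(num_vertices)}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adjacency[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == num_vertices

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))   # True  -- accept
print(is_connected(4, [(0, 1), (2, 3)]))           # False -- reject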


9.1.4 Function problems

A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.

It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
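Written out, the recasting of multiplication as a decision problem is just a membership test on triples (a small sketch of ours):

def in_multiplication_language(a, b, c):
    # Accept exactly the triples (a, b, c) for which a * b = c holds.
    return a * b == c

print(in_multiplication_language(6, 7, 42))   # True  -- (6, 7, 42) is in the language
print(in_multiplication_language(6, 7, 43))   # False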

9.1.5 Measuring the size of an instance

To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?

If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham’s thesis says that a problem can be solved with a feasible amount of resources if it admits a polynomial time algorithm.

9.2 Machine models and complexity measures

9.2.1 Turing machine

An artistic representation of a Turing machine

Main article: Turing machine

A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a thought experiment representing a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway’s Game of Life, cellular automata or any programming language, can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.

Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.

A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model; it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm.

9.2.2 Other machine models

Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.[1] What all these models have in common is that the machines operate deterministically.

However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems.

9.2.3 Complexity measures

For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer (“yes” or “no”). A Turing machine M is said to operate within time f(n) if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).

Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.

The complexity of an algorithm is often expressed using big O notation.


Visualization of the quicksort algorithm, which has average case performance Θ(n log n).

9.2.4 Best, worst and average case complexity

The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities:

• Best-case complexity: This is the complexity of solving the problem for the best input of size n.

• Worst-case complexity: This is the complexity of solving the problem for the worst input of size n.

• Average-case complexity: This is the complexity of solving the problem on average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n.

For example, consider the deterministic sorting algorithm quicksort. This solves the problem of sorting a list of integers that is given as the input. The worst case is when the input is sorted or sorted in reverse order, and the algorithm takes time O(n^2) for this case. If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
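A minimal sketch (ours) of the deterministic variant discussed here, with the first element as pivot:

def quicksort(lst):
    # Deterministic quicksort with the first element as pivot.
    # Sorted or reverse-sorted input forces maximally unbalanced splits and the
    # O(n^2) worst case; splitting the list roughly in half gives O(n log n).
    if len(lst) <= 1:
        return lst
    pivot, rest = lst[0], lst[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]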

9.2.5 Upper and lower bounds on the complexity of problems

To classify the computation time (or similar resources, such as space consumption), one is interested in proving upper and lower bounds on the minimum amount of time required by the most efficient algorithm solving a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity, unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase “all possible algorithms” includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).

Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n^2 + 15n + 40, in big O notation one would write T(n) = O(n^2).
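To make the last example concrete, one possible choice of witness constants (our arithmetic, not from the article) is c = 8 and n0 = 18:

7n^2 + 15n + 40 \;\le\; 8n^2 \qquad \text{for all } n \ge 18,

since n^2 ≥ 15n + 40 whenever n ≥ 18 (at n = 18: 324 ≥ 310), and hence T(n) = O(n^2).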

9.3 Complexity classes

Main article: Complexity class

9.3.1 Defining complexity classes

A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:

• The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.

• The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.

• The resource (or resources) that are being bounded and the bounds: These two properties are usually stated together, such as “polynomial time”, “logarithmic space”, “constant depth”, etc.

Of course, some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:

The set of decision problems solvable by a deterministic Turing machine within time f(n). (This complexity class is known as DTIME(f(n)).)

But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that “the time complexities in any two reasonable and general models of computation are polynomially related” (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.

9.3.2 Important complexity classes

Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:

The logarithmic-space classes (necessarily) do not take into account the space needed to represent the problem.

It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch’s theorem.

Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems.


A representation of the relation among complexity classes (the diagram’s labels are NL, P, NP, PSPACE, EXPTIME and EXPSPACE, with “=?” marking the inclusions not known to be strict).

9.3.3 Hierarchy theorems

Main articles: time hierarchy theorem and space hierarchy theorem

For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n^2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved.

More precisely, the time hierarchy theorem states that

DTIME(f(n)) ⊊ DTIME(f(n) · log^2(f(n)))

The space hierarchy theorem states that

DSPACE(f(n)) ⊊ DSPACE(f(n) · log(f(n)))


The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.

9.3.4 Reduction

Main article: Reduction (complexity)

Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.

The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.

This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.

If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.[2]

9.4 Important open problems

9.4.1 P versus NP problem

Main article: P versus NP problem

The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.

The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[2] If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology,[4] and the ability to find formal proofs of pure mathematics theorems.[5] The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.[6]


Diagram of complexity classes provided that P ≠ NP (regions labelled P problems, NP-complete and NP problems). The existence of problems in NP outside both P and NP-complete in this case was established by Ladner.[3]

9.4.2 Problems in NP not known to be in P or NP-complete

It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete.[3] Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[7] If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level.[8]

Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to Laszlo Babai and Eugene Luks, has run time 2^(O(√(n log n))) for graphs with n vertices.

The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP[9]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time O(e^((64/9)^(1/3) (n · log 2)^(1/3) (log(n · log 2))^(2/3))) to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor’s algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.

9.4.3 Separations between other complexity classes

Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.


Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed[10] that NP is not equal to co-NP; however, it has not yet been proven. It has been shown that if these two complexity classes are not equal then P is not equal to NP.

Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes.

It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.

9.5 Intractability

See also: Combinatorial explosion

Problems that can be solved in theory (e.g., given large but finite time), but which in practice take too long for their solutions to be useful, are known as intractable problems.[11] In complexity theory, problems that lack polynomial-time solutions are considered to be intractable for more than the smallest inputs. In fact, the Cobham–Edmonds thesis states that only those problems that can be solved in polynomial time can be feasibly computed on some computational device. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then the NP-complete problems are also intractable in this sense. To see why exponential-time algorithms might be unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. Nevertheless, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient, and it is still useless except on small instances.

What intractability means in practice is open to debate. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time, and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
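The arithmetic behind the 2^n example can be checked directly (the script is ours; the machine speed of 10^12 operations per second is the assumption made in the text):

SECONDS_PER_YEAR = 60 * 60 * 24 * 365        # roughly 3.15e7 seconds

operations = 2 ** 100                        # exponential-time algorithm on an input of size 100
ops_per_second = 10 ** 12                    # assumed machine speed
years = operations / ops_per_second / SECONDS_PER_YEAR
print(f"{years:.2e} years")                  # about 4.02e+10 years, roughly the age of the universe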

9.6 History

An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844.

Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer.

As Fortnow & Homer (2003) point out, the beginning of systematic studies in computational complexity is attributed to the seminal paper “On the Computational Complexity of Algorithms” by Juris Hartmanis and Richard Stearns (1965), which laid out the definitions of time and space complexity and proved the hierarchy theorems. Also, in 1965 Edmonds defined a “good” algorithm as one with running time bounded by a polynomial of the input size.[12]

Earlier papers studying problems solvable by Turing machines with specific bounded resources include[13] John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper[14] on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure.[15] As he remembers:

However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited from switching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term “signalizing function”, which is nowadays commonly known as “complexity measure”.
—Boris Trakhtenbrot, From Logic to Theoretical Computer Science – An Update. In: Pillars of Computer Science, LNCS 4800, Springer 2008.

In 1967, Manuel Blum developed an axiomatic complexity theory based on his axioms and proved an important result, the so-called speed-up theorem. The field really began to flourish in 1971 when the US researcher Stephen Cook and, working independently, Leonid Levin in the USSR, proved that there exist practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, “Reducibility Among Combinatorial Problems”, in which he showed that 21 diverse combinatorial and graph-theoretical problems, each infamous for its computational intractability, are NP-complete.[16]

9.7 See also

• Category:Computational problems

• Context of computational complexity

• Descriptive complexity theory

• Game complexity

• List of complexity classes

• List of computability and complexity topics

• List of important publications in theoretical computer science

• List of unsolved problems in computer science

• Parameterized complexity

• Proof complexity

• Quantum complexity theory

• Structural complexity theory

• Transcomputational problem

9.8 References

[1] See Arora & Barak 2009, Chapter 1: The computational model and why it doesn't matter

[2] See Sipser 2006, Chapter 7: Time complexity

[3] Ladner, Richard E. (1975), “On the structure of polynomial time reducibility” (PDF), Journal of the ACM (JACM) 22 (1): 151–171, doi:10.1145/321864.321877.

[4] Berger, Bonnie A.; Leighton, T (1998), “Protein folding in the hydrophobic-hydrophilic (HP) model is NP-complete”, Journal of Computational Biology 5 (1): 27–40, doi:10.1089/cmb.1998.5.27, PMID 9541869.

[5] Cook, Stephen (April 2000), The P versus NP Problem (PDF), Clay Mathematics Institute, retrieved 2006-10-18.

[6] Jaffe, Arthur M. (2006), “The Millennium Grand Challenge in Mathematics” (PDF), Notices of the AMS 53 (6), retrieved 2006-10-18.

[7] Arvind, Vikraman; Kurur, Piyush P. (2006), “Graph isomorphism is in SPP”, Information and Computation 204 (5): 835–852, doi:10.1016/j.ic.2006.02.002.

[8] Schöning, Uwe (1987), “Graph isomorphism is in the low hierarchy”, Proceedings of the 4th Annual Symposium on Theoretical Aspects of Computer Science: 114–124, doi:10.1007/bfb0039599; also: Journal of Computer and System Sciences 37: 312–323, 1988, doi:10.1016/0022-0000(88)90010-4.

[9] Lance Fortnow. Computational Complexity Blog: Complexity Class of the Week: Factoring. September 13, 2002. http://weblog.fortnow.com/2002/09/complexity-class-of-week-factoring.html


[10] Boaz Barak’s course on Computational Complexity Lecture 2

[11] Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Boston/San Francisco/New York (page 368)

[12] Richard M. Karp, “Combinatorics, Complexity, and Randomness”, 1985 Turing Award Lecture

[13] Fortnow & Homer (2003)

[14] Yamada, H. (1962). “Real-Time Computation and Recursive Functions Not Real-Time Computable”. IEEE Transactions on Electronic Computers. EC-11 (6): 753–760. doi:10.1109/TEC.1962.5219459.

[15] Trakhtenbrot, B.A.: Signalizing functions and tabular operators. Uchionnye Zapiski Penzenskogo Pedinstituta (Transactions of the Penza Pedagogical Institute) 4, 75–87 (1956) (in Russian)

[16] Richard M. Karp (1972), “Reducibility Among Combinatorial Problems” (PDF), in R. E. Miller and J. W. Thatcher (editors), Complexity of Computer Computations, New York: Plenum, pp. 85–103

9.8.1 Textbooks

• Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge, ISBN 978-0-521-42426-4, Zbl 1193.68112

• Downey, Rod; Fellows, Michael (1999), Parameterized complexity, Berlin, New York: Springer-Verlag

• Du, Ding-Zhu; Ko, Ker-I (2000), Theory of Computational Complexity, John Wiley & Sons, ISBN 978-0-471-34506-0

• Goldreich, Oded (2008), Computational Complexity: A Conceptual Perspective, Cambridge University Press

• van Leeuwen, Jan, ed. (1990), Handbook of theoretical computer science (vol. A): algorithms and complexity, MIT Press, ISBN 978-0-444-88071-0

• Papadimitriou, Christos (1994), Computational Complexity (1st ed.), Addison Wesley, ISBN 0-201-53082-1

• Sipser, Michael (2006), Introduction to the Theory of Computation (2nd ed.), USA: Thomson Course Technology, ISBN 0-534-95097-3

• Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5

9.8.2 Surveys

• Khalil, Hatem; Ulery, Dana (1976), A Review of Current Studies on Complexity of Algorithms for Partial Differential Equations, ACM '76 Proceedings of the 1976 Annual Conference, p. 197, doi:10.1145/800191.805573

• Cook, Stephen (1983), “An overview of computational complexity”, Commun. ACM (ACM) 26 (6): 400–408, doi:10.1145/358141.358144, ISSN 0001-0782

• Fortnow, Lance; Homer, Steven (2003), “A Short History of Computational Complexity” (PDF), Bulletin of the EATCS 80: 95–133

• Mertens, Stephan (2002), “Computational Complexity for Physicists”, Computing in Science and Engg. (Piscataway, NJ, USA: IEEE Educational Activities Department) 4 (3): 31–47, arXiv:cond-mat/0012185, doi:10.1109/5992.998639, ISSN 1521-9615

9.9 External links

• The Complexity Zoo

• Hazewinkel, Michiel, ed. (2001), “Computational complexity classes”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4


Chapter 10

Configuration graph

Configuration graphs are a theoretical tool used in computational complexity theory to prove a relation between graph reachability and complexity classes.

10.1 Definition

A theoretical computational model, such as a Turing machine or a finite automaton, explains how to do a computation. The model explains both what an initial configuration of the machine is and which steps can be taken to continue the computation, until we eventually stop. A configuration, also called an instantaneous description (ID), is a finite representation of the machine at a given time. For example, for a finite automaton and a given input, the configuration is the current state and the number of letters read; for a Turing machine it is the state, the content of the tape and the position of the head. A configuration graph is a directed labeled graph whose vertices are labeled by the possible configurations of the model, and which has an edge from one configuration to another if the latter can be reached from the former in one computational step of the model.

The initial and accepting configuration(s) of the machine are special vertices of the configuration graph. The computation accepts if and only if there is a path from an initial vertex to an accepting vertex.

10.2 Useful property

If the computation is deterministic, then from any configuration there is at most one possible step, so the graph has out-degree at most 1, and there is exactly one initial state.

Once we add a dummy initial vertex with an edge to every initial vertex and a dummy accepting vertex with an edge from every accepting vertex, checking whether there is an accepting computation only requires checking whether there is a path from the dummy initial vertex to the dummy accepting vertex, which is the reachability problem. A sketch of this reduction appears below.

A cycle in the graph means that there is a possible infinite loop in the computation.
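As a concrete illustration, the following Python sketch builds the configuration graph of a toy deterministic finite automaton (an even-parity automaton chosen purely for illustration; none of these names come from the original article), adds the dummy initial and accepting vertices described above, and decides acceptance by breadth-first-search reachability.

from collections import deque

# Toy DFA (assumption for illustration): states {0, 1}, accepting state 0,
# transition flips the state on reading '1', i.e. it accepts words with an
# even number of 1s.
states = {0, 1}
start_state, accepting_states = 0, {0}
def delta(state, symbol):
    return state ^ 1 if symbol == "1" else state

def accepts(word):
    # A configuration of a DFA on a fixed word is (state, number of letters read).
    edges = {}                                   # configuration graph as adjacency lists
    for state in states:
        for pos in range(len(word)):
            edges.setdefault((state, pos), []).append((delta(state, word[pos]), pos + 1))

    # Dummy initial vertex "S" and dummy accepting vertex "T".
    edges["S"] = [(start_state, 0)]
    for state in accepting_states:
        edges.setdefault((state, len(word)), []).append("T")

    # Acceptance is now plain graph reachability from "S" to "T" (BFS).
    seen, queue = {"S"}, deque(["S"])
    while queue:
        v = queue.popleft()
        if v == "T":
            return True
        for w in edges.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

print(accepts("1011"))   # False: three 1s
print(accepts("1001"))   # True: two 1s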

10.3 Size of the graph

The configuration graph can be of infinite size if there are no restrictions on possible configurations; indeed, it is easy to see that there are Turing machines which can reach arbitrarily large configurations.

It is also possible to have finite graphs: for a deterministic finite automaton with s states and a given word of size n, a configuration consists of the position of the head and the current state. So the graph is of size (n+1)s, and the accessible part from the initial state is of size n+1.
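These two counts can be sanity-checked directly. The snippet below reuses the toy even-parity DFA from the previous sketch (again an illustrative assumption, not part of the original article): there are (n+1)·s configurations in total, while a deterministic run visits exactly one configuration per position, i.e. n+1 of them.

s, n = 2, 4                                  # toy sizes chosen for illustration
word = "1001"
start_state = 0
delta = lambda state, symbol: state ^ 1 if symbol == "1" else state

# All configurations: (state, position) pairs with position in 0..n.
all_configs = {(state, pos) for state in range(s) for pos in range(n + 1)}
assert len(all_configs) == (n + 1) * s       # 10 configurations in total

# Configurations actually reached by the (deterministic) run on the word.
reachable, state = {(start_state, 0)}, start_state
for pos, symbol in enumerate(word):
    state = delta(state, symbol)
    reachable.add((state, pos + 1))
assert len(reachable) == n + 1               # 5 configurations visited
print(len(all_configs), len(reachable))      # 10 5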



10.4 Use of this object

This notion is useful because it reduces computational problems to graph reachability problems. For example, graph reachability itself can be decided in NL; conversely, since the configurations of a Turing machine working in NL take only logarithmic space, deciding whether such a machine accepts reduces to reachability in its configuration graph. Hence graph reachability is complete for NL.[1]

In the other direction, it helps to verify the complexity of a computational model: the decision problem for a model whose configurations use space logarithmic in the size of the input is in NL, and in L if the model is deterministic. This is for example the case for finite automata and finite automata with one counter.

10.5 References

[1] Papadimitriou, Christos H. (1994). Computational Complexity, Reading, Massachusetts: Addison-Wesley. ISBN 0-201-53082-1.

• Sanjeev Arora and Boaz Barak (2009). Computational Complexity: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4. Section 4.3: NL-completeness, p. 87.


Chapter 11

Padding argument

In computational complexity theory, the padding argument is a tool to conditionally prove that if some complexity classes are equal, then some other, bigger classes are also equal.

11.1 Example

The proof that P = NP implies EXP = NEXP uses “padding”. EXP ⊆ NEXP by definition, so it suffices to show NEXP ⊆ EXP.

Let L be a language in NEXP. Since L is in NEXP, there is a non-deterministic Turing machine M that decides L in time 2^{n^c} for some constant c. Let

L′ = { x1^{2^{|x|^c}} | x ∈ L },

where 1 is a symbol not occurring in L. First we show that L′ is in NP; then we will use the deterministic polynomial-time machine given by P = NP to show that L is in EXP.

L′ can be decided in non-deterministic polynomial time as follows. Given input x′, verify that it has the form x′ = x1^{2^{|x|^c}} and reject if it does not. If it has the correct form, simulate M(x). The simulation takes non-deterministic time 2^{|x|^c}, which is polynomial in the size of the input x′. So L′ is in NP. By the assumption P = NP, there is also a deterministic machine DM that decides L′ in polynomial time. We can then decide L in deterministic exponential time as follows. Given input x, simulate DM(x1^{2^{|x|^c}}). This takes only exponential time in the size of the input x.

The 1^d is called the “padding” of the language L. This type of argument is also sometimes used for space complexity classes, alternating classes, and bounded alternating classes.
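To make the padding map concrete, here is a minimal Python sketch of how an input is padded and how the form of a padded string is checked before the simulation step. It is illustrative only: the alphabet {'a', 'b'} and the choice c = 1 are assumptions made here to keep the padded strings small, not part of the argument above.

C = 1  # exponent from the time bound 2^{n^c}; kept tiny so strings stay short

def pad(x: str) -> str:
    # Append 2^{|x|^c} copies of '1', a symbol assumed not to occur in the alphabet of L.
    return x + "1" * (2 ** (len(x) ** C))

def unpad(x_padded: str):
    """Check that the input has the form x 1^{2^{|x|^c}}; return x, or None to reject."""
    x = x_padded.rstrip("1")                 # '1' does not occur in x, so the split is unambiguous
    if x_padded == x + "1" * (2 ** (len(x) ** C)):
        return x
    return None                              # wrong amount of padding: reject

print(pad("ab"))          # 'ab1111'  (2^2 = 4 padding symbols)
print(unpad("ab1111"))    # 'ab'      -> one would now simulate M on 'ab'
print(unpad("ab111"))     # None      -> malformed input, rejected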

11.2 References

• Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge, p. 57, ISBN 978-0-521-42426-4

