Lecture Topic: What is a Computer?

Jakub Mareček and Seán McGarraghy (UCD)
Numerical Analysis and Software, September 16, 2015


What is a Computer?

Before we consider the processing of numbers on computers, it is certainly worthwhile to review what a computer is, in general, and what a processor is, in particular. A modern processor is very complicated.

People hence study many abstractions of the workings of a processor, called "models of computation".

First, however, we need a formal notion of a problem and a model of computation.


Decision Problem

Much of computer science uses a language-inspired definition of a decision problem.

One starts with a finite alphabet A.

By stringing elements of the alphabet one after another, one obtains strings of finite or countably infinite length.

A set of strings is called a language.

A decision problem is defined by a fixed set S, which is a subset of the language U of all possible strings over the alphabet A.

A particular instance of the decision problem is to decide, given an element u ∈ U, whether u is included in S.


Example (Primality testing.)

For example, the alphabet could be composed of the binary digits A = {0, 1}, U could be the set of all natural numbers encoded in binary, and the set S could be the binary encodings of prime numbers. The decision problem is the inclusion of an arbitrary binary encoding of a natural number in the set S. ♦
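
To make the language view concrete, here is a small sketch (added for illustration; it is not part of the original slides) that decides membership of a binary string u in the set S of prime encodings:

# Sketch: S = binary encodings of primes, U = all binary encodings of naturals.
# Deciding the instance "is u in S?" is just a membership test on strings.

def is_prime(n):
    # plain trial division; enough for illustration
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def in_S(u):
    """Decide whether the binary string u encodes a prime number."""
    return is_prime(int(u, 2))

print(in_S("101"))   # 5 is prime  -> True
print(in_S("1000"))  # 8 is not    -> False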


Models of Computation

Several models of computation were devised. Alan Turing introduced a model where

characters are stored on an infinitely long tape,

with a read/write head scanning one square at any given time and having

very simple rules for changing its internal state based on the symbol read and current state.

Many of these formalisms turn out to be equivalent in computational power, i.e., any computation that can be carried out with one can be carried out with any of the others.

They are also equivalent to the familiar computer, if it had infinite amounts of memory.


Turing Machine

Formally, one can define a Turing machine using:

a finite, non-empty set Q of objects, representing states

a subset F of Q, corresponding to "accepting" states, where computation halts

q_0 ∈ Q, the initial state

a finite, non-empty set Γ of objects, representing the symbols to be used on a tape

a partial function δ : (Q \ F) × Γ → Q × Γ × {−1, 0, 1}, where for a combination of a state and symbol read from the tape, we get the next state, the symbol to write onto the tape, and an instruction to shift the tape left (−1), right (+1), or keep in its position (0).


Exercise (2 points)

Consider the following simulator of a Turing machine (TM):

def turing(code, tape, initPos = 0, initState = "1"):
    position = initPos
    state = initState
    while state != "halt":
        print(state, ":", tape)
        sym = tape[position]                           # read the symbol under the head
        (symbol, direction, state) = code[state][sym]  # look up the transition
        if symbol != "noWrite": tape[position] = symbol
        position += direction                          # move the head

Implement a TM which multiplies two integers that are encoded on the tape in unary and delimited by "blank" on both ends and between the numbers. Do not replace the numbers, but append the result after yet another blank.
Hint: Unary encoding means the number of occurrences of a particular symbol (e.g., "1") is equal to the number (e.g., "11111" stands for 5).
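
For illustration only (this is not a solution to the exercise), the following sketch shows the transition-table format the simulator above appears to expect: code[state][read symbol] yields a (symbol to write, head move, next state) triple, with "noWrite" meaning the tape cell is left unchanged, and it assumes the delimiter is literally the string "blank", as the exercise text suggests. The machine below merely scans right over a block of 1s and halts at the first blank.

# Assumed format: code[state][read_symbol] = (symbol_to_write, move, next_state).
scan_right = {
    "1": {
        "1":     ("noWrite", +1, "1"),     # keep moving right over the number
        "blank": ("noWrite",  0, "halt"),  # stop on the delimiting blank
    },
}

tape = ["blank", "1", "1", "1", "blank"]   # unary 3, delimited by blanks
turing(scan_right, tape, initPos=1)        # prints the state and the tape each step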


Computability

Computability studies these models of computation, and asks which problems can be proven to be unsolvable by a computer.

Example (The Halting Problem.)

Given a program and an input to the program, will the program eventually stop when given that input? ♦


The Halting Problem

A silly solution would be to just run the program with the given input, for a reasonable amount of time.

If the program stops, we know the program stops.

But if the program doesn't stop in a "reasonable" amount of time, we cannot conclude that it won't stop.

Maybe we didn’t wait long enough.

Alan Turing proved the Halting problem to be undecidable in 1936.
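
The idea behind the proof (an added sketch, not on the original slide) is a diagonal argument: assume a decision procedure halts(prog, data) existed and feed it a program built to contradict it.

def halts(prog, data):
    # Hypothetical oracle: would return True iff prog(data) eventually stops.
    # The argument below shows no correct implementation can exist.
    raise NotImplementedError

def paradox(prog):
    if halts(prog, prog):   # if prog(prog) would halt ...
        while True:         # ... then loop forever,
            pass
    return                  # otherwise halt immediately.

# paradox(paradox) halts exactly when halts(paradox, paradox) says it does not,
# so any claimed implementation of halts must answer wrongly on some input.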


Complexity

Some problems are solvable by a computer, but require such a long time to compute that the solution is impractical.

Here, we express the run time as a function from the dimensions of the input to the number of steps of a Turing machine (or similar).


Example (Fischer-Rabin Theorem.)

For example, let us have a very simple logic featuring 0, 1, the usual addition, and where the axioms are a closure of the following:

¬(0 = x + 1)

x + 1 = y + 1 ⇒ x = y

x + 0 = x

x + (y + 1) = (x + y) + 1

For a first-order formula P(x) with a free variable x, (P(0) ∧ ∀x (P(x) ⇒ P(x + 1))) ⇒ ∀y P(y) ("induction").

This is known as Presburger arithmetic.

Fischer and Rabin proved in 1974 that every algorithm that decides the truth of a statement of length n in Presburger arithmetic has a runtime of at least 2^{2^{cn}} for some constant c, because it may need to produce output of that size. ♦


Complexity theory

Complexity theory deals with questions concerning the time or space requirements of given problems: the computational cost. For algorithms working with finite strings from a finite alphabet, this is often surprisingly easy.


Analysis of algorithms

The term analysis of algorithms is used to describe the study of the performance of computer programs on a scientific basis.

One such approach concentrates on determining the growth of the worst-case performance of the algorithm (an "upper bound"): an algorithm's "order" suggests asymptotics of the number of operations carried out by the algorithm on a particular input, as a function of the dimensions of the input.


O notation in run-time analysis

Example

For example, we might find that a certain algorithm takes time T(n) = 3n^2 − 2n + 6 to complete a problem of size n.

If we ignore

constants (which makes sense because those depend on the particular hardware/virtual machine the program is run on), and

slower growing terms such as 2n,

we could say "T(n) grows at the order of n^2". ♦
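
A quick numerical check (added here as an illustration, not from the slides) makes that step visible: the ratio T(n)/n^2 settles towards the leading coefficient 3, so it is bounded from above, which is exactly what "grows at the order of n^2" asserts.

def T(n):
    return 3 * n**2 - 2 * n + 6

for n in (10, 100, 1000, 10000):
    print(n, T(n) / n**2)   # 2.86, 2.9806, 2.998006, 2.99980006 -> bounded by, say, 4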


Other performance measures of algorithms

Although we usually measure the performance of an algorithm by its running time, other performance measures are often as important. Such measures include:

the amount of memory space required;

the number of bits of precision required to achieve a given output accuracy;

the number and extent of rounds of communication in distributed computation;

the error probability of a stochastic algorithm;

the number of random bits needed in a randomised algorithm;

the number of function evaluations in an algorithm which works with functions, e.g., optimisation or zero-finding algorithms;

the number of examples needed in a learning algorithm;

the number or proportion of accesses to secondary (disk) storage; and

the cost of programmer time to implement the algorithm as a program.

All of these can be expressed in the same notation.


Example (Other uses.)

There are many other uses of the notation. In mathematics, it is often important to get a feel for the error term of an approximation (i.e., how good the approximation is).

For instance, people will write the Taylor series for e^x as

e^x = 1 + x + x^2/2 + O(x^3) for x → 0

to express the fact that the error has smaller absolute value than some constant times x^3 if x is close enough to 0. ♦


Asymptotic behaviour of functions and O notation

Let us introduce a formalisation of the notion of asymptotics.

The formalisation known as "Big O notation" or "Bachmann–Landau notation" goes back at least to 1892 and Paul Gustav Heinrich Bachmann, according to some sources, although it was reinvented many times over.

Suppose our algorithm A requires T(n) operations to complete in the longest possible case.

Then we may say A is O(g(n)) if |T(n)/g(n)| is bounded from above as n → ∞.

The fastest-growing term in T(n) dominates all the others as n gets bigger and so is the most significant measure of complexity.


Asymptotic behaviour of functions and O notation

Similarly to "Big O", there are four more notions, as summarised in the following table.

Notation            Definition                              Analogy
f(n) = O(g(n))      see definition                          ≤
f(n) = o(g(n))      see definition                          <
f(n) = Ω(g(n))      g(n) = O(f(n))                          ≥
f(n) = ω(g(n))      g(n) = o(f(n))                          >
f(n) = Θ(g(n))      f(n) = O(g(n)) and g(n) = O(f(n))       =


f(x) = O(g(x)): f does not grow faster than g

Definition

We write:

f(x) = O(g(x)) (or, to be more precise, f(x) = O(g(x)) for x → ∞)

if and only if there exist constants N and C > 0 such that

|f(x)| ≤ C|g(x)| for all x > N or, equivalently, |f(x)|/|g(x)| ≤ C for all x > N.

That is, |f(x)/g(x)| is bounded from above as x → ∞.

Intuitively, this means that f does not grow faster than g.

The letter "O" is read as "order" or just "Oh".


f(x) = Ω(g(x)): f does not grow slower than g

Definition

We also write:

f(x) = Ω(g(x)) (for x → ∞)

if and only if there exist constants N and C > 0 such that

|f(x)| ≥ C|g(x)| for all x > N or, equivalently, |f(x)|/|g(x)| ≥ C for all x > N.

That is, |f(x)/g(x)| is bounded from below by a positive (i.e., non-zero) number as x → ∞.

Intuitively, this means that f does not grow more slowly than g (i.e., g(x) = O(f(x))).

The letter "Ω" is read as "omega" or just "bounded from below by".


f(x) = Θ(g(x)): f grows at the same rate as g

Definition

f(x) = Θ(g(x)) (for x → ∞)

if and only if there exist constants N, C and D > 0 such that

D|g(x)| ≤ |f(x)| ≤ C|g(x)| for all x > N ⇐⇒ D ≤ |f(x)|/|g(x)| ≤ C for all x > N.

That is, |f(x)/g(x)| is bounded from both above and below by positive numbers as x → ∞.

Intuitively, this means that f grows roughly at the same rate as g.
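
For a concrete instance of this definition (an added illustration, not from the slides): take f(n) = 3n^2 − 2n + 6 from the earlier example and g(n) = n^2. The constants D = 2, C = 4 and N = 2 work, which the sketch below checks numerically over a range of n.

def f(n):
    return 3 * n**2 - 2 * n + 6

def g(n):
    return n**2

D, C, N = 2, 4, 2
assert all(D * g(n) <= f(n) <= C * g(n) for n in range(N + 1, 10001))
print("2*g(n) <= f(n) <= 4*g(n) for all checked n > 2, so f(n) = Theta(n^2)")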


Polynomial time: order O(n^k)

Example

Let us consider algorithm A with parameter n and polynomial run-time O(n^k). By our definition of O, the algorithm is of order O(n^k) if |T(n)/n^k| is bounded from above as n → ∞ or, equivalently, there are real constants a_0, a_1, . . . , a_k with a_k > 0 so that A requires

a_k n^k + a_{k-1} n^{k-1} + · · · + a_1 n + a_0

operations to complete in the worst case. Note k is an integer constant independent of the algorithm input and n. There may be no such polynomial for the number of operations in terms of n. If there is such a polynomial, A is usually considered "good", as it does not require "very many" operations. ♦


Multiple parameters

This notation can also be used with multiple parameters and with other expressions on the right side of the equal sign. The notation:

f(n, m) = n^2 + m^3 + O(n + m)

represents the statement:

there exist C, N such that, for all n, m > N: f(n, m) ≤ n^2 + m^3 + C(n + m).

Similarly, O(mn^2) would mean the number of operations the algorithm carries out is a polynomial in two indeterminates n and m, with the highest degree term being mn^2, e.g., 2mn^2 + 4mn − 6n^2 − 2n + 7. This is most useful if we can relate m and n (e.g., in dense graphs m = O(n^2), so O(mn^2) would mean O(n^4) there).


Classes of functions commonly met in algorithm analysis

Notation            Name
O(1)                constant
O(log(n))           logarithmic
O((log(n))^c)       polylogarithmic
O(n)                linear
O(n^2)              quadratic
O(n^c)              polynomial
O(c^n)              exponential
O(n!)               factorial

Here, c > 0 is a constant. Once again, if a function f(n) is a sum of functions, the fastest growing one determines the order of f(n).

E.g.: If f(n) = 10 log(n) + 5(log(n))^3 + 7n + 3n^2 + 6n^3, then f(n) = O(n^3).

Caveat: the number of summands must be constant and not depend on n.
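
The ordering of these classes can be made tangible with a short comparison (an illustrative sketch, not from the slides); the asymptotic ordering becomes visible already for moderate n, e.g., 2^n overtakes n^3 between n = 5 and n = 10.

import math

# Sample each growth class at a few sizes; later columns eventually dominate earlier ones.
for n in (5, 10, 20):
    print("n=%d: log=%.1f log^3=%.1f n=%d n^2=%d n^3=%d 2^n=%d n!=%d"
          % (n, math.log(n), math.log(n)**3, n, n**2, n**3, 2**n, math.factorial(n)))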


Notes on growth of functions

O(n^c) and O(c^n) are very different.

The former is polynomial, the latter is exponential and grows much, much faster, no matter how big the constant c is.

A function that grows faster than O(n^c) for every constant c is called superpolynomial. One that grows more slowly than O(c^n) for every constant c > 1 is called subexponential.

An algorithm can require time that is both superpolynomial and subexponential.

O(log n) is exactly the same as O(log(n^c)).

The logarithms differ only by a constant factor, and the big O notation ignores that.

Similarly, logs with different constant bases are equivalent.
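
The claim about logarithms can be checked directly (an added illustration): log_b(n) = ln(n)/ln(b), so logarithms to different constant bases differ only by a constant factor, which the O notation absorbs.

import math

for n in (10, 1000, 10**6):
    print(n, math.log2(n) / math.log10(n))   # always the same constant, log2(10) ~ 3.3219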


Exercise

Prove that any later function in the above table grows faster than any earlier function.

Hint: you need several small proofs. Also, each function is differentiable.


P and NP

In complexity theory there are two commonly used classes of (decision) problems:

The class P consists of all those decision problems that can be solved on a deterministic Turing machine in an amount of time that is polynomial in the size of the input, i.e., O(n^k) for some constant k;

The class NP consists of all those decision problems whose positive solutions can be verified in polynomial time on a Turing machine, given the right information. That is, given a proposed solution to the problem, we can check it really is a solution in polynomial time. This is equivalent to finding a solution in polynomial time on a non-deterministic machine, which looks at all the options in parallel.
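
As an illustration of polynomial-time verification (a sketch added here, not taken from the slides), consider SUBSET-SUM: given numbers and a target, deciding whether some subset sums to the target is in NP, because a proposed subset (the "right information") can be checked quickly even though no polynomial-time algorithm for finding one is known.

def verify_subset_sum(numbers, target, certificate):
    """Check, in time polynomial in the input size, that the indices in
    `certificate` pick out numbers summing to `target`."""
    return (len(set(certificate)) == len(certificate)              # no index reused
            and all(0 <= i < len(numbers) for i in certificate)    # indices valid
            and sum(numbers[i] for i in certificate) == target)

print(verify_subset_sum([3, 5, 8], 11, [0, 2]))   # 3 + 8 = 11 -> True
print(verify_subset_sum([3, 5, 8], 11, [1]))      # 5 != 11    -> False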


Is P = NP?

Intuitively, we think of the problems in P as those that can be solved "reasonably fast". The biggest open question in theoretical computer science concerns the relationship between these two classes:

Is P = NP?

In essence, this asks: if positive solutions to a problem can be verified quickly, can the answers also be computed quickly?

Most people think that the answer is probably "no"; some people believe the question may be undecidable (in the sense of Gödel) from the currently accepted axioms.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 27 / 1

Page 89: Lecture Topic: What is a Computer?

P and NP

One can also phrase the problem in terms of NP-hardness.

Informally speaking, a problem is NP-Complete if and only if it is in NP and an algorithm for solving it can be adapted to solve any problem in NP, with the possible increase in run-time being at most polynomial in the original run-time.

A problem is NP-hard if it is at least as hard to solve as the problems within the class NP. That is: a problem is NP-hard if it is at least as hard as an NP-Complete problem.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 28 / 1

Page 92: Lecture Topic: What is a Computer?

We think of P as “easy” and “not in P” as “hard”. . . a reasonably accurate assumption in complexity theory, but not always true in practice:

It ignores constant factors. A problem that takes time 10^1000 · n is in P, but may be intractable in practice. A problem for which the best possible algorithm requires time 10^(-10000) · 2^n is not in P, but may be tractable for some small values of n.

It ignores the size of the exponents. A problem with time n^1000 is in P, yet intractable. A problem with time 2^(n/1000) is not in P, yet is tractable for n up into the thousands.

It only considers worst-case times. A problem might have an average time that is polynomial, but the worst case is exponential, so the problem wouldn't be in P.

It only considers deterministic algorithms.

It does not consider numerical computation. As we have seen above, computing √2 is not only not in P, but also not possible within a finite amount of time.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 29 / 1
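
To make the constants above concrete, a rough calculation (not from the slides):

\[
2^{n/1000}\big|_{n=10\,000} = 2^{10} = 1024,
\qquad
n^{1000}\big|_{n=2} = 2^{1000} \approx 10^{301}.
\]

A formally exponential bound can thus be tiny for realistic n, while a formally polynomial bound can be astronomical already at n = 2.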

Page 97: Lecture Topic: What is a Computer?

“Real Computation”

A very different model of computation, more powerful than the Turing machine and ordinary computers, has been developed within a community known as “real computation”.

Following a suggestion by Turing (1935) concerning the “development of the theory of functions of a real variable,”

Lenore Blum, Michael Shub, and Steve Smale developed a model of computation aimed at working with uncountable sets, such as the real numbers,

within their definition of a decision problem.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 30 / 1

Page 101: Lecture Topic: What is a Computer?

“Real Computation”

Consider a set R, which may be the set of real numbers, complex numbers, or similar.

If the set R were the set of real numbers, the direct sum R ⊕ R would be the Cartesian plane, as with the complex numbers.

Now consider the direct sum of a possibly infinite number of copies of R and denote it R^∞.

A decision problem is defined by a fixed set S ⊆ R^∞.

A particular instance of the decision problem in the “real” sense is given by a vector v ∈ R^∞ and asks whether v is included in S.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 31 / 1

Page 106: Lecture Topic: What is a Computer?

Example (Prisoner Sets.)

Consider a map from complex numbers to complex numbers, f : C → C. When f is an endomorphism, it makes sense to define the iteration, e.g. f^2(z) := f(f(z)) and

f^t(z) := f(f(··· f(z) ···))   (with t applications of f).

Starting from a complex number z_0, decide whether the modulus (absolute value) of z_t := f^t(z_0) remains bounded as t goes to infinity. All such z_0 where the modulus does not go to infinity as t goes to infinity are known collectively as the prisoner set. ♦
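
A minimal numerical sketch of this example for the quadratic map f(z) = z^2 + c follows (an escape-time test with a fixed iteration budget, so only a heuristic approximation of membership in the prisoner set, not a decision procedure; the function name and parameters are illustrative, not from the lecture):

def probably_in_prisoner_set(z0, c, max_iterations=1000, escape_radius=2.0):
    # Iterate z -> z^2 + c; once the modulus exceeds the escape radius
    # (2 suffices when |c| <= 2), the orbit is guaranteed to diverge.
    z = z0
    for _ in range(max_iterations):
        if abs(z) > escape_radius:
            return False
        z = z * z + c
    # No escape within the budget: z0 is probably in the prisoner set.
    return True

print(probably_in_prisoner_set(1.3 + 1.3j, -0.065 + 0.66j))  # escapes quickly -> False
print(probably_in_prisoner_set(0.5 + 0.0j, 0.0 + 0.0j))      # stays bounded -> True

The same starting point 1.3 + 1.3j and constant −0.065 + 0.66j reappear in the BSS program later in the lecture.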

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 32 / 1

Page 107: Lecture Topic: What is a Computer?

Blum, Shub, and Smale (BSS) Machine

A BSS machine is customarily defined by:

a set of r = i + o registers, where the vector in R^r is partitioned into input and output registers

an input space X ⊆ R^i, with an associated mapping from X to the input registers

an output space Y ⊆ R^o, with an associated mapping from Y to the output registers

a “flow chart” F = (Q, E), where Q is a set of nodes, which includes the initial node “1”, and E is a set of oriented edges. Each node q ∈ Q can be of the following types:

- assignment of a constant to a register,
- copying of a register to another register,
- addition, subtraction, multiplication, or division between the values of two registers, with the output directed to a fixed register (0),
- branching on the comparison of two registers.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 33 / 1

Page 108: Lecture Topic: What is a Computer?

Blum, Shub, and Smale (BSS) Machine

Each branching node q ∈ Q will have two outgoing edges in E.

All other nodes will have at most one outgoing edge in E.

Alternatively, one can see the machine as a list of instructions.

In any case, a configuration of the machine is given by a tuple of

the node q (or number of the instruction) currently executed,

and values of all registers.

The computation starts from the initial configuration in node “1” and

terminates whenever there is no outgoing edge from the current node.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 34 / 1

Page 117: Lecture Topic: What is a Computer?

Example

Consider the following simulator of a BSS machine:

def bss(code, outgoing, initValues = [0.0], initNode = "1"):
    regs = initValues
    node = initNode
    counter = 0
    while node in outgoing and counter < 100:
        counter += 1
        print counter, ":", node, code[node]
        nodeType = code[node][0]
        if nodeType == "branch":
            branchType = code[node][1]
            (i, j) = code[node][2:]
            (left, right) = outgoing[node]
            # follow the left edge if the comparison holds, the right edge otherwise
            if branchType == "<=" and regs[i] <= regs[j]: node = left
            elif branchType == "==" and regs[i] == regs[j]: node = left
            elif branchType == ">=" and regs[i] >= regs[j]: node = left
            else: node = right
        else:
            (i, j) = code[node][1:]
            if nodeType == "assign": regs[i] = j
            # "copy" moves the value of register i into register j
            elif nodeType == "copy": regs[j] = regs[i]
            # arithmetic acts on registers i and j, writing to the fixed register 0
            elif nodeType == "add": regs[0] = regs[i] + regs[j]
            elif nodeType == "subtract": regs[0] = regs[i] - regs[j]
            elif nodeType == "multiply": regs[0] = regs[i] * regs[j]
            elif nodeType == "divide": regs[0] = regs[i] / regs[j]
            else: print "Unknown instruction"
            node = outgoing[node]

The following BSS program halts whenever the input is not in the prisoner set of a dynamical system, just as in the Prisoner Sets example above:

# the first register is the output register with the modulus
code = {}
outgoing = {}
# (x + jy)^2 = (x^2 - y^2) + (2xy)j
code["1"] = ["multiply", 1, 1] ; outgoing["1"] = "sqrReal1"
code["sqrReal1"] = ["copy", 0, 6] ; outgoing["sqrReal1"] = "sqrReal2"
code["sqrReal2"] = ["multiply", 2, 2] ; outgoing["sqrReal2"] = "sqrReal3"
code["sqrReal3"] = ["copy", 0, 7] ; outgoing["sqrReal3"] = "sqrReal4"
code["sqrReal4"] = ["subtract", 6, 7] ; outgoing["sqrReal4"] = "sqrReal5"
code["sqrReal5"] = ["copy", 0, 8] ; outgoing["sqrReal5"] = "sqrImag1"
code["sqrImag1"] = ["multiply", 1, 2] ; outgoing["sqrImag1"] = "sqrImag2"
code["sqrImag2"] = ["multiply", 0, 9] ; outgoing["sqrImag2"] = "sqrMov1"
code["sqrMov1"] = ["copy", 8, 1] ; outgoing["sqrMov1"] = "sqrMov2"
code["sqrMov2"] = ["copy", 0, 2] ; outgoing["sqrMov2"] = "add1"
code["add1"] = ["add", 3, 1] ; outgoing["add1"] = "add2"
code["add2"] = ["copy", 0, 1] ; outgoing["add2"] = "add3"
code["add3"] = ["add", 4, 2] ; outgoing["add3"] = "add4"
code["add4"] = ["copy", 0, 2] ; outgoing["add4"] = "modulus1"
# instead of the modulus, compute x^2 + y^2
code["modulus1"] = ["multiply", 1, 1] ; outgoing["modulus1"] = "modulus2"
code["modulus2"] = ["copy", 0, 6] ; outgoing["modulus2"] = "modulus3"
code["modulus3"] = ["multiply", 2, 2] ; outgoing["modulus3"] = "modulus4"
code["modulus4"] = ["add", 0, 6] ; outgoing["modulus4"] = "test"
code["test"] = ["branch", "<=", 0, 5] ; outgoing["test"] = ["1", "halt"]
code["halt"] = []

bss(code, outgoing, [0.0, 1.3, 1.3, -0.065, 0.66, 4.0, 0.0, 0.0, 0.0, 2.0], "1")

Specifically, the initial configuration uses the same dynamical system z_{n+1} = z_n^2 − 0.065 + 0.66j as in the Prisoner Sets example above, and starts from 1.3 + 1.3j. Try running it! Can you understand the code fully?
Hint: (x + jy)^2 = (x^2 − y^2) + j(2xy), and whenever |z| > 2, or |z|^2 > 4, z_0 is not in the prisoner set. ♦

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 34 / 1

Page 118: Lecture Topic: What is a Computer?

Computability over Rings

Subsequently, one can define the notion of computability over a ring.

For example, the Julia sets of z^2 + c with |c| > 4, as in the example above, are not computable over the reals.

The proof is based on a characterisation of computability in terms of recursively enumerable sets over the reals,

which are basic semialgebraic sets,

which are the union of a finite number of connected components,

whereas the Julia sets need not be.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 35 / 1

Page 124: Lecture Topic: What is a Computer?

Complexity over Rings

Similarly, one can consider complexity over rings.

The canonical complete problem for the class NP_R is the 4-Feasibility problem,

i.e., the problem of deciding whether a polynomial of degree 4 in several real variables has a real zero.
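
As an illustrative instance of 4-Feasibility (not from the slides), consider deciding whether

\[
p(x, y) = \left(x^{2} - 2\right)^{2} + \left(y - x\right)^{2}
\]

has a real zero. Both squares must vanish simultaneously, so the zeros are exactly (√2, √2) and (−√2, −√2); the answer is “yes”, but the witness is an irrational point, which a machine over the reals can hold in a single register — echoing the earlier remarks about computing √2.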

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 36 / 1

Page 127: Lecture Topic: What is a Computer?

P_R = NP_R

One can also ask whether P_R = NP_R over various rings R.

As it turns out, for the real numbers the problem is also open,

but for some other rings, e.g., the rational numbers, we already know the negative answer.

Jakub Marecek and Sean McGarraghy (UCD) Numerical Analysis and Software September 16, 2015 37 / 1
