Is the universe a supercomputer? by Roberto Siagri
You were not made to live like brutes,
but to pursue virtue and knowledge
Dante's Inferno, Canto XXVI ‐ Ulysses
A book can change your life
When I was twelve I was given a book that I still cherish and that always reminds me of how much
remains to be learned, no matter how much we study. The book is Space, Time and Gravitation by
Arthur Eddington1, the great English physicist who strongly supported Einstein's theory of general
relativity (which extends Newton's theory of universal gravitation), against an English academic
establishment which stubbornly rejected everything that came from Germany, then at war with England. Despite
the neutrality and universality of knowledge, conflicts and ideologies can divide nations even on
scientific ideas. Eddington, who was a pacifist, challenged these ideological obstacles and tried to
transcend the contingent tragedy of war, seeking in science a way to build bridges, putting
knowledge above the brutality of current events. At that age I did not know Eddington's
story; I got to know it only later, but I think it is important to remember it. What interested me at
the time was to be able to read that book, and I must confess that I could not get past the first two
chapters. Then other studies and interests took over, and the book was forgotten. I started with
this story because the theory of general relativity (and, later, of quantum mechanics) puts into
play all our notions of cosmology (from the Greek κόσμος, kosmos "universe" and ‐λογία, ‐logia
"study"), which is the study of the origin, evolution and future destination of the universe, seen as
a complex and orderly system, in contrast to chaos (from the greek χάος, khaos, which refers to
the unformed state preceding the creation of the universe). I don't know why a school teacher
gave me that book, although I think it was because at the time I already had a strong interest in
science, and in astronomy in particular. The subject was hard, even for a physics graduate, but it
was a precious gift. Two ideas made a strong impression on me: the need to make an effort to look
at problems from different and unusual angles, and the fact that some people, going beyond their
natural instinct to live and survive, show a propensity to seek knowledge and understanding of the
place where we are, the universe. Although I never finished the book, it instilled in me the
conviction that understanding the universe is important to give meaning to our existence. Besides,
in due time I convinced myself that our mind needs to classify problems by confining them in
finite logical spaces. Without borders there are no references, and the field is left open to
irrationality, to the chaos and insecurity we fear so much. The edge, the border allows us to stay in
control, because even if we know that we still have no answer, we also know that we can continue
to investigate the known world, to move ahead, to push the boundaries a little farther. That's
what science does. And that's what we relentlessly try to understand: how this universe was born,
how it evolves, and what is our place in it.
1 Arthur S. Eddington, Space, Time and Gravitation, Cambridge University Press, 1920
When some time later I began to look at the sky on a clear night and to think about the vastness of
heavens above, I felt helpless and useless, and certainly this is the way we all feel when faced with
the unknown: awed and paralyzed. It is no wonder that since the dawn of time men have always
tried to set limits, to enclose everything within a familiar strip of land, going as far as to frame the
skies into a fixed structure of concentric spheres. We have tried to make a safe place for ourselves
by reducing the world to a matryoshka of spheres where all the rules were known, by building a
protective shell and pretending that all the rest didn't exist. With time, our vision has changed and
evolved, the universe as we know it now is expanding and populated by many galaxies and clusters
of galaxies. And today yet another vision of the universe is taking hold, perhaps still somewhat
metaphysical: the idea that the universe is continuously calculating its future state and that this
calculation has been going on since the big bang, 14 billion years ago. Starting from nothing (a
blank computer screen) to the universe as we know it (a computer screen filled with windows,
icons and running programs), all could have been done by a single program based on simple
algorithms.
Program and algorithm
In the previous paragraph I have used two words which are by now commonly used, but which I
nevertheless would like to define briefly.
A computer program is a sequence of instructions written in such a way as to be read and
executed by a computer. Programs are also called software, while the word hardware refers to
electronic components. In order to function, a computer needs instructions, because all it can do
is wait for instructions and execute them. Programs belong to two main categories, either system
software or application software. The computers we commonly use can run two or more programs
simultaneously. In computer science and mathematics, an algorithm is an effective method
expressed as a finite set of instructions2 which specify a sequence of steps to be taken in order to
solve a specific problem or class of problems. Algorithms are used both for computing needs and
for data processing, and also to automate logical reasoning. Therefore, a program is the concrete
expression of an algorithm in a particular programming language, designed to solve a specific
problem. Viewed from another angle, we could say that the design phase produces an algorithm,
the implementation phase then produces a program which expresses the designed algorithm. In
this regard, an algorithm lends itself much better than a formula to describe reality. Think about
digital animation: the extremely high level of realism that can be obtained is essentially due to
algorithms and very fast computers. Gregory Chaitin is one of the most famous theorists of the
concept that everything is algorithm. He thus describes the progress of human thought: "The
intellectual legacy of the West, and in this connection let me recall Pythagoras, Plato, Galileo and
James Jeans, states that 'Everything is number; God is a mathematician'. We are now beginning to
believe something slightly different, a refinement of the original Pythagorean credo: 'Everything is
software; God is a computer programmer'. Or perhaps I should say: 'All is algorithm!'"3
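To make the distinction between algorithm and program concrete, here is a classic algorithm, Euclid's method for the greatest common divisor, expressed as a short Python program (Python is used here purely as an illustration; any programming language would do):

```python
# Euclid's algorithm: "replace the larger number by the remainder of dividing
# it by the smaller, until the remainder is zero". That abstract recipe is the
# algorithm; the function below is one concrete program expressing it.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two non-negative integers."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # prints 21
```

The same algorithm could be written in any other language; the program changes, the algorithm does not.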
2 An algorithm must have the following properties. Finiteness: the algorithm must be completed after a finite number of steps. Non‐ambiguity: each step must be clearly defined, and must have only one interpretation. Defined sequence: each step must proceed in a unique and predefined way. The first step and the last step must be clearly marked. Feasibility: it must be possible to execute each instruction. Input/Output: there must be a specified number of input values and one or more output values.
3 G. J. Chaitin, Leibniz, Information, Math and Physics, 21 Jun 2003, http://arxiv.org/abs/math/0306303
Everything can be digitalized
If we look at the world around us, we can see that it is more discrete than continuous. Counter‐
intuitively, the world can be more easily represented in digital form than in analog form, starting
from the DNA which is basically a sequence, a program for the construction of living structures on
this planet (plants, insects, fish, animals and humans). Not to speak of elementary particles which,
besides being discrete, also exclusively appear with discrete levels of the physical quantities that
characterize them, such as energy. Even music, which seemed more continuous than anything
else, has become digital. Almost all players today use the MP3 digital format, and the same goes
for photography and videos with the JPEG and MPEG digital formats, just to mention a few.
Everything around us is made in the last instance of 0 and 1. Chaitin is also very critical of the
existence of real numbers which, beyond the name, seem to have very little reality: if nature is
discrete, and therefore represented by integers, we should have at most fractions of integers or
rational numbers; it follows that the real numbers, which include the irrational numbers (π pi,
Euler's number e, the golden ratio φ, etc.), are derived from our perception and description of a
continuous world, which in fact does not exist. It is tantamount to saying that the only numbers
that truly exist are the natural numbers (0, 1, 2, ...), while all other numbers are the result of
algorithmic operations.
Figure 1: A circle is nothing more than the result of a mathematical limit operation. It is the consequence of
continuously increasing the number of sides of the inscribed or circumscribed polygon, an extrapolation of a regular
polygon with an infinite number of sides.
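The limit operation of Figure 1 can actually be carried out as an algorithm. The sketch below (a rough Python rendering of Archimedes' polygon-doubling method; the number of doublings is an arbitrary choice) starts from an inscribed hexagon and repeatedly doubles the number of sides, using nothing but arithmetic and square roots; pi emerges as the limit of the half-perimeter:

```python
import math

# Archimedes' polygon-doubling scheme: start from a hexagon inscribed in the
# unit circle (6 sides of length 1) and repeatedly double the number of sides.
# Only arithmetic and square roots are used; pi emerges as the limit of the
# half-perimeter, exactly the extrapolation pictured in Figure 1.
n, s = 6, 1.0
for _ in range(12):                          # 12 doublings -> 24576 sides
    s = math.sqrt(2 - math.sqrt(4 - s * s))  # side length after doubling
    n *= 2
print(n * s / 2)                             # close to 3.14159265...
```

The "value" of pi is thus fully captured by a few lines of procedure, even though its decimal expansion never ends.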
For example, we do not know all the digits of an irrational number like pi and can write it down
only partially, but a program can represent it in its totality. The algorithm that describes pi is very
short; therefore, despite having an infinite number of digits, the number contains very little
information. Indeed, the length of the shortest algorithm that generates a number measures the
amount of information it contains (its Kolmogorov complexity). On the contrary, a truly random
number has no algorithm shorter than itself that generates it; in a sense "it is the algorithm
itself", and it has maximum complexity. A program, which is based on an algorithm, requires a
computer to run it, and this, according to G.O. Longo, is the tipping point: this consideration in fact
promotes the computer to revolutionary philosophical concept. Probably the most striking aspect
of this notion is nothing short of epic – the discovery that there is a language (the programming
language that expresses the algorithm) which reflects reality in a much closer way than words and
numbers can do.4
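Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, practical stand-in: the compressed size of a string is an upper bound on its description length. A small Python experiment (the exact byte counts depend on the compressor and are indicative only):

```python
import os
import zlib

# Compressed size as a rough proxy for Kolmogorov complexity: a highly
# regular string shrinks to almost nothing, random bytes barely shrink at all.
regular = b"01" * 50_000            # 100,000 bytes of pure repetition
random_ = os.urandom(100_000)       # 100,000 bytes of randomness

print(len(zlib.compress(regular)))  # a few hundred bytes
print(len(zlib.compress(random_)))  # close to 100,000 bytes
```

The repetitive string has a short description ("write '01' fifty thousand times"); the random one admits no description shorter than itself.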
4 G.O. Longo, A. Vaccaro, Bit Bang. La nascita della filosofia digitale, Apogeo, 2014
Randomness does not exist
When we think about the universe and about life, we cannot avoid thinking about chance, but the
idea that everything happens by chance does not seem reasonable in terms of probability. In this
regard, the well‐known theorem of the tireless monkey (the infinite monkey theorem) states that a
monkey randomly pressing the keys of a keyboard for an indefinitely long time would almost certainly
be able to type any predefined text. Well, while this claim seems plausible, the probability
that the monkey would randomly type out the first few lines of Shakespeare's Hamlet is actually almost
zero. The text of Hamlet contains approximately 130,000 characters, and even if the whole
universe were full of monkeys, one for each atom (10^80), and even if they pressed the keys for as
long as 100 times the life of the universe (~10^12 years), the probability of reproducing Hamlet
would still be infinitesimal, practically equal to zero (1/10^183800). However, if the random text
typed by the monkey were a program, things would be very different: the probability of a monkey
randomly writing down a program that could explain the universe as we see it is certainly small,
but still meaningful. In the words of Jürgen Schmidhuber, "our fundamental inability to perceive
our universe's state does not imply its true randomness” 5. Schmidhuber, who is a specialist of
artificial intelligence and whose algorithm for the definition of beauty is well known, opposes the
idea of randomness and is also a proponent of the idea that the universe as we know it is the
result of a program. It is also true that the universe has always been described by men on the basis
of historically available knowledge, first as an organism, then like a complex oscillating
mechanism, finally, in the digital era, like a great computer. Maybe we simply describe the world
with the tools available to us in each historical period, but maybe there is also something else
behind this, namely that our instruments, being more and more refined, allow us to understand
our world and our universe better and better. In the words of Slavoj Žižek: “Technology no longer
merely imitates nature, rather, it reveals the underlying mechanism which generates it.”6
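The back-of-envelope figures in the monkey argument can be checked in a few lines. The sketch below assumes a 26-key keyboard and a generous typing speed of 10 keystrokes per second; these figures are assumptions, and only the orders of magnitude matter:

```python
import math

# Order-of-magnitude check of the Hamlet estimate, assuming a 26-key keyboard
# and ~130,000 characters: one random attempt succeeds with probability
# (1/26)**130000.
keys, chars = 26, 130_000
log10_p = -chars * math.log10(keys)        # about -183,946: cf. 1/10^183800

# Attempts available: 10^80 monkeys typing for ~10^12 years (100 times the
# age of the universe) at an assumed 10 keystrokes per second.
seconds = 1e12 * 365 * 24 * 3600
log10_attempts = 80 + math.log10(seconds * 10)
print(log10_p, log10_attempts)             # roughly -183946 and 100.5
```

Even with some 10^100 attempts available, a success probability of 10^-183946 leaves the outcome practically impossible, as the text states.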
If we look at the life and career of John Archibald Wheeler7, one of the great physicists of the last
century, we have an example of this evolution. Until 1950, there was a phase that he himself
called "Everything is particle", a time when he was looking for ways to build all basic entities, such
as neutrons and protons, from lighter, more fundamental particles such as quarks. In the second
phase, which he called "Everything is field", he began to see the world as a combination of force
fields, in which the particles were simply manifestations of electric, magnetic and gravitational
fields, and of space‐time itself. The third and most recent phase, which he called
"Everything is information", began when he focused on the idea that logic and information are the
foundations of physical theory.
Based on these considerations, we can begin to realize that a computer is not just a complex
machine, but something more, and not only because of the philosophical reasoning behind it.
Computers enable new mathematics and therefore a new way of thinking. This is important
because in the past many difficulties in accepting the idea that everything is deterministic were
due to a lack of mathematical models which, while deterministic, would also leave some room for
human unpredictability and for free will. Unlike those who think that determinism and free will are
5 Jürgen Schmidhuber, A Computer Scientist’s View of Life, the Universe and Everything, in C. Freksa (editor), Lecture Notes in Computer Science, Springer, 1337, 1997
6 Slavoj Žižek, The Plague of Fantasies, Verso, 1997
7 http://robward.org/john‐wheeler‐summarises‐his‐life‐in‐physics/
mutually exclusive, I believe like many others that the two ideas can coexist, provided we are clear
about the definition of free will, given the non‐uniqueness of the term. If free will is defined simply
by the perception of being an "actor" that every human being has while acting, then the two
concepts can coexist. That is, even if humans cannot form their desires and beliefs independently
of context, the important thing is that the interpretation of free will remains linked to the
possibility of translating those desires and beliefs into voluntary actions. With these premises, let
us now search for mathematics that can combine determinism and free will.
The advent of computers and recursive formulas
With computers we can finally see the work and results of recursive mathematics, which are
difficult to apply without computers, and therefore have never been used to describe physical
models until the 1960s. This kind of mathematics is based on very simple formulas, but in most
cases the results can be seen only after a very large number of iterations: this is why the help of
computers is essential. The two best known types of recursive formulas are fractals and cellular
automata. Let us begin with fractals, of which we have all seen pictures, perhaps not knowing
what they represented (see figure 2). These objects are among the most complex in mathematics,
even if their formula is often apparently very simple. For example, in the case of Mandelbrot type
fractals, studied by Benoît Mandelbrot8 from whom they take their name, the formula looks like
this: Zₙ₊₁ = Zₙ² + A (where n is an integer ranging from 0 to infinity, while Z and A are complex
numbers9).
In the formula, each A represents a point in the complex plane. For A to be part of the
Mandelbrot set, the sequence of Zₙ (starting from Z₀ = 0) must remain bounded. If the sequence
remains bounded, the point is colored black. If the sequence diverges (i.e. the numbers get larger
and larger), the point is assigned a color according to the speed with which the sequence diverges.
So the colored points are not part of the Mandelbrot set.
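A minimal membership test along these lines might look like this in Python (starting the iteration from Z₀ = 0 and using the usual escape radius of 2; the iteration cap of 100 is an arbitrary choice):

```python
# Membership test for the Mandelbrot set: iterate Z -> Z*Z + A from Z = 0 and
# watch whether |Z| stays bounded. The iteration count at escape is what gets
# mapped to a colour in images such as Figure 2a.

def escape_count(A: complex, max_iter: int = 100) -> int:
    """Iterations until |Z| exceeds 2, or max_iter if the orbit stays bounded."""
    z = 0j
    for i in range(max_iter):
        z = z * z + A
        if abs(z) > 2:
            return i
    return max_iter

print(escape_count(0j))      # 100: the origin is in the set (a black point)
print(escape_count(1 + 1j))  # escapes almost immediately (a coloured point)
```

Note the recursion at work: the only way to decide about the 100th iterate is to compute all 99 before it.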
Fig. 2: a) Mandelbrot fractal10 b) Three‐dimensional development of a fractal11
8 Benoît B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Company, 1982
9 A complex number Z is composed of a pair of real numbers X and Y, which can be represented as a point in a plane. The first number is called the real part, the second the imaginary part, and the pair is written (X, iY). The only "magic" of these pairs is the exchange rule between real and imaginary parts that appears in multiplication: the product of two imaginary parts gives rise to a real part, the product of two real parts still gives rise to a real part, and the product of a real part and an imaginary part gives rise to an imaginary part.
10 Phil Reed, https://www.flickr.com/photos/master‐phillip/ Mandelbrot Exploration
We can see the paradigm shift lying behind this apparent simplicity. The n index in the formula
indicates that each result is calculated from the previous one. If we want to know the 100th result,
we need to introduce in the formula the 99th result, but in order to know it we need the 98th result
and so on. To get any result, we must previously calculate all the former ones. Before the advent
of computers these functions remained unexplored because, as you can guess, making all the
calculations required a very long time. Figure 2b shows a three‐dimensional representation of a
fractal. In nature we do not find the Mandelbrot fractal set. However, the mathematical models
that originate a Mandelbrot set can be found in a number of natural systems. The figures below
are examples of connections between the periodicity of the Mandelbrot set and the periodicities
found in nature: sea coasts, plants, river deltas etc.
Figure 3: Examples of fractal structures in nature: a) sea coast12; b) broccoli13; c) trees14

From fractals to cellular automata
The formula of a fractal is a first example of a recursive formula where recursion is used to
compute the result in a given point. As we have seen, this type of formulas cannot be solved
analytically. For the solution we need to apply a formal procedure consisting of the repetition of a
specific and finite number of steps, in other words, of an algorithm. The algorithm is transformed
by a programming language into a program, which is then executed by a computer. In general we
solve any mathematical or geometrical problem, albeit instinctively and without realizing it, in an
algorithmic way, but with a limited number of steps. Many formulas, including formulas for
fractals, require so many iterations (repetitions of a step sequence) that they were virtually
impossible to calculate by hand. Thanks to computers, it has become possible to start exploring
these areas of mathematics, opening up new lines of research not only focused on mathematical
objects such as fractals, but also on logical/mathematical objects such as cellular automata.
Cellular automata are mathematical models that evolve according to a logical rule which can be
very simple. The name already reveals part of their nature: the term "automaton" indicates that
they behave (they change over time) automatically, following a rule; the term "cell" indicates that
their nature is discrete; in fact, they are represented graphically as small squares. Each cell can
11 Tai Le, https://www.flickr.com/photos/taile/ 3D fractal
12 Ken Douglas, https://www.flickr.com/photos/good_day/
13 Karen Booth, https://www.flickr.com/photos/frenchtart/
14 Paulo Valdivieso, https://www.flickr.com/photos/p_valdivieso/
take only two values, empty or full, white or black (in the digital world, 0 or 1). The evolution rule,
applied to the color of the cell under examination, is a function of the state (color) of neighboring
cells and dictates the future result (color) of the cell. To understand more clearly how this works,
let us take a one‐dimensional cellular automaton, a row of cells: as the color depends on the
neighbors and as each cell has two neighbors, the number of possible states in which three cells
can be grouped is 8 (by the rule of combinations: 2 states for cell 1 × 2 states for cell 2 ×
2 states for cell 3 = 2³ = 8). Each of these grouped states admits two possible futures, therefore
the possible evolution rules are 256, one for each combination of futures over the eight possible
states of the three cells (again by the rule of combinations, with 8 configurations: 2 futures for
configuration 1 × 2 futures for configuration 2 × ... × 2 futures for configuration 8 = 2⁸ = 256).
Each automaton is identified by the rule
number. Figure 4 shows the behavior of cellular automaton rule 90. The first image represents the
rule, the second shows the first few steps of the evolution, where regular patterns appear that
repeat over time.
Figure 4: a) Cellular automaton rule 90; b) Evolution in time
If we change the rule, and take for instance automaton rule 30 (Figure 5), we see that, after a
considerable number of repetitions, irregularities start to appear: we now witness a totally
unpredictable behavior. On the contrary, automaton 90 maintains its regularity over time.
Figure 5: a) Cellular automaton rule 30; b) Evolution in time
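The rule-number scheme described above can be sketched in a few lines of Python. The neighborhood of three cells is read as a binary number from 0 to 7, and the corresponding bit of the rule number gives the next state (the wrap-around at the edges and the row width are implementation choices):

```python
# An elementary cellular automaton: the rule number's eight binary digits
# assign a next state to each of the 2^3 = 8 three-cell neighbourhoods,
# which is why there are exactly 2^8 = 256 possible rules.

def step(cells: list[int], rule: int) -> list[int]:
    """One synchronous update of the whole row (edges wrap around)."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]
        centre = cells[i]
        right = cells[(i + 1) % len(cells)]
        neighbourhood = (left << 2) | (centre << 1) | right  # a number 0..7
        out.append((rule >> neighbourhood) & 1)              # that bit of the rule
    return out

row = [0] * 15
row[7] = 1                         # start from a single black cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row, 30)            # rule 30, the unpredictable one
```

Swapping 30 for 90 in the last line reproduces the regular, nested pattern of Figure 4 instead of the irregular one of Figure 5.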
In his book A New Kind of Science, Stephen Wolfram15 extensively deals with this type of one‐
dimensional automata. He explains how he managed to find a few that in their reproduction abide
by many laws of physics, while others are totally sterile or give rise to the same pattern
indefinitely. These mathematical models can also be found in nature: in the figure below we can
15 Stephen Wolfram, A New Kind of Science, Wolfram Media, 2002
see a type of shell which lives on the sea floor and never sees sunlight: it presents a pattern that
recalls cellular automaton rule 30.
Figure 6 : This textile cone shell (a)16 shows the pattern of cellular automaton rule 30 (b)
A well‐known cellular automaton is the two‐dimensional version invented by John Conway17,
called The Game of Life. As a whole, it is not very different from the one‐dimensional automaton,
except for the fact that the square cells evolve on a kind of chessboard. Like before, each cell can only
have two states, "alive" or "dead", represented by the colors black and white (0 and 1). At first, a
random number of cells is defined as "alive", and left to evolve over time, state after state,
according to the predefined rule for color change based on the state of the neighboring cells,
which are now 8 (they were only 2 in the one‐dimensional case). The rule for living is to have two
or three neighboring cells alive. If there are less than two, the cell dies of solitude, if there are
more than three, the cell dies of overcrowding. Another rule is that if a dead cell finds itself close to
exactly three live cells, it comes back to life. Since the invention of the game in 1970, many interesting
"creatures" living in that universe have been detected. They include models that remain forever
unchanged, models that oscillate periodically, models that glide through space while oscillating,
and even models that emit cells which depart on their own, called spaceships. These behaviors can
be used to implement models that carry information or perform logical operations. But if The Game
of Life can perform logical operations, then we can say that it is a computer in itself, and Paul
Rendell18 reached exactly this conclusion in 2000.
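One update step of the Game of Life, with the rules as stated above (birth with exactly three live neighbors, survival with two or three), can be sketched as follows; representing the grid as the set of live-cell coordinates is just one convenient choice:

```python
from collections import Counter

# One step of Conway's Game of Life: a live cell survives with 2 or 3 live
# neighbours, a dead cell is born with exactly 3 (each cell has 8 neighbours).

def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    # For every cell adjacent to a live one, count its live neighbours.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}   # three cells in a row: the "blinker"
print(life_step(blinker))            # flips to a vertical row of three
```

The "blinker" is one of the periodic creatures mentioned above: applying the step twice returns the original horizontal row.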
Figure 7: The Game of Life: nine evolution steps of pattern 1 according to the rule described in the text.
Non‐recursive formulas and irreducible complexity
16 Richard Ling, https://www.flickr.com/photos/rling/ Textile Cone
17 Martin Gardner, "Mathematical Games – The Fantastic Combinations of John Conway's New Solitaire Game 'Life'", Scientific American 223, October 1970
18 http://rendell‐attic.org/gol/tm.htm
Fractals and cellular automata are two examples of recursive formulas. Now we will turn to non‐
recursive formulas, which are much more commonly used: we all learned them in school, because
all the functions that describe physical phenomena belong to this group. Let us take for example
one of the simplest, y = a · x
Figure 8: Representation of a straight line in a plane (in this example a = 2)
which is the formula of a straight line passing through the origin in a Cartesian plane (Figure 8).
The formula of motion at constant speed takes the same form: s = v * t. It is possible to know
what distance has been covered at time t without knowing what the covered distance was at time
t‐1. In other words, in order to know one result we do not need to know the previous one. These
formulas are also called functions. As a further example, consider the trajectory of a ball hit by a
footballer. It can be easily described by a parabola of the type: y = a·x² + b·x + c.
Figure 9: a) Parabola in one quadrant of the Cartesian plane b) Ideal trajectory of a kicked ball
Therefore, motion is described by a function which can be solved analytically without using
recursive formulas; in other words, each point can be calculated without considering the evolution
in time of the previous points.
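The contrast can be made concrete in code: a "reducible" formula such as s = v · t is evaluated directly at any instant, while a stepped (recursive) computation must pass through every intermediate instant to reach the same answer. A toy Python sketch (the step size dt = 1 is an arbitrary choice):

```python
# A "reducible" law of motion, s = v * t, evaluated directly at an arbitrary
# instant, next to a stepped computation that must traverse every
# intermediate instant to reach the same answer.

def distance_direct(v: float, t: float) -> float:
    return v * t                   # one evaluation, for any t whatsoever

def distance_stepped(v: float, steps: int, dt: float = 1.0) -> float:
    s = 0.0
    for _ in range(steps):         # the result at t requires all of its past
        s += v * dt
    return s

print(distance_direct(3.0, 100.0))  # 300.0
print(distance_stepped(3.0, 100))   # 300.0, but only after 100 iterations
```

For a function, skipping to the 100th instant costs nothing; for a recursive rule, there is no shortcut past the intermediate states.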
Phenomena of this type shall be called "reducible", since they can be described (reduced),
although they are complex, by a formula which is a function. On the other hand, we shall call
"irreducible" those phenomena which cannot be traced back to formulas that can be calculated
analytically with a function, and which therefore conceal their complexity. The following examples
will explain this difference more clearly.
We all know that a clock is a complicated mechanism. But no matter how complicated the internal
mechanism is, the general movement can always be broken down into a number of elementary
movements, the sum of which will rebuild the totality. In this case, the system operates
independently from its past history: if we wished to calculate its operation, assuming that we
knew the formula that describes it, we could start from any moment in time, ignoring previous
behavior completely.
Figure 10: a) A clock is a complicated system; 19 b) A demonstration is a complex system.20
For our second example, let us observe a public demonstration. It is easy to see that this kind of
phenomenon can never be broken into subsystems describable by functions, and that it is
impossible to find an analytic function to describe it. The various components of the system, in
addition to being dependent on each other as in the case of the clock, are also dependent on their
own past. In other words, there is a strong dependency between what happens at time t and what
will happen at time t+1. These situations, for which we have no analytical formulas, are called
"complex", and their complexity is said to be "irreducible". When the complexity is reducible, we
are simply dealing with a complicated mechanism.
The formulas of cellular automata, of whatever dimension, seem to be more similar to complex
phenomena developing in a consequential way. These formulas therefore represent a real
paradigm shift: while in non‐recursive formulas future outcomes can be calculated (once the
formula is known) regardless of past outcomes, in recursive formulas future outcomes can be
predicted if, and only if, all past values have been calculated. In short, in order to know the future
we would have to trace history back to the very beginning, which, in terms of our common human
experience, should not surprise us all that much.
The universe recursively computes its future state
At first the discovery of these "strange" recursive formulas intrigued me, both for their
algorithmical solution and their proximity to computers. Besides, I noticed that they were also very
easy to program. I somewhat saw a link with some structures of the physical world, but I did not
see any connection to the evolution of the universe until I came across the book by Stephen
Wolfram mentioned earlier21. Far from saying that I now understand the question, I must
recognize that this book greatly helped me to rearrange my ideas, at least from a logical point of
view. Trying to explain how all this makes sense, I have to go back to the training I had, which gave
me a deterministic view of the world and the universe. Within this vision, there is no room for
chance and everything is causality. The problem arose from the fact that the only formulas I knew,
the formulas that were used to describe phenomena and therefore the universe, were of the non‐
recursive type, and came from the concept of a "reducible" universe that we can trace back to
19 Ben Grantham, https://www.flickr.com/photos/ijammin/
20 Steve Kaiser, https://www.flickr.com/photos/djbones/
21 See note 15
Descartes and Leibniz. The idea that the formula of the universe could be solved analytically
worried me a lot... it was hard to accept that the universe could be reduced to a formula
calculated analytically, because no matter how complex, the calculation would still be possible
(albeit with approximations), and this possibility would have put the future into the hands of
whoever had the formula, and a computer powerful enough to make the calculations. This would
obviously take away from us all, at least in principle, the possibility of self‐determination and free
will.
The first problem is therefore how to combine determinism and free will: the discovery of
recursive formulas, such as cellular automata, offers a good answer to this question. In fact, if the
formula of the universe were recursive, determinism and free will could be brought together
without fear that someone might "steal" the future, because a recursive formula would not allow
its owners to look into the future in advance and to take advantage of it. In order to do this, they
would have to own a computer powerful enough to execute the formula from its first iteration
(starting from the birth of the universe) to the iteration leading to the current state, all within a
time lapse well below 14 billion years. However, we know from logical and physical considerations
that it is impossible to build a system more efficient than the one containing it. Similarly, it is
impossible to build a computer more efficient than the one that contains it. Therefore, whoever
owned the recursive formula of the universe would only be able to satisfy an aesthetic and
scientific curiosity. This wonderful formula would enable us to discover what happened in the
early stages of the universe, and perhaps to understand the evolution of some local phenomena,
since patterns within recursive formulas tend to repeat themselves. Indeed, repetition is a
prevalent modality, which explains why we can construct scientific theories and find laws which
work perfectly well, at least within a defined dimensional scale.
Furthermore, I think that there is an even deeper reason for recursiveness in the algorithm of the
universe, and it is the fact that a description of the universe based on a non‐recursive formula
would bring about a contradiction. Let me explain: a non‐recursive formula, no matter how
complex, could still be calculated in a reasonably finite time, at least within the first orders of
magnitude. And here is the contradiction: if we knew the formula, we could calculate (predict) the
future, and if we knew the future we might try to change it by changing the course of current
events. This possibility, however, would clash with the existence of a formula which really predicts
the future. If on the contrary the formula were in fact recursive, that is, if the universe were not
reducible to a formula that could be solved analytically, then there would be no contradiction
because, as we said before, we could never be in a position to calculate the future: the computer
processing the hypothetical formula of the universe would have a time lag of 14 billion years.
These considerations could lead us to conclude that the universe may be a kind of cellular
automaton, immune to any possibility of manipulation of the future in the medium to long term
by those who live in it. The hypothesis that the universe has an evolutionary mechanism on the
model of cellular automata is supported by several scholars, including Stephen Wolfram and
before him Konrad Zuse, one of the fathers of the computer as we know it today. Zuse theorized
this idea in the 1967 essay Rechnender Raum22 (Calculating Space), where he implies that the
universe is a computer and that its evolution is in fact the execution of a program, a very simple
one, very similar to the update rule of a cellular automaton: a very elementary pattern at
first, but one that gets so complicated over time that it generates the impressive phenomena we
observe in the universe.
22 Konrad Zuse, Rechnender Raum, Elektronische Datenverarbeitung, vol. 8, 1967. In English: Calculating Space, MIT
Technical Translation AZT‐70‐164‐GEMIT, Cambridge, Mass., 1970
In this regard I would like to mention Richard Feynman, one of the
greatest physicists of all time, who wrote: “It always bothers me that, according to the laws as we
understand them today, it takes a computing machine an infinite number of logical operations to
figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region
of time. How can all that be going on in that tiny space? Why should it take an infinite amount of
logic to figure out what one tiny piece of space/time is going to do? So I have often made the
hypothesis that ultimately physics will not require a mathematical statement, that in the end the
machinery will be revealed, and the laws will turn out to be simple, like a chequer board with all its
apparent complexities”.23
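To make the cellular-automaton idea concrete, here is a minimal sketch of an elementary cellular automaton in Python. Rule 30 is my illustrative choice (neither Zuse nor Wolfram claims it is the rule of the universe); it shows how an extremely simple update rule, started from a single cell, produces an intricate pattern:

```python
# Elementary cellular automaton (Wolfram's Rule 30): each cell's next
# state depends only on itself and its two neighbours, yet the pattern
# grown from a single "on" cell looks highly complex.

RULE = 30  # the rule number encodes the 8 possible neighbourhood outcomes

def step(cells):
    """One update: look up each (left, self, right) triple in RULE's bits."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(width=31, steps=15):
    """Evolve from a single central cell and return all rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))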
Information
By running programs, a computer essentially processes information and creates new information.
In his paper Rechnender Raum, Zuse brings up the idea of information having the same nature and
value as mass and energy, a real physical quantity. Zuse made the hypothesis that this additional
quantity would probably make it possible to explain some physical laws, like the law of the
conservation of energy. This belief still exists and some physicists like Seth Lloyd24 think that
through a better understanding of the role of information we could reconcile the theory of general
relativity with quantum mechanics. Now that information is beginning to be considered a physical
quantity, we could try to define it as Bateson did25: a difference that makes a difference. Unlike
mass and energy, information cannot be located in time and space and cannot be conserved. We
could also say that information is not divided but multiplied. Besides, information is somewhat
based on a principle of non‐ulteriority: no need to resort to other concepts to explain what it is.
Paraphrasing Edward Fredkin26, one of the great supporters of the idea that the universe is a
computer and the father of digital philosophy27: of everything in the world we can ask what it is
made of, and information is the only thing for which that question needs no further answer.
Talking about information means talking about its basic unit, the bit. A bit can be represented as a
cell either empty or full, white or black, and if we assign a number to emptiness and fullness we
have 0 and 1. If we have bits and a computer that processes them with algorithms, the result is
information. There will be either a lot of information or very little, according to the changes it
produces, and these changes do not depend on how many 0s and 1s we have, but on their
degree of predictability. An infinite sequence of 0s and 1s repeated regularly (0,1,0,1,0,1,0,1...)
contains very little information because we can write a very simple rule to describe it. So the more
random the sequence, the more information it contains, up to the limit of infinite information for
an infinite random sequence.
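This link between predictability and information content can be illustrated with compression: a sequence that has a short describing rule compresses to almost nothing, while a random one barely compresses at all. A sketch using the zlib-compressed size as a crude stand-in for information content (the sequence lengths and the fixed random seed are my arbitrary choices):

```python
import random
import zlib

def compressed_size(bits):
    """Length in bytes of the zlib-compressed string: a rough,
    practical proxy for how much information the sequence carries."""
    return len(zlib.compress(bits.encode()))

# A perfectly regular sequence: 0,1,0,1,... has a one-line description.
regular = "01" * 5000

# An unpredictable sequence of the same length.
random.seed(0)
rand = "".join(random.choice("01") for _ in range(10000))

print("regular:", compressed_size(regular), "bytes")
print("random: ", compressed_size(rand), "bytes")
```

The random sequence ends up many times larger after compression, exactly because no simple rule describes it.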
23 Richard Feynman, The Character of Physical Law, Modern Library, 1994
24 Seth Lloyd, Programming the Universe, Alfred A. Knopf, 2006
25 Gregory Bateson, Steps to an Ecology of Mind, Jason Aronson Inc., 1972
26 Edward Fredkin, quoted in Richard Wright, The On‐Off Universe, The Sciences, January‐February 1985
27 http://www.digitalphilosophy.org/
Claude E. Shannon focused on measuring the quantity of information, and explained his theory in
the treatise A Mathematical Theory of Communication (1948), which is still one of the pillars of
information theory and the basis of all the theories that link energy, entropy and information.
Entropy and a new perspective for the future of the universe
From primary school on, we are taught that entropy is disorder and that the universe is evolving
towards an increase in entropy and a leveling of temperatures, and that it is heading for what is
known as “heat death”. First let us dispel the notion that entropy is disorder: it is not!28 This
confusion between disorder and entropy dates back to 1895, a time when the existence of
molecules was unknown even to the most eminent of scientists, and the details of energy levels in
atoms and molecules were not understood. Those who proposed and developed the second law of
thermodynamics had no better expression to describe what they believed happened in
substances. Only later, at the beginning of the twentieth century, did the existence of quantized
energy levels begin to be known and understood, and the terms order/disorder became
increasingly obsolete. The second law of thermodynamics is a powerful tool that helps us
understand why the world works the way it does, why hot pots cool down, why we are warm even
in the cold, why gas makes an engine run. This law tells us that in our material world energy, of
whatever kind, dissolves or disperses if not hampered in doing so. Entropy is the quantitative
measure of this spontaneous process: it tells us how much energy has passed from a system in the
restricted or concentrated state to a system in a more widely scattered state (at the temperature
of the process). From 1860 to date, in physics and chemistry entropy has been applied only to
situations involving an energy flow that can be measured in the form of heat. Entropy is not
disorder, nor a measure of chaos, nor a driving force. The spread of energy, or its dispersion in
many micro‐states, is the driving force of chemistry. Entropy is the measure or index of this
dispersion. In thermodynamics, the entropy of a substance increases when it is heated because
more heat means more micro‐states within the substance. On the contrary, when gases or liquids
are left to expand or mixed in a larger volume, the increase in entropy is due to a greater
dispersion of their unchanged original thermal energy. From a molecular point of view, all
increases in entropy involve the dispersion of energy in a larger number, or in a more easily
accessible set, of micro‐states.
For instance, let us take a can of gasoline: we know that it is highly flammable, that is to say
unstable, because it does not like to remain in a state of such high concentration of energy. This is
why a tiny spark is enough to make it explode. The can of gasoline can be seen as a system that contains all
the gasoline molecules. This system can be described by a limited amount of information, since
the molecules can do nothing apart from staying inside the can, and since their speed depends
only on temperature (according to Boltzmann's distribution). In the can there are billions and
billions of molecules, all easily describable, if we accept a small error. In the presence of a spark
the can explodes, and the final result is that the system now has much more distributed energy
and more information. Events that occur spontaneously tend to equally distribute energy and to
increase information: after the explosion, we need much more data to describe all the molecules.
The same would happen in any other similar situation, think for instance of an air‐inflated tire
28 Frank L. Lambert, Professor Emeritus (Chemistry), http://entropysite.oxy.edu/
which is suddenly punctured.
Inspired by Shannon's29 entropy, we are naturally drawn to associate entropy with the amount of
information. The conclusion is that the universe is moving toward states with greater and greater
information content. What seems to us a disorder is simply information not yet understood.
Is the universe an infomorph?
The perspective of entropy as information changes the way we look at the universe. We could
assume that the universe, rather than being anthropomorphic, is in fact an infomorph. We might
make the hypothesis that the universe is as it is, not so much in order for us to observe it, but
because it tends to the emergence of intelligent structures which, among other things, are able to
observe it. If the universe is evolving towards greater and greater information levels, if it is
becoming smarter and smarter, and if our presence is a proof of this, then we should find in the
universal evolution some structures that can handle increasing levels of power density. In fact, as
we shall see in the next paragraph, it is this very ability which increases the amount of
manageable information, because there is a direct link between free energy and entropy (linked to
the system's micro‐states). In this regard, the studies of Eric Chaisson30 show that the universe is
evolving toward structures able to handle higher and higher power densities. The following graph
illustrates this tendency through the ages that mark the history of the universe.
Figure 11: Energy density flow of the structures that have gradually developed in our universe
From these studies we infer that the more complex and organized these structures are, the higher
the energy flow they can handle. It is therefore not surprising, although it does seem
counterintuitive, that a blade of grass manages a power density (10^3 erg/sec/g) about 100 times
greater than that of a star like the sun (<10 erg/sec/g). Today the structure that can handle the
highest energy flow (10^11 erg/sec/g, a hundred million times higher than that of a blade of grass) is a processor for
notebooks or PCs. While it may seem strange, the energy flow managed by a notebook is
comparable to that of an atomic bomb if normalized in time: a one‐megaton nuclear bomb will
release 10^17 erg/s/g in one millionth of a second (10^−6 s), which means that, as Kevin Kelly says, “if
you 'amortize' a nuclear blast so that it spends its energy over a full second of time instead of
microseconds, its power density would be reduced to only 10^11 erg/s/g which is about as intense
as a laptop computer chip. Energy wise, a Pentium chip is just a slow nuclear explosion”.31
29 Sriram Vajapeyam, Understanding Shannon's Entropy Metric for Information, http://arxiv.org/abs/1405.2061
30 https://www.cfa.harvard.edu/~ejchaisson/reprints/EnergyRateDensity_II_galley_2011.pdf
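Kelly's "amortization" is plain arithmetic, which a few lines make explicit (the figures are the order-of-magnitude values quoted in the text):

```python
import math

# Kevin Kelly's "amortized" nuclear blast, as plain arithmetic.
power_density_blast = 1e17   # erg/s/g during the blast itself
blast_duration = 1e-6        # seconds (about one microsecond)

# Energy per gram actually released during the blast:
energy_per_gram = power_density_blast * blast_duration   # ~1e11 erg/g

# Spread that same energy over one full second instead of a microsecond:
amortized_power_density = energy_per_gram / 1.0          # ~1e11 erg/s/g

# The order of magnitude matches the ~1e11 erg/s/g of a laptop chip:
print(f"~10^{round(math.log10(amortized_power_density))} erg/s/g")
```
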
How many calculations has the universe made, and how much information can it store?
Assuming that the universe is a computer, in Computational Capacity of the Universe Seth Lloyd32
asks the following question: how many operations has the universe made from its birth up to
now, relentlessly calculating its future states and therefore itself? To answer this question, Lloyd
has found a link between energy and logical operations, an equivalence between the energy of a
system and the number of operations that a system can perform per second. The formula is
surprisingly simple:
number_operations / second = 2E / πħ (where E is the system's energy and ħ is the reduced Planck constant)
Using Einstein's mass‐energy relation E = mc^2 and substituting it in the previous formula, we get
the following:
Total_Number_Operations_Universe = 2mc^2t / πħ, where m is the mass of the universe and t is
the elapsed time from the big bang to the present day (14 billion years).
The mass of the universe can be calculated from its density and volume, that is: m = ρV. The
density ρ of the universe can be estimated at about one hydrogen atom per cubic meter (the
universe happens to be a very empty place!), while the volume V can be calculated considering the
universe as a sphere expanding at the speed of light from the moment of the big bang.
Substituting these values33 in the formula we can conclude that the universe has performed
around 10^120 operations from the big bang to the present day.
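Plugging the order-of-magnitude values of footnote 33 into Lloyd's formula takes only a few lines (a sketch under the text's assumptions, not a precise cosmological calculation):

```python
import math

# Order-of-magnitude inputs from the text (footnote 33):
rho  = 1e-27        # density of the universe, kg/m^3 (~1 hydrogen atom per m^3)
c    = 3e8          # speed of light, m/s
t    = 4e17         # age of the universe, s (~14 billion years)
hbar = 1e-34        # reduced Planck constant, J*s (order of magnitude)

# Volume of a sphere expanding at the speed of light since the big bang:
V = (4 / 3) * math.pi * (c * t) ** 3
m = rho * V                      # mass of the universe, m = rho * V
E = m * c ** 2                   # Einstein's mass-energy relation

ops_per_second = 2 * E / (math.pi * hbar)   # Lloyd's bound: 2E / (pi * hbar)
total_ops = ops_per_second * t

print(f"~10^{round(math.log10(ops_per_second))} operations per second")
print(f"~10^{round(math.log10(total_ops))} operations since the big bang")
```

The rough inputs reproduce both figures quoted in the text: about 10^103 operations per second and about 10^120 operations in total.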
A computer performs its operations in the memory, so the second question that arises is how
many memory cells the universe can count on. To calculate this figure, Seth Lloyd uses the
relationship between entropy and maximum amount of information. The first observation he
makes is that information is recorded by physical systems, and all physical systems (that is to say,
all matter, if we adopt the point of view of quantum mechanics) can record information. The
amount of information, measured in bits, which can be recorded by any physical system is linked
to the number of states/configurations that a particle can assume, and is therefore also related to
entropy, since entropy depends on the number of possible configurations of the system at a given
temperature34. For the calculation of entropy, Lloyd makes the assumption (routinely made by
cosmologists) that all the matter in the universe is converted into radiation. With this data, the
formula gives as a result a total capacity of 10^90 bits on which information can be stored (for the
calculation of the maximum entropy of the universe, please refer to Lloyd's article mentioned
above).
In conclusion, the computing power of the universe should be around 10^103 operations per second,
and its total storage capacity around 10^90 bits. To get an idea of the enormity of these numbers,
31 http://kk.org/thetechnium/2006/02/from‐slumber‐to/
32 Seth Lloyd, Computational Capacity of the Universe, Phys. Rev. Lett. 88:237901, 2002
33 ρ ~ 10^−27 kg/m^3; c ~ 3×10^8 m/sec; t ~ 4×10^17 sec; ħ ~ 10^−34 kg m^2/sec
34 The amount of information, measured in bits, that can be recorded by any physical system is equal to the logarithm in
base 2 of the number of distinct quantum states available in the system. We know that the number of accessible states W of a
physical system is related to thermodynamic entropy by the formula S = kB·lnW, where S is the maximum entropy of the system
and kB = 1.38×10^−23 joule/K is Boltzmann's constant, from which, writing lnW = log2W·ln2 and setting I = log2W, we obtain
I = S / (kB·ln2).
just think that the power of all computers (including smartphones) now in use on the planet can
be rounded up to 10^19 operations per second, and that the total memory of all computers is
estimated at 10^20 bits. The differences are abysmal.
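The entropy-to-bits conversion of footnote 34, I = S / (kB·ln2), can be sketched in a few lines (the example entropy of 1 J/K is my illustrative value):

```python
import math

K_B = 1.380649e-23   # Boltzmann's constant, J/K

def entropy_to_bits(S):
    """Maximum information, in bits, storable by a system of
    thermodynamic entropy S: I = S / (k_B * ln 2), as in footnote 34."""
    return S / (K_B * math.log(2))

# Example: one joule per kelvin of entropy corresponds to ~10^23 bits.
print(f"~10^{round(math.log10(entropy_to_bits(1.0)))} bits")
```
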
Entanglement
In my opinion, another strange phenomenon of quantum mechanics that could be easily explained
if the universe were a computer is entanglement. It is a phenomenon that occurs in the world of
the very small and obeys the physical laws of quantum mechanics, different from those of our
daily experience. The word “entanglement” means a tie, or an involvement. What happens is that
when two particles interact and then separate again, they remain forever linked by a mysterious
connection. Whatever happens to one of the two particles, its twin, no matter how far away in the
universe, will react instantly. The entangled state is a superposition of quantum states in a system
consisting of two particles. Let us assume that particle 1 can be in one of the two states A or C,
while particle 2 can be in one of the two states B or D. When the system is in the product state AB,
we know that particle 1 is in state A and that particle 2 is in state B. Similarly, when the system is
in the product state CD, we know that particle 1 is in state C and that particle 2 is in state D. The
entangled state occurs when the system formed by the two particles 1 and 2 is in a particular
state, that we can call AB+CD. This strange condition is the superposition of all possible states of
the two particles in the same instant, and is precisely defined as entangled state. In other words,
when particle 1 and particle 2 are entangled, there is no way to characterize one of them without
referring to the other. Erwin Schrödinger was the first to realize that particles produced in a
process that bound them to each other would remain forever related, and in 1935 he gave this
phenomenon the name of entanglement. Entanglement is at odds with all the concepts of reality
that we entertain based on our everyday sensory experience. For example, it destroys our concept
of spatial distance because, even separated by light years, the two particles behave as a whole.
Moreover, since the entangled particles always interact simultaneously, we can also say that they
are indifferent to time. Many physicists have found it difficult to accept this theory, and among
them Einstein, who refused to believe that nature worked in a probabilistic way: "God doesn't play
dice with the world". To support this conviction, in 1935 Einstein collaborated with Podolsky and
Rosen to write an article that would become known as the EPR article (from the initials of the three
physicists' names), and which is based not on an experiment but on a logical argument. Einstein
and his collaborators argue that nature is based on two principles, the principle of reality and the
principle of locality. The principle of reality states that if, without measuring a
physical system, one can predict with certainty the value of one of its quantities, then there exists
an element of physical reality associated with that quantity. The principle of locality states
that what happens in one location depends on elements of reality that are at that location, and
cannot be influenced by what is happening in a distant place, unless some signal is sent. Now, the
properties of entangled systems tell us that it is possible to obtain information on particle B by
performing measurements on particle A, but this violates both the principle of reality and the
principle of locality:
‐ It violates the principle of reality because by measuring A, we immediately know the value of B,
without having measured B. Now, according to the principle of reality, if we can predict with
certainty the value of a quantity without measuring it, that quantity is an element of physical
reality, and therefore the value of B must have existed even before the measurement. But according
to the principle of state superposition, this is not the case.
‐ It violates the principle of locality, because the measurement of A influences what happens to B
(which is in a distant location) without any signal being sent, which the principle of locality
forbids.
The three scientists solved the paradox by stating that quantum theory is incomplete, but we
could also solve it by accepting the idea that the principles of reality and locality cannot be applied
to the quantum world. If the universe calculated its future state at the speed of light, then the
speed of light would be the maximum speed allowed in our universe, in agreement both with the
theory of relativity and with quantum mechanics. Besides, if future states depend on previous
states, we can see how the two principles of reality and locality can no longer be considered valid
unconditionally, and may include some exceptions.
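The difference between a product state like AB and the entangled state AB+CD can be checked numerically: writing the two-particle amplitudes as a 2×2 matrix, the state is a product state exactly when the determinant of that matrix vanishes. A minimal sketch (the state names and the 1e-12 tolerance are my choices):

```python
import math

def is_product_state(c):
    """c is a 2x2 matrix of amplitudes c[i][j] for the two-particle state
    sum_ij c[i][j] |i>|j>. A two-qubit pure state is a product state
    exactly when this matrix has rank 1, i.e. zero determinant."""
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    return abs(det) < 1e-12

s = 1 / math.sqrt(2)

product_AB = [[1, 0], [0, 0]]   # particle 1 in state A, particle 2 in B
entangled  = [[s, 0], [0, s]]   # the state "AB + CD" of the text, normalized

print(is_product_state(product_AB))  # True: the particles are separable
print(is_product_state(entangled))   # False: neither particle can be
                                     # described without the other
```

The nonzero determinant of the AB+CD matrix is the algebraic fingerprint of the fact that neither particle has a state of its own.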
On time and matter
The EPR paradox suggests some considerations on the meaning of time. In Galilean physics, time is
an absolute entity independent from space. In general relativity, time is considered as another
dimension, intimately connected with space, so much so that one cannot exist without the other.
In quantum mechanics, time again becomes detached, almost as in Galilean physics; or rather, it
becomes a time of evolution which, unlike in Einstein's relativity, does not interact with
matter. Basically, I think that both theories hold some truth. With general relativity, one senses
that time and space cannot be independent, and so it is: if the universe is a computer every new
spatial state is tied to a time cycle. In quantum mechanics, time has above all the function of a
counter, very much like the clock35 of a computer. Besides, in quantum mechanics physical
quantities are discrete, not continuous like in classical mechanics or in relativity. This discretization
does not only apply, for example, to energy or to the angular momentum of particles. I think, like
many others, that it also applies to space and time: if the universe is a computer, space and time
are not continuous but discrete.
One last observation concerns the quantity of matter in the universe. Ordinary matter amounts to
only 4.9%, which is a very small fraction of the amount required for the laws of physics to explain
the universe as we know it. To explain the universe we must introduce a 26.8% of dark matter
(which has an attracting function between masses and on the scale of a galaxy) and a 68.3% of
dark energy (which has a repelling function on a large cosmic scale). Now, if the universe were a
computer, it would not be necessary to introduce either dark matter or dark energy: its dynamics
would be explained by the evolution algorithm and not by mass and energy.
Conclusion
The reductionist vision considered the universe a great machine, a kind of huge clock with lots of
gears, and believed in the possibility of finding a formula: the formula of the universe. But it was
distressing to think that the owners of this formula would be able to know future events in detail.
If on the one hand a formula would satisfy the need of humans to feel secure, no longer at the
mercy of chance, the elimination of fortuity would also lead to the loss of all possibilities, of the
idea that we can build a future for ourselves. This is the question we asked ourselves: how can we
combine determinism and free will?
35 A computer clock is a kind of step counter which sets the performance rate of operations.
We have seen that recursive mathematics and cellular automata could be suitable to explain the
origin and evolution of the universe, and we have verified that this type of mathematics is not very
useful to make quick calculations and to predict the future (we do not have either the right
computer or enough time to calculate the formula since the beginning of time). For everyday life
we have to settle for analytical formulas that bring computing time within acceptable levels.
So, if reality is based on a recursive algorithm, then non‐recursive formulas are merely an
approximation of reality, and their validity is necessarily limited by the change of scale of time,
space, mass or speed. Non‐recursive formulas carry along an inaccuracy that we tend to call
chance. It is like trying to approximate a curve with its tangent: it works, but only as long as we
stay in the neighborhood of the point of tangency. As soon as we move farther away, the
approximation fades.
Figure 12: The curve C and its tangent line TA at point A
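The tangent analogy can be made quantitative. In this sketch (the curve f(x) = x² and the point of tangency x = 1 are my illustrative choices) the approximation error grows as the square of the distance from the point of tangency:

```python
def f(x):
    return x * x          # an example curve

def tangent_at_1(x):
    # Tangent line to f at x = 1: f(1) + f'(1) * (x - 1) = 1 + 2*(x - 1)
    return 1 + 2 * (x - 1)

# The error between the curve and its tangent is dx^2 for this curve:
# tiny near the point of tangency, rapidly worse farther away.
for dx in (0.01, 0.1, 1.0):
    x = 1 + dx
    err = abs(f(x) - tangent_at_1(x))
    print(f"distance {dx}: approximation error {err:.4f}")
```

Moving ten times farther from the point of tangency makes the error a hundred times worse, which is exactly the "blurring" of analytical approximations discussed above.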
We must accept that our vision of the world is blurred, and understand that fortuity is not a fact in
itself but the result of incomplete information that is inherent in the mathematics we are using to
describe physical phenomena, especially when they become complex and are no longer reducible
to a sum of elementary phenomena. The error comes from our interpretative model of the world
which, although at the moment it is the best we have, is not the real way in which the universe
operates.
It is plausible to think that the universe is evolving toward states of greater and greater
information. If so, the universe is not, as is often thought, headed for “heat death”. It is true that
heat death would be the logical consequence of thermal equilibrium, according to the principles of
thermodynamics, but let us not forget that at the atomic scale and at absolute zero there are
phenomena, such as superfluidity and superconductivity, that thermodynamics cannot explain.
Quantum mechanics teaches us that matter is forever calculating. If this is true, it naturally follows
that the universe can be seen as an immense computer, which uses quantum mechanics to
perform its logical operations. Among other things, the universe also provides an example of how
a computer can operate without an external energy supply, and this gives us great hope for our
chances of future progress. Since industrial and technological development has only just started to
understand the immense possibilities of matter at the atomic and molecular level, I think we can
say that we are at the beginning of an amazing century in terms of technological discoveries that
will improve our lives and the world around us.