8/22/2019 Computation in Physical Systems
Computation in Physical Systems
Piccinini, Gualtiero
First published Wed Jul 21, 2010
In our ordinary discourse, we distinguish between physical systems that perform
computations, such as computers and calculators, and physical systems that don't, such as
rocks. Among computing devices, we distinguish between more and less powerful ones. These
distinctions affect our behavior: if a device is computationally more powerful than another, we
pay more money for it. What grounds these distinctions? What is the principled difference, if
there is one, between a rock and a calculator, or between a calculator and a computer?
Answering these questions is more difficult than it may seem.
In addition to our ordinary discourse, computation is central to many sciences. Computer
scientists design, build, and program computers. But again, what counts as a computer? If a
salesperson sold you an ordinary rock as a computer, you should probably get your money
back. Again, what does the rock lack that a genuine computer has?
How powerful a computer can you build? Can you build a machine that computes anything you
wish? Although it is often said that modern computers can compute anything (i.e., any
function of natural numbers, or equivalently, any function of strings of letters from a finite
alphabet), this is not correct. Ordinary computers can compute only a tiny subset of all
functions. Is it physically possible to do better? Which functions are physically computable?
These questions are bound up with the foundations of physics.
Computation is also central to psychology and neuroscience (and perhaps other areas of
biology). According to the computational theory of cognition, cognition is a kind of
computation: the behavior of cognitive systems is causally explained by the computations they
perform. In order to test a computational theory of something, we need to know what counts
as a computation in a physical system. Once again, the nature of computation lies at the
foundation of empirical science.
1. Abstract Computation and Concrete Computation
2. Accounts of Concrete Computation
o 2.1 The Simple Mapping Account
o 2.2 Causal, Counterfactual, and Dispositional Accounts
o 2.3 The Semantic Account
o 2.4 The Syntactic Account
o 2.5 The Mechanistic Account
3. Is Every Physical System Computational?
o 3.1 Varieties of Pancomputationalism
o 3.2 Unlimited Pancomputationalism
o 3.3 Limited Pancomputationalism
o 3.4 The Universe as a Computing System
4. Physical Computability
o 4.1 The Physical Church-Turing Thesis: Bold
o 4.2 The Physical Church-Turing Thesis: Modest
o 4.3 Hypercomputation
Bibliography
Other Internet Resources
Related Entries
1. Abstract Computation and Concrete Computation
Computation may be studied mathematically by formally defining computational objects, such
as algorithms and Turing machines, and proving theorems about their properties. The
mathematical theory of computation is a well-established branch of mathematics. It deals with
computation in the abstract, without worrying much about physical implementation.
By contrast, most uses of computation in science and ordinary practice deal with concrete
computation: computation in concrete physical systems such as computers and brains.
Concrete computation is closely related to abstract computation: we speak of physical systems
as running an algorithm or as implementing a Turing machine, for example. But the
relationship between concrete computation and abstract computation is not part of the
mathematical theory of computation per se and requires further investigation. Questions
about concrete computation are the main subject of this entry. Nevertheless, it is important to
bear in mind some basic mathematical results.
The most important notion of computation is that of digital computation, which Alan Turing, Kurt Gödel, Alonzo Church, Emil Post, and Stephen Kleene formalized in the 1930s. Their work investigated the foundations of mathematics. One crucial question was whether first-order logic is decidable, that is, whether there is an algorithm that determines whether any given first-order logical formula is a theorem.
Turing (1936-7) and Church (1936) proved that the answer is negative: there is no such algorithm. To show this, they offered precise characterizations of the informal notion of algorithmically computable function. Turing did so in terms of so-called Turing machines: devices that manipulate discrete symbols written on a tape in accordance with finitely many instructions. Other logicians did the same thing: they formalized the notion of algorithmically computable function in terms of other notions, such as λ-definable functions and general recursive functions.
To their surprise, all such notions turned out to be extensionally equivalent, that is, any function computable within any of these formalisms is computable within any of the others. They took this as evidence that their quest for a precise definition of algorithm, or algorithmically computable function, had been successful. The resulting view that Turing
machines and other equivalent formalisms capture the informal notion of algorithm is now
known as the Church-Turing thesis (more on this in Section 4). The study of computable
functions, made possible by the work of Turing et al., is part of the mathematical theory of
computation.
The theoretical significance of Turing et al.'s notion of computation can hardly be overstated.
As Gödel pointed out (in a lecture following one by Tarski):
Tarski has stressed in his lecture (and I think justly) the great importance of the concept of
general recursiveness (or Turing's computability). It seems to me that this importance is largely
due to the fact that with this concept one has for the first time succeeded in giving an absolute
definition of an interesting epistemological notion, i.e., one not depending on the formalism
chosen. (Gödel 1946, 84)
Turing also showed that there are universal Turing machines: machines that can compute
any function computable by any other Turing machine. Universal machines do this by
executing instructions that encode the behavior of the machine they simulate. Assuming the
Church-Turing thesis, universal Turing machines can compute any function computable by
algorithm. This result is significant for computer science: you don't need to build different
computers for different functions; one universal computer will suffice to compute any
computable function. Modern digital computers approximate universal machines in Turing's
sense: digital computers can compute any function computable by algorithm for as long as
they have time and memory. (Strictly speaking, a universal machine has an unbounded
memory, whereas digital computer memories can be extended but not indefinitely, so they are
not unbounded.)
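The core notions here, a machine defined by a finite table of instructions and a universal machine that executes any such table supplied as data, can be made concrete in a short sketch. The simulator and the unary-successor machine below are illustrative, not drawn from the entry:

```python
# Minimal Turing machine simulator. A machine is just a transition table:
# (state, scanned symbol) -> (symbol to write, head move, next state).
# A universal machine is, in effect, this interpreter: it takes any such
# table as data and executes it.

def run(table, tape, state, blank="_", steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: unbounded in both directions
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: unary successor -- append one stroke to a block of 1s.
successor = {
    ("scan", "1"): ("1", "R", "scan"),  # sweep right over the strokes
    ("scan", "_"): ("1", "R", "halt"),  # write one more stroke and halt
}

print(run(successor, "111", "scan"))  # prints 1111
```

A universal machine proper would encode the table on its own tape; the interpreter above makes the same point at the program level: one fixed mechanism suffices to run any table.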
The above result should not be confused with the common claim that computers can compute anything. This claim is false: another important result of computability theory is that most functions are not computable by Turing machines (and hence, by digital computers).
Turing machines compute functions defined over denumerable domains, such as strings of
letters from a finite alphabet. There are uncountably many such functions. But there are only
countably many Turing machines; you can enumerate Turing machines by enumerating all lists
of Turing machine instructions. Since an uncountable infinity is much larger than a countable
one, it follows that Turing machines (and hence digital computers) can compute only a tiny
portion of all functions (over denumerable domains, such as natural numbers or strings of
letters).
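The counting argument can be made vivid by diagonalization: since Turing machines can be enumerated, one can define a function that differs from the n-th machine's function at input n, and so lies outside the enumeration. The sketch below uses a few Python functions as stand-ins for an initial segment of such an enumeration (the particular functions are arbitrary illustrations):

```python
# Diagonalization sketch: given any enumeration f_0, f_1, f_2, ... of the
# functions from naturals to {0, 1} computed by Turing machines, the
# diagonal function d(n) = 1 - f_n(n) differs from every f_n at input n,
# so it is computed by no machine in the enumeration. Here a small list
# of Python functions stands in for the enumeration's initial segment.

machines = [
    lambda n: 0,        # f_0: constantly 0
    lambda n: n % 2,    # f_1: parity of n
    lambda n: 1,        # f_2: constantly 1
]

def diagonal(n):
    return 1 - machines[n](n)   # flip the n-th machine's value at input n

# diagonal differs from each f_n exactly where it must:
for n, f in enumerate(machines):
    assert diagonal(n) != f(n)

print([diagonal(n) for n in range(3)])  # prints [1, 0, 0]
```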
Turing machines and most modern computers are known as (classical) digital computers, that
is, computers that manipulate strings of discrete, unambiguously distinguishable states. Digital
computers are sometimes contrasted with analog computers, that is, machines that
manipulate continuous variables. Continuous variables are variables that can change their
value continuously over time while taking any value within a certain interval. Analog
computers are used primarily to solve certain systems of differential equations (Pour-El 1974,
Rubel 1993).
Classical digital computers may also be contrasted with quantum computers. Quantum computers manipulate quantum states called qubits. Unlike the computational states of digital computers, qubits are not unambiguously distinguishable from one another. This entry will
focus primarily on classical digital computation. For more on quantum computation, see the
entry on quantum computing.
The same objects studied in the mathematical theory of computation (Turing machines, algorithms, and so on) are typically said to be implemented by concrete physical systems.
This poses a problem: how can a concrete, physical system perform a computation when
computation is defined by an abstract mathematical formalism? This may be called the
problem of computational implementation.
The problem of computational implementation may be formulated in a couple of different
ways. Some people interpret the formalisms of computability theory as defining
abstract objects. According to this interpretation, Turing machines, algorithms, and the like are
abstract objects. But how can a concrete physical system implement an abstract object? Other
people treat the formalisms of computability theory simply as abstract
computational descriptions. But how can a concrete physical system satisfy an abstract
computational description? Regardless of how the problem of computational implementation
is formulated, solving it requires an account of concrete computation: an account of what it
takes for a physical system to perform a given computation.
A closely related problem is that of distinguishing between physical systems such as digital
computers, which appear to compute, and physical systems such as rocks, which appear not to
compute. Unlike computers, ordinary rocks are not sold in computer stores and are usually not
considered computers. Why? What do computers have that rocks lack, such that computers
compute and rocks don't? (If indeed they don't?) In other words, what does it take for a
computation to be implemented in a concrete physical system? Different answers to these
questions give rise to different accounts of concrete computation.
Questions on the nature of concrete computation should not be confused with questions
about computational modeling. The dynamical evolution of many physical systems may be
described by computational models. Computational models describe the dynamics of a system; they are written into, and run by, a computer. The behavior of rocks, as well as rivers, ecosystems, and planetary systems, among many others, may well be modeled computationally. From this, it doesn't follow that the modeled systems are computing devices, that is, that they themselves perform computations. Prima facie, only relatively few and quite special systems compute. Explaining what makes them special, or explaining away our feeling that they are special, is the job of an account of concrete computation.
2. Accounts of Concrete Computation
2.1 The Simple Mapping Account
One of the earliest and most influential accounts of computation is due to Hilary Putnam. To a
first approximation, the account says that anything that is accurately described by a
computational description C is a computing system implementing C.
More precisely, Putnam sketches his earliest account in terms of Turing machines only,
appealing to the machine tables that are a standard way of defining specific Turing
machines. A machine table consists of one column for each of the (finitely many) internal
states of the Turing machine and one row for each of the machine's symbol types. Each entry
in the machine table specifies what the machine does given the pertinent symbol and internal
state. Here is how Putnam explains what it takes for a physical system to be a Turing machine:
A machine table describes a machine if the machine has internal states corresponding to the columns of the table, and if it obeys the instruction in the table in the following sense: when it is scanning a square on which a symbol s1 appears and it is in, say, state B, that it carries out the instruction in the appropriate row and column of the table (in this case, column B and row s1). Any machine that is described by a machine table of the sort just exemplified is a Turing machine. (Putnam 1960/1975a, 365; cf. also Putnam 1967/1975a, 433-4)
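Read in modern terms, a machine table is a finite lookup from (symbol, state) pairs to instructions. The toy table below (an illustration, not Putnam's own example) makes the row/column structure explicit:

```python
# A machine table in Putnam's sense: one column per internal state, one row
# per symbol type. Each entry (symbol, state) -> (write, move, next state).
# Toy machine with a single state A: flip every bit, halt at the first blank.

table = {
    # row symbol, column A       (write, move, next)
    ("0", "A"): ("1", "R", "A"),
    ("1", "A"): ("0", "R", "A"),
    ("_", "A"): ("_", "R", "HALT"),
}

def step(tape, head, state):
    """Carry out the instruction in the row of the scanned symbol
    and the column of the current state."""
    write, move, nxt = table[(tape[head], state)]
    tape[head] = write
    return head + (1 if move == "R" else -1), nxt

tape, head, state = list("0110_"), 0, "A"
while state != "HALT":
    head, state = step(tape, head, state)
print("".join(tape))  # prints 1001_
```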
This account relies on several unexplained notions, such as square (of tape), symbol, scanning, and carrying out an instruction. Furthermore, the account is specified in terms of Turing machine tables, but there are other kinds of computational description. A general account of concrete computation should cover other computational descriptions besides Turing machine tables. Perhaps for these reasons, Putnam, soon followed by many others, abandoned reference to squares, symbols, etc.; he substituted them with an appeal to a physical description of the system. The result of that substitution is what Godfrey-Smith (2009) dubs the simple mapping account of computation.
According to the simple mapping account, a physical system S performs computation C just in case (i) there is a mapping from the states ascribed to S by a physical description to the states defined by computational description C, such that (ii) the state transitions between the physical states mirror the state transitions between the computational states. Clause (ii) requires that for any computational state transition of the form s1 → s2 (specified by the computational description C), if the system is in the physical state that maps onto s1, it then goes into the physical state that maps onto s2.
One difficulty with the formulation above is that ordinary physical descriptions, such as
systems of differential equations, generally ascribe uncountably many states to physical
systems, whereas ordinary computational descriptions, such as Turing machine tables, ascribe
at most countably many states. Thus, there are not enough computational states for the
physical states to map onto. One solution to this problem is to reverse the direction of the
mapping, requiring a mapping of the computational states onto (a subset of) the physical
states. Another, more common solution to this problem, often left implicit, is to select either a subset of the physical states or equivalence classes of the physical states and map those onto the computational states. When this is done, clause (i) is replaced by the following: (i) there is a mapping from a subset of (or equivalence classes of) the states ascribed to S by a physical description to the states defined by computational description C.
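A minimal way to make clauses (i) and (ii) precise, with (ii) read as a bare material conditional over an observed run: clause (i) becomes a labelling function from physical states (or equivalence classes of them) to computational states, and clause (ii) a check that every labelled transition matches the computational table. The function names and the voltage trace below are illustrative:

```python
# Simple mapping account, sketched. label implements clause (i): it sends
# physical states (here, equivalence classes of voltages) to computational
# states. implements() checks clause (ii) as a material conditional: every
# labelled transition in the observed trace matches the computational table.

def implements(trace, label, transitions):
    comp = [label(p) for p in trace]
    return all(transitions.get(a) == b for a, b in zip(comp, comp[1:]))

# Computational description: a NOT gate wired to its own input,
# which alternates 0, 1, 0, 1, ...
not_gate = {"0": "1", "1": "0"}

# Physical description: sampled voltages of such an inverter loop.
volts = [0.1, 4.9, 0.2, 5.0]
label = lambda v: "0" if v < 2.5 else "1"   # low voltages -> "0", high -> "1"

print(implements(volts, label, not_gate))   # prints True
```

Note how weak the requirement is: nothing constrains the choice of label, which is exactly what the trivialization arguments discussed below exploit.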
The simple mapping account turns out to be very liberal: it attributes many computations to
many systems. In the absence of restrictions on which mappings are acceptable, such
mappings are relatively easy to come by. As a consequence, some have argued that every
physical system implements every computation (Putnam 1988, Searle 1992). This thesis, which
trivializes the claim that something is a computing system, will be discussed in Section 3.1.
Meanwhile, the desire to avoid this trivialization result is one motivation behind other accounts of concrete computation.
2.2 Causal, Counterfactual, and Dispositional Accounts
One way to construct accounts of computation that are more restrictive than the simple
mapping account is to impose a constraint on acceptable mappings. Specifically, clause (ii) may
be modified so as to require that the conditional that specifies the relevant physical state
transitions be logically stronger than a material conditional.
As the simple mapping account has it, clause (ii) requires that for any computational state
transition of the form s1 → s2 (specified by a computational description), if the system is in the
physical state that maps onto s1, it then goes into the physical state that maps onto s2. The
second part of (ii) is a material conditional. It may be strengthened by turning it into a logically
stronger conditional specifically, a conditional expressing a relation that supports
counterfactuals.
In a pure counterfactual account, clause (ii) is strengthened simply by requiring that the
physical state transitions support certain counterfactuals (Maudlin 1989, Copeland 1996). In
other words, the pure counterfactual account requires the mapping between computational
and physical descriptions to be such that the counterfactual relations between the physical
states are isomorphic to the counterfactual relations between the computational states.
Different authors formulate the relevant counterfactuals in slightly different ways: (a) if the
system had been in a physical state that maps onto an arbitrary computational state (specified
by the relevant computational description), it would then have gone into a physical state that
maps onto the relevant subsequent computational state (as specified by the computational
description) (Maudlin 1989, 415), (b) if the system had been in a physical state that maps
onto s1, it would have gone into a physical state that maps onto s2 (Copeland 1996, 341), (c) if the system were in a physical state that maps onto s1, it would go into a physical state that
maps onto s2 (Chalmers 1996, 312). Regardless of the exact formulation, none of these
counterfactuals are satisfied by the material conditional of clause (ii) as it appears in the simple
mapping account of computation. Thus, counterfactual accounts are stronger than the simple
mapping account.
An account of concrete computation in which the physical state transitions support
counterfactuals may also be generated by appealing to causal or dispositional relations,
assuming (as most people do) that causal or dispositional relations support counterfactuals.
Appealing to causation or dispositions may also have advantages over pure counterfactual
accounts in blocking unwanted computational implementations (Klein 2008, 145, makes the
case for dispositional versus counterfactual accounts).
In a causal account, clause (ii) is strengthened by requiring a causal relation between the
physical states: for any computational state transition of the form s1 → s2 (specified by a
computational description), if the system is in the physical state that maps onto s1, its physical
state causes it to go into the physical state that maps onto s2 (Chrisley 1995, Chalmers 1995,
1996, Scheutz 1999, 2001).
To this causal constraint on acceptable mappings, David Chalmers (1995, 1996) adds a further
restriction (in order to avoid pancomputationalism, which is discussed in Section 3): a genuine
physical implementation of a computational system must divide into separate physical
components, each of which maps onto the components specified by the computational
formalism. As Godfrey-Smith (2009, 293) notes, this combination of a causal and
a localizational constraint goes in the direction of mechanistic explanation (Machamer,
Darden, and Craver 2000). An account of computation that is explicitly based on mechanistic
explanation will be discussed in Section 2.5. For now, the causal account simpliciter requires
only that the mappings between computational and physical descriptions be such that the
causal relations between the physical states are isomorphic to the relations between state
transitions specified by the computational description. Thus, according to the causal account,
concrete computation is the causal structure of a physical process.
In a dispositional account, clause (ii) is strengthened by requiring a dispositional relation
between the physical states: for any computational state transition of the form s1 → s2 (specified by a computational description), if the system is in the physical state
that maps onto s1, the system manifests a disposition whose manifestation is the transition
from the physical state that maps onto s1 to the physical state that maps onto s2 (Klein 2008).
In other words, the dispositional account requires the mapping between computational and
physical descriptions to be such that the dispositional relations between the physical states are
isomorphic to the relations between state transitions specified by the computational
description. Thus, according to the dispositional account, concrete computation is the
dispositional structure of a physical process.
The difference between the simple mapping account on the one hand and counterfactual,
causal, and dispositional accounts on the other may be seen by examining a simple example.
Consider a rock under the sun, early in the morning. During any time interval, the rock's temperature rises. The rock goes from temperature T to temperature T+1, to T+2, to T+3. Now consider a NOT gate that feeds its output back to itself. At first, suppose the NOT gate receives 0 as an input; it then returns a 1. After the 1 is fed back to the NOT gate, the gate returns a 0 again, and so on. The NOT gate goes back and forth between outputting a 0 and outputting a 1. Now map physical states T and T+2 onto 0; then map T+1 and T+3 onto 1. According to the simple mapping account, the rock implements a NOT gate undergoing the computation represented by 0, 1, 0, 1.
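The rock case can be written out explicitly. The temperatures below are illustrative; the point is only that a suitable labelling makes the rock's trace match the gate's, which is all the simple mapping account requires:

```python
# The rock-implements-a-NOT-gate mapping made explicit. Map the rock's
# temperatures T and T+2 onto "0" and T+1, T+3 onto "1"; under this
# labelling the rock's warming trace reproduces the gate's alternation.

T = 15.0                                    # illustrative starting temperature
mapping = {T: "0", T + 1: "1", T + 2: "0", T + 3: "1"}

rock_trace = [T, T + 1, T + 2, T + 3]       # morning warming, degree by degree
gate_trace = [mapping[t] for t in rock_trace]

print(gate_trace)                           # prints ['0', '1', '0', '1']

# The self-looping NOT gate's own behaviour, for comparison:
state, not_gate_trace = "0", []
for _ in range(4):
    not_gate_trace.append(state)
    state = {"0": "1", "1": "0"}[state]

print(gate_trace == not_gate_trace)         # prints True
```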
By contrast, according to the counterfactual account, the rock's putative computational implementation is spurious, because the physical state transitions do not support counterfactuals. If the rock were put in state T, it may or may not transition into T+1, depending on whether it is morning or evening and other extraneous factors. Since the rock's
physical state transitions that map onto the NOT gate's computational state transitions do not
support counterfactuals, the rock does not implement the NOT gate according to the
counterfactual account.
According to the causal and dispositional accounts too, this putative computational
implementation is spurious, because the physical state transitions are not due to causal or
dispositional properties of the rock and its states. T does not cause T+1, nor does the rock have
a disposition to go into T+1 when it is in T. Rather, the rock changes its state due to the action
of the sun. Since the rock's physical state transitions that map onto the NOT gate's
computational state transitions are not grounded in either the causal or dispositional
properties of the rock and its states, the rock does not implement the NOT gate according to
the causal and dispositional accounts.
It is important to note that under the present family of accounts, there are mappings between
any physical system and at least some computational descriptions. Thus, according to the
present accounts, everything performs at least some computations (cf. Section 3.2). This still
strikes some as overly inclusive. In computer science and cognitive science, there seems to be
a distinction between systems that compute and systems that do not. To account for this
distinction, one option is to retain the current account of computational implementation while
restricting the class of descriptions that count as computational descriptions. Another option is
to move beyond this account of implementation.
2.3 The Semantic Account
In our everyday life, we usually employ computations to process meaningful symbols, in order to extract information from them. The semantic account of computation turns this practice
into a metaphysical doctrine: computation is the processing of representations, or at least the processing of appropriate representations in appropriate ways. Opinions as to which
representational manipulations constitute computations vary a great deal (Fodor 1975,
Cummins 1983, Pylyshyn 1984, Churchland and Sejnowski 1992, Shagrir 2006). What all
versions of the semantic account have in common is that they take seriously the reference to symbols in Putnam's original account of computation: "there is no computation without representation" (Fodor 1981, 180).
The semantic account may be seen as imposing a further restriction on acceptable mappings.
In addition to the causal restriction imposed by the causal account (mutatis mutandis for the
counterfactual and dispositional accounts), the semantic account imposes a semantic
restriction. Only physical states that qualify as representations may be mapped onto
computational descriptions, thereby qualifying as computational states. If a state is not
representational, it is not computational either.
The semantic account is probably the most popular in the philosophy of mind, because it
appears to fit its specific needs better than other accounts. Since minds and digital computers
are generally assumed to manipulate (the right kind of) representations, they turn out to
compute. Since most other systems are generally assumed not to manipulate (the relevant
kind of) representations, they do not compute. Thus, the semantic account appears to
accommodate some common intuitions about what does and does not count as a computing
system. It keeps minds and computers in while leaving most everything else out, thereby
vindicating the computational theory of cognition as a strong and nontrivial theory.
The semantic account raises three important questions: how representations are to be
individuated, what counts as a representation of the relevant kind, and what gives
representations their semantic content.
On the individuation of computational states, the main debate divides internalists from
externalists. According to externalists, computational vehicles are symbols individuated by
their wide cognitive contents, paradigmatically the things that the symbols stand for (Burge
1986, Shapiro 1997, Shagrir 2001). By contrast, most internalists maintain that computational
vehicles are symbols individuated by narrow cognitive contents (Segal 1991). Narrow contents
are, roughly speaking, semantic contents defined in terms of intrinsic properties of the
system. Cognitive contents, in turn, are contents ascribed to a system by a cognitive
psychological theory. For instance, the cognitive contents of the visual system are visual
contents, whereas the cognitive contents of the auditory system are auditory contents.
To illustrate the dispute, consider two physically identical cognitive systems A and B. Among the symbols processed by A is symbol S. A produces instances of S whenever A is in front of bodies of water, when A is thinking of water, and when A is forming plans to interact with water. In short, symbol S appears to stand for water. Every time A processes S, system B processes symbol S′, which is physically identical to S. But system B lives in an
environment different from A's environment. Whenever A is surrounded by water, B is surrounded by twater. Twater is a substance superficially indistinguishable from water but in fact physically different from it. Thus, symbol S′ appears to stand for twater (cf. Putnam 1975b). So, we are assuming that A and B live in relevantly different environments, such that S appears to stand for water while S′ appears to stand for twater. We are also assuming that A is processing S in the same way that B is processing S′. There is no intrinsic physical difference between A and B.
According to externalists, when A is processing S and B is processing S′, they are in computational states of different types. According to internalists, A and B are in computational states of the same type. In other words, externalists maintain that computational states are individuated in part by their reference, which is determined at least in part independently of the intrinsic physical properties of cognitive systems. By contrast, internalists maintain that computational states are individuated in a way that supervenes solely on the intrinsic physical properties of cognitive systems.
So far, externalists and internalists agree on one thing: computational states are individuated
by cognitive contents. This assumption can be resisted without abandoning the semantic
account of computation. According to Egan (1999), computational vehicles are not
individuated by cognitive contents of any kind, whether wide or narrow. Rather, they are individuated by their mathematical contents, that is, mathematical functions and objects ascribed as semantic contents to the computational vehicles by a computational theory of the system. Since mathematical contents are the same across physical duplicates, Egan maintains that her mathematical contents are a kind of narrow content; she is a kind of internalist.
Let us now turn to what counts as a representation. This debate is less clearly delineated.
According to some authors, only structures that have a language-like combinatorial syntax,
which supports a compositional semantics, count as computational vehicles, and only
manipulations that respect the semantic properties of such structures count as computations
(Fodor 1975, Pylyshyn 1984). This suggestion flies in the face of computability theory, which
imposes no such requirement on what counts as a computational vehicle. Other authors are
more inclusive on what representational manipulations count as computations, but they have
not been especially successful in drawing the line between computational and non-
computational processes. Few people would include all manipulations of representations,
including, say, painting a picture and recording a speech, as computations, but there is no
consensus on where to draw the boundary between representational manipulations that
count as computations and representational manipulations that do not.
A third question is what gives representations their semantic content. There are three families
of views. Instrumentalists believe that ascribing semantic content to things is just heuristically
useful for prediction and explanation; semantic properties are not real properties of
computational states (e.g., Dennett 1987, Egan forthcoming). Realists who are not naturalists
believe semantic properties are real properties of computational states, but they are
irreducible to non-semantic properties. Finally, realists who are also naturalists believe
semantic properties are both real and reducible to non-semantic properties, though they
disagree on exactly how to reduce them (e.g., Fodor 2008, Harman 1987).
The semantic account of computation is closely related to the common view that computation is information processing. This idea is less clear than it may seem, because there are several
notions of information. The connection between information processing and computation is
different depending on which notion of information is at stake. What follows is a brief
disambiguation of the view that computation is information processing based on four
important notions of information (cf. Piccinini and Scarantino forthcoming).
1. Information in the sense of thermodynamics is closely related to thermodynamic entropy. Entropy is a property of every physical system. Thermodynamic entropy is,
roughly, a measure of an observer's uncertainty about the microscopic state of a
system after she considers the observable macroscopic properties of the system. The
study of the thermodynamics of computation is a lively field with many implications in
the foundations of physics (Leff and Rex 2003). In this thermodynamic sense of
information, any difference between two distinguishable states of a system may be
said to carry information. Computation may well be said to be information processing
in this sense, but this has little to do with semantics properly so called. However, the
connections between thermodynamics, computation, and information theory are one
possible source of inspiration for the view that every physical system is a computing
system (see Section 3.4).
2. Information in the sense of communication theory is a measure of the average likelihood that a given message is transmitted between a source and a receiver
(Shannon and Weaver 1949). This, too, has little to do with semantics.
3. Information in one semantic sense is approximately the same as natural meaning (Grice 1957). A signal carries information in this sense just in case it reliably correlates
with a source (Dretske 1981). The view that computation is information processing in
this sense is prima facie implausible, because many computations, such as
arithmetical calculations carried out on digital computers, do not seem to carry any
natural meaning. Nevertheless, this notion of semantic information is relevant here
because it has been used by some theorists to ground an account of representation
(Dretske 1981, Fodor 2008).
4. Information in another semantic sense is just ordinary semantic content or non-natural meaning (Grice 1957). This is the kind of semantic content that most
philosophers discuss. The view that computation is information processing in this
sense is similar to a generic semantic account of computation.
Although the semantic account of computation appears to fit the needs of philosophers of
mind, it appears less suited to make sense of other sciences. Most pertinently, representation
does not seem to be presupposed by the notion of computation employed in at least some
areas of cognitive science as well as computability theory and computer science, the very
sciences that gave rise to the notion of computation at the origin of the computational theory
of cognition (Piccinini 2008a, Fresco 2010). If this is correct, the semantic account may not
even be adequate to the needs of philosophers of mind, at least those philosophers of mind
who wish to make sense of the analogy between minds and the systems designed and studied by computer scientists and computability theorists. Another criticism of the semantic account
is that specifying the kind of representation and representational manipulation that is relevant
to computation may require a non-semantic way of individuating computations (Piccinini
2004). These concerns motivate efforts to account for computation in non-semantic terms.
2.4 The Syntactic Account
As we saw, the semantic account needs to specify which representations are relevant to
computation. One view is that the relevant representations are language-like, that is, they
have the kind of syntactic structure exhibited by sentences in a language. Computation, then,
is the manipulation of language-like representations in a way that is sensitive to their syntactic
structure and preserves their semantic properties (Fodor 1975).
As discussed in the previous section, however, using the notion of representation in an
account of computation involves some difficulties. If computation could be accounted for
without appealing to representation, those difficulties would be avoided. One way to do so is
to maintain that computation simply is the manipulation of language-like structures in
accordance with their syntactic properties, leaving semantics by the wayside. The structures
being manipulated are assumed to be language-like only in that they have syntactic properties;
they need not have any semantics. In this syntactic account of computation, the notion of
representation is not used at all.
The syntactic account may be seen as adding a restriction on acceptable mappings that
replaces the semantic restriction proposed by the semantic account. Instead of a semantic
restriction, the syntactic account imposes a syntactic restriction: only physical states that
qualify as syntactic may be mapped onto computational descriptions, thereby qualifying as
computational states. If a state lacks syntactic structure, it is not computational.
What remains to be seen is what counts as a syntactic state. An important account of syntax in
the physical world is due to Stephen Stich (1983, 150–157). Although Stich does not use the
term 'computation', his account of syntax is aimed at grounding a syntactic account of mental
states and processes. Stich's syntactic theory of mind is, in turn, his interpretation of the
computational theories proposed by cognitive scientists, in competition with Fodor's
semantic interpretation. Since Stich's account of syntax is ultimately aimed at grounding
computational theories of cognition, Stich's account of syntax also provides an (implicit)
syntactic account of computation.
According to Stich, roughly speaking, a physical system contains syntactically structured
objects when two conditions are satisfied. First, there is a mapping between the behaviorally
relevant physical states of the system and a class of syntactic types, which are specified by a
grammar that defines how complex syntactic types can be formed out of (finitely many)
primitive syntactic types. Second, the behavior of the system is explained by a theory whose generalizations are formulated in terms of formal relations between the syntactic types that
map onto the physical states of the system.
The syntactic account of computation is not very popular. A common objection is that it seems
difficult to give an account of primitive syntactic types that does not presuppose a prior
semantic individuation of the types (Crane 1990, Jacquette 1991, Bontly 1998). In fact, it is
common to make sense of syntax by construing it as a way to combine symbols, that is,
semantically interpreted constituents. If syntax is construed in this way, it presupposes
semantics. And if so, the syntactic account of computation collapses into the semantic account.
Another objection is that language-like syntactic structure is not necessary for computation as
it is understood in computer science and computability theory. Although computing systems
surely can manipulate linguistic structures, they don't have to. They can also manipulate
simple sequences of letters, without losing their identity as computers. (Computability
theorists call any set of words from a finite alphabet a 'language', but that broad notion of
language should not be confused with the narrower notion, inspired by grammars in logic
and linguistics, that Stich employs in his syntactic account of computation.)
2.5 The Mechanistic Account
The mechanistic account (Piccinini 2007b, Piccinini and Scarantino forthcoming, Section 3)
avoids appealing to both syntax and semantics. Instead, it accounts for concrete computation
in terms of the mechanistic properties of a system. According to the mechanistic account,
concrete computing systems are functional mechanisms of a special kind: mechanisms that
perform concrete computations.
A functional mechanism is a system of organized components, each of which has functions to
perform (cf. Craver 2007, Wimsatt 2002). When appropriate components and their functions
are appropriately organized and functioning properly, their combined activities constitute the
capacities of the mechanism. Conversely, when we look for an explanation of the capacities of
a mechanism, we decompose the mechanism into its components and look for their functions
and organization. The result is a mechanistic explanation of the mechanism's capacities.
This notion of mechanism is familiar to biologists and engineers. For example, biologists
explain physiological capacities (digestion, respiration, etc.) in terms of the functions
performed by systems of organized components (the digestive system, the respiratory system, etc.).
According to the mechanistic account, a computation in the generic sense is the processing of
vehicles according to rules that are sensitive to certain vehicle properties, and specifically, to
differences between different portions of the vehicles. The processing is performed by a
functional mechanism, that is, a mechanism whose components are functionally organized to
perform the computation. Thus, if the mechanism malfunctions, a miscomputation occurs.
Digital computation, analog computation, etc. turn out to be species of generic computation.
They are differentiated by more specific properties of the vehicles being processed. If a
computing system processes strings of discrete states, then it performs a digital computation.
If a computing system processes continuous variables, then it performs an analog
computation. If a computing system processes qubits, then it performs a quantum
computation.
When we define concrete computations and the vehicles that they manipulate, we need not
consider all of their specific physical properties. We may consider only the properties that are
relevant to the computation, according to the rules that define the computation. A physical
system can be described more or less abstractly. According to the mechanistic account, an
abstract description of a physical system is not a description of an abstract object but rather a
description of a concrete system that omits certain details. Descriptions of concrete
computations and their vehicles are sufficiently abstract as to be defined independently of the
physical media that implement them in particular cases. Because of this, the mechanistic
account calls concrete computations and their vehicles medium-independent.
In other words, a vehicle is medium-independent just in case the rules (i.e., the input-output
maps) that define a computation are sensitive only to differences between portions of the
vehicles along specific dimensions of variation; they are insensitive to any more concrete
physical properties of the vehicles. Put yet another way, the rules are functions of state
variables associated with a set of functionally relevant degrees of freedom, which can be
implemented differently in different physical media. Thus, a given computation can be
implemented in multiple physical media (e.g., mechanical, electro-mechanical, electronic,
magnetic, etc.), provided that the media possess a sufficient number of dimensions of
variation (or degrees of freedom) that can be appropriately accessed and manipulated and
that the components of the mechanism are functionally organized in the appropriate way.
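The notion of medium independence can be illustrated with a toy sketch. Everything in the following example (the XOR rule, the voltage and polarity encodings, the threshold values) is invented for illustration; it merely shows how one rule, defined over abstract differences between vehicle portions, can be realized in two different physical media:

```python
# Hypothetical illustration of medium independence: a computation (here,
# bitwise XOR over two-digit vehicles) is defined only over differences
# between vehicle portions, not over concrete physical magnitudes.

def xor_rule(vehicle_a, vehicle_b):
    """The rule: defined over abstract digits (0/1), medium-independently."""
    return tuple(a ^ b for a, b in zip(vehicle_a, vehicle_b))

# Medium 1: electronic implementation, digits encoded as voltages.
def volts_to_digits(volts, threshold=2.5):
    return tuple(1 if v > threshold else 0 for v in volts)

# Medium 2: magnetic implementation, digits encoded as polarities.
def polarity_to_digits(polarities):
    return tuple(1 if p == 'N' else 0 for p in polarities)

# Both media implement the same computation, because the rule is sensitive
# only to the relevant dimension of variation (high/low, N/S), not to
# voltage or magnetization as such.
electronic = xor_rule(volts_to_digits((5.0, 0.1)), volts_to_digits((0.2, 4.8)))
magnetic = xor_rule(polarity_to_digits(('N', 'S')), polarity_to_digits(('S', 'N')))
```

Both runs yield the same abstract output, which is the sense in which the XOR computation is defined independently of the medium that implements it.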
Notice that the mechanistic account avoids pancomputationalism. First, physical systems that
are not functional mechanisms are ruled out. Functional mechanisms are complex systems of
components that are organized to perform functions. Any system whose components are not
organized to perform functions is not a computing system because it is not a functional
mechanism. Second, mechanisms that lack the function of manipulating medium-independent
vehicles are ruled out. Finally, medium-independent vehicle manipulators whose
manipulations fail to accord with appropriate rules are ruled out. The second and third
constraints appeal to special functional properties (manipulating medium-independent
vehicles, and doing so in accordance with rules defined over the vehicles) that are possessed only
by relatively few physical systems. According to the mechanistic account, those few systems
are the genuine computing systems.
Another feature of the mechanistic account is that it accounts for the possibility of
miscomputation, a possibility that is difficult to make sense of under other accounts. To illustrate the point, consider an ordinary computer programmed to compute function f on input i.
Suppose that the computer malfunctions and produces an output different from f(i). According
to the causal (semantic) account, the computer just underwent a causal process (a
manipulation of representations), which may be given a computational description and hence
counts as computing some function g(i), where g ≠ f. By contrast, according to the mechanistic
account, the computer simply failed to compute, or at least it failed to complete its
computation correctly. Given the importance of avoiding miscomputations in the design and
use of computers, the ability of the mechanistic account to make sense of miscomputation
may be an advantage over rival accounts.
A final feature of the mechanistic account is that it distinguishes and characterizes precisely
many different kinds of computing systems based on the specific vehicles they manipulate and
their specific mechanistic properties. The mechanistic account has been used to explicate
digital computation (Piccinini 2007b), analog computation (Piccinini 2008b, Section 3.5),
computation by neural networks (Piccinini 2008c), and other important distinctions such as
hardwired vs. programmable and serial vs. parallel computation (Piccinini 2008b).
3. Is Every Physical System Computational?
Which physical systems perform computations? According to pancomputationalism, they all
do. Even rocks, hurricanes, and planetary systems, contrary to appearances, are
computing systems. Pancomputationalism is quite popular among some philosophers and
physicists.
3.1 Varieties of Pancomputationalism
Varieties of pancomputationalism vary with respect to how many computations (all, many, a
few, or just one) they attribute to each system.
The strongest version of pancomputationalism is that every physical system
performs every computation, or at least that every sufficiently complex system implements a
large number of non-equivalent computations (Putnam 1988, Searle 1992). This may be
called unlimited pancomputationalism.
The weakest version of pancomputationalism is that every physical system performs some (as
opposed to every) computation. A slightly stronger version maintains that everything
performs a few computations, some of which encode the others in some relatively
unproblematic way (Scheutz 2001). These versions may be called limited
pancomputationalism.
Varieties of pancomputationalism also vary with respect to why everything performs
computations, that is, with respect to the source of pancomputationalism.
One alleged source of pancomputationalism is that which computation a system performs is a
matter of relatively free interpretation. If whether a system performs a given computation
depends solely or primarily on how the system is perceived, as opposed to objective fact, then
it seems that everything computes because everything may be seen as computing (Searle
1992). This may be called interpretivist pancomputationalism.
Another alleged source of pancomputationalism is that everything has causal structure.
According to the causal account, computation is the causal structure of physical processes
(Chrisley 1995, Chalmers 1995, 1996, Scheutz 1999, 2001). Assuming that everything has
causal structure, it follows that everything performs the computation constituted by its causal
structure. This may be called causal pancomputationalism.
Not everyone will agree that everything has causal structure. Some processes may be non-
causal, or causation may be just a façon de parler that does not capture anything fundamental
about the world (e.g., Norton 2003). But those who have qualms about causation can recover
a view similar to causal pancomputationalism by reformulating the causal account of
computation and the consequent version of pancomputationalism in terms they like, e.g., in
terms of the dynamical properties of physical systems.
A third alleged source of pancomputationalism is that every physical state carries information,
in combination with an information-based semantics plus a liberal version of the semantic
view of computation. According to the semantic view of computation, computation is the
manipulation of representations. According to information-based semantics, a representation
is anything that carries information. Assuming that every physical state carries information, it
follows that every physical system performs the computations constituted by the manipulation
of its information-carrying states (cf. Shagrir 2006). Both information-based semantics and the
assumption that every physical state carries information (in the relevant sense) remain
controversial.
Yet another alleged source of pancomputationalism is that computation is the nature of the
physical universe. According to some physicists, the physical world is computational at its most
fundamental level. This view, which is a special version of limited pancomputationalism, will be
discussed in Section 3.4.
3.2 Unlimited Pancomputationalism
Arguments for unlimited pancomputationalism go back to Hinckfuss's pail, a putative
counterexample to computational functionalism (the view that the mind is the software of
the brain). Hinckfuss's pail is named after its proponent, Ian Hinckfuss, but was first discussed in print by William Lycan. A pail of water contains a huge number of microscopic processes:
Now is all this activity not complex enough that, simply by chance, it might realize a human
program for a brief period (given suitable correlations between certain micro-events and the
requisite input-, output-, and state-symbols of the program)? (Lycan 1981, 39)
Hinckfuss's implied answer to this question is that yes, a pail of water might implement a
human program, and therefore any arbitrary computation, at least for a short time.
Other authors developed more detailed arguments along the lines of Hinckfuss's pail. John
Searle (1992) explicitly argues that whether a physical system implements a computation
depends on how an observer interprets the system; therefore, for any sufficiently complex
object and for any computation, the object can be described as implementing the
computation. The first rigorous argument for unlimited pancomputationalism is due to Hilary
Putnam (1988), who argues that every ordinary open system implements every abstract finite
automaton (without inputs and outputs).
Putnam assumes that electromagnetic and gravitational fields are continuous and that physical
systems are in different maximal states at different times. He considers an arbitrarily chosen
finite automaton whose table calls for the sequence of states ABABABA. He then considers an
arbitrary physical system S over the arbitrarily chosen time interval from 12:00 to 12:07 and
argues that S implements the sequence ABABABA. Since both the automaton and the physical
system are arbitrary, the argument generalizes to any automaton and any physical system. Here
is the core of Putnam's argument:
Let the beginnings of the intervals during which S is to be in one of its stages A or B be t1, t2,
…, tn (in the example given, n = 7, and the times in question are t1 = 12:00, t2 = 12:01, t3 =
12:02, t4 = 12:03, t5 = 12:04, t6 = 12:05, t7 = 12:06). The end of the real-time interval during which we wish S to obey this table we call tn+1 (= t8 = 12:07, in our example). For each of the
intervals ti to ti+1, i = 1, 2, …, n, define a (nonmaximal) interval state si which is the region in
phase space consisting of all the maximal states with ti ≤ t < ti+1. (I.e., S is in si just in case S is
in one of the maximal states in this region.) Note that the system S is in s1 from t1 to t2,
in s2 from t2 to t3, …, in sn from tn to tn+1. (Left endpoint included in all cases but not the right; this is a convention to ensure the machine is in exactly one of the si at a given time.)

Define A = s1 ∨ s3 ∨ s5 ∨ s7; B = s2 ∨ s4 ∨ s6.
Then, as is easily checked, S is in state A from t1 to t2, from t3 to t4, from t5 to t6, and
from t7 to t8, and in state B at all other times between t1 and t8. So S has the table we
specified, with the states A, B we just defined as the realizations of the states A, B described
by the table. (Putnam 1988, 122–123, emphasis original)
In summary, Putnam picks an arbitrary physical system with continuous dynamics, slices up its
dynamics into discrete time intervals, and then aggregates the slices so that they correspond
to an arbitrary sequence of computational states. He concludes that every physical system
implements every finite automaton.
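Putnam's slice-and-aggregate construction can be sketched in code. The following toy model (invented for illustration, not taken from the source) treats time itself as the system's 'maximal state' and shows how disjunctive interval states can be defined so that an arbitrary trajectory realizes the arbitrary state sequence ABABABA:

```python
# A sketch of Putnam's construction. The 'maximal state' of the system at
# time t is modeled simply as t itself (Putnam assumes all maximal states
# are distinct over the interval).

def putnam_mapping(t_start, t_end, target_sequence):
    """Build disjunctive 'physical' states realizing an arbitrary FSA run."""
    n = len(target_sequence)
    step = (t_end - t_start) / n
    # Interval state s_i = all maximal states with t_i <= t < t_{i+1}.
    intervals = [(t_start + i * step, t_start + (i + 1) * step) for i in range(n)]
    # Aggregate: computational state label -> disjunction (list) of intervals.
    states = {}
    for label, (lo, hi) in zip(target_sequence, intervals):
        states.setdefault(label, []).append((lo, hi))
    return states

def state_at(states, t):
    """Which computational state the system 'is in' at time t."""
    for label, pieces in states.items():
        if any(lo <= t < hi for lo, hi in pieces):
            return label
    return None

# Map 12:00-12:07 (modeled as 0.0-7.0) onto the run ABABABA.
states = putnam_mapping(0.0, 7.0, "ABABABA")
run = [state_at(states, t + 0.5) for t in range(7)]  # sample each interval
```

The point of the sketch is that nothing about the trajectory constrains the result: any target sequence whatsoever can be 'implemented' by regrouping the intervals, which is exactly the feature that critics of the simple mapping account object to.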
Putnam points out that his argument does not apply directly to computational theories of
cognition, because cognitive systems receive specific physical inputs through their sensory
organs and yield specific physical outputs through their motor organs. To determine which
computations are implemented by a system with physical inputs and outputs, the inputs and
outputs must be taken into account:
Imagine that an object S which takes strings of 1s as inputs and prints such strings as
outputs behaves from 12:00 to 12:07 exactly as if it had a certain [computational]
description D. That is, S receives a certain string, say 111111, at 12:00 and prints a certain
string, say 11, at 12:07, and there exists (mathematically speaking) a machine with
description D which does this (by being in the appropriate state at each of the specified
intervals, say 12:00 to 12:01, 12:01 to 12:02, …, and printing or erasing what it is supposed to
print or erase when it is in a given state and scanning a given symbol). In this case, S too can
be interpreted as being in these same logical states A, B, C, … at the very same times and
following the very same transition rules; that is to say, we can find physical states A, B, C, …
which S possesses at the appropriate times and which stand in the appropriate causal relations
to one another and to the inputs and the outputs. The method of proof is exactly the same. …
Thus we obtain that the assumption that something is a realization of a given automaton
description is equivalent to the statement that it behaves as if it had that
description. (Putnam 1988, 124, emphasis original)
In summary, Putnam picks an arbitrary physical system with physically specified inputs and
outputs and then matches it to an arbitrary finite automaton whose abstractly specified inputs
and outputs map onto the physically specified inputs and outputs. He then slices up the
physical system's internal dynamics as before, and then aggregates the slices so that they
correspond to the sequence of computational states of the finite automaton. It follows that
given any physical system and any finite automaton with isomorphic inputs and outputs, the
physical system implements the computational system.
Although this result is weaker than the result for systems without inputs and outputs, it is still
striking because for any abstract input-output pair ⟨i, o⟩, there are infinitely many automata
that yield output o given input i. Given Putnam's conclusion, any physical system with inputs
and outputs isomorphic to i and o implements all of the infinitely many automata with
input i and output o.
If unlimited pancomputationalism is correct, then the claim that a system S performs a certain
computation becomes trivially true and vacuous, or nearly so; it fails to distinguish S from anything else (or perhaps from anything else with the same inputs and outputs). Thus,
unlimited pancomputationalism threatens the computational theory of cognition. If cognition
is computation simply because cognitive systems, like everything else, may be seen as
performing computations, then it appears that the computational theory of cognition is both
trivial and vacuous. By the same token, unlimited pancomputationalism threatens the
foundations of computer science, where the objective computational power of different
systems is paramount. The threat of trivialization is a major motivation behind responses to
the arguments for unlimited pancomputationalism.
The first thing to notice is that arguments for unlimited pancomputationalism rely either
implicitly or explicitly on the simple mapping account of computation. They assume that an
arbitrary mapping from a computational description Cto a physical description of a system is
sufficient to conclude that the system implements C. In fact, avoiding unlimited
pancomputationalism is a major motivation for rejecting the simple mapping account of
computation. By imposing restrictions on which mappings are legitimate, other accounts of
computation aim to avoid unlimited pancomputationalism.
In one response to unlimited pancomputationalism, Jack Copeland (1996) argues that the
mappings it relies on are illegitimate because they are constructed ex post facto, after the
computation is already given. In the case of kosher computational descriptions (the kind
normally used in scientific modeling), the work of generating successive descriptions of a
system's physical dynamics is done by a computer running an appropriate program (e.g., a
weather forecasting program), not by the mapping relation. In the sort of descriptions
employed in arguments for unlimited pancomputationalism, instead, the descriptive work is
done by the mapping relation.
An arbitrarily chosen computational description, such as those employed in arguments for
unlimited pancomputationalism, does not generate successive descriptions of the state of an
arbitrary system. If someone wants a genuine computational description of a physical system,
she must first identify physical states and state transitions of the system, then represent them
by a computational description (thereby fixing the mapping relation between the
computational description and the system), and finally use a computer to generate subsequent
representations of the state of the system, while the mapping relation stays fixed. By contrast,
the arguments for unlimited pancomputationalism pick a computation first, then slice and
aggregate the physical system to fit the computational description, and finally generate the
mapping between the two. The work of describing the physical system is not done by the
computational description but by whoever constructs the mapping. Copeland concludes that
such ex post facto mappings are illegitimate.
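Copeland's contrast can be made concrete with a toy sketch (names and dynamics invented for illustration): in a kosher computational description, the mapping from physical states to model states is fixed first, and a program then does the work of generating successive state descriptions, while the mapping stays fixed:

```python
# A 'kosher' computational model in Copeland's sense: fix the mapping from
# physical states to model states in advance, then let a program generate
# the successive state descriptions while the mapping stays fixed.

def fixed_mapping(temperature_celsius):
    """Mapping fixed before any trajectory is observed."""
    return 'HOT' if temperature_celsius >= 25 else 'COLD'

def cooling_model(state, steps):
    """The program does the descriptive work, generating successive states."""
    trajectory = [state]
    for _ in range(steps):
        state = 'COLD'  # toy dynamics: everything eventually cools
        trajectory.append(state)
    return trajectory

# The model predicts successive states; the mapping is never revised to
# fit the observed trajectory, unlike in ex post facto constructions.
prediction = cooling_model(fixed_mapping(30.0), steps=2)
```

In an ex post facto construction the order is reversed: the trajectory is observed first and the 'mapping' is then gerrymandered to fit it, so the mapping relation, not the program, does all the descriptive work.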
In addition, both Chalmers (1995, 1996) and Copeland (1996) argue that the mappings invoked
by unlimited pancomputationalism violate the counterfactual relations between the
computational states. Consider again Putnam's slice-and-aggregate strategy for generating
mappings. The mappings are constructed based on an arbitrary dynamical evolution of an
arbitrary physical system. No attempt is made to establish what would happen to the physicalsystem had conditions been different. Chalmers and Copeland argue that this is illegitimate, as
a genuine implementation must exhibit the same counterfactual relations that obtain between
the computational states. This response leads to the counterfactual account of computation,
according to which the counterfactual relations between the physical states must be
isomorphic to the counterfactual relations between the computational states.
Another possible response to unlimited pancomputationalism is that its mappings fail to
construct an isomorphism between the causal structure of the physical system and the state
transitions specified by the computational description. Consider Putnam's argument again. The mapping from the computational description to the physical description is chosen with no
regard to the causal relations that obtain between the physical states of the system. Thus,
after a computational description is mapped onto a physical description in that way, the
computational description does not describe the causal structure of the physical system.
According to several authors, non-causal mappings are illegitimate (Chrisley 1995, Chalmers
1995, 1996, Scheutz 1999, 2001). Naturally, these authors defend the causal account of
computation, according to which acceptable mappings must respect the causal structure of a
system.
Yet another response to unlimited pancomputationalism is implicitly given by Godfrey-Smith
(2009). Although Godfrey-Smith is primarily concerned with functionalism as opposed to
computation per se, his argument is still relevant here. Godfrey-Smith argues that for a
mapping to constitute a genuine implementation, the microscopic physical states that are
clustered together (to correspond to a given computational state) must be physically similar to
one another; there cannot be arbitrary groupings of arbitrarily different physical states, as in
the arguments for unlimited pancomputationalism. Godfrey-Smith suggests that his similarity
restriction on legitimate mappings may be complemented by the kind of causal and
localizational restrictions proposed by Chalmers (1996).
The remaining accounts of computation (the semantic, syntactic, and mechanistic accounts)
are even more restrictive than the causal and counterfactual accounts; they impose further
constraints on acceptable mappings. Therefore, like the causal and counterfactual accounts,
they have resources for avoiding unlimited pancomputationalism.
Such resources are not always straightforward to deploy. For example, consider the semantic
account, according to which computation requires representation. If being a representation of
something is an objective property possessed by relatively few things, then unlimited
pancomputationalism is ruled out on the grounds that only the few items that constitute
representations are genuine computational states. If, however, everything is representational
in the relevant way, then everything is computational (cf. Churchland and Sejnowski 1992,
Shagrir 2006). If, in addition, whether something represents something else is just a matter of
free interpretation, then the semantic account of computation gives rise to unlimited
pancomputationalism all over again. Similar considerations apply to the syntactic and
mechanistic accounts. For such accounts to truly avoid unlimited pancomputationalism, they must not rely on free interpretation.
3.3 Limited Pancomputationalism
Limited pancomputationalism is much weaker than its unlimited cousin. It holds that every
physical system performs one (or relatively few) computations. Which computations are
performed by which system is deemed to be a matter of fact, depending on objective
properties of the system. In fact, several authors who have mounted detailed responses to
unlimited pancomputationalism explicitly endorse limited pancomputationalism (Chalmers
1996, 331, Scheutz 1999, 191).
Unlike unlimited pancomputationalism, limited pancomputationalism does not turn the claim
that something is computational into a vacuous claim. Different systems generally have
different objective properties; thus, according to limited pancomputationalism, different
systems generally perform different computations. Nevertheless, it may seem that limited
pancomputationalism still trivializes the claim that a system is computational. For according to
limited pancomputationalism, digital computers perform computations in the same sense in
which rocks, hurricanes, and planetary systems do. This may seem to do an injustice to
computer science: in computer science, only relatively few systems count as performing
computations, and it takes a lot of difficult technical work to design and build systems that
perform computations reliably. Or consider the claim that cognition is computation. This
computational theory of cognition was introduced to shed new and explanatory light on
cognition. But if every physical process is a computation, the computational theory of
cognition seems to lose much of its explanatory force (Piccinini 2007b).
Another objection to limited pancomputationalism begins with the observation that any
moderately complex system satisfies indefinitely many objective computational descriptions
(Piccinini 2010). This may be seen by considering computational modeling. A computational
model of a system may be pitched at different levels of granularity. For example, consider
cellular automata models of the dynamics of a galaxy or a brain. The dynamics of a galaxy or a
brain may be described using an indefinite number of cellular automata, using different
state transition rules, different time steps, or cells that represent spatial regions of different
sizes. Furthermore, an indefinite number of formalisms different from cellular automata, such
as Turing machines, can be used to compute the same functions computed by cellular
automata. It appears that limited pancomputationalists are committed to the galaxy or the
brain performing all these computations at once. But that does not appear to be the sense in
which computers (or brains) perform computations.
In the face of these objections, limited pancomputationalists are likely to maintain that the
explanatory force of computational explanations does not come from the claim that a system
is computational simpliciter. Rather, explanatory force comes from the specific computations
that a system is said to perform. Thus, a rock and a digital computer perform computations in
the same sense. But they perform radically different computations, and it is the difference between their computations that explains the difference between them. As to the objection
that there are still too many computations performed by each system, limited
pancomputationalists have two main options: either to bite the bullet and accept that every
system implements indefinitely many computations, or to find a way to single out, among the
many computational descriptions satisfied by each system, the one that is ontologically
privileged: the one that captures the computation performed by the system. One way to do
this is to postulate a fundamental physical level, whose most accurate computational
description identifies the (most fundamental) computation performed by the system. This
response is built into the view that the physical world is fundamentally computational (next section).
As to those who remain unsatisfied with limited pancomputationalism, their desire to avoid
limited pancomputationalism motivates the shift to more restrictive accounts of computation,
analogously to how the desire to avoid unlimited pancomputationalism motivates the shift
from the simple mapping account to more restrictive accounts of computation, such as the
causal account. The semantic account may be able to restrict genuine computational
descriptions to fewer systems than the causal account, provided that representations (which
are needed for computation according to the semantic account) are hard to come by. Mutatis mutandis, the same is true of the syntactic and mechanistic accounts.
3.4 The Universe as a Computing System
Some authors argue that the physical universe is fundamentally computational. The universe
itself is a computing system, and everything in it is a computing system too (or part thereof).
Unlike the previous versions of pancomputationalism, which originate in philosophy, this ontic
pancomputationalism originates in physics. It includes both an empirical claim and a
metaphysical one. Although the two claims are logically independent, supporters of ontic
pancomputationalism tend to make them both.
The empirical claim is that all fundamental physical magnitudes and their state transitions are
such as to be exactly described by an appropriate computational formalism, without
resorting to the approximations that are a staple of standard computational modeling. This
claim takes different forms depending on which computational formalism is taken to describe
the universe exactly. The two main options are cellular automata, which are a classical
computational formalism, and quantum computing, which is non-classical.
The earliest and best known version of ontic pancomputationalism is due to Konrad Zuse
(1970, 1982) and Edward Fredkin, whose unpublished ideas on the subject influenced a
number of American physicists (e.g., Feynman 1982, Toffoli 1982, Wolfram 2002; see also
Wheeler 1982, Fredkin 1990). According to some of these physicists, the universe is a giant
cellular automaton. A cellular automaton is a lattice of cells; each cell can take one out of
finitely many states and updates its state in discrete steps depending on the state of its
neighboring cells. For the universe to be a cellular automaton, all fundamental physical magnitudes must be discrete, i.e., they must take at most finitely many values. In addition,
time and space must be fundamentally discrete or must emerge from the discrete processing
of the cellular automaton. At a fundamental level, continuity is not a real feature of the world:
there are no truly real-valued physical quantities. This flies in the face of most mainstream
physics, but it is not an obviously false hypothesis. The hypothesis is that at a sufficiently small
scale, which is currently beyond our observational and experimental reach, (apparent)
continuity gives way to discreteness. Thus, all values of all fundamental variables, and all state
transitions, can be fully and exactly captured by the states and state transitions of a cellular
automaton.
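The definition just given can be made concrete with a minimal sketch in Python (our illustration, not part of the article; the specific update rule, elementary rule 110, is chosen purely as an example):

```python
# A minimal one-dimensional cellular automaton: a lattice of cells, each
# in one of finitely many states (here 0 or 1), updated in discrete time
# steps as a function of its own state and its neighbors' states.

def step(cells, rule=110):
    """Apply one synchronous update to a circular lattice of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a value 0..7
        new.append((rule >> neighborhood) & 1)  # look up the rule's table bit
    return new

# Start from a single "on" cell and run a few discrete time steps.
cells = [0] * 15
cells[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Each call to `step` is one discrete time step, and every cell's new state depends only on its own state and those of its two neighbors, as the definition requires.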
Although cellular automata have been shown to describe many aspects of fundamental
physics, it is difficult to see how to simulate the quantum mechanical features of the universe
using a classical formalism such as cellular automata (Feynman 1982). This concern motivated
the development of quantum computing formalisms (Deutsch 1985, Nielsen and Chuang
Instead of relying on digits (most commonly, binary digits, or bits), quantum
computation relies on qudits (most commonly, binary qudits, or qubits). The main difference
between a digit and a qudit is that whereas a digit can take only one out of finitely many
states, such as 0 and 1 (in the case of a bit), a qudit can also take an uncountable number of states that are superpositions of the basis states in varying degrees, such as superpositions of
0 and 1 (in the case of a qubit). Furthermore, unlike a collection of digits, a collection of qudits
can exhibit quantum entanglement. According to the quantum version of ontic
pancomputationalism, the universe is not a classical computer but a quantum computer, that
is, not a computer that manipulates digits but a computer that manipulates qubits (Lloyd 2006)
or, more generally, qudits.
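The contrast between a bit's two discrete states and a qubit's continuum of superpositions can be sketched as follows (an illustrative toy of ours, not from the article; the amplitudes and Born-rule probabilities are standard quantum mechanics):

```python
# A bit is exactly one of two states. A qubit is a normalized pair of
# (possibly complex) amplitudes (a, b) for the basis states 0 and 1;
# measuring it yields 0 or 1 with probabilities |a|^2 and |b|^2.

import math

def measurement_probabilities(a, b):
    """Return (P(0), P(1)) for a qubit with amplitudes a and b."""
    norm = abs(a) ** 2 + abs(b) ** 2
    if not math.isclose(norm, 1.0):
        raise ValueError("amplitudes must be normalized")
    return abs(a) ** 2, abs(b) ** 2

print(measurement_probabilities(1, 0))   # a bit-like state: definitely 0
s = 1 / math.sqrt(2)
print(measurement_probabilities(s, s))   # equal superposition: (0.5, 0.5) up to rounding
```

Because (a, b) ranges over an uncountable set of normalized pairs, a qubit has uncountably many possible states, whereas a bit has exactly two. (Entanglement, the other difference noted above, involves multiple qubits and is not captured by this single-qubit sketch.)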
The quantum version of ontic pancomputationalism is less radical than the classical version.
The classical version eliminates continuity from the universe, primarily on the grounds that
eliminating continuity allows classical computers to describe the universe exactly rather than
approximately. Thus, the classical version appears to be motivated not by empirical evidence
but by epistemological concerns. Although there is no direct evidence for classical ontic
pancomputationalism, in principle it is a testable hypothesis (Fredkin 1990). By contrast,
quantum ontic pancomputationalism may be seen as a reformulation of quantum mechanics in
the language of quantum computation and quantum information theory (qubits), without
changes in the empirical content of the theory (e.g., Fuchs 2004, Bub 2005).
But ontic pancomputationalists do not limit themselves to making empirical claims. They often
make an additional metaphysical claim. They claim that computation (or information, in the
physical sense described in Section 2.3) is what makes up the physical universe. This point is
sometimes made by saying that at the most fundamental physical level, there are brute
differences between states; nothing more need or can be said about the nature of the
states. This view reverses the traditional conception of the relation between computation and
the physical world.
According to the traditional conception, which is presupposed by all accounts of computation
discussed above, physical computation requires a physical substratum that implements it.
Computation is an aspect of the organization and behavior of a physical system; there is no
software without hardware. Thus, according to the traditional conception, if the universe is a
cellular automaton, the ultimate constituents of the universe are the physical cells of the
cellular automaton. It is legitimate to ask what kind of physical entity such cells are and how
they interact with one another so as to satisfy their cellular automata rules.
By contrast, according to the metaphysical claim of ontic pancomputationalism, a physical
system is just a system of computational states. Computation is ontologically prior to physical
processes, as it were. "Hardware [is] made of software" (Kantor 1982, 526, 534). According
to this non-traditional conception, if the universe is a cellular automaton, the cells of the
automaton are not concrete, physical structures that causally interact with one another.
Rather, they are software: purely computational entities.
Such a metaphysical claim requires an account of what computation, or software, or physical
information, is. If computations are not configurations of physical entities, the most obvious
alternative is that computations are abstract, mathematical entities, like numbers and sets. As
Wheeler (1982, 570) puts it, "the building element [of the universe] is the elementary 'yes, no'
quantum phenomenon. It is an abstract entity. It is not localized in space and time." Under this
account of computation, the ontological claim of ontic pancomputationalism is a version of
Pythagoreanism. All is computation in the same sense in which more traditional versions of
Pythagoreanism maintain that all is number or that all is sets (Quine 1976).
Ontic pancomputationalism may be attacked on both the empirical and the ontological fronts.
On the empirical front, there is little positive evidence to support ontic pancomputationalism.
Supporters appear to be motivated by the desire for exact computational models of the world
rather than empirical evidence that the models are correct. Even someone who shares this
desire may well question why we should expect nature to fulfill it. On the metaphysical front,
Pythagoreanism faces the objection that the abstract entities it puts at the fundamental
physical level lack the causal and qualitative properties that we observe in the physical world;
or at least, it is difficult to understand how abstract entities could give rise to physical
qualities and their causal powers (e.g., Martin 1997).
4. Physical Computability
According to the Church-Turing thesis (CTT), any function that is intuitively computable is
computable by some Turing machine (i.e., Turing-computable). Alternatively, CTT may be
formulated as follows: any function that is "naturally regarded as computable" (Turing 1936–7, 135) is Turing-computable. The phrases "intuitively computable" and "naturally regarded as
computable" are somewhat ambiguous. When they are disambiguated, CTT takes different
forms.
In one sense, "intuitively computable" means computable by following an algorithm or
effective procedure. An effective procedure is a finite list of clear instructions for generating
new symbolic structures out of old symbolic structures. When CTT is interpreted in terms of
effective procedures, it may be called Mathematical CTT, because the relevant evidence is
more logical or mathematical than physical. Mathematical CTT says that any
function computable by an effective procedure is Turing-computable.
There is compelling evidence that Mathematical CTT is true (Kleene 1952, 62, 67; cf. also
Sieg 2006):
- There are no known counterexamples.
- Diagonalization over Turing machines, contrary to what may be expected, does not yield a function that is not Turing-computable.
- Argument from confluence: all the formalisms proposed to capture the intuitive notion of computability by effective procedure (formalisms such as general recursiveness (Gödel 1934), λ-definability (Church 1932, Kleene 1935), Turing-computability (Turing 1936–7), and reckonability (Gödel 1936)) turn out to capture the same class of functions.
- A Turing machine seems capable of reproducing any operation that a human being can perform while following an effective procedure (Turing 1936–7's main argument for CTT).
In another sense, "intuitively computable" means computable by physical means. When CTT is
so interpreted, it may be called Physical CTT (following Pitowsky 1990), because the relevant
evidence is more physical than logical or mathematical.
4.1 The Physical Church-Turing Thesis: Bold
Physical CTT is often formulated in very strong forms. To a first approximation, Bold Physical
CTT holds that any physical process (anything doable by a physical system) is computable
by some Turing machine.
Bold Physical CTT can be made more precise in a number of ways. Here is a representative
sample, followed by references to where they are discussed:
A. Any physical process can be simulated by some Turing machine (e.g., Deutsch 1985, Wolfram 1985, Pitowsky 2002).
B. Any function over denumerable domains (such as the natural numbers) that is computable by an idealized computing machine that manipulates arbitrary real-valued quantities (as defined by Blum et al. 1998) is Turing-computable.
C. Any system of equations describing a physical system gives rise to computable solutions (cf. Earman 1986, Pour-El 1999). A solution is said to be computable just in case, given computable real numbers as initial conditions, it returns computable real numbers as values. A real number is said to be computable just in case there is a Turing machine whose output effectively approximates it.
D. For any physical system S and observable W, there is a Turing-computable function f: N → N such that for all times t ∈ N, f(t) = W(t) (Pitowsky 1990).
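The notion of a computable real number used in (C) can be illustrated with a short Python sketch standing in for a Turing machine (our illustration; √2 is a standard example of a computable real):

```python
# A real number is computable if some effective procedure, given n,
# outputs an approximation within 2**-n of it. Here the procedure uses
# only exact integer arithmetic, so each step is clearly effective.

from fractions import Fraction
import math

def sqrt2_approx(n):
    """Return a rational q with |q - sqrt(2)| < 2**-n."""
    # k = floor(2**n * sqrt(2)), computed exactly via the integer square
    # root; then k / 2**n lies within 2**-n of sqrt(2).
    k = math.isqrt(2 * 4 ** n)
    return Fraction(k, 2 ** n)

print(float(sqrt2_approx(10)))  # 1.4140625, within 2**-10 of sqrt(2)
```

Feeding larger and larger n yields approximations converging effectively to √2, which is exactly what the definition in (C) demands.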
Thesis (A) is ambiguous between two notions of simulation. In one sense, simulation is the process by which a digital computing system (such as a Turing machine) computes the same
function as another digital computing system. This is the sense in which universal Turing
machines can simulate any other Turing machine. If (A) is interpreted using this first notion of
simulation, it entails that everything in the universe is a digital computing system. This is (a
variant of) ontic pancomputationalism (Section 3.4).
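The first notion of simulation can be sketched with a toy Turing-machine interpreter (our illustration; the encoding of machines as transition tables is an assumption of the sketch). The interpreter is one program that computes the same function as whatever machine it is handed, which is how a universal machine simulates any other machine:

```python
# A toy Turing-machine interpreter. A machine is a transition table
# mapping (state, symbol) to (new_symbol, move, new_state); the machine
# halts when no rule applies. For simplicity this sketch assumes the
# head never moves left of the starting cell.

def run_tm(delta, tape, state="q0", blank="_", max_steps=10_000):
    """Run transition table delta on the given input string."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        symbol = tape[head] if head < len(tape) else blank
        if (state, symbol) not in delta:          # no rule: halt
            return "".join(tape).rstrip(blank)
        new_symbol, move, state = delta[(state, symbol)]
        if head == len(tape):
            tape.append(blank)                    # extend tape on demand
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit exceeded")

# An example machine that flips every bit, then halts on the blank.
FLIP = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
}

print(run_tm(FLIP, "10110"))  # prints "01001"
```

`run_tm` and `FLIP` compute the same function over bit strings; in the first sense of "simulation", the interpreter simulates the machine.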
In another sense, simulation is the process by which the output of a digital computing system
represents an approximate description of the dynamical evolution of another system. This is
the sense in which computational models of the weather simulate the weather. If (A) is
interpreted using this second notion of simulation, then (A) is true only if we do not care how
close our computational approximations are. If we want close computational approximations
(as we usually do), then (A) turns into the claim that any physical process can be
computationally approximated to the degree of accuracy that is desired in any given case.
Whether that is true varies from case to case depending on the dynamical properties of the
system, how much is known about them, what idealizations and simplifications are adopted in
the model, what numerical methods are used in the computation, and how many
computational resources (such as time, processing speed, and memory) are available (Piccinini
2007b).
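The second notion of simulation, and the way its accuracy depends on the numerical method and the resources spent, can be illustrated with a minimal sketch (ours, not the article's): Euler's method approximating the continuous dynamics dx/dt = x, where spending more steps buys a closer approximation.

```python
# A digital approximation of a continuous system. Euler's method
# approximates x(1) for dx/dt = x with x(0) = 1 (exact value: e);
# halving the step size roughly halves the error, so accuracy is a
# matter of computational resources, as noted in the text.

import math

def euler(steps):
    """Approximate x(1) for dx/dt = x, x(0) = 1, using `steps` steps."""
    x, h = 1.0, 1.0 / steps
    for _ in range(steps):
        x += h * x
    return x

for steps in (10, 100, 1000):
    print(steps, abs(euler(steps) - math.e))  # error shrinks as steps grow
```

In this second sense, the program simulates the continuous system only approximately, and how good the approximation is depends on choices like the step size.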
Thesis (B) is straightforwardly and radically false. Blum et al. (1998) set up a mathematical
theory of computation over real-valued quantities, which they see as a fruitful extension of
ordinary computability theory. Within such a theory, Blum et al. define idealized computing
machines that perform addition, subtraction, multiplication, division, and equality testing as
primitive operations on arbitrary real-valued quantities. They easily prove that such machines
can compute all sets defined over denumerable domains by encoding their characteristic
function as a real-valued constant (ibid., 405). Although they do not discuss this result as a
refutation of Physical CTT, their work is often cited in discussions of physical computability and
Physical CTT.
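The encoding trick behind Blum et al.'s result can be sketched for a finite example (our illustration, using exact rationals in place of a BSS machine's arbitrary real constants): a set's characteristic function is stored in the binary expansion of one constant, and membership is recovered by reading off digits.

```python
# Encode the characteristic function of a set of natural numbers in the
# binary expansion of a single constant: digit n+1 is 1 iff n is in the
# set. A machine with exact real arithmetic recovers membership by
# shifting and truncating; Fraction plays that role for a finite set.

from fractions import Fraction

PRIMES = {2, 3, 5, 7, 11, 13}  # a finite stand-in for an arbitrary set

# The constant: a 1 in binary place n+1 exactly when n is in the set.
c = sum(Fraction(1, 2 ** (n + 1)) for n in PRIMES)

def member(n, constant=c):
    """Read binary digit n of the constant: True iff n is in the set."""
    shifted = constant * 2 ** (n + 1)  # bring digit n to the units place
    return int(shifted) % 2 == 1

print([n for n in range(16) if member(n)])  # [2, 3, 5, 7, 11, 13]
```

A machine that stores a genuinely arbitrary real constant can encode any set, including non-Turing-computable ones, in this way; that is why Blum et al.'s idealized machines exceed the power of Turing machines.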
Theses (C) and (D) have interesting counterexamples that are consistent with some physical
theories (cf. below and Pour-El 1999). These theoretical counterexamples may or may not
occur in our concrete physical universe.
Each of (A)–(D) raises important questions pertaining to the foundations of computer science, physics, and mathematics. It is not clear, however, that any of these theses bears an
interesting analogy to Mathematical CTT. Below are two reasons why.
First, (A)–(D) are falsified by processes that cannot be built and used as computing devices. The
most obvious example is (B). Blum et al.'s result is equivalent to demonstrating that all
functions over denumerable domains (including the uncountably many functions that are
not Turing-computable) are computable by Blum et al.'s computing systems, which are
allowed to mani