Info‐Computational Philosophy Of Nature: An Informational Universe With
Computational Dynamics
Gordana Dodig‐Crnkovic
Abstract. Starting with Søren Brier’s Cybersemiotic critique of the existing practice of
Wissenschaft, this article develops the argument for an alternative naturalization of knowledge
production. It presents the framework of natural info-computationalism (ICON) as a new Natural
Philosophy based on the concepts of information (structure) and computation (process). In this
approach, which is a synthesis of informationalism (the view that nature is informational) and
computationalism (the view that nature computes its own time development), computation is in
general not substrate-independent, disembodied symbol manipulation. Given the informational
character of nature, where matter and informational structure are equivalent, information
processing is in general embodied and substrate-specific. The Turing Machine model of abstract,
discrete, sequential symbol manipulation is a subset of the Natural computing model. With this
generalized idea of Natural computing and with Informational Structural Realism, Info-
computationalism (ICON), adopting a scientific third-person account, covers the entire list of
requirements for a naturalist knowledge-production framework from Brier (2010), except for
qualia as experienced in a first-person mode.
The Cybersemiotics Critique of the Existing Practice of Wissenschaft
In his article ‘Cybersemiotics: An Evolutionary World View Going Beyond Entropy and Information
into the Question of Meaning’ (Brier 2010), Søren Brier rightly criticizes the present state of the
scientific understanding of nature (“life, consciousness and cultural meaning all as a part of nature
and evolution, including humans and other living beings”), specifically:
1. The physico‐chemical scientific paradigm based on third person objective empirical
knowledge and mathematical theory, but with no conceptions of experiential life, meaning
and first person embodied consciousness and therefore meaningful linguistic
intersubjectivity;
2. The biological and natural historical science approach understood as the combination of
genetic evolutionary theory with an ecological and thermodynamic view based on the
evolution of experiential living systems as the ground fact and engaged in a search for
empirical truth, yet doing so without a theory of meaning and first person embodied
consciousness and thereby linguistic meaningful intersubjectivity;
3. The linguistic‐cultural‐social structuralist constructivism that sees all knowledge as
constructions of meaning produced by the intersubjective web of language, cultural
mentality and power, but with no concept of empirical truth, life, evolution, or ecology, and only a
very weak concept of subjective embodied first person consciousness, even while taking
conscious intersubjective communication and knowledge processes as the basic fact to study
(the linguistic turn);
4. Any approach which takes the qualitative distinction between subject and object as the
ground fact, on which all meaningful knowledge is based, considering all results of the
sciences including linguistics and embodiment of consciousness as secondary knowledge, as
opposed to a phenomenological (Husserl) or actually phaneroscopic (Peirce) first person
point of view considering conscious meaningful experiences in advance of the subject/object
distinction.
This is the view Brier argues for in more detail in his book Cybersemiotics: Why Information is
Not Enough! (Brier 2008) where he also proposes the Cybersemiotic star ‐ a diagram showing how
knowledge understood as Wissenschaft arises in a naturalist framework in four different
approaches to cognition, communication, meaning and consciousness: the exact natural sciences,
the life sciences, phenomenological‐hermeneutic interpretational humanities and the sociological
discursive-linguistic approach, all of which are considered equally important and have to be united in a
transdisciplinary theory of information, semiotics, first person consciousness and an
intersubjective cultural social‐communicative approach:
The semiotic star in cybersemiotics claims that the internal subjective, the intersubjective
linguistic, our living bodies, and nature are irreducible and equally necessary as
epistemological prerequisites for knowing. The viable reality of any of them cannot be
denied without self‐refuting paradoxes. There is an obvious connectedness between the
four worlds, which Peirce called “synechism.” It also points to Peirce’s conclusion that logic
and rationality are part of the process of semiosis, and that meaning in the form of semiosis
is a fundamental aspect of reality, not just a construction in our heads. (Brier 2009)
Cybersemiotics is an ambitious and important project. It correctly identifies problems in the
established traditional ideas of knowledge and science and proposes possible new directions of
development.
In the present article, I will argue that Info-computationalism (ICON), built on different
grounds and adopting a scientific third-person account (Dodig-Crnkovic 2006-2011), covers the
entire list from Brier (2010) except for qualia as experienced in a first-person mode. Qualia, like
other natural phenomena, are accounted for from a third-person perspective.
Info‐Computationalism is a purely scientific naturalistic framework. It approaches the
complex world of natural phenomena with info‐computational tools and models based on
research results from a range of sciences – from mathematics to computing, biology and ecology.
ICON is constructed as what Wolfram (2002) calls “a new kind of science”. Among other modeling
tools, it uses generative models of complex systems which, starting from often very simple rules,
generate complex behaviors such as the self-organization of an insect swarm or an immune
system. This type of approach is known as agent-based modeling (ABM) (Axelrod 1997; Epstein
2007), illustrated by the sketch below. Being a truly new scientific modeling approach, it is under intensive development,
and we can already anticipate future insights into generative mechanisms of emergent behaviors
of a variety of complex systems. It provides a framework that cuts across all four of the domains of
Brier’s semiotic star.
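To make the generative idea concrete, here is a minimal agent-based model, written in Python purely for illustration: each agent follows one simple local rule (align your heading with nearby agents), yet a globally ordered swarm emerges bottom-up. All names and parameters are my own illustrative choices, not taken from any of the cited models.

```python
# A minimal agent-based model (ABM): one simple local rule per agent
# (align with neighbors) produces global swarm order bottom-up.
# All parameters are illustrative, not from any cited model.
import math
import random

N, SIZE, RADIUS, SPEED, NOISE = 100, 10.0, 1.0, 0.1, 0.3

agents = [{"x": random.uniform(0, SIZE),
           "y": random.uniform(0, SIZE),
           "h": random.uniform(-math.pi, math.pi)} for _ in range(N)]

def step(agents):
    new_headings = []
    for a in agents:
        # Local rule: average the headings of all agents within RADIUS.
        near = [b for b in agents
                if (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < RADIUS ** 2]
        mean = math.atan2(sum(math.sin(b["h"]) for b in near),
                          sum(math.cos(b["h"]) for b in near))
        new_headings.append(mean + random.uniform(-NOISE, NOISE))
    for a, h in zip(agents, new_headings):
        a["h"] = h
        a["x"] = (a["x"] + SPEED * math.cos(h)) % SIZE  # toroidal world
        a["y"] = (a["y"] + SPEED * math.sin(h)) % SIZE

def order(agents):
    # Global order parameter: ~0 for random headings, ~1 for an aligned swarm.
    sx = sum(math.cos(a["h"]) for a in agents)
    sy = sum(math.sin(a["h"]) for a in agents)
    return math.hypot(sx, sy) / len(agents)

for _ in range(200):
    step(agents)
print(order(agents))  # typically rises well above the random baseline
```

No rule in the program mentions the swarm as a whole; the global order is an emergent property of iterated local interactions, which is exactly the kind of generative mechanism at issue here.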
So even though the “science(s)” typically taught at universities today in Philosophy of
Science/Theory of Science courses consist of disparate fields of study which more or less ignore
each other, scientific practice is changing rapidly, and interdisciplinary projects are becoming
increasingly important and mainstream.
At the research grassroots, very much thanks to information and communication
technologies providing smooth means of communication, cross-disciplinary scientific practices are
arising, bringing with them the necessity of understanding across research-field boundaries.
Generative models are excellent tools for bridging the gaps between research fields (Dodig
Crnkovic 2003). Denning (2007) declares: “Computing is a natural science”, and ICON provides
plenty of evidence for this claim. Biologists such as Kurakin (2009, 2011) also add to
information-based naturalism:
Recently, it was proposed that living matter as a whole represents a multiscale structure‐
process of energy/matter flow/circulation, which obeys the empirical laws of nonequilibrium
thermodynamics and which evolves as a self‐similar structure (fractal) due to the pressures
of economic competition and evolutionary selection [6‐9]. According to the self‐organizing
fractal theory (SOFT) of living matter, certain organizational structures and processes are
scale‐invariant and occur over and over again on all scales of the biological organizational
hierarchy, at the molecular, cellular, organismal, populational, and higher‐order levels of
biological organization. (Kurakin 2011: 5)
Info‐computationalism and Cybersemiotics
ICON is based on two principles: information (structure) and computation (dynamics) (Dodig
Crnkovic 2006, 2009). The fundamental nature of reality is informational (one might say: in ICON,
being is informational), and computation, understood in a generalized sense, is its dynamics (in
ICON, becoming is computational). In Kant’s vocabulary, the thing-in-itself (das Ding an sich),
certainly existing yet unknowable, is understood as proto-information – equally existing and
unknowable – which, for an agent, through interaction becomes information used for all sorts of
agency in the world, sensorimotor as well as language-related. Being a contemporary
version of Natural Philosophy, ICON of course includes evolution as well as all scientific results
from complexity and the rest of physics, biology, ecology, sociology and other third‐person
scientific accounts.
Cybersemiotics builds on Peirce’s synechism as a connectedness between matter and mind,
nature and culture. This continuity and connectedness is often opposed to the computational
universe based on discrete models such as Ed Fredkin’s Digital Physics. However, it is important to
emphasize that natural computationalism is a general idea of the existing physical universe with all
its many organizational levels, seen as a computational network where computation is going on
from the quantum‐mechanical level up to the cosmic level and back. Computing exists as both
discrete (the dominant model of computing today) and continuous (as found in analog computers,
developed in the early computer era and since largely superseded by digital ones).
Natural computationalism is thus not essentially dependent on the assumption that the universe is
discrete. If nature computes, its computation is both continuous and discrete. Historically there
have been frameworks of both types – geometry and calculus (continuous) as well as algebra and
algorithms (discrete) – and the two ways of thinking are suited to different purposes. The discrete
Platonic world based on integers (“God made the integers; all else is the work of man” –
Kronecker’s remark from a talk he gave at the Berliner Naturforscher-Versammlung in 1886, which
appears in the obituary by Weber (1891–92)) is, according to Chaitin (in Zenil 2011, p. 355), more
beautiful and more comprehensive than the world of real numbers with its uncountable infinities.
In any case, what makes the computational universe attractive is its constructive nature: we
construct a model which we have control over, and we compare the behavior of the model with
real-world phenomena:
So I said: let us design a world with pure thought that is computational, with computation as
a building block. It’s like playing God, but it would be a world we can understand, no? If we
invent it we can understand it – whereas if we try to figure out how this world works it turns
into metaphysics again.
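The coexistence of the two descriptions can be illustrated with a toy sketch; the decaying system and the step sizes are arbitrary assumptions of mine. The same process is captured both by a continuous closed-form law and by a discrete iterated rule, and the discrete description approaches the continuous one as the step shrinks.

```python
# Toy illustration of the discrete/continuous duality discussed above:
# one process (exponential decay, dx/dt = -k*x) described continuously
# in closed form and discretely as an iterated update rule. The system
# and all constants are arbitrary illustrative choices.
import math

k, T = 0.5, 4.0

def continuous(t):
    # Closed-form "analog" description: x(t) = exp(-k*t).
    return math.exp(-k * t)

def discrete(dt):
    # "Digital" description: iterate the local rule x <- x * (1 - k*dt).
    x = 1.0
    for _ in range(int(T / dt)):
        x *= 1.0 - k * dt
    return x

print(continuous(T))            # ~0.1353
for dt in (0.5, 0.1, 0.001):
    print(dt, discrete(dt))     # approaches the continuous value as dt -> 0
```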
Cybersemiotics adopts Peirce’s trichotomy of Firstness, Secondness and Thirdness. Peirce defines
Firstness as being independent of anything else; Secondness as being relative to, or in interaction
with, something else; and Thirdness as mediation, through which a Firstness and a Secondness are
brought into relation. Similarly, Peirce describes the sign relation as the triad of icon, index and
symbol: an icon represents an object by its inherent form, which resembles the object; an index
represents the object through some causal relationship and a symbol represents an object by a
convention within a community of practice. According to Sowa’s interpretation of Peirce:
…in every living being, from bacteria to humans and perhaps beyond, semiosis is the crucial
Thirdness that enables the organism to respond to signs by taking actions that serve to
further its goals of getting food, avoiding harm, and reproducing its kind. For most life forms,
those goals are unconscious, and most of them are built into their genes. But there is no
difference in principle between the evolutionary learning that is encoded in genes and the
individual learning that is encoded in neurons. Understanding life at every level and in every
kind of organization from colonies of bacteria to human businesses and governments
requires an understanding of signs, goals, communication, cooperation, and competition —
all of which involve aspects of Thirdness. (see http://www.jfsowa.com/pubs/signproc.htm)
If we are to establish a mapping between Peirce’s approach and the Info-computational one, Firstness
would correspond to Proto‐Information (the mode of being of that which is without reference to
any subject or object), Secondness to interaction (the mode of being of that which is itself in
referring to a second subject, any type of information exchange), and Thirdness would correspond
to (intentional) agency (the mode of being of that which is itself in bringing a second and a third
subject into relation with each other).
Info‐computationalism has strong connections not only with physics, biology and cognitive
science but also with artificial intelligence and robotics. It is therefore a framework which is
suitable for generalization from biological to artificial agents. Cognitive agents within this
framework are not mechanical deterministic machines, but adaptive, learning and anticipative
beings increasingly capable of adequate and intelligent behavior.
For the sciences today, an intelligent agent can be biological but also artificial.
ICON is constructed as a generalization in this respect.
Peirce’s semiosis is situated in Thirdness, while Info‐computationalism extends through all
three domains: Firstness (proto‐information), Secondness (interaction) and Thirdness (intentional
agency). As proto-information stands for reality in itself, in the framework of ICON with its
Informational Structural Realism, every sign is information, but not all information is a sign in the
sense of Peirce:
A sign, or representamen, is something which stands to somebody for something in some
respect or capacity. It addresses somebody, that is, creates in the mind of that person an
equivalent sign, or perhaps a more developed sign. That sign which it creates I call the
interpretant of the first sign. The sign stands for something, its object. It stands for that
object, not in all respects, but in reference to a sort of idea, which I have sometimes called
the ground of the representamen. (CP 2.228)
The aim of this article is not of course to claim superiority of any specific approach, but rather to
elucidate their domains of applicability and focus. For further comparisons between
Cybersemiotics and Info‐computationalism, see (Dodig Crnkovic 2010). As Whitehead cogently
noticed:
Human knowledge is a process of approximation. In the focus of experience, there is
comparative clarity. But the discrimination of this clarity leads into the penumbral
background. There are always questions left over. The problem is to discriminate exactly
what we know vaguely. (Whitehead 1937)
From the historical lessons learned we may conclude that information (and computation) will not
be enough to provide the ultimate world-view. There has never been a thought system in history
that could withstand all the changes of human civilization. Many systems have greatly contributed
to the development of humanity, often in combination with, and more frequently in opposition to,
each other. They have nonetheless broadened our horizons and helped the human imagination in
constructing new and ever more complex thought systems, theories and applications.
In retrospect, looking at the previous paradigm of the Clockwork Universe, we can conclude:
mechanics was not enough. Nevertheless we learned a lot, and mechanics was an extremely
fruitful conceptual device. As Whitehead (1933) points out, each specific method or approach is at
length exhausted. Initially a system may be a success but developed to the limits of what it can
support, it collapses and finally presents an obstacle for new systems to come. What makes a
framework worthwhile is its fruitfulness as a generator of new ideas and knowledge (Dodig
Crnkovic & Müller 2009). It is important to be able to recognize this potential in a new paradigm.
Here is a summary of what makes info-computationalist naturalism a promising
research programme (Dodig‐Crnkovic & Müller 2009):
‐ Unlike mechanicism, info-computationalist naturalism has the ability to tackle fundamental
physical structures as well as life phenomena within the same conceptual framework. The observer
is an integral part of the info-computational universe.
‐ Integration of the scientific understanding of the structures and processes of life with the rest of
the natural world will help to achieve “the unreasonable effectiveness of mathematics” (or of
computing in general) even for the complex phenomena of biology, which today lack mathematical
effectiveness (Gelfand, according to Lesk 2000, p. 29) – in sharp contrast to physics (Wigner
1960).
‐ Info‐computationalism (which presupposes computationalism and informationalism) presents a
unifying framework for common knowledge production in many up to now unrelated research
fields. Present day specialization into various isolated research fields has led to the alarming
impoverishment of the common world view.
‐ Our existing computing devices are a subset of a set of possible physical computing machines,
and the Turing Machine model is a subset of envisaged, more general, natural computational
models. Advancement of our computing methods beyond the Turing‐Church paradigm will result
in computing capable of handling complex phenomena such as living organisms and processes of
life, social dynamics, communication and control of large interacting networks as addressed in
organic computing and other kinds of unconventional computing.
‐ Understanding of the semantics of information as a part of the data‐information‐knowledge‐
wisdom sequence, in which more and more complex relational structures are created by
computational processing of information. An evolutionary naturalist view of the semantics of
information in living organisms is given, based on the interaction/information exchange of an
organism with its environment.
‐ Discrete and analogue computing are both needed in physics, and so in physical computing, which
can help us to a deeper understanding of their relationship.
‐ Relating phenomena of information and computation understood in an interactive paradigm will
enable investigations into the logical pluralism of information produced as a result of interactive
computation. Of special interest are open systems in communication with the environment and
the related logical pluralism, including paraconsistent logic.
‐ Of all manifestations of life, mind seems to be information‐theoretically and philosophically the
most interesting one. Info-computationalist naturalism (computationalism + informationalism) has
the potential to support, by means of models and simulations, our efforts in learning about mind
and developing artifactual (artificial) intelligence in the direction of organic computing.
In a situation of a paradigm shift where many different approaches co‐exist, Sowa’s notion of
“knowledge soup” is useful as it stands for “the fluid, dynamically changing nature of the
information that people learn, reason about, act upon, and communicate” (Sowa 2000).
Universe as Informational Structure
The universe is "nothing but processes in structural patterns all the way down" (Ladyman, et al.
2007 p. 228) "From the metaphysical point of view, what exist are just real patterns" (p. 121).
Understanding patterns as information, one may infer that information is a fundamental
ontological category. The ontology is scale‐relative. What we know about the universe is what we
get from sciences, as "special sciences track real patterns" (p. 242). "Our realism consists in our
claim that successful scientific practice warrants networks of mappings as identified above
between the formal and the material" (p. 121).
This idea of the informational universe coincides with Floridi’s Informational Structural Realism
(Floridi 2008; Floridi 2009). We know as much of the world as we explore and cognitively “digest”:
Since we wish to devise an intelligible conceptual environment for ourselves, we do so not
by trying to picture or photocopy whatever is in the room (mimetic epistemology), but by
interacting with it as a resource for our semantic tasks, interrogating it through experience,
tests and experiments. Reality in itself is not a source but a resource for knowledge.
Structural objects (clusters of data as relational entities) work epistemologically like
constraining affordances: they allow or invite certain constructs (they are affordances for the
information system that elaborates them) and resist or impede some others (they are
constraints for the same system), depending on the interaction with, and the nature of, the
information system that processes them. They are exploitable by a theory, at a given Level of
Abstraction, as input of adequate queries to produce information (the model) as output.
(Floridi 2008 p. 370).
What info-computationalist naturalism aims at is to understand the dynamical interaction of
informational structures as a computational process. It includes the digital and the analogue, the
continuous and the discrete, as phenomena existing in the physical world on different levels of
description, with digital computing as a subset of more general natural computing. Wolfram finds
equivalence between the two descriptions – matter and information:
[M]atter is merely our way of representing to ourselves things that are in fact some pattern
of information, but we can also say that matter is the primary thing and that information is
our representation of that. It makes little difference, I don’t think there’s a big distinction – if
one is right that there’s an ultimate model for the representation of universe in terms of
computation. (Wolfram in Zenil 2011, p. 389).
A more detailed discussion of the questions of the informational universe and of natural info-
computationalism, including cognition, meaning, intelligent agency and similar topics, is given in
Dodig Crnkovic and Hofkirchner (2011).
In what follows I will focus on explaining the new idea of computation which is essentially
different from the notion of performing a given procedure in a deterministic mechanical way. This
new concept of computation, natural computation (sometimes called unconventional
computation in order to emphasize its difference from the computational models we are used to),
allows for nondeterministic complex computational systems with self‐* properties. Here self‐*
stands for self‐organization, self‐configuration, self‐optimization, self‐healing, self‐protection, self‐
explanation, and self/context‐awareness – applied to information‐processing systems. Scheutz
(2002) argues that this new kind of computationalism applied to the theory of mind is able to
explain the nature of intentionality and the origin of language.
Info‐Computationalism as Natural Philosophy
Ever since Turing proposed his computation model which identifies computation with the
execution of an algorithm, a predefined (discrete, finite) procedure, there have been questions
about how widely the Turing Machine model is applicable. The Church‐Turing Thesis establishes
the equivalence between a Turing Machine and an algorithm, interpreted so as to imply that all of
computation must be algorithmic. However, with the advent of computer networks, the model of
a computer in isolation, represented by a Turing Machine, has become insufficient. Today’s
computer systems have become large, consisting of massive numbers of autonomous and parallel
elements across multiple scales. At the nano‐scale they approach programmable matter; at the
macro scale, a huge number of cores compute in clusters, grids or clouds, while at the planetary
scale, sensor networks connect environmental and satellite data to track climate and other global‐
scale phenomena. The commonality of these modern computing systems consists in the fact that
they are ensemble‐like (as they form one whole in which the parts act in concert to achieve a
common goal the way an organism is an ensemble of its cells) and physical (as ensembles act in
the physical world and interact with their environment through sensors and actuators).
The solution for the problems of extreme complexity of modern computational networks is
sought in Natural computing as a new paradigm of computing which deals with computability in
the physical world. It has brought a fundamentally new understanding of computation and
presents a promising new approach to the complex world of autonomous, intelligent, adaptive,
and networked computing that has progressively emerged in recent years. Significant for Natural
computing is its bidirectional research (Rozenberg and Kari 2008): as the natural sciences are
rapidly absorbing ideas of information processing, computing concurrently assimilates ideas from
the natural sciences.
The Definition of Computation and the Turing Machine Model
The definition of computation is still under debate and, at the moment, the closest to common
acceptance is the view of computation as information processing, found in different mathematical
accounts of computing as well as Cognitive science and Neuroscience (see Burgin 2005).
Basically, for a process to be a computation, a model must exist – an algorithm, a network
topology, a physical process, or in general any mechanism which ensures the definability of its
behavior (Dodig Crnkovic 2011).
The characterization of computing can be made in several dimensions by classification into
orthogonal types: digital/analog, symbolic/subsymbolic, interactive/batch and sequential/parallel.
Nowadays digital computers are used to simulate all sorts of natural processes, including those
that in physics are understood as continuous. However, it is important to distinguish between the
mechanism of computation and the simulation model.
It does not matter whether the data constitute symbols; computation is a process of change of
data structures. Symbols appear at a high level of organization and complexity, and always in
relation to living organisms. Symbols represent something for a living organism; they function as
carriers of meaning.
The notion of computation as formal (mechanical) symbol manipulation originates from
discussions in mathematics in the early twentieth century. The most influential program for
formalization was initiated in the early 1920s by Hilbert, who treated formalized reasoning as a
symbol game in which the rules of derivation are expressed in terms of the syntactic properties of
the symbols (see Zach 2009). As a result of Hilbert’s program, large areas of mathematics have
been formalized. Formalization implies the establishment of the basic language which is used to
formulate the system of axioms and derivation rules defined such that the important semantic
relationships are preserved by inferences defined only by the syntactic form of the expressions.
Hilbert’s Grundlagen der Mathematik, and Whitehead and Russell’s Principia Mathematica are
examples of such formalization. However, there are limits to what can be formalized, as
demonstrated by Gödel’s incompleteness theorems.
A second important issue after the formalization of mathematics was to determine the class
of functions that are computable in the sense of being decidable by the application of a
mechanical procedure or an algorithm. Not all mathematical functions are computable in this
sense. It was Alan Turing who first developed a general method to define the class of computable
functions. He proposed the “logical computing machine”, a description of a procedure
that processes symbols written on a tape/paper in a way analogous to what a human does when
computing a function by application of a mechanical rule. According to Turing, the class of
computable functions was equivalent to the class of functions that could be evaluated in a finite
number of steps by a “logical computing machine” (Turing machine).
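The “logical computing machine” just described is easy to state as a program. The following is a minimal sketch of my own, not Turing’s original formulation: a finite rule table drives a read/write head over a tape, and the example rule table (unary successor, computing n + 1) is purely illustrative.

```python
# A minimal Turing machine interpreter: a finite rule table driving a
# read/write head over a tape. The example program (append one '1' to a
# unary number, i.e. compute n + 1) is purely illustrative.

def run_tm(rules, tape, state="start", accept="halt", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape; blank everywhere else
    head = 0
    while state != accept:
        symbol = tape.get(head, blank)
        # Each rule is purely syntactic: (state, symbol) -> action.
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Unary successor: scan right past the 1s, write one more 1, halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "R"),
}
print(run_tm(rules, "111"))  # -> "1111", i.e. 3 + 1 = 4
```

Note how the interpreter is sensitive only to the syntactic pair (state, symbol); nothing in the machine itself knows that the tape encodes a number.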
The basic idea was that any operations that are sensitive only to syntax can be simulated
mechanically. What the human following a formal algorithm does by recognition of syntactic
patterns, a machine can be made to do by purely mechanical means. Formalization and
computation are closely related and together entail that reasoning which can be formalized can
also be simulated by the Turing machine. Turing assumed that a machine operating in this way
would actually be doing the same thing as the human performing computation.
Some critics have suggested that what the computer does is merely an imitation or
simulation of what the human does – isomorphic to the human activity at some level, but not in all
relevant respects. I would add an obvious remark. The Turing
machine is supposed to be given from the outset – its logic, its physical resources, and the
meanings ascribed to its actions. The Turing Machine presupposes a human as a part of a system –
the human is the one who poses the questions, provides material resources and interprets the
answers.
In its original formulation (Church 1935, 1936), the Church-Turing thesis states that the
notion of an effectively calculable function of positive integers is identical with that of a recursive
function of positive integers, or of a lambda-definable function of positive integers (Church 1936,
p. 356). Computation was considered to be a process of computing a function of positive integers.
In practice, the Church-Turing thesis is used as a definition of computation. There has never been a
proof; the evidence for its validity comes from the equivalence of computational models such as
cellular automata, register machines, and substitution systems.
The Church‐Turing thesis has been extended to a proposition about the processes in the
natural world by Stephen Wolfram in his Principle of computational equivalence (Wolfram 2002),
in which he claims that there are only a small number of intermediate levels of computing before a
system is universal and that most natural systems can be described as universal computational
mechanisms. However, a number of computing specialists and philosophers of computing (Hava
Siegelmann, Mark Burgin, Jack Copeland, and representatives of natural computing) question the
claim that all computational phenomena in all relevant aspects are equivalent to the Turing
Machine.
George Kampis for example, in his book Self‐Modifying Systems in Biology and Cognitive
Science (1991) claims that the Church‐Turing thesis applies only to simple systems. According to
Kampis (p. 223), complex biological systems must be modeled as self‐referential, self‐organizing
systems he calls "component‐systems" (self‐generating systems), whose behavior, though
computational in a generalized sense, goes far beyond the simple Turing machine model:
A component system is a computer which, when executing its operations (software) builds a
new hardware.... [W]e have a computer that re‐wires itself in a hardware‐software interplay:
the hardware defines the software and the software defines new hardware. Then the circle
starts again.
Goertzel (1994) suggests that stochastic and quantum computing models would be more suitable
for component systems. Molecular computers are even more obvious candidates.
The Computing Universe – Naturalist Computationalism
Konrad Zuse was the first to suggest (in 1967) that the physical behavior of the entire universe is
being computed on a basic level, possibly on cellular automata, by the universe itself which he
referred to as "Rechnender Raum" or Computing Space/Cosmos. Consequently, Zuse was the first
pancomputationalist (natural computationalist). Here is Chaitin’s account:
And how about the entire universe, can it be considered to be a computer? Yes, it certainly
can, it is constantly computing its future state from its current state, it's constantly
computing its own time‐evolution! And as I believe Tom Toffoli pointed out, actual
computers like your PC just hitch a ride on this universal computation! (Chaitin, 2007 p. 13)
Wolfram, in his A New Kind of Science, likewise advocates a pancomputationalist view – a new,
dynamic kind of reductionism in which the complexity of behaviors and structures found in nature
is derived (generated) from a few basic mechanisms. Natural phenomena are thus the products of
computational processes. In a computational universe, new and unpredictable phenomena emerge
as a result of simple algorithms operating on simple computing elements, e.g. cellular automata,
and complexity originates from bottom-up emergent processes. Some cellular automata are
equivalent to a universal Turing Machine: Cook has proven Wolfram’s conjecture that one of the
simplest possible cellular automata (rule 110) is capable of universal computation, a result first
described in (Wolfram 2002). Wolfram’s critics remark, however, that cellular
automata do not evolve beyond a certain level of complexity. The mechanisms involved do not
necessarily produce evolutionary development. Actual physical mechanisms at work in the
physical universe appear to be quite different from simple cellular automata. Critics also claim that
it is unclear if the cellular automata are to be thought of as a metaphor or whether real systems
are supposed to use the same mechanism on some level of abstraction. Wolfram meets this
criticism by pointing out that cellular automata are models and as such surprisingly successful
ones.
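For concreteness, the automaton in question can be written down in a few lines. The sketch below implements elementary rule 110; the width, the wraparound boundary and the initial condition are my arbitrary choices. Each cell’s next state is a fixed function of its three-cell neighborhood, read off from the bits of the number 110.

```python
# Elementary cellular automaton rule 110, the automaton mentioned above
# (proven capable of universal computation). Each cell's next state is a
# fixed function of its 3-cell neighborhood, encoded in the bits of 110.
# Width and initial state are arbitrary illustrative choices.

RULE = 110  # the 8-entry rule table, packed into one integer

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Read the neighborhood (left, center, right) as a 3-bit index,
        # wrapping around at the edges.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> idx) & 1)
    return out

cells = [0] * 79 + [1]          # a single live cell at the right edge
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Running it prints the characteristic irregular triangles of rule 110, complex structure generated by a rule that fits in a single byte.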
Fredkin in his digital philosophy (1992) suggests that particle physics can emerge from
cellular automata. The universe is digital, time and space are not continuous but discrete. He goes
a step beyond the usual “computational universe” picture: even humans are software running on
a universal computer.
Wolfram and Fredkin assume that the universe is on a fundamental level a discrete system,
and so a suitable basis for an all‐encompassing digital computer. Actually the hypothesis about the
discreteness of the physical world is not decisive for pancomputationalism (natural
computationalism). As already mentioned, there are digital as well as analogue computers. On a
quantum‐mechanical level, the universe performs computation (Lloyd 2006) on characteristically
dual wave‐particle objects.
There are interesting philosophical connections between digital and analog processes. For
example, the functioning of the DNA code (digital) in biological systems is closely tied to protein
folding (analog). Moreover, even if in some representations it may be digital (and thus conform to
the Pythagorean ideal of number as a principle of the world), computation in the universe is
performed at many different levels of organization – quantum computing, bio-computing,
membrane computing, spatial computing, etc. – some of them digital, others analog.
Information Processing Beyond the Turing Limit
Computation is nowadays performed by computer systems connected in global networks of
multitasking, often mobile, interacting devices. The classical understanding of computation as
syntactic, mechanical symbol manipulation is being replaced by that of information processing,
with both syntactic and semantic aspects expressed. According to Burgin (2005), information
processing in practice includes the following:
(1). Preserving information (protecting information from change).
(2). Changing information or its representation.
(3). Changing the location of information in the physical world.
In the above list, (3) can actually be understood as changing the representation, and is therefore a
special case of (2). Moreover, preserving information (1) can be described as change performed by
the identity operation. What it boils down to is that computation is in general a change of
information or, as it is usually expressed, information processing; the sketch below makes this
reduction concrete.
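Under the assumption that an information state can be represented as content plus location (an illustrative choice of mine, not Burgin’s formalism), all three cases have one and the same type: a function from an information state to an information state.

```python
# Toy sketch of the reduction argued above: all three of Burgin's cases
# are instances of one operation type, "change of information", i.e. a
# function from an information state to an information state. The state
# representation (value + location) is an illustrative assumption.
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class Info:
    value: str      # the information content
    location: str   # where it is represented in the physical world

Change = Callable[[Info], Info]  # the single general operation type

preserve: Change = lambda i: i                                   # case (1): identity
transform: Change = lambda i: replace(i, value=i.value.upper())  # case (2)
move: Change = lambda i: replace(i, location="node-B")           # case (3): representation change

state = Info(value="hello", location="node-A")
for op in (preserve, transform, move):   # every case has the same type
    state = op(state)
print(state)  # Info(value='HELLO', location='node-B')
```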
Searching for further generalization, it can be noted that mechanisms of both computation
and communication imply the transformation and preservation of information. Bohan Broderick
(2004) compares notions of communication and computation and arrives at the conclusion that
computation and communication are often conceptually indistinguishable. He argues that the
difference between computation and communication lies only in the domain: computation is
limited to a process within a system and communication is an interaction between a system and
its environment. An interesting problem of distinction arises when the computer is conceived as
an open system in communication with the environment, the boundary of which is dynamic, as in
biological computing.
Burgin identifies three distinct components of information processing systems: hardware
(physical devices), software (programs that regulate their functioning), and infoware, which
represents the information processing performed by the system. Infoware is a shell built around
the software-hardware core, which is the traditional domain of automata and algorithm theory.
The Semantic Web is an example of infoware.
Natural Computation
The classical mathematical theory of computation, devised long before global computer networks,
is based on the theory of algorithms. Ideal, classical theoretical computers are mathematical
objects, equivalent to algorithms, abstract automata (Turing machines), effective procedures,
recursive functions, or formal languages.
Compared with the new computing paradigms, Turing machines form a proper subset of the
set of information processing devices, in much the same way that Newton’s theory of gravitation
is a special case of Einstein’s theory, or Euclidean geometry is a limit case of non‐Euclidean
geometries.
For implementations of computationalism, interactive computing (such as, among others,
agent-based computing) is the most appropriate model, as it naturally suits the purpose of
modeling a network of mutually communicating processes (see Dodig Crnkovic 2006-2011); a
minimal sketch follows.
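As a toy illustration of what a network of mutually communicating processes can compute (the ring topology, the values and the round count are illustrative assumptions of mine): each agent repeatedly averages its value with messages from its neighbors, and the network as a whole converges to a consensus that no individual agent computes.

```python
# A minimal sketch of interactive computation: processes that compute
# only by exchanging messages with neighbors, with no global controller.
# Synchronous rounds stand in here for genuinely concurrent message
# passing. Topology, values, and round count are illustrative choices.
import random

class Agent:
    def __init__(self, value):
        self.value = value
        self.inbox = []
        self.neighbors = []

    def send(self):
        for n in self.neighbors:
            n.inbox.append(self.value)   # interaction, not isolated computation

    def receive(self):
        if self.inbox:
            # Update by averaging own value with all received messages.
            self.value = (self.value + sum(self.inbox)) / (len(self.inbox) + 1)
            self.inbox.clear()

agents = [Agent(random.uniform(0, 100)) for _ in range(10)]
for i, a in enumerate(agents):           # ring topology
    a.neighbors = [agents[(i - 1) % 10], agents[(i + 1) % 10]]

for _ in range(50):
    for a in agents:
        a.send()
    for a in agents:
        a.receive()

print([round(a.value, 2) for a in agents])  # all values converge to one consensus
```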
Among the new paradigms of computing, Natural computation has a prominent place. It is a
study of computational systems including the following:
‐ Computing techniques that take inspiration from nature for the development of novel problem-
solving methods;
‐ The use of computers to simulate natural phenomena; and
‐ Computing with natural materials (e.g., molecules, atoms).
Natural computation is well suited for dealing with large, complex, and dynamic problems. It is an
emerging interdisciplinary area closely related to artificial intelligence and cognitive science, vision
and image processing, neuroscience, systems biology, bioinformatics ‐ to mention but a few.
Fields of research within Natural computing are, among others, biological computing/organic
computing, artificial neural networks, swarm intelligence, artificial immune systems, computing on
continuous data, membrane computing, artificial life, DNA computing, quantum computing, neural
computation, evolutionary computation, evolvable hardware, self‐organizing systems, emergent
behaviors, machine perception and systems biology.
Computational paradigms studied by natural computing are abstracted from natural
phenomena such as the self-X attributes of living (organic) systems (including self-replication,
self-repair, self-definition and self-assembly), the functioning of the brain, evolution, the immune
system, cell membranes, and morphogenesis. These computational paradigms can be implemented
not only in electronic hardware, but also in materials such as biomolecules (DNA, RNA) or
quantum computing systems (physical computing). One of the listed paradigms, evolutionary
computation, is sketched below.
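In the following sketch of evolutionary computation (the target string, population size and mutation rate are arbitrary illustrative choices of mine), a population of candidate bit strings improves through variation and selection alone, with no explicit algorithm for constructing the solution.

```python
# A small sketch of evolutionary computation: a population of candidate
# bit strings improves through variation (mutation) and selection alone.
# Target, population size and mutation rate are arbitrary illustrations.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # Number of positions matching the target (max = len(TARGET)).
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Selection: keep the fitter half; variation: refill with mutated copies.
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(generation, best, fitness(best))  # typically solved in a few dozen generations
```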
According to pancomputationalism (natural computationalism) (Dodig Crnkovic 2006‐2011),
one can view the time development (dynamics) in nature as information processing, and learn
about its computational characteristics. Such processes include self‐assembly, developmental
processes, gene regulation networks, gene assembly in unicellular organisms, protein‐protein
interaction networks, biological transport networks, and similar.
Natural computing has specific criteria for the success of a computation. Unlike in the Turing
model, the halting problem is not the central issue; what matters instead is the adequacy of the
computational response. An organic computing system adapts dynamically to the current
conditions of its environment by self-organization, self-configuration, self-optimization, self-
healing, self-protection and context-awareness. In many areas we have to model emergence
computationally as non-algorithmic (Barry Cooper, Aaron Sloman), which makes it interesting to
investigate the computational characteristics of non-algorithmic natural computation
(sub-symbolic, analog):
An "Organic Computing System" is a technical system, which adapts dynamically to the
current conditions of its environment. It is characterised by the self‐X properties: self‐
organization & self‐configuration (auto‐configuration); self‐optimisation (automated
optimization), self‐protection (automated computer security) & self‐healing, self‐explaining
and context‐awareness. Ideas of Organic Computing and its fundamental concepts arose
independently in different research areas like Neuroscience, Molecular Biology, and
Computer Engineering. Self‐organising systems have been studied for quite some time by
mathematicians, sociologists, physicists, economists, and computer scientists, but so far
almost exclusively based on strongly simplified artificial models. Central aspects of Organic
Computing systems have been and will be inspired by an analysis of information processing
in biological systems. http://www.organic‐computing.org
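One of the self-* properties can be illustrated with a toy self-optimizing loop; the environment function and all constants are assumptions of mine. The system tunes its own parameter from environmental feedback, and, characteristically, the criterion of success is the adequacy of the ongoing response rather than halting.

```python
# A toy illustration of one self-* property, self-optimization: an ongoing
# system that tunes its own parameter from environmental feedback. The
# success criterion is adequacy of response, not halting -- such a loop is
# meant to run indefinitely. The environment and constants are assumptions.
import random

def environment(output):
    # Response of an environment unknown to the system; by assumption the
    # adequate output is 7.0, observed only through a noisy reward signal.
    return -(output - 7.0) ** 2 + random.gauss(0, 0.1)

param = 1.0
best_reward = environment(param)
for _ in range(500):                          # in a real system: while True
    candidate = param + random.gauss(0, 0.2)  # self-generated variation
    reward = environment(candidate)
    if reward > best_reward:                  # keep changes that respond better
        param, best_reward = candidate, reward

print(round(param, 2))  # drifts toward the adequate response (~7.0)
```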
In sum, solutions are being sought in natural systems, with their evolutionarily developed strategies
for handling complexity, in order to improve complex networks of massively parallel autonomous
engineered computational systems. Research into the theoretical foundations of Natural computing
is needed to improve understanding, at the fundamental level, of computation as information
processing, which underlies all computing in nature.
Much like research in other disciplines of Computing such as AI, Software Engineering, and
Robotics, Natural computing is interdisciplinary research with a synthetic approach, unifying
knowledge from a variety of related fields. Research questions, theories, methods and approaches
are drawn from Computer Science (such as the theory of automata and formal languages, and
interactive computing), Information Science (e.g. Shannon’s theory of communication), ICT
studies, Mathematics (such as randomness and algorithmic information theory), Logic (e.g.
pluralist logic, game logic), Epistemology (especially naturalized epistemologies), evolution and
Cognitive Science (mechanisms of information processing in living organisms), in order to
investigate foundational and conceptual issues of Natural computation and information processing
in nature. In these times brimming with excitement, our task is nothing less than to discover a
new, broader notion of computation, and to understand the world around us in terms of
information processing (Rozenberg and Kari 2008). This development necessitates what Cooper,
Löwe and Sorbi (2007, p. X) call “taking computational research beyond the constraints of ‘normal
science’”.
Conclusion
This article proposes a kind of epistemological naturalism, different from Brier’s Cybersemiotics,
based on the synthesis of two fundamental cosmological ideas: the universe as informational
structure (informationalism) and the universe as a network of computational processes
(pancomputationalism/naturalist computationalism). In this framework, computational processes
are understood as natural computation, since information processing (computation) is not only
found in human communication and computational machinery but also in the entirety of nature.
Information represents the world (reality as an informational web) for a cognizing agent, while
information dynamics (information processing, computation) implements physical laws through
which all the changes of informational structures unfold. Computation as it appears in the natural
world is more general than the human process of calculation modeled by the Turing machine.
Natural computing takes place through the interactions of concurrent, asynchronous
computational processes, which we argue are the most general representation of information
dynamics.
Based on Informational Structural Realism and the generalized idea of computing, Info‐
computationalism (ICON) meets Brier’s list of requirements for a naturalist knowledge production
framework (Brier 2010). However, it adopts a scientific third‐person account (Dodig‐Crnkovic
2006‐2011), which implies that qualia experienced in a first‐person mode are outside the domain
of info-computationalism. Instead, qualia, like other natural phenomena, are accounted for from
a third-person perspective.
References
Axelrod, R. (1997). The Complexity of Cooperation: Agent‐Based Models of Competition and
Collaboration. Princeton: Princeton University Press.
Bohan Broderick, P. (2004). On Communication and Computation. Minds and Machines, 14(1), 1–
19.
Brier, S. (2008). Cybersemiotics: Why Information is Not Enough! Toronto: University of Toronto
Press.
Brier, S. (2009). Cybersemiotic pragmaticism and constructivism. Constructivist Foundations 5, 19–38.
Brier, S. (2010). Cybersemiotics: An Evolutionary World View Going Beyond Entropy and
Information into the Question of Meaning. Entropy 12, no. 8: 1902‐1920.
Burgin, M. (2005). Super‐Recursive Algorithms. Springer Monographs in Computer Science.
Chaitin, G. (2007). Epistemology as Information Theory: From Leibniz to Ω. In (Dodig Crnkovic and
Stuart 2007, Computation, Information, Cognition: The Nexus and the Liminal).
Church, A. (1935). Abstract No. 204. Bull. Amer. Math. Soc. 41, 332‐333.
Church, A. (1936). An Unsolvable Problem of Elementary Number Theory. Amer. J. Math. 58, 345.
Cooper, S.B.; Löwe, B.; Sorbi, A. (eds.) (2007). New Computational Paradigms. Changing
Conceptions of What is Computable. Springer Mathematics of Computing series, XIII.
Denning, P. (2007). Computing is a natural science. Communications of the ACM, 50(7), 13–18.
http://cs.gmu.edu/cne/pjd/PUBS/CACMcols/cacmJul07.pdf.
Dodig Crnkovic, G. & Burgin, M. (2010). Information and Computation. World Scientific Pub Co Inc.
Singapore.
Dodig Crnkovic, G. & Hofkirchner, W. (2011). Floridi’s “Open Problems in Philosophy of
Information”, Ten Years After. Forthcoming.
Dodig Crnkovic, G. (2006). Investigations into Information Semantics and Ethics of Computing (pp.
1‐133). Västerås, Sweden: Mälardalen University Press.
Dodig Crnkovic, G. (2009). Information And Computation Nets. Investigations into Info‐
computational World. Information and Computation (pp. 1‐96). Saarbrucken: Vdm Verlag.
Dodig Crnkovic, G. (2010). The Cybersemiotics and Info‐Computationalist Research Programmes as
Platforms for Knowledge Production in Organisms and Machines. Entropy 12, no. 4: 878‐901.
Dodig Crnkovic, G. (2010a). Biological Information and Natural Computation in Thinking machines
and the philosophy of computer science: concepts and principles. In J. Vallverdú (Ed.),
Hershey PA: Information Science Reference.
Dodig Crnkovic, G. (2011). Significance of Models of Computation from Turing Model to Natural
Computation. Minds and Machines, (R. Turner and A. Eden guest eds.) Volume 21, Issue 2,
Page 301. http://www.springerlink.com/openurl.asp?genre=article&id=doi:10.1007/s11023‐
011‐9235‐1
Dodig Crnkovic, G., & Müller, V. (2009). A Dialogue Concerning Two World Systems: Info-
Computational vs. Mechanistic. In (Dodig Crnkovic & Burgin, Information and
Computation), 149–184.
Dodig Crnkovic, G., & Stuart, S. (2007). Computation, Information, Cognition: The Nexus and the
Liminal. Cambridge Scholars Pub. Newcastle, UK.
Dodig‐Crnkovic, G. (2003). Shifting the paradigm of the philosophy of science: The philosophy of
information and a new renaissance. Minds and Machines, 13(4), 521–536.
http://www.springerlink.com/content/g14t483510156726/fulltext.pdf
Epstein, J. M. (2007). Generative Social Science: Studies in Agent‐Based Computational Modeling.
Princeton University.
Floridi L. (2008). A defense of informational structural realism. Synthese 161: 2, Springer, 219–253.
Floridi L. (2009). Against digital ontology. Synthese, 168 (1), 151–78.
Fredkin, E. (1992). Finite Nature. Proceedings of the XXVIIth Rencontre de Moriond.
Goertzel, B. (1994). Chaotic Logic. Plenum Press.
Kampis, G. (1991). Self‐Modifying Systems In Biology And Cognitive Science: A New Framework For
Dynamics, Information, And Complexity. Pergamon Press.
Kurakin, A. (2011). The self‐organizing fractal theory as a universal discovery method: the
phenomenon of life, Theoretical Biology and Medical Modelling, 8:4
http://www.tbiomed.com/content/8/1/4.
Kurakin, A. (2009). Scale‐free flow of life: on the biology, economics, and physics of the cell,
Theoretical biology & medical modelling 6.
Ladyman, J., Ross, D., Spurrett, D., and Collier, J. (2007). Everything must go: metaphysics
naturalized. Clarendon Press, Oxford: 1–368.
Lesk, A. (2000). The Unreasonable Effectiveness of Mathematics in Molecular Biology. The
Mathematical Intelligencer, Vol. 22, No. 2, 28–36.
Lloyd, S. (2006). Programming the universe: a quantum computer scientist takes on the cosmos.
Knopf. New York.
Rozenberg, G. and Kari, L. (2008). The many facets of natural computing. Communications of the
ACM, 51(10), 72–83.
Scheutz, M. (ed.) (2002). Computationalism: New Directions. Cambridge, MA: MIT Press.
Sowa, J. F. (2000). Knowledge Representation: Logical, Philosophical, and Computational
Foundations. Brooks/Cole Publishing Co., Pacific Grove, CA. http://www.jfsowa.com/krbook/
Weber, H. (1891–92). Jahresbericht der Deutschen Mathematiker-Vereinigung, Vol. 2, p. 19.
Whitehead, A. N. (1933). Adventures of Ideas. Macmillan, New York.
Whitehead, A. N. (1937). Essays in Science and Philosophy. Philosophical Library, New York.
Wigner, E. (1960). The Unreasonable Effectiveness of Mathematics in the Natural Sciences.
Communications on Pure and Applied Mathematics 13(1), 1–14.
Wolfram, S. (2002). A New Kind of Science. Wolfram Science.
Zach, R. (2009). Hilbert's Program. The Stanford Encyclopedia of Philosophy (Spring 2009 Edition),
Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2009/entries/hilbert‐program
Zenil H. (2011). Randomness Through Computation: Some Answers, More Questions. World
Scientific Publishing Co. Singapore.
All links accessed on 07 06 2011.