8/3/2019 Jong-Chen Chen and Ruey-Dong Chen- Toward an evolvable neuromolecular hardware: a hardware design for a m
Toward an evolvable neuromolecular hardware: a hardware
design for a multilevel artificial brain with digital circuits
Jong-Chen Chen, Ruey-Dong Chen
Department of Management Information Systems, National YunLin University of Science
and Technology, Touliu, Taiwan, R.O.C.
Author to whom correspondence should be sent: Jong-Chen Chen
Ph: +886-5-534-2601 ext. 5300 (dept.); +886-5-534-2601 ext. 5332 (office);
+886-5-551-2762 (home); FAX: +886-5-531-2077
email: [email protected]
Running Title: Evolutionary Neural Networks
Revised: Feb., 2001
Abstract: A biologically inspired neuromolecular architecture implemented on digital circuits
is proposed in this paper. Digital machines and biological systems provide different modes
of information processing. The former are designed to be effectively programmable, whereas
the latter have self-organizing dynamics. Previously, we developed a multilevel computer
model that captures intra- and interneuronal information processing. The experimental
results showed that this self-organizing model has long-term evolutionary learning capability
that allows it to learn in a continuous manner, and that the function of the system changes as
its structure is altered. Malleability and gradual transformability play an important role in
facilitating evolutionary learning. The implementation of this model on digital circuits
would allow it to perform on a real-time basis and to provide an architectural paradigm for
emerging molecular or neuromolecular electronic technologies.
keywords: evolutionary adaptability, artificial brain, multilevel evolutionary learning,
evolvable hardware
1. Introduction
Our brain is a highly active, asynchronous, concurrent network. This network has
significant information processing capability that allows us to think, imagine, dream, and so
on. In contrast, conventional digital computers have excellent computational power for
performing an enormous amount of repetitive work and a variety of information processing
tasks ranging over a wide spectrum of applications. Conrad [19] indicated that the major
dichotomy between brains and machines is the ability to evolve versus programmability.
Evolution by variation and selection is the foundation of nature's problem-solving
method [23]. In biological systems, functions and structures are closely related [15]. That
is, when the structures of a system are altered, its functions (or behaviors) change accordingly.
Evolvability and a close structure-function relationship provide organisms with the
malleability (gradual transformability) to cope with environmental changes (i.e., noisy
environments) and to learn new survival strategies for uncertain environments (i.e., new
environments). In recent years, the application of evolutionary computational techniques to
different problem domains has gained more attention and grown rapidly. The major
contributions were made by evolutionary optimization procedures [1], the evolutionary
programming approach [29], evolutionary strategies [57,59], and genetic algorithms [30,41].
Unlike biological systems, conventional computers are deficient in coping with problem
change [15,20,79]. A slight modification in a computer program can easily produce an
incorrect program, or a major malfunction. Usually, reprogramming is inevitable with only
a slight change in problem requirements. However, as advocated by Turing [71], there does
exist an effective procedure (or program) that can simulate (or solve) any problem as long as
it can be defined (or described) in a formal, precise manner. This means that conventional
computers have an effective programmability that allows us to simulate any physically
realizable process in nature [16].
As indicated above, evolvability is an important feature of our brain that allows
adaptability. Conventional computers have an effective programmability that allows us to
apply them to various problem domains. One of the ultimate goals is to integrate the merits
of information processing mechanisms provided by both brains and computers (which might
be called a brain-like computer) into a system, which might generate synergistic effects that
cannot be performed by a brain or computer alone. However, the principles of biological
information processing in the human brain are not understood completely. While the
success of a real brain-like computer may seem far away, a feasible approach is to employ
some possible information processing mechanisms understood from our brain, develop a
system based on this, and perform a variety of experiments. A vast number of research
projects have been conducted along this line. This research has included connectionist
models, evolutionary neural models, evolvable hardware, molecular computing, molecular
electronics and neuromolecular systems.
Connectionist models [32,33,36,45,49,73,74], which attempt to use the strength of
connections among neurons to represent information, are the most well-known neural models.
A number of investigators further applied evolutionary learning techniques to connectionist
models [46,58,63-65,75,77,78,81,82] and to intraneuronal models [23,24,46,47]. The
advantages of evolutionary design over human design can be found in Yao and Higushi [80].
The above models have more flexible learning capabilities than classic artificial intelligence
(AI) models and are applicable to a variety of problem domains. However, most models
developed so far are software simulation systems. It is very time-consuming to simulate a
population of networks, in particular an ensemble of evolutionary neural networks. The
studies on evolvable hardware have thus emerged.
As pointed out by Yao [79], there is no unanimous definition of evolvable hardware at
this moment. He defined it as architectures, structures, and functions that can change
dynamically and autonomously to perform specific tasks, but with a constant hardware
architecture [79]. Simulated evolution and reconfigurable hardware are two major aspects
of evolvable hardware. de Garis [25-27] further divided evolvable hardware into two
categories: extrinsic and intrinsic. The former simulates evolution in software, whereas the
latter does so in hardware.
Sipper et al. [61] proposed two reconfigurable architectures inspired by evolution and
ontogeny. Higuchi and his colleagues [39,40] have been working on the development of
evolvable hardware chips for different applications: an analog chip for cellular phones, a
clock-timing chip for gigahertz systems, a chip for autonomous reconfiguration control, a
data compression chip, and a chip for controlling robotic hands and navigation. Murakawa
et al. [55] presented an evolvable hardware for neural network applications by reconfiguring
the network topology and node functions in order to adapt the dynamics for a specific
problem domain. de Garis [25-27] developed an artificial brain that can assemble a great
number of cellular automata-based neural net modules and in the future may control the
behavior of a kitten robot.
It should be noted that connectionist models, including most evolutionary neural
networks and evolvable hardware, emphasize the connections among neurons based on the
Hebbian rule and omit information processing inside the neurons. Roughly, they consider
the neurons to be simple on/off threshold units with a simple firing rule. The intelligence of
these models is mediated primarily by exchanging signals among neurons. In general, these
models have a common underlying structure (i.e., map to one another). When learning is
completed, input patterns are translated into the strength of the connections among the
neurons. The patterns will interfere with one another because they are coded based on the
strength of the connections among the neurons. This has been called the superposition
problem [14]. This problem becomes worse when the number of patterns to be stored in a
network increases. Ignoring intraneuronal dynamics is an enormous simplification that
greatly reduces the computational capability of the neurons. The molecular and
neuromolecular models that will be described in this study shift the emphasis to an
intraneuronal form of information processing.
In the early 1970s, Conrad proposed some molecular information processing
architectures motivated by modes of information processing in the human brain
[10-13]. This line of work was further developed into the idea of molecular computers
[16-18,20,22]. A number of researchers [2,3,42-44,68-70] have tried to develop
carbon-based computing devices (so-called biocomputers) by using actual biological
materials. However, the realization of biocomputers is still in a very early stage for at least
a couple of reasons [42]. First, biological materials have not been considered seriously for
device construction. Second, biological materials are too fragile and insufficiently durable.
The artificial neuromolecular (ANM) model that we developed earlier [6,7] was
motivated by two molecular architectures [11-13]. This model has three distinguishing
features. The first is that the input-output behavior of the neurons is controlled by complex
internal dynamics that reflect the molecular mechanisms inside real neurons. The second
feature involves neurons that have hierarchical controls that make it possible to manipulate
collections of neurons. Finally, the model is an open evolutionary architecture that has a
rich potential for the evolution of a variety of behaviors that could significantly expand the
problem domains to which neural computing is applicable. In principle, this openness
should allow the model to address a broader class of problems than purely connectionist
models do. However, this is still a virtual machine that runs on top of a serial digital
computer and is therefore subject to practical computational limitations.
Section 2 describes the neuromolecular architecture and the previous experimental
results. Section 3 explains the detailed architecture of the intraneuronal dynamic model
along with the biological evidence. Section 4 illustrates the evolutionary learning
mechanisms. Section 5 describes a hardware design of the biologically motivated
neuromolecular architecture with digital circuits. Section 6 presents our concluding remarks.
2. The ANM system
2.1 Brief description of the system
The ANM system is an artificial brain that provides a rich platform for evolutionary
learning. The artificial brain is comprised of a network of neuron-like modules with internal
dynamics modeled by cellular automata. The dynamics reflect molecular processes believed
to be operative in real neurons, in particular processes connected with second messenger
signals and cytoskeleton-membrane interactions.
The objective is to create a repertoire of special-purpose pattern processors through an
evolutionary search algorithm and then to use memory manipulation algorithms to select
combinations of processors from the repertoires that are capable of performing coherent
functions. The system, as implemented presently, consists of two layers of memory access
neurons (called reference neurons) and one layer of intraneuronal dynamic neurons (called
cytoskeletal neurons) divided into a collection of functionally comparable subnets.
Evolutionary learning can occur at the intraneuronal level through variation-selection in
the cytoskeletal structures responsible for the integration of signals in space and time. The
memory manipulation algorithms that orchestrate the repertoire of neuronal processors also
use evolutionary search procedures, and are well suited for operating in an associative mode
as well.
2.2 Previous experimental results
By adjusting the input/output interfaces, the ANM system has been linked to a number
of problem domains, including maze navigation, bit pattern recognition, Chinese character
recognition, and chronic hepatitis B diagnosis.
Previous investigations on the malleability of this system showed that its function
changes in accordance with changes in the system's structure [9]. The experimental results
also provided information about the fitness landscape implicit in the system's structure
that facilitates evolutionary learning [4,9]. The evolution-friendliness of this system
increases as its structural complexity increases. This was investigated by adding more types
of cytoskeletal fibers, allowing weaker interactions, and increasing redundancy [7].
The integration of intra- and interneuronal information processing also plays a vital role.
These two types of information processing yield significant computational and learning
synergies [6]. The integrated system effectively employs synergies among different levels
of learning [4]. With the above features, the system is able to learn continuously in complex
problem domains and is effective in coping with problem changes [4,9].
Choosing significant features for differentiating data and insignificant features for
tolerating noise is not an easy problem for any intelligent system. Our experimental results
showed that the system exhibits an effective self-organizing capability in striking a balance
between pattern categorization and pattern generalization [5,8]. In the chronic hepatitis B
diagnosis application, this system showed itself to be well suited for differentiating
chronic hepatitis B patients from healthy individuals and for investigating what would be the
significant parameters in determining if one is infected with chronic hepatitis B [5].
2.3 The ANM architecture
The artificial brain is comprised of two complementary neuromolecular models:
reference neurons and cytoskeletal neurons. The following only explains the connections
among the neurons and their control mechanisms. Intraneuronal information processing will
be discussed in section 3.
The neuromolecular architecture has 256 cytoskeletal neurons, divided into eight
comparable subnets. Each subnet consists of 32 cytoskeletal neurons. By comparable
subnets, we mean that the input/output neuronal connections and intraneuronal structures of
each subnet are similar or the same (details are given in the next section). As
shown in Fig. 1, these 256 cytoskeletal neurons are controlled by two layers of reference
neurons (8 high-level reference neurons and 32 low-level reference neurons). Each
high-level reference neuron controls a collection of low-level reference neurons, each of which
in turn controls a bundle of comparable cytoskeletal neurons. A high-level reference neuron will
therefore control a particular combination of cytoskeletal neurons through low-level reference
neurons.
[Fig. 1 diagram: high-level reference neurons R1-R8 connect to low-level reference neurons r1-r32, which in turn connect to cytoskeletal neurons E1-E32 in each of subnet1 through subnet8.]
Fig. 1. Connections between reference and cytoskeletal neuron layers. Low-level reference
neurons select cytoskeletal neurons in each subnet that have similar cytoskeletal structures.
High-level reference neurons select different combinations of the low-level reference neurons.
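The two-layer control hierarchy above can be sketched as follows. This is a minimal illustrative sketch, not the ANM implementation: the dictionary layout and the function name are our own assumptions; only the counts (eight subnets, 32 neurons per subnet) come from the text.

```python
# Sketch of the two-layer reference neuron hierarchy: a high-level reference
# neuron stores a set of low-level reference neurons, and low-level neuron
# r_e selects the comparable cytoskeletal neuron E_e in every subnet.

N_SUBNETS = 8            # eight comparable subnets
NEURONS_PER_SUBNET = 32  # cytoskeletal neurons E1..E32 per subnet

def select_cytoskeletal_neurons(high_level_memory, high_index):
    """Return the (subnet, neuron) pairs activated by one high-level neuron."""
    low_level = high_level_memory[high_index]  # low-level neurons it controls
    return [(s, e) for e in low_level for s in range(N_SUBNETS)]

# High-level neuron R1 (index 0) controlling two low-level reference neurons:
memory = {0: [1, 5]}
selected = select_cytoskeletal_neurons(memory, 0)
assert len(selected) == 2 * N_SUBNETS  # each r selects one neuron per subnet
```

Because each low-level neuron addresses the comparable neuron in every subnet, a high-level neuron always picks out a particular combination of cytoskeletal neurons, as the text describes.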
The reference neuron scheme [13] is a memory manipulation model. This approach
correlates with some suggested hippocampal function mechanisms. These mechanisms
involve synaptic facilitation, as in Hebbian models. A reference neuron will load all of the
firing neurons that it contacts at the same time. Subsequent firing of a reference neuron will
thus fire (rekindle) all of the neurons that it loaded previously. This mechanism makes it
possible to store a single experience rapidly. The reference neuron scheme supports
time-ordered memories, content-addressable memories, associative memories, control of
circuit selection, and neuron orchestration. With these mechanisms it is possible to build up
complex association structures. Circuit selection and neuron orchestration are the only
memory functions that are used in our model (to be explained below).
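The one-shot loading and rekindling behavior just described can be sketched as below; the class and method names are illustrative assumptions.

```python
class ReferenceNeuron:
    """Loads all neurons firing at load time; firing later rekindles them."""

    def __init__(self):
        self.loaded = frozenset()

    def load(self, firing_now):
        # one-shot storage of a single experience: remember every neuron
        # that is firing at this moment
        self.loaded = frozenset(firing_now)

    def fire(self):
        # rekindle exactly the ensemble stored at load time
        return self.loaded

r = ReferenceNeuron()
r.load({"E3", "E7", "E12"})   # these neurons happened to fire together
rekindled = r.fire()
assert rekindled == {"E3", "E7", "E12"}
```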
Reference neurons can be used to control network selection. Signals emanating from
reference neurons inhibit and excite a set of networks in a manner that allows only one to be
active at any instant in time. This feature is important when we need to evaluate the
performance of each comparable subnet individually and alternately.
Orchestration is an adaptive process that varies the neurons in an assembly in order to
select well-performing combinations of neurons. The objective of orchestration is to select
an assembly of neurons that allows for performing input/output pattern transduction. In the
ANM system, orchestration occurs between high-level and low-level reference neurons. We
note that only cytoskeletal neurons selected by reference neurons are allowed to perform
pattern transduction.
2.4 Input-output interface
This system had 64 receptor neurons and 32 effector neurons when first constructed [6].
The neuronal connection patterns of each comparable subnet are the same (Fig. 2). This
ensures that comparable cytoskeletal neurons in each subnet (i.e., neurons having similar
intraneuronal structures) will receive the same inputs from receptor neurons and that the
systems outputs are the same when the firing patterns of each subnet are the same. Each
effector neuron is controlled by eight comparable cytoskeletal neurons (i.e., one from each
comparable subnet). We note that an effector neuron fires when one of its controlling
cytoskeletal neurons fires.
For each input pattern, the first-firing effector neuron is recorded. The group to which
this initially firing effector neuron belongs is defined as the output associated with the
input pattern. When this group is the same as the group determined by a particular
problem domain, the system makes a correct response. The greater the number of correct
responses made by the system, the higher its fitness. The overall architecture of the ANM
system is shown in Fig. 3.
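The output convention above (an effector fires when any of its controlling cytoskeletal neurons fires, and the earliest-firing effector determines the response) can be sketched as follows; the function names and the time-stamp representation are our own assumptions.

```python
def effector_fires(controlling_neurons_firing):
    """An effector fires if any of its controlling cytoskeletal neurons fires."""
    return any(controlling_neurons_firing)

def first_firing_effector(firing_times):
    """firing_times: effector name -> first firing step, or None if silent."""
    fired = {e: t for e, t in firing_times.items() if t is not None}
    return min(fired, key=fired.get) if fired else None

assert effector_fires([False] * 7 + [True])   # one of eight controllers suffices
times = {"O1": 7, "O2": 3, "O3": None}
assert first_firing_effector(times) == "O2"   # earliest firing wins
```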
[Fig. 2 diagram: receptor neurons I1-I64 connect to cytoskeletal neurons E1-E32 in each subnet (subnet1, subnet2, ...); the cytoskeletal neurons connect to effector neurons O1-O32.]
Fig. 2 Input/output interface of comparable cytoskeletal subnets. The connections between
receptor neuron and cytoskeletal neuron layers are randomly decided initially, but vary as
learning proceeds. The connections between cytoskeletal neuron and effector neuron layers
are fixed.
[Fig. 3 diagram: for each input pattern, receptor neurons I1-I64 feed the ANM's reference and cytoskeletal neurons, which drive effector neurons O1-O32; if the first-firing effector neuron Ok belongs to the same group as the pattern, the classification is correct; otherwise it is wrong.]
Fig. 3. Overall architecture of the ANM system
3. Intraneuronal dynamics
3.1 Biological evidence
Experimental studies utilizing a variety of techniques suggest that chemical and
molecular processes within neurons play a significant role in controlling neural firing
[28,37,50-53]. Rapid depolarizing effects induced by the microinjection of second
messenger molecules (cAMP) led to the suggestion that the cytoskeletal motions influence
ion channels [52,53]. Presumably cAMP acts on microtubule associated proteins to trigger
signal flow in the cytoskeleton or to alter the flow of signals arising from other sources.
This conclusion is supported by ultrafast electron microscopic studies that correlate ion
channel activity with cytoskeletal dynamics [54].
The cytoskeleton has three major components: microtubules, microfilaments (e.g., actin
filaments), and intermediate filaments (referred to as neurofilaments in neurons).
Microtubules and microfilaments, composed of tubulin polymers (alpha and beta) and
actin, respectively, might interact with one another via microtubule associated proteins
[34,35,56,60]. Likewise, intermediate filaments could interact with microtubules and
microfilaments via some of their binding proteins [62,66,67,72].
However, the real interaction among the three major filaments of the cytoskeleton is not
at present well understood. The cytoskeleton extends throughout the cell and underlies the
membrane. It is capable of exhibiting structural changes associated with
polymerization-depolymerization processes [54]. Conformational switching [38],
propagating conformational changes [21], vibratory motions of the sound wave type [53],
electric-dipole oscillations of the Fröhlich type [31,37], and membrane mediated interactions
[48] have also been suggested as possibilities. These and other mechanisms could
conceivably coexist, allowing for different modes of signal transmission. Obviously the
cytoskeleton is an extremely complex system.
3.2 The cytoskeletal neuron model
The cytoskeletal neuron is motivated by the biological evidence described above. It is
simulated with a two-dimensional grid. Each grid square is referred to as a compartment.
Signals impinging on a neuron are transduced into the cytoskeletal signal flows. When a
compartment of a cytoskeletal neuron receives an external signal, a cytoskeletal signal will be
generated in a component of the cytoskeleton and transmitted to its neighboring
compartments at a specific rate. Meanwhile, the signal decays over time. When
a cytoskeletal component is activated and a kinase resides in the same
compartment, the neuron will fire.
A kinase thus serves as a readout enzyme that can recognize a subset of input patterns.
Adding or deleting a kinase will, as a consequence, add to or remove from the set of patterns
to which a neuron responds. All input patterns in space and time that trigger a neuron to fire are
grouped as its recognition set. Relocating a kinase to a neighboring compartment could in
some cases hold the set of patterns recognized by a neuron constant, but in general it would
alter the input-output behavior of the neuron by advancing or delaying its firing time. The
power of a cytoskeletal neuron is that it is capable of transducing a set of spatiotemporal input
patterns into temporal output patterns.
The following explains how the signal integration features in the cytoskeleton are
captured. As indicated above, a cytoskeletal signal flow is initiated when an external signal
impinges on the membrane of a neuron. For example, in Fig. 4, the activation of the readin
enzyme at location (2,2) will trigger a cytoskeletal signal flow transmitted along the second
column of the C2 components, starting from location (2,2) and running to location (8,2).
An activated component will affect the state of the various types of neighboring
components if there is a MAP (microtubule associated protein) linking these components
together. For example, in Fig. 4, the activation of the readin enzyme at location (3,7) will
trigger a cytoskeletal signal flow transmitted along the seventh column of the C1 components,
starting from location (3,7) and running to location (6,7). When the signal arrives at
location (4,7), it will activate the component at location (4,8) via the MAP. The activation
of this component will in turn trigger a signal flow travelling along the eighth column. We
assumed that the interactions between two neighboring components are asymmetrical. That
is, the activated component at location (4,8) is not sufficient to activate the component at
location (4,7). The other assumption was that different types of components transmit
signals at different speeds. For example, C1 components transmit signals at the slowest
speed. By contrast, C3 components transmit signals at the fastest speed. The transmission
speed of the C2 components is intermediate, between that of the C1 and C3 components.
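These speed differences and the one-way MAP coupling can be sketched as a toy model. The concrete update periods below are our own assumptions; the text only fixes the ordering (C1 slowest, C3 fastest) and the asymmetry of the MAP link.

```python
# Assumed update periods in ticks per compartment; the ordering matters
# (C1 slowest, C3 fastest), the exact numbers do not.
PERIOD = {"C1": 3, "C2": 2, "C3": 1}

def arrival_tick(component_type, start_row, end_row):
    """Tick at which a signal started at start_row reaches end_row,
    moving one compartment per update along a column of one type."""
    return (end_row - start_row) * PERIOD[component_type]

# Over the same distance, a C3 column delivers before C2, which beats C1.
assert arrival_tick("C3", 2, 8) < arrival_tick("C2", 2, 8) < arrival_tick("C1", 2, 8)

# MAPs act as one-way links: (4, 7) can activate (4, 8), but not vice versa.
MAP_LINKS = {(4, 7): (4, 8)}
assert MAP_LINKS.get((4, 7)) == (4, 8)
assert MAP_LINKS.get((4, 8)) is None
```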
[Fig. 4 diagram: an 8x8 grid, indexed by location (i, j), of C1, C2, and C3 components, with readin enzymes, a readout enzyme, and MAP links between neighboring components.]
Fig. 4. A cytoskeletal neuron. Each grid location, referred to as a site, has at most one of
three types of components: C1, C2, or C3. Some sites may not have any component at all.
Readin enzymes could reside at the same site as any one of the above components. Readout
enzymes are only allowed to reside at the site of a C1 component. Each site has eight
neighboring sites. The neighbors of an edge site are determined in a wrap-around fashion.
Two neighboring components of different types may be linked by a MAP.
When a requisite spatiotemporal combination of cytoskeletal signals arrives at a readout
enzyme site, the neuron will fire. For example, in Fig. 4, there are three possible signal
flows that might reach and activate the readout enzyme at location (8,3). The first signal
flow is the one transmitted along the second column, activated either by the readin enzyme at
location (2,2) or by the enzyme at location (3,2). The second signal flow transmits along
the third column, activated by the enzyme at location (4,3). The third signal flow transmits
along the fourth column, activated either by the readin enzyme at location (1,4) or by the
enzyme at location (4,4). When two out of the three signal flows reach location (8,3) within
a short period of time, they will activate the readout enzyme sitting at the same location.
The activation of the latter will in turn cause the neuron to fire. However, the neuron might
fire at different times for two reasons. First, signals are transmitted at different speeds along
different types of components. Secondly, signals may be initiated by different readin
enzymes.
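The coincidence requirement above (at least two of the three signal flows arriving within a short period) can be sketched as follows; the window length, the function name, and the use of None for a flow that never arrives are assumptions of this sketch.

```python
def readout_activates(arrival_ticks, window, need=2):
    """Fire when at least `need` signal flows arrive within `window` ticks.
    None marks a flow that never reaches the readout enzyme site."""
    ticks = sorted(t for t in arrival_ticks if t is not None)
    # slide over sorted arrivals and look for `need` of them close together
    return any(ticks[i + need - 1] - ticks[i] <= window
               for i in range(len(ticks) - need + 1))

assert readout_activates([10, 12, 40], window=5)        # two flows coincide
assert not readout_activates([10, 40, None], window=5)  # no coincidence
```

The same sketch also shows why the neuron can fire at different times: shift the arrival ticks (different component speeds, different readin enzymes) and the moment of coincidence shifts with them.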
We have explained how to capture the signal integration feature in the cytoskeleton.
The following explains how cytoskeletal dynamics are implemented with cellular automata.
Each cytoskeletal component has six possible states: quiescent (q0), active with increasing
levels of activity (q1, q2, q3, and q4), and refractory (qr). A component in the highly active
state (q3 or q4) will return to the refractory state at the next update time for that component
type. The next state for a less active component (q0, q1, or q2) depends on the sum of all
stimuli received from its active neighboring components (with each component type having
its own update time). The detailed state transition rules are illustrated in Fig. 5. A
component in the refractory state will go into the quiescent state at its next update time. A
component in the refractory state is not affected by its neighboring components until its
refractory period is over.
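The component state machine can be sketched as below. This is a deliberately simplified, assumed transition rule: it treats all stimuli as equal increments, whereas Fig. 5 distinguishes the S1, S2, and S3 signals per component type.

```python
def next_state(state, stimulus):
    """Simplified transition for one component at its update time.
    States: 'q0' quiescent, 'q1'..'q4' increasing activity, 'qr' refractory.
    `stimulus` is the summed input from active neighbors (assumed integer)."""
    if state in ("q3", "q4"):
        return "qr"                   # highly active -> refractory
    if state == "qr":
        return "q0"                   # refractory -> quiescent, input ignored
    level = int(state[1]) + stimulus  # q0/q1/q2 climb with stimulation
    return f"q{min(level, 4)}"

assert next_state("q4", 0) == "qr"
assert next_state("qr", 3) == "q0"    # refractory components ignore stimuli
assert next_state("q1", 2) == "q3"
```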
[Fig. 5 diagrams: state-transition graphs for (a) the C1 component, (b) the C2 component, and (c) the C3 component, over the states q0-q4 with input signals s1, s2, and s3.]
Fig. 5. Transition rules of the components. S1, S2, and S3 indicate a signal from a highly
activated component C1, C2, and C3, respectively. For example, if C1 in the state q0 receives
an S2 signal it will enter the moderately activated state q2. If it then receives an S3 signal it will enter the more activated state q3.
4. Multilevel learning
Six levels of evolutionary learning are allowed in this system: the signal-flow initiation
level (controlled by readin enzymes), the signal-flow readout level (controlled by readout
enzymes), the signal-flow control level (controlled by MAPs), the signal-flow transmission
level (controlled by cytoskeletal components), the external-stimuli response level (determined
by the pattern of connections to receptor neurons), and the cytoskeletal-neuron-grouping
level (controlled by reference neurons). The first four levels are
intraneuronal and occur inside cytoskeletal neurons, whereas the last two levels are
interneuronal.
Intraneuronal evolutionary learning has three major steps (Fig. 6). The performance of
each subnet is evaluated first. Then, the three best-performing subnets are selected.
Finally, the readout enzyme, readin enzyme, MAP, or component patterns are copied (with
variation) from the best-performing subnets to the lesser-performing subnets, depending on
which level of evolution is occurring. Evolutionary learning at the level of responding to
external stimuli comprises three steps, too (Fig. 6). As above, the performance of each
subnet is evaluated first. Then, the three best-performing subnets are selected. Finally, the
patterns of connections between the receptor neuron and cytoskeletal neuron layers are copied
(with variation) from the best-performing subnets to the lesser-performing subnets.
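The evaluate/select/copy-with-variation loop common to both kinds of step can be sketched as follows. Selecting three winners follows the text; the cyclic assignment of winners to losers and the mutation operator are our own assumptions.

```python
import random

def evolve_step(subnets, fitness, mutate, n_best=3):
    """Overwrite the lesser-performing subnets with varied copies of the
    n_best best-performing ones; the winners are left untouched."""
    ranked = sorted(range(len(subnets)), key=lambda i: fitness[i], reverse=True)
    best, rest = ranked[:n_best], ranked[n_best:]
    for k, j in enumerate(rest):
        subnets[j] = mutate(subnets[best[k % n_best]])  # copy with variation
    return subnets

random.seed(0)
nets = [["s", i] for i in range(8)]   # stand-in "genomes" for eight subnets
fit = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.5]
evolve_step(nets, fit, mutate=lambda s: s + [random.randint(0, 1)])
assert nets[1] == ["s", 1] and nets[3] == ["s", 3]  # winners untouched
```

Depending on which level of evolution is open, the copied material would be the readin enzyme, readout enzyme, MAP, component, or receptor-connection patterns.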
Evolutionary learning at the reference neuron level also comprises three steps (Fig. 7).
First, cytoskeletal neurons controlled by each high-level reference neuron (through low-level
reference neurons) are activated in sequence for evaluating their performance. Secondly,
the patterns of neural activities controlled by the best-performing reference neurons are
copied to the lesser-performing reference neurons. Finally, the lesser-performing reference
neurons control slight variations in the neural groups controlled by the best-performing
reference neurons, assuming that some errors occur during the copy process.
In the current implementation, only one level is opened for learning at a time while the
other levels are turned off. Each level is opened for 16 learning cycles. Our approach is to
turn on each level in an alternating manner until the simulation is terminated. The level
opening learning sequence is shown in Fig. 8. We note that the segregation in time
described above does not mean that the fitness assigned to the reference neurons is
independent of the properties of the cytoskeletal neurons. Evolutionary learning at the
cytoskeletal neuron level alters the performance characteristics of the collection of neurons
(or combination of bundles) that the reference neurons control. This alters the fitness of the
collection and therefore the fitness of the reference neuron that provides access to this
collection. Also, it should be noted that the mechanism controlling the evolutionary
process does not have to be rigid. Indeed, it would be interesting in future work to
investigate the impact on learning of varying the number of learning cycles assigned to each
level and the level-opening sequence.
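The alternating level-opening schedule can be sketched as a simple round-robin generator. The level names below are illustrative, and the exact ordering (taken from Fig. 8, with reference-neuron learning interleaved between the cytoskeletal-level mechanisms) should be treated as an assumption:

```python
from itertools import cycle, islice

# Assumed ordering per Fig. 8: reference-neuron learning is interleaved
# with each cytoskeletal-level mechanism in turn.
LEVELS = ["readout", "ref_neuron", "receptor", "ref_neuron",
          "MAP", "ref_neuron", "component", "ref_neuron", "readin",
          "ref_neuron"]

def level_schedule(cycles_per_level=16):
    """Yield (cycle_index, level) pairs: one level open at a time,
    each for cycles_per_level learning cycles, repeating forever."""
    for level in cycle(LEVELS):
        for c in range(cycles_per_level):
            yield c, level
```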
Previous experimental results [4,7] showed that the information processing capability of
this system increases as more levels of learning are allowed. We further examined what
levels of evolution contribute most to the learning. The experimental result [4] showed that
the contributions are made by several levels of evolution in the early stage of learning, and
that fitness increases only at certain levels in the later stage of learning. This suggested that
synergy only occurs in a selective manner. However, it is rather difficult to determine what
kind of contribution is made by each individual level of learning, for the following two reasons.
First, the significance of each level varies as input data (or problem domains) change.
Secondly, synergies among different levels of learning suggest that learning at one level opens
up opportunities for another. Thus, we are not able to assign appropriate credit to
each individual level.
[Figure omitted: panels a-c show the evaluate, copy, and vary steps across two subnets with enzyme sites E1-E4; the copied patterns are the readin, readout, MAP, component, and receptor-neuron connection patterns.]
Fig. 6. Evolutionary learning at the cytoskeletal neuron layer.
[Figure omitted: panels a-c show high-level reference neurons R1 and R2 selecting among low-level reference neurons r1-r4; the copy step introduces a variant selection.]
Fig. 7. Evolutionary learning at the reference neuron layer.
[Figure omitted: reference-neuron learning alternates with cytoskeletal-neuron learning at the readout, receptor-neuron, MAP, component, and readin levels, 16 cycles per level.]
Fig. 8. Sequence of opening of learning levels.
5. Digital hardware
In this section, we will explain a hardware design of the central architecture of the ANM
system (i.e., cytoskeletal neurons and reference neurons) on digital circuits.
5.1 Cytoskeletal neurons
As shown in Fig. 4, the cytoskeleton is represented with a 2-D (8×8) grid structure.
Cytoskeletal dynamics were simulated with 2-D cellular automata [76]. Each grid location
is simulated by a clocked sequential circuit (referred to as a processing unit, PU). In total,
there are sixty-four synchronous PUs for each cytoskeletal neuron. Each PU has 8
neighboring PUs. The neighbors of an edge PU are determined in a wrap-around fashion.
For any two neighboring PUs, there are two possible unidirectional connections between
them (i.e., one and its opposite directions). This allows each PU to take signals from and
send outputs to its eight neighboring PUs.
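The wrap-around neighborhood can be sketched as follows (a minimal illustration; the function name is ours). Taking grid coordinates modulo the grid size makes the 8×8 grid effectively toroidal, so edge PUs also have exactly eight neighbors:

```python
def neighbors(row, col, size=8):
    """The eight neighbors of a grid location; edge PUs wrap around,
    as in the 2-D cellular-automaton model of the cytoskeleton."""
    return [((row + dr) % size, (col + dc) % size)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
```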
Each PU consists of three departments: input, process, and output (Fig. 9). The input
department receives information from its neighboring PUs and sends its outputs to the process
department. The latter integrates signals from either its input department or receptor neurons
into an output signal for the output department, which in turn sends its outputs to all
neighboring PUs.
As indicated earlier, the cytoskeleton model includes the following components:
microtubules, neurofilaments, microfilaments, microtubule associated proteins (MAPs),
readin enzymes, and readout enzymes. The following explains how to implement each of
these components on digital circuits. It should be noted that our aim in this study was to
provide the basic layout for implementing the ANM with conventional digital circuits. An
optimized circuit design layout has not been completed yet.
[Figure omitted: each processing unit (PU) comprises input, process, output, and control departments; the process department contains the DPG, accumulator, and bounder, with signal paths to and from the eight neighboring PUs.]
Fig. 9. Conceptual architecture of a cytoskeletal neuron.
5.1.1 Input department
As indicated earlier, the input department plays the role of converting signals from
neighboring PUs into signals for the process department. It has two major functions. The
first is to determine the type of influence a neighboring signal has on the current PU. The
second function is to control the signal conversion timing.
As noted above, each PU has 8 neighboring PUs. Eight D-latches are designed to
hold the information coming from the neighboring PUs (one latch for each PU). The
information held in each D-latch is decoded by a corresponding 2x4 decoder to determine the
type of influence a neighboring signal has on the current PU.
As indicated in section 3, biological evidence suggests that the cytoskeleton is comprised
of three types of fibers: microtubules, microfilaments, and neurofilaments. Our assumption
[6] was that the cytoskeletal fibers play the role of signal transmission and integration, which
in turn controls the firing activity of a neuron. In addition, we assumed that signals
transmitted along microtubules (denoted by C1 in Fig. 4) represent major signal flows in the
cytoskeletal neuron and have the greatest impact on the other two types of components. In
contrast, signals transmitted along microfilaments (denoted by C3 in Fig. 4) play the role of
modulating major signal flows in the cytoskeletal neuron and have the least impact on the
other two types of components. Neurofilaments also play the role of modulating major
signals, but with more impact on the other two types of components than microfilaments. In
summary, the types of influence for signals from neighboring fibers are divided into three
categories: strong, intermediate, and weak (denoted by S, I, and W in Fig. 10, respectively), as
shown in Table 1.
[Figure omitted: D flip-flops latch the signals from PU1-PU8; corresponding 2x4 decoders map each latched signal to a strong (S), intermediate (I), or weak (W) influence; a counter-driven 3x8 decoder controls the conversion timing; bits M1-M16 evolve at the component level and M17-M24 at the MAP level.]
Fig. 10. Input department.
Table 1. Influence type of a neighboring signal on a PU

  type of a             type of current PU
  neighboring PU        C1            C2            C3
  C1                    strong        strong        strong
  C2                    intermediate  strong        strong
  C3                    weak          intermediate  strong
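Table 1 can be written directly as a lookup keyed by the pair (neighbor component type, current component type); this is only an illustrative encoding, not part of the circuit design:

```python
# Table 1 as a lookup: influence of a neighboring PU's signal on the
# current PU, keyed by (neighbor type, current type).
INFLUENCE = {
    ("C1", "C1"): "strong",       ("C1", "C2"): "strong",       ("C1", "C3"): "strong",
    ("C2", "C1"): "intermediate", ("C2", "C2"): "strong",       ("C2", "C3"): "strong",
    ("C3", "C1"): "weak",         ("C3", "C2"): "intermediate", ("C3", "C3"): "strong",
}
```

Note the asymmetry: a C1 (microtubule) neighbor always exerts a strong influence, while a C3 (microfilament) neighbor exerts the weakest influence on the other component types.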
For each connection, two bits are used to specify the influence of a neighboring signal on
a processing unit. For eight neighboring connections, sixteen bits are required (denoted by
M1-M16 in Fig. 10). As indicated earlier, we allow evolutionary learning to occur at the
cytoskeletal component level. That is, the component type of each PU is allowed to change
as learning proceeds. Indirectly, this would change the signal influence type from and to the
neighboring PUs. For example, let's assume a PU whose component type is C1.
As shown in Table 1, it has the greatest impact on its neighboring PUs. However, its impact
becomes much smaller if its component type is altered from C1 to C3. This belongs to the
first level of learning in this system.
For any two neighboring PUs, the connection exists by default if they belong to the same
component type. This allows signals to transmit along components of the same type. If
they belong to different types, the connection is set only when there is a MAP linking them
together. For every possible connection to a neighboring PU, one bit is needed to indicate
whether there is a connection between them. Eight bits are required to set up the MAP
connection pattern to the eight neighbors (denoted by M17-M24). The MAP pattern linking
different types of PUs is allowed to change as evolutionary learning proceeds. This belongs
to the second level of learning.
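The connection rule above reduces to a single predicate (an illustrative sketch; the function and parameter names are ours):

```python
def connected(neighbor_type, current_type, has_map_link):
    """A directed connection between two neighboring PUs exists by
    default when they share a component type; PUs of different types
    are connected only when a MAP bit (M17-M24 / M91-M98) links them."""
    return neighbor_type == current_type or has_map_link
```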
The other function of the input department is to control the timing at which each
neighboring signal arriving at the input department is converted into signals for the process
department. The input department polls the latches in sequence such that only one is
allowed to perform signal conversion at a time. A counter counting from 0 to 7 is used to
control the timing of signal conversion. We note that signal conversion does not have to be
done in a sequential manner. Instead, it might be implemented with parallel digital circuits.
This would speed up the response time, but requires a more complicated circuit design.
5.1.2 Process department
The process department has three components: DPG (digital pulse generator),
accumulator, and bounder (Fig. 11). The DPG is responsible for converting signals from
either receptor neurons or neighboring PUs (through the input department) into a sequence of
binary signals for the accumulator. The accumulator adds up these binary signals using a
3-bit binary counter, incrementing by 1 each time it receives a '1' signal from the DPG, and
then sends its outputs to the bounder. The bounder is used to determine whether or not a
neuron is ready for firing.
[Figure omitted: the DPG converts S, I, and W signals from the input department, and inputs I1-I64 from the receptor neurons, into pulses for the accumulator, which feeds the bounder; bits M25-M88 evolve at the receptor-to-cytoskeletal-neuron connection level and bit M89 at the readin enzyme level.]
Fig. 11. Conceptual architecture of the process department.
As indicated above, the DPG receives signals from either receptor neurons or its
neighboring PUs. The pattern of connections between receptor neurons and each PU might
vary during the course of learning. In the current implementation, sixty-four bits (denoted
by M25-M88) are employed to represent the connections between receptor neurons and each
PU (one bit for each receptor neuron). This belongs to the third level of learning.
As mentioned earlier, a cytoskeletal signal is initiated when a readin enzyme receives an
external signal from any one of these 64 receptor neurons. In other words, there will be no
signal initiated if there is no readin enzyme sitting at the same site. As a consequence, the
existence of a readin enzyme will directly determine whether external signals arriving from
receptor neurons are allowed to convert into cytoskeletal signals. Changing the pattern of
readin enzymes will thus control the pattern of inputs into the cytoskeletal neurons. This
belongs to the fourth level of learning.
As shown in section 3.2, each cytoskeletal component has six possible states: quiescent
(q0), active with increasing levels of activity (q1, q2, q3, and q4), and refractory (qr). In the
current version of this model, a 3-bit binary counter is employed to represent the state of a
cytoskeletal component. A counter value of 0 represents state q0, 1 represents q1, 2
represents q2, 3 represents q3, 4 represents q4, and 5 represents qr; the remaining two values
are unused. The counter starts from 0 and increments by one when it receives a '1' pulse
from the DPG. After the count of 4, the counter stays at the same state until its next update
time. A component in state q3 or q4 will go into the refractory state (qr) at its next update
time, and then into the quiescent state (q0) at the following update time.
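The state transitions above, together with the countdown behavior described later for the controller, can be sketched as a small state machine (an illustrative sketch; the function names are ours, and the countdown rules for q1/q2 are taken from the controller description in section 5.1.2):

```python
# Component states encoded in a 3-bit counter: quiescent q0 = 0,
# active q1-q4 = 1-4, refractory qr = 5 (counter values 6, 7 unused).
Q0, Q1, Q2, Q3, Q4, QR = range(6)

def on_pulse(state):
    """A '1' pulse from the DPG increments the activity counter,
    saturating at q4; a refractory component ignores pulses."""
    return state + 1 if Q0 <= state < Q4 else state

def on_update(state):
    """State change at an update time: q3/q4 enter the refractory
    state, qr relaxes to quiescent, and q1/q2 count down by one
    when no pulse has arrived."""
    if state in (Q3, Q4):
        return QR
    if state == QR:
        return Q0
    if state in (Q1, Q2):
        return state - 1
    return state
```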
As shown in Table 1, there are three types of signals that the DPG might receive. In the
current implementation, we assume that the DPG generates one, two, and three pulses for
the accumulator when it receives a weak, intermediate, and strong signal, respectively. The
DPG has three 6-bit parallel-load/serial-out registers that load data into the registers in
parallel and then send these bits out one at a time. For example, in Fig. 12, the first 6-bit
register loads the data 101010 in parallel (the three 1s indicate that three pulses will be
generated) and then sends these bits out one at a time.
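The three register contents and their serial shift-out can be sketched as follows (an illustrative sketch; the 101000 and 100000 patterns for the intermediate and weak cases are read off Fig. 12):

```python
# Contents of the three 6-bit parallel-load/serial-out registers.
PULSE_PATTERNS = {
    "strong": "101010",        # three pulses
    "intermediate": "101000",  # two pulses
    "weak": "100000",          # one pulse
}

def serial_out(influence):
    """Shift the loaded 6-bit pattern out one bit per clock tick."""
    for bit in PULSE_PATTERNS[influence]:
        yield int(bit)
```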
As mentioned earlier, the accumulator will send its outputs to the bounder. The latter is
used for determining whether a PU is ready for sending outputs to its neighboring PUs or
firing a neuron. As shown in Fig. 12, the bounder has two inputs: P and Q. Input P takes
signals from the accumulator while Q is a fixed threshold set up by the system in advance.
Currently, the threshold value is set at 011, representing the highly active state q3. There
are three possible cases between P and Q. When P is less than Q, there is no output
generated from the bounder. When P is greater than or equal to Q, this means that the PU is
ready for sending outputs to its neighboring PUs. Specifically, the neuron will fire when P
is greater than Q and there is a readout enzyme sitting at the same site. As indicated earlier,
only the first-firing effector neuron is recorded as an output associated with each input pattern.
As a consequence, all PUs will be reset to their initial states when there is a cytoskeletal
neuron firing. Through changing the pattern of readout enzymes, we can control the output
pattern of a cytoskeletal neuron. Like readin enzymes, the pattern of readout enzymes is
allowed to change as learning proceeds. This belongs to the fifth level of learning.
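The bounder's decision logic reduces to two comparisons (an illustrative sketch; the function name is ours, and the threshold 011 is the q3 value stated above):

```python
THRESHOLD = 0b011  # the fixed threshold Q, i.e. the highly active state q3

def bounder(p, has_readout_enzyme):
    """Compare the accumulator value P against Q.  Returns (send, fire):
    outputs go to the neighboring PUs when P >= Q; the neuron fires
    only when P > Q and a readout enzyme sits at the same site."""
    send = p >= THRESHOLD
    fire = p > THRESHOLD and has_readout_enzyme
    return send, fire
```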
In addition to the above three major components, the process department has a
controller with two functions (Fig. 12). First, it controls the accumulator
countdown at discrete instants of time. For example, an accumulator in the moderately
activated state q2 will go to the slightly activated state q1 at the next update time if it receives
no signal. Similarly, an accumulator in the slightly activated state q1 will go to the quiescent
state q0 if it receives no signal. An accumulator in the refractory state is not affected by its
neighbors until its refractory period is over, and will go into the quiescent state at its next
update time. The refractory state is necessary to ensure unidirectional propagation.
Secondly, the controller determines the update timing of the accumulator state.
Indirectly, this controls the signal transfer timing from the accumulator to the bounder, which
in turn determines the PU transmission speed. As indicated earlier, different types of
components transmit signals at different speeds. We assume that C1 and C3 components
(PUs) transmit signals at the slowest and fastest speeds, respectively; the transmission speed
of C2 components is intermediate between the two. Our somewhat arbitrary choice is that
C3 transmits signals to its neighboring components on the fastest time scale, C2 at slightly
less than half the C3 rate, and C1 at slightly less than half the C2 rate.
[Figure omitted: the W, I, and S lines from the input department load the DPG's three 6-bit parallel-load/serial-out registers with the patterns 100000, 101000, and 101010, respectively; their serial outputs drive the up/down/clear inputs of the accumulator's 3-bit binary counter; a 3-bit comparator in the bounder compares the counter value P against the threshold Q = 011 and signals firing; bit M90 evolves at the readout enzyme level.]
Fig. 12. Detailed architecture of the process department.
5.1.3 Output department
As indicated earlier, there are two unidirectional connections between a PU and its
neighboring PUs. In section 5.1.1, we explained that M17-M24 (representing the pattern of
MAPs) controls the pattern of signals from neighboring PUs to a specific PU. Similarly, we
need one bit to indicate whether a PU should send outputs to its neighboring PUs. In total,
eight bits are required (denoted by M91-M98), as shown in Fig. 13. As indicated earlier, the
connection exists by default for any two neighboring PUs of the same type. If they belong to
different types, the connection is set only if there is a MAP linking them together. As
described for the input department, the MAP pattern is allowed to change as learning proceeds.
[Figure omitted: eight bits (M91-M98) gate the outputs from the process department to PU1-PU8 and evolve at the MAP level.]
Fig. 13. Output department.
5.1.4 Preliminary result
To evaluate the performance of the above digital circuits, each PU was simulated and
tested with the MAX+PLUS II system, a digital circuit simulation tool developed by Altera
Corporation (San Jose, CA). The results showed that these circuits function as expected.
The simulation results were consistent with those of the ANM system constructed previously.
At this stage, we have not yet performed a complete set of experiments to report in the present
paper.
5.2 Reference Neurons
As shown in Fig. 1, cytoskeletal neurons are controlled by two levels of reference
neurons. A low-level reference neuron contacts all cytoskeletal neurons in a given class (i.e.,
neurons belonging to the same bundle). A high-level reference neuron contacts subsets of
the low-level reference neurons. In the current implementation, the connections between the
two levels of reference neurons are allowed to change as learning proceeds. This belongs
to the sixth level of learning. We note that the connections between low-level reference
neuron and cytoskeletal neuron layers are held constant.
5.3 Learning Mechanisms
We have shown in section 5.1 that each PU is controlled by 98 bits of memory (M1-M16
for determining the type of influence for signals from neighboring PUs, M17-M24 and M91-M98
for setting up the MAP patterns, M25-M88 for choosing stimuli from receptor neurons, M89 and
M90 for deciding the existence of readin and readout enzymes, respectively). For each
cytoskeletal neuron implemented with 8x8 cellular automata, around 6.4 kilobits of memory
are required. As mentioned earlier, the ANM system has 256 cytoskeletal neurons in the
current implementation. In total, this would require slightly more than 1.6 megabits of
memory (i.e., 256x6400 bits), which is around 200 kilobytes. When learning proceeds at the level
of cytoskeletal neurons, the performance of each subnet is evaluated first. Then, the bit
patterns of the best-performing subnets are copied, with variation, to the
lesser-performing subnets. As shown in Fig. 14, the above process is repeated until the
system is terminated.
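The memory budget above can be checked with quick arithmetic (note that the paper rounds the exact 98 × 64 = 6272 bits per neuron up to 6400, i.e. "around 6.4 kilobits"):

```python
BITS_PER_PU = 98        # memory bits M1-M98 controlling one processing unit
PUS_PER_NEURON = 8 * 8  # the 8x8 cellular-automaton grid
NEURONS = 256           # cytoskeletal neurons in the current implementation

bits_per_neuron = BITS_PER_PU * PUS_PER_NEURON  # 6272 bits, ~6.4 kilobits
total_bits = bits_per_neuron * NEURONS          # ~1.6 megabits
total_kib = total_bits / 8 / 1024               # i.e. around 200 kilobytes
```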
When learning proceeds at the level of reference neurons, only the connections between
the two levels of reference neurons are allowed to change in the course of learning. That is,
each high-level reference neuron is allowed to change its selection among the 32 low-level
reference neurons. For each high-level reference neuron, 32 bits are needed to specify the
pattern of connections to the 32 low-level reference neurons. In total, 256 bits (m1-m256) are needed for
the eight high-level reference neurons. As learning proceeds, the cytoskeletal neurons
selected by each high-level reference neuron are evaluated first. We note that the
performance (or fitness) of each high-level reference neuron is determined by the cytoskeletal
neurons it selects. The bit patterns of the best-performing reference neurons are copied,
with variation, to the lesser-performing reference neurons. As shown in Fig. 15, the above
process is repeated until the system is terminated.
Generate the initial repertoire of cytoskeletal neurons Mijk (i: subnet number; j: neuron
number; k: memory bit number)
Repeat
    Evaluate the performance of each subnet
        (For each input pattern, a subnet makes a correct response when the first effector
        neuron-firing group is the same as the group determined by a specific problem
        domain. The greater the number of correct responses made by a subnet, the
        higher its fitness. The detailed procedure for evaluating the performance of a
        subnet is shown in Fig. 3.)
    Select the three best-performing subnets
    Copy Mxjk to Myjk
        (x: best-performing subnet; y: lesser-performing subnet; j: 1,...,32; k: 1,...,98)
    Mutate Myjk
        (y: lesser-performing subnet; j: 1,...,32; k depends on which learning level is
        operative. The range of k is:
            1 to 16 if evolving at the component level
            17 to 24 and 91 to 98 if evolving at the MAP level
            25 to 88 if evolving at the receptor/cytoskeletal connection level
            89 if evolving at the readin enzyme level
            90 if evolving at the readout enzyme level)
Until learning objective complete or maximum learning time reached
Fig. 14. Evolutionary learning at the cytoskeletal neuron level.
Generate the initial repertoire of high-level reference neurons mi (i: memory bit number)
Repeat
    Evaluate the performance of each high-level reference neuron
        (The fitness of a reference neuron is determined by the performance of the
        cytoskeletal neurons that it selects.)
    Select the three best-performing high-level reference neurons
    Copy mx to my (x: best-performing ref. neurons; y: lesser-performing ref. neurons)
    Mutate my (y: lesser-performing ref. neurons)
Until learning objective complete or maximum learning time reached
Fig. 15. Evolutionary learning at the reference neuron level.
6. Conclusions
Evolution is the source of the high adaptability of biological systems. Digital machines
are designed to be effectively programmable. The ANM system that we developed [6,7] is a
biologically inspired neuromolecular architecture that attempts to capture certain biological
information processing features. Evolutionary adaptability is one of the significant features
captured in this architecture. Previously, this architecture was implemented using computer
programs. Because a computer simulation of such a multilevel parallel network is very
time-consuming, we have demonstrated a hardware design of this architecture using conventional
digital circuits. Our ultimate goal is to build actual hardware (or better,
molecularware/neuromolecularware) that is natural to the biological processing mode.
Adaptability is a very broad term. It might be defined as the capacity to continue to
function in an unknown or uncertain environment [15]. Generalization can be regarded as a
specific kind of adaptability. By generalization, we mean the ability to group different
patterns in a natural way in accordance with some underlying structural or functional
principles [8]. Previous experimental results [4,6,8] demonstrated that this system exhibits
some degree of effective generalization capability, in which intraneuronal dynamics plays a
significant role. However, this was still a very limited approach to generalization since high
level cognitive processes were not taken into account.
In this system, a cytoskeletal neuron with a particular integrative dynamics and readout
enzyme distribution will recognize some families of input patterns (i.e., it will recognize a
family of input patterns that are variant in space and time). The input patterns recognized by
a cytoskeletal neuron will be generalized in a more selective way than a simple threshold
neuron. Furthermore, the manner of generalization can be altered by changing its integrative
dynamics. This capability is advantageous for handling problems with environmental
ambiguity. Cytoskeletal neurons may be trained to recognize sets of input patterns through
an evolutionary learning algorithm. If overgeneralization occurs, the neuron will lose its
pattern processing specificity since every pattern will trigger its firing. Conversely, if a
cytoskeletal neuron is trained to recognize only a single pattern, it will be overly specific and
rigid. In this case it will lose its capability for recognizing input patterns that are variable in
space and time. It is important to strike a balance between these two extremes.
The ability to generalize is clearly necessary for dealing with variable or noisy
environments. The problem is that dynamics that allow for effective generalization of some
classes of environments necessarily preclude effective generalization for other classes. We
call this the interference problem. Dealing with this problem requires an effective
evolutionary learning algorithm. It also requires the learning algorithm to proceed with a
suitable neuronal architecture, including both the internal structures and neuronal dynamics,
and memory mechanisms that link neurons into coherent groups. It is essential that the
architecture allow for the evolution of a repertoire of special purpose neurons with dynamics
that have different generalization properties and a linking mechanism that allows for
orchestration of this repertoire. Our model opens up such a rich evolutionary possibility.
The model is clearly much more complex than conventional connectionist models. We
can regard it on the one hand as a tool for examining the nature of biological processing itself,
and on the other as a tool that is capable of yielding practical benefits. This paper is our first
attempt to develop a neuromolecular architecture on digital circuits. The detailed (or more
effective) design of this hardware is still under investigation. We expect that the realization
of this architecture on digital circuits would allow the system to perform on a real-time basis.
It would indeed expand the application domains. Future work includes widening the
dynamic capabilities of the neurons, utilizing the associative memory capability in
combination with evolutionary learning, porting evolved neurons with useful pattern
processing capabilities to special silicon hardware, using the system as an architectural
paradigm for emerging molecular electronic technologies, and employing the system as a
vehicle for obtaining a clearer understanding of the role of intraneuronal mechanisms in brain
functions.
Acknowledgment
This paper is dedicated to the memory of Professor Michael Conrad, a pioneer in the field
of molecular computing.
References
1. H.J. Bremermann, Optimization through evolution and recombination, in: M.C. Yovits,
G.T. Jacobi and G.D. Goldstein, eds., Self-Organizing Systems (Spartan Books,
Washington, D.C., 1962) 93-106.
2. F.L. Carter, ed., Molecular Electronic Devices (Marcel Dekker, New York, 1982).
3. F.L. Carter, ed., Molecular Electronic Devices II (Marcel Dekker, New York, 1987).
4. J.-C. Chen, Problem solving with a perpetual evolutionary learning architecture, Applied
Intelligence 8, 1 (1998) 53-71.
5. J.-C. Chen, Data differentiation and parameter analysis of a chronic hepatitis B database
with an artificial neuromolecular system, BioSystems 57 (2000) 23-36.
6. J.-C. Chen and M. Conrad, Learning synergy in a multilevel neuronal architecture,
BioSystems 32 (1994) 111-142.
7. J.-C. Chen and M. Conrad, A multilevel neuromolecular architecture that uses the
extradimensional bypass principle to facilitate evolutionary learning, Physica D. 75 (1994)
417-437.
8. J.-C. Chen and M. Conrad, Pattern categorization and generalization with a virtual
neuromolecular architecture, Neural Networks 10, 1 (1997) 111-123.
9. J.-C. Chen, and M. Conrad, Evolutionary learning with a neuromolecular architecture: a
biologically motivated approach to computational adaptability, Soft Computing 1, 1
(1997) 19-34.
10. M. Conrad, Information processing in molecular systems, Currents in Modern Biology
(now BioSystems) 5 (1972) 1-14.
11. M. Conrad, Evolutionary learning circuits, J. Theor. Biol. 46 (1974) 167-188.
12. M. Conrad, Molecular information structures in the brain, J. Neurosci. Res. 2 (1976)
233-254.
13. M. Conrad, Complementary molecular models of learning and memory, BioSystems 8
(1976) 119-138.
14. M. Conrad, Principle of superposition-free memory, J. Theor. Biol. 67 (1977) 213-219.
15. M. Conrad, Adaptability: The Significance of Variability from Molecule to Ecosystem,
(Plenum Press, New York, 1983).
16. M. Conrad, On design principles for a molecular computer, Commun. ACM 28 (1985)
464-480.
17. M. Conrad, The lure of molecular computing, IEEE Spectrum 23 (1986) 55-60.
18. M. Conrad, Molecular computing: a synthetic approach to brain theory, in: J. Casti and A.
Karlqvist, eds., Real Brains, Artificial Minds (North Holland, New York, 1987) 197-226.
19. M. Conrad, The brain-machine disanalogy, BioSystems 22 (1989) 197-213.
20. M. Conrad, Molecular computing, in: M.C. Yovits, ed., Advances in Computers 31
(Academic Press, San Diego, 1990) 235-324.
21. M. Conrad, Electronic instabilities in biological information processing, in: P.I. Lazarev,
ed., Molecular Electronics (Kluwer Academic Publishers, Amsterdam, 1991) 41-50.
22. M. Conrad, Integrated precursor architecture as a framework for molecular computer
design, Microelect. J. 24 (1993) 263-285.
23. M. Conrad, R.R. Kampfner, and K.G. Kirby, Neuronal dynamics and evolutionary
learning, in: M. Kochen and H. Hastings, eds., Advances in Cognitive Science: Steps
Toward Convergence 104 (Westview Press, Boulder, CO, 1988) 169-189.
24. M. Conrad, R.R. Kampfner, K.G. Kirby, E.N. Rizki, G. Schleis, R. Smalz, and R. Trenary,
Towards an artificial brain, BioSystems 23 (1989) 175-218.
25. H. de Garis, An artificial brain: ATR's CAM-Brain project aims to build/evolve an artificial
brain with a million neural net modules inside a trillion cell cellular automata machine,
New Generation Computing Journal 12, 2 (1994).
26. H. de Garis, LSL evolvable hardware workshop report, ATR, Japan, Tech. Rep. (Oct.
1995).
27. H. de Garis, Review of proceedings of the first NASA/DoD workshop on evolvable
hardware, IEEE Trans. Evol. Comput. 3, 4 (1999) 304-306.
28. G.I. Drummond, Cyclic nucleotides in the nervous system, in: P. Greengard and G.A.
Robinson, eds., Advances in Cyclic Nucleotide Research (1983) 373-494.
29. L. Fogel, A. Owens, and M. Walsh, Artificial Intelligence through Simulated Evolution
(Wiley, New York, 1966).
30. A.S. Fraser, Simulation of genetic systems by automatic digital computers, Australian J.
of Biol. Sci. 10 (1957) 484-491.
31. H. Fröhlich, Evidence for coherent excitation in biological systems, Int. J. Quantum
Chem. 23 (1983) 1589-1595.
32. K. Fukushima, S. Miyake, and T. Ito, Neocognitron: a neural network model for a
mechanism of visual pattern recognition, IEEE Trans. Syst., Man, Cybern. 13 (1983)
826-834.
33. K. Fukushima, Neocognitron: a hierarchical neural network capable of visual pattern
recognition, Neural Networks 1 (1988) 119-130.
34. L.M. Griffith and T.D. Pollard, Evidence for actin filament-microtubule interaction
mediated by microtubule-associated proteins, J. Cell Biol. 78 (1978) 958-965.
35. L.M. Griffith and T.D. Pollard, The interaction of actin filaments with microtubules and
microtubule-associated proteins, J. Biol. Chem. 257 (1982) 9143-9151.
36. S. Grossberg, How does a brain build a cognitive code?, Psychological Review 87 (1980)
1-51.
37. S.R. Hameroff, Ultimate Computing (North-Holland, Amsterdam, 1987).
38. S.R. Hameroff, J.E. Dayhoff, R. Lahoz-Beltra, A. Samsonovich, and S. Rasmussen,
Conformational automata in the cytoskeleton: models for molecular computation,
Computer 25, 11 (1992) 30-39.
39. T. Higuchi, M. Iwata, D. Keymeulen, H. Sakanashi, M. Murakawa, I. Kajitani, E.
Takahashi, K. Toda, M. Salami, N. Kajihara, and N. Otsu, Real-world applications of
analog and digital evolvable hardware, IEEE Trans. Evol. Comput. 3, 3 (1999) 220-235.
40. T. Higuchi and N. Kajihara, Evolvable hardware chips for industrial applications,
Commun. ACM 42, 4 (1999) 60-66.
41. J. Holland, Adaptation in Natural and Artificial Systems (University of Michigan Press,
Ann Arbor, MI., 1975).
42. F.T. Hong, Intelligent materials and intelligent microstructures in photobiology,
Nanobiology 1 (1992) 39-60.
43. F.T. Hong, Bacteriorhodopsin as an intelligent material: a nontechnical summary, MEBC
(1992) 13-17.
44. F.T. Hong, Biomolecular computing, in: R.A. Meyeres, ed., Molecular Biology and
Biotechnology: A Comprehensive Desk Reference (Weinheim and Cambridge, New York,
1995) 194-197.
45. J. Hopfield, Neural networks and physical systems with emergent collective
computational abilities, Proc. Nat. Acad. Sci. 79 (1982) 2554-2558.
46. R. Kampfner and M. Conrad, Sequential behavior and stability properties of enzymatic
neuron networks, Bull. Math. Biol. 45 (1983) 969-980.
47. K. Kirby and M. Conrad, Intraneuronal dynamics as a substrate for evolutionary learning,
Physica D. 22 (1986) 205-215.
48. F.H. Kirkpatrick, New models of cellular control: membrane cytoskeletons, membrane
curvature potential, and possible interactions, BioSystems 11 (1979) 85-92.
49. T. Kohonen, A principle of neural associative memory, Neuroscience 2 (1977)
1065-1076.
50. E.A. Liberman, S.V. Minina, and K.V. Golubtsov, The study of the metabolic synapse II:
comparison of cyclic 3',5'-AMP and cyclic 3',5'-GMP effects, Biophysics 22 (1975)
75-81.
51. E.A. Liberman, S.V. Minina, N.E. Shklovsky-Kordy, and M. Conrad, Microinjection of
cyclic nucleotides provides evidence for a diffusional mechanism of intraneuronal control,
BioSystems 15 (1982) 127-132.
52. E.A. Liberman, S.V. Minina, N.E. Shklovsky-Kordy, and M. Conrad, Change of
mechanical parameters as a possible means for information processing by the neuron (in
Russian), Biophysics 27 (1982) 863-870.
53. E.A. Liberman, S.V. Minina, O.L. Mjakotina, N.E. Shklovsky-Kordy, and M. Conrad,
Neuron generator potentials evoked by intracellular injection of cyclic nucleotides and
mechanical distension, Brain Res. 338 (1985) 33-44.
54. G. Matsumoto, S. Tsukita, and T. Arai, Organization of the axonal cytoskeleton:
differentiation of the microtubule and actin filament arrays, in: F.D. Warner and J.R.
McIntosh, eds., Kinesin, Dynein, Cell Movement, Microtubule Dynamics (Alan R. Liss,
New York, 1989) 335-356.
55. M. Murakawa, S. Yoshizawa, I. Kajitani, X. Yao, N. Kajihara, M. Iwata, and T. Higuchi,
The GRD chip: genetic reconfiguration of DSPs for neural network processing, IEEE
Trans. Comput. 48, 6 (1999) 628-639.
56. T.D. Pollard, S.C. Selden, and P. Maupin, Interaction of actin filaments with microtubules,
J. Cell Biol. 99 (1984) 33-37.
57. I. Rechenberg, Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien
der Biologischen Evolution (Frommann-Holzboog, Stuttgart, Germany, 1973).
58. G.N. Reeke and G.M. Edelman, Selective networks and recognition automata, in: M.
Kochen and H.M. Hastings, eds., Advances in Cognitive Science (Westview Press,
Boulder, CO., 1988) 50-71.
59. H.P. Schwefel, Numerical Optimization of Computer Models (Wiley, Chichester, 1981).
60. S.C. Selden and T.D. Pollard, Phosphorylation of microtubule-associated proteins
regulates their interaction with actin filaments, J. Biol. Chem. 258 (1983) 7064-7071.
61. M. Sipper, D. Mange, and E. Sanchez, Quo vadis evolvable hardware?, Commun. ACM
42, 4 (1999) 50-56.
62. O. Skalli and R.D. Goldman, Recent insights into the assembly, dynamics, and functions
of intermediate filament networks, Cell Motil. Cytoskel. 19 (1991) 67-79.
63. R. Smalz and M. Conrad, A credit apportionment algorithm for evolutionary learning
with neural networks, in: A.V. Holden and V.J. Kryukov, eds., Neurocomputers and
Attention II: Connectionism and Neurocomputers, Proceedings in Nonlinear Science
(Manchester University Press, Manchester, 1991) 663-673.
64. R. Smalz and M. Conrad, Combining evolution with credit apportionment: a new
learning algorithm for neural nets, Neural Networks 7 (1994) 341-351.
65. P. Spiessens and J. Torreele, Massively parallel evolution of recurrent networks: an
approach to temporal processing, in: F.J. Varela and P. Bourgine, eds., Neurocomputers
and Attention II: Connectionism and Neurocomputers (Manchester University Press,
Manchester, UK, 1991) 663-673.
66. P. Stair, Cytoplasmic matrix: old and new questions, J. Cell Biol. 99 (1984) 235-238.
67. P.M. Steinert, J.C.R. Jones, and R.D. Goldman, Intermediate filaments, J. Cell Biol. 99
(1984) 22-27.
68. A. Tamulis, S. Janusonis, and S. Bazan, Selection rules for self-formation in the
molecular nanotechnology, Makromol. Chem., Macromol. Symp. 46 (1991) 181-185.
69. A. Tamulis and L. Bazhan, Quantum chemical investigations of photoactive
supermolecules and supramolecules, their self-assembly and design of molecular devices,
Synthetic Metals (1993) 4685-4690.
70. A. Tamulis, E. Stumbrys, V. Tamulis, and J. Tamuliene, Quantum mechanical
investigations of photoactive molecules, supermolecules, supramolecules and design of
basic elements of molecular computers, in: F. Kajzar and V.M. Agranovich, eds.,
Photoactive Organic Materials (Kluwer Academic Publishers, Netherlands, 1996) 53-66.
71. A.M. Turing, Computing machinery and intelligence, Mind 59 (1950) 433-460.
72. R.B. Vallee, G.S. Bloom, and W.E. Theurkauf, Microtubule-associated proteins: subunits
of the cytomatrix, J. Cell Biol. 99 (1984) 38-44.
73. P. Werbos, Beyond regression: new tools for prediction and analysis in the behavioral
sciences, Ph.D. Thesis, Harvard University (1974).
74. P. Werbos, Backpropagation and neurocontrol: a review and prospectus, in: Proc. Int.
Joint Conf. Neural Networks (1989) 209-216.
75. D. Whitley and T. Hanson, Optimizing neural networks using fast, more accurate genetic
search, in: Proc. of the 3rd Int. Conf. Genetic Algorithms (Kaufmann, Palo Alto, CA, 1989)
157-255.
76. S. Wolfram, Cellular automata as models of complexity, Nature 311 (1984) 419-424.
77. X. Yao, A review of evolutionary artificial neural networks, Int. J. Intell. Syst. 8, 4 (1993)
539-567.
78. X. Yao, Evolutionary artificial neural networks, Int. J. Neural Systems 4, 3 (1993)
203-222.
79. X. Yao, Following the path of evolvable hardware, Commun. ACM 42, 4 (1999) 47-49.
80. X. Yao and T. Higuchi, Promises and challenges of evolvable hardware, IEEE Trans.
Syst., Man, Cybern. 29, 1 (1999) 87-97.
81. X. Yao and Y. Liu, A new evolutionary system for evolving artificial neural networks,
IEEE Trans. Neural Networks 8, 3 (1997) 694-713.
82. X. Yao and Y. Liu, Making use of population information in evolutionary artificial neural
networks, IEEE Trans. Syst., Man, Cybern. 28, 3 (1998) 417-425.