
Self-Organization and Human Robots

Chris Lucas, CALResCo Group, http://www.calresco.org

Introduction

Humans are rather funny things; we often tend to imagine that we are so special, so divorced by our supposed intelligence from the influences of the natural world and so unique in our abstracting abilities. We have this persistent delusion, evident since ancient Greek times, that we are rational, that we can

behave as disinterested observers of our world, which manifests in AI thought today in a belief that, in a like manner, we can design, God-like, from afar, our replacements, those super-robots that will do everything that we can imagine doing, but in much better ways than we can achieve, and yet can avoid doing anything nasty, i.e. can overcome our many human failings - obeying, I suppose, in the process, Asimov's three laws of robotics. Such human naiveté proves, in fact, to be quite amusing, at least to those of us schooled in AI history. When we look at the aspirations and the expectations of our early pioneers, and compare them to the actual reality of today, then we must, it seems, re-discover the meaning of the word

humility. Enthusiasm, good as it may be, needs to be moderated with a touch of common sense, and if our current ways of doing things in our AI world don't really work as we had hoped, then perhaps it is time to try something different (Lucas, C., 1999)?

From Control to Freedom

The traditional AI approach, being a top-down process, echoes the general behaviours seen in our world today, which attempt to centralise power and to have one designer (or a small group of them) create a robot or system based upon a specification of some sort or another. In other words we need first to decide what someone wants to achieve and then to implement or to impose a way of arriving there. Unfortunately the success of this method has been rather slight in practice; we still don't know enough about the basis of

intelligence to design it effectively - especially if we wish to mimic what humans actually do well, rather than what they do badly (which we can, rather irrelevantly, just manage, it seems, to do artificially!).

As a way of overcoming the limitations of this outsider method, an alternative has been proposed, i.e. the subsumption architecture of (Brooks, R., 1990). Here we concentrate our attention on a number of relatively simple operations, for example moving forward or turning. Each of these is implemented in an autonomous module and these are then arranged into a layered hierarchy, with the most primitive at the

bottom. Each module can then inhibit the higher (more valued) modules, whenever their lower function is needed or is necessary; in other words we have priority interrupts. In this way we can avoid the need to

plan out exactly what should happen in every possible scenario; instead we leave it to the environmental feedback to select (evolutionary fashion) which specific module needs to be operational at any time and for how long. Whilst this has proved quite successful, allowing the emergence of unexpected behaviours that

rather look intentional - although these systems are teleonomic, not teleological (so it is only the observer


that imputes intention to them), we find that significant limitations still exist, e.g. in the need for explicitly designed operations.
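To make the layered, priority-interrupt idea concrete, here is a minimal sketch (the layer names, sensor keys and thresholds are entirely hypothetical, and this is not Brooks's actual implementation): the more primitive layers seize control from those above them whenever their trigger conditions fire.

```python
# Minimal sketch of a subsumption-style controller (illustrative only).
# Lower, more primitive layers take priority over higher ones whenever
# their trigger conditions fire - a crude form of priority interrupt.

class Layer:
    def __init__(self, name, triggered, action):
        self.name = name
        self.triggered = triggered   # sensors -> bool: does this layer want control?
        self.action = action         # sensors -> motor command

def subsumption_step(layers, sensors):
    """Layers are ordered most primitive first; the first triggered layer
    suppresses all of the layers above it."""
    for layer in layers:
        if layer.triggered(sensors):
            return layer.action(sensors)
    return "idle"

# Hypothetical two-layer controller: avoid obstacles, otherwise wander.
layers = [
    Layer("avoid",  lambda s: s["obstacle_distance"] < 0.3, lambda s: "turn_away"),
    Layer("wander", lambda s: True,                         lambda s: "move_forward"),
]

print(subsumption_step(layers, {"obstacle_distance": 0.2}))  # -> turn_away
print(subsumption_step(layers, {"obstacle_distance": 2.0}))  # -> move_forward
```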

Given that this architecture relies on the environment, but still requires that each module be hand-crafted, can we go yet a stage further and dispense with that designer stage, allowing feedback itself to sculpt the robot entirely, i.e. in the same manner as is thought to happen in evolutionary biology? In traditional neo-Darwinian evolution (Futuyma, D.J., 1986) we rely on genetic mutations to generate the variety on which selection then acts. This however is a very long-term process; it took billions of years before single-celled organisms (e.g. bacteria, which reproduce every 20 minutes or so) achieved multicellularity, and a billion more years were needed before intelligent creatures arrived as a reality. It is most unlikely that any of us in the AI community could live long enough to achieve even a basic prototype!

Fortunately we can take advantage of some short cuts. One of these acts by using computers, in order to model the multi-generational phylogenetic evolution scenario just outlined. We can, using high-speed computers and a technique called genetic algorithms (Holland, J., 1992; Lucas, C., 2000a), operate at speeds of hundreds of generations per second - increasing as computers become faster. In this way we are able to evolve some structure in a reasonable length of time, but what this happens to contain depends very

much upon the constraints we apply to the system. These constraints mimic the selection phase of natural evolution, but what should they be? Once again, in order to get what we want, so that we can arrange to select for it, we need to know what it is in advance - so how can we avoid designing our fitness function so deterministically, and then suffering the problem that evolution stops altogether once the population has converged on our fitness optimum?
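As a point of reference, the basic loop being described might be sketched as follows (an illustrative toy, not any particular GA library; all parameters are arbitrary). Note the hand-designed fitness function: once the population converges on its optimum, evolution effectively stops, which is exactly the difficulty raised above.

```python
import random

# Minimal genetic-algorithm sketch: mutation and crossover generate variety,
# a designer-chosen fitness function supplies the selection pressure.

GENOME_LEN, POP_SIZE, MUT_RATE = 20, 50, 0.02

def fitness(genome):
    return sum(genome)            # designer-chosen target: all ones

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # selection: keep the fitter half, breed replacements from it
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))   # converges toward GENOME_LEN, then stalls
```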

One way around this new problem is to use coevolution. This means that we let one organism act as the fitness function for the other, and vice versa. By using this sort of technique we can indeed generate improved functionality, for example in deriving efficient sort programs where the list of numbers to be sorted also evolves in difficulty (Hillis, W.D., 1991). But we have to start somewhere! We must create at least prototypes of program and list before anything at all can happen. So we seem yet again to be forced

    back to stage one - deliberate design. We can minimise the extent of this to some degree by simply

    designing-in the ability to adapt to unpredictable environments, as in reinforcement learning (Ackley, D.H. & Littman, M. L., 1992), but the basic intelligence to so act must still be crafted by our outsider.
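The coevolutionary idea can be sketched in the same toy style (illustrative only, and only loosely in the spirit of Hillis's sorter-and-parasite experiment; the encodings and scoring rule below are invented for the example): each population scores the other, so neither faces a fixed fitness optimum.

```python
import random

# Toy coevolution sketch: "solvers" are scored against a population of "tests",
# and tests are scored by how many solvers they defeat, so each population
# acts as the other's (moving) fitness function.

L, POP = 16, 30
rand_bits = lambda: [random.randint(0, 1) for _ in range(L)]
solvers = [rand_bits() for _ in range(POP)]
tests   = [rand_bits() for _ in range(POP)]

def solves(solver, test):
    # A solver "passes" a test if it matches it on a majority of positions.
    return sum(s == t for s, t in zip(solver, test)) > L // 2

def evolve(pop, score):
    keep = sorted(pop, key=score, reverse=True)[:POP // 2]
    def child():
        g = list(random.choice(keep))
        i = random.randrange(L)
        g[i] = 1 - g[i]                      # point mutation
        return g
    return keep + [child() for _ in range(POP - len(keep))]

for _ in range(100):
    solvers = evolve(solvers, lambda s: sum(solves(s, t) for t in tests))
    tests   = evolve(tests,   lambda t: sum(not solves(s, t) for s in solvers))
```

Note that both populations still had to be seeded by hand, which is precisely the "we have to start somewhere" point made above.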

Growing from Scratch

But if we can't find a way to get over this problem then how can nature possibly do so - do we need a God as some seem to think? Here we come to the crux of the matter, in considering a process that has been left out of standard neo-Darwinian evolution, that of development or ontogeny. Every human robot starts life as a single cell (like our bacteria); this then grows into an embryo and thus eventually is born as a (more or less) functioning human child. But that is not the end of the matter: life experiences then develop, initially, the brain (our neural network wiring) and later (in interactions with others) our mind or personality, i.e. our behavioural range. It is this latter state we hope to duplicate in AI, so looking at how

this process is understood in developmental biology should prove to be a useful indicator as to how we might achieve, artificially, much the same result, and without the need for any form of external intelligent designer.

Unfortunately it isn't understood very well at all as yet, at least in overall terms! Embryogenesis, as it is called (Bard, J., 1990; Slack, J.M., 1991; Wolpert, L., 1998), consists of three stages: firstly growth (the duplication of cells by asexual or mitotic reproduction), secondly differentiation (the splitting of cells into different tissue types) and thirdly morphogenesis (the creation of form or structure). The first stage expands possibility space: for every doubling of the number of cells we have a combinatorial explosion, e.g. for just 64 cells we have over 10^89 permutations (64!), and this escalates rapidly during the growth process. But let us look more closely now at the second stage; after all, these permutations only become really interesting if the cells can be distinguished from each other. What is known about this is that there are many genetic regulatory networks involved; each cell type (and there are hundreds) activates a different set of genes. These regulatory controls do not however operate in a linear hierarchy, 1:N, in the way that we


often regard human control networks, nor do they have fixed functions, independently maintained, as we often consider in AI subroutines.

What happens is that they are arranged into a web, in N:M fashion, where each gene inter-links with many others, switching them on and off and being controlled in a like manner (Lucas, C., 2004b). This is a circular feedback form of causality, called polygeny and pleiotropy, which actually operates on several different levels. The same processes of activations and inhibitions also take place between cells (the third stage of embryogenesis), and this includes the inter-neuron connections in our mind. We can even, if we wish, go beyond the mind as an entity in itself and venture out into the wider world, and we shall see exactly the same phenomenon, in society and in ecology both, i.e. a systems viewpoint of some type is applicable in all areas of our world.

Self-Organization Arrives

What is common then to all these processes and levels? Well, it is the idea of self-organization which is our focus in this article, which we can define like this (Lucas, C., 2004d):

    The essence of self-organization is that system structure often appears without explicit pressure or involvement from outside the system. In other words, the constraints on form (i.e. organization) of interest to us are internal to the system, resulting from the interactions among the components and usually independent of the physical nature of those components. The organization can evolve in either time or space, maintain a stable form or show transient phenomena. General resource flows within self-organized systems are expected (dissipation), although not critical to the concept itself.

    The field of self-organization seeks general rules about the growth and evolution of systemic structure, the forms it might take, and finally methods that predict the future organization that will result from changes made to the underlying components. The results are expected to be applicable to all other systems exhibiting similar network characteristics.

In other words, we now have a method of generating demonstrable structure for free (Heylighen, F., 1999), of getting over (maybe) our primary design problem. To see just how significant this is, let us add a few numbers. Suppose we have 10,000 randomly connected, 2-input logic gates (what is called a random Boolean network of size N - Lucas, C., 2002b); in other words the number of different possibilities is 2^10,000 - a staggering number. On average (with considerable variance) we would expect any area of the self-organized system to only visit (cycle through) 100 (square root of N) different dynamical states - this is less than 2^7 - and there will be (again, with considerable variance) only about 100 disjoint areas of activity, 2^7 again (root N). So the ratio of initial disorder to final order is a massive 2^9,986 or so! This is not quite such a magic solution as we may wish however; we still must have something initially to work with (the gates here), but this proves to be our second short-cut. If the lower levels also emerged (as we think) in this

way, then we can cut to the chase as it were, and ignore how they actually got there. We thus can start off with parts of appropriate types for our AI purposes, called agents, and let a collection of these coevolve and self-organize, bottom-up, to meet our needs. All we have to do then is to sit back and watch it happen...
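A random Boolean network of the kind just described is easy to simulate. The sketch below (with a deliberately small N so the run finishes quickly, rather than the 10,000 gates of the example above) starts from a random state and iterates until the trajectory revisits a state, exposing the transient and the attractor cycle that the numbers above refer to.

```python
import random

# Minimal random Boolean network (RBN): N nodes, each with K = 2 inputs and a
# random Boolean function, updated synchronously from a random initial state.

N, K = 100, 2
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]        # wiring
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]  # functions

def step(state):
    return tuple(tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                 for i in range(N))

state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:          # iterate until the trajectory revisits a state
    seen[state] = t
    state = step(state)
    t += 1

print("transient length:", seen[state])
print("attractor cycle length:", t - seen[state])  # typically of order sqrt(N) for K = 2
```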

Studies of the dynamics of such scenarios (Kauffman, S., 1993) show that three general results are possible. In the first, the agents are insufficiently connected (too cool); they don't interact much at all, so the system quickly settles into a fixed state - we have convergence to a static result (akin to the traditional single analytical solution in science). In the second the agents are highly connected (too hot) and each affects many others constantly; here the system cannot settle, it is always perturbed (Lucas, C., 2000b) and exhibits a chaotic behaviour (those insoluble systems usually ignored in science). In the third state, which I call Type 4 Complexity (Lucas, C., 1999), we have a (just right) behaviour that modularises the system, with some sets of agents proving to be static, some chaotic, and some dynamic - many of which will swap places over time as the system evolves between the possible (semi-stable or multistable) attractors (Lucas, C., 2002a). In this scenario we find that the maximum fitness can be achieved; the best overall


performance is within reach. For larger systems, the dynamics will achieve a fractal or power law spread of properties, called Self-Organized Criticality or Edge-of-Chaos (Bak, P., 1996; Lucas, C., 2000b), which can give a somewhat emergent multi-layered or hierarchical structure, with inherent cooperative

behaviour between the parts becoming apparent (Ünsal, C., 1993).
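One simple way to see these too cool / too hot / just right regimes is damage spreading in random Boolean networks: flip one node of the initial state and watch whether the disturbance dies out (ordered), spreads through the network (chaotic), or hovers near the boundary. The sketch below is illustrative only, with arbitrary sizes and a single run per K, not a rigorous measurement.

```python
import random

# Damage spreading in random Boolean networks with K inputs per node:
# run two copies that differ in a single bit and count how far apart they are later.

def damage_after(N, K, steps=50):
    inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
    tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    def step(s):
        return tuple(tables[i][sum(s[inputs[i][j]] << j for j in range(K))]
                     for i in range(N))
    a = tuple(random.randint(0, 1) for _ in range(N))
    b = (1 - a[0],) + a[1:]                       # same state with one bit flipped
    for _ in range(steps):
        a, b = step(a), step(b)
    return sum(x != y for x, y in zip(a, b))      # surviving damage

for K in (1, 2, 5):                               # too cool, just right, too hot
    print("K =", K, "damage =", damage_after(N=200, K=K))
```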

This idea of appropriate connectivity is proving to be highly important in many areas of our world, from the social ones related to anarchy, democracy and totalitarianism, via medical ones related to epidemics, through to ecological ones related to diversity and freedom of combination, not to mention the physics or mathematical ones related to spin glasses or percolation. By arranging connectivity suitably we can enable our important self-organizational processes. This is the communications aspect of agent interaction, but two other aspects also need to be included here if we are to achieve success in our self-organizational scenarios. The first is appropriate size, relating to decentralisation. Systems must be small enough to be self-contained - if they are too big then the inertia of bureaucracy inhibits the recognition of any improvement; but they should not be too small either - else they will have insufficient variety with which to make improvements. The other aspect is stress, a desire or a need for improvement. Again if this proves to be too high the system will disintegrate, we will have rapid breakdown; but if the stress is too low then the status quo cannot be overcome and a static state will persist. Given that these

middle-way conditions are met, then self-organization should occur and the system will generate our required novelty or emergence.

A Competitive Problem

Although this explanation sounds very glib and easy, in practice there are a number of problems, for example in the (very visible) social and environmental destructiveness that we can see around us, resulting from the unfettered individualism driving the self-organization (invisible hand) of the (over-stressed) free market. Experiments using Multi-Agent Systems (MAS), which operate using these ideas of self-organization, have also not so far achieved those higher levels of structure that we so desire and expect (seen in nature in the progression atoms-molecules-cells-organisms-societies-ecologies), and which are commonly to be found in the behaviour of swarms, for example, in insect societies (Bonabeau, E. et al.,

2002), where stigmergy (environmentally mediated communication) also has an important effect (Holland, O. & Melhuish, C., 1999). These current failures may well be because of the assumptions embedded within the agent structures typically used. In so many current systems, there is an inherent competitiveness - echoing the belief behind the phrase survival of the fittest often employed by Darwinists (and capitalists). Yet let us consider cellular development once more: what would be the effects of such competition? The answer seems pretty clear; it is the same as what happens when we suffer from cancer - the competition from rogue cells eventually destroys the host. Thus it is not competition that we need, but cooperation. We need to find a way for the agents to work together, since only in this way can organisms (and/or societies) function and persist at higher levels.

The principle we are looking for, which we wish to employ within our self-organizing systems, is called synergy (Corning, P., 1995; Lucas, C., 2004c). Here, when two or more agents come together, a new

functionality arises; they gain combined powers greater than the sum of their separate powers, often illustrated with the phrase the whole is greater than the sum of the parts. But how can this possibly work? In essence, by a form of combinatorial trial and error - in which, in the processes of interaction, this new higher-level functionality arises. Thus there is initially a diversity and an ongoing novelty, as seen in the

pairwise encounters of the heterogeneous agents, but in some way these agents then associate. This, like the sexual crossover experienced in evolution, allows new building blocks to arise, new combinations of functions which may, perhaps, operate in an entirely new way - we have potential emergence (Lucas, C., 2004b). In the operation of the typical MAS the agents interact and learn (at least partly) in a random fashion, and their individual behaviours change, but they do so not only as a result of their own experiences - we find instead that the higher level places new constraints upon them (Epstein, J.M. and Axtell, R., 1996). These constraints, called downward causation (Campbell, D.T., 1974; Lucas, C., 2004b), add new values to the system, new environmental relevances at a group level that imply selective forces beyond those of the individual. Although such ideas have been resisted within biology for some decades, only


recently making a comeback (Wilson, D.S. & Sober, E., 1994), they do prove to be valid from both computing and complexity science perspectives, e.g. (Sloman, A. & Chrisley, R., 2003).

For this to prove useful however there must be a possibility of a dynamic from the agents to the end result, a way of searching current state space (Lucas, C., 2002a) and expanding it in ways that enable such new functionality. Yet this aspect of emergence is very much under-researched so far; we have very little idea as yet as to how we can arrange systems such that anything predictable will emerge, let alone to achieve what we, ideally, would wish to see. This is perhaps the greatest challenge to be met in the future by the complexity science community. But given this limitation, can the ideas we have outlined contribute already in any way to current robotic research? We shall see that they can and they do.

Enter the Robot

An implied embryogenesis perspective provides some highly scalable advantages for robot designers (Bentley, P.J. & Kumar, S., 1999). These include adaptability, the ability to respond to context (Quick, T. et al., 1999); compactness, the ability to code large structures in an efficient form; and repetitiveness, the ability to reuse the same structures or subroutines for many different functions. By using these techniques,

in for example the evolutionary design of neural networks (Astor, J.C. & Adami, C., 2000), we can evolve functional robot controllers (Jakobi, N., 1995), which potentially can interface with humans (Kanada, Y. & Hirokawa, M., 1994). Self-organization is also a useful biological technique which can be used for evolving robotic functionality (Nolfi, S. & Floreano, D., 2000; Kim, D.H., 2004), thus by combining the two

perspectives (low-level agent development and higher-level agent interactions) we may, advantageously, enable an embodied form of autopoietic (Lucas, C., 2004a) emergence, a coevolution of situation and actor. Note that we have two opposing drives here: the first (embryogenesis) expands state space, it adds new

possibilities, new options or combinations to the mix; the second (self-organization) reduces this diversity, it selects from those many options only those possibilities that can persist, the functionally stable states of the system. It moves from the system's starting points in state space to its attractors. But these should not be viewed in isolation; they are only stable in terms of the current environment, and if the context changes then that stability can be lost and another, alternative, stable state must then be found. This is, in fact, what we mean

by learning or epigenesis, the move from one stable state in a certain context to another stable state in a different context. If our robot cannot do this, if it fails to adapt, becoming what we term fragile, then it has insufficient options (requisite variety in cybernetic terminology) to cope with the diversity of its environmental perturbations.

Designing robots that can overcome this tendency to be highly domain-restricted has been a major headache in AI history, so how can our new perspective help? If we are to grow some form of robot from scratch then four aspects are necessary. Firstly we must have a part (or a set of different parts) which can increase in number; secondly those parts must be able to associate (communicate and/or stick together) in some way such that they can form aggregates (some equivalent to cell-adhesion or morphoregulatory molecules - Edelman, G., 1992); thirdly they must have the freedom to self-organize (i.e. to change their configurations and communications dynamically); and finally we need to allow for environmental influences to be able to

trigger these reorganizations (allowing adaptability). To see the latter two aspects in a different light: self-organization restricts (canalizes) the possibilities open to the system, it is a form of internal selection. The environment puts stress or bias on the system to achieve a viable function, causing it to escape poor attractors and flip to better ones; it is a form of external selection (if our system can't so adapt it simply dies). But as we add new units to the mix, as we grow the self-organizing system (Fritzke, B., 1996),

perhaps creating a 3D morphology, e.g. (Eggenberger, P., 1997), then we both add to and reconfigure its attractors, so that in this way we can increase the requisite variety until our system can, in fact, cope with the target environment. This is similar to the way in which we make additional synaptic neuronal connections with learning: we increase the complexity of the system by creating additional concepts or ideas, new options or associations.
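As a purely illustrative toy (the names, symbols and numbers below are hypothetical and not taken from this chapter), the four aspects might be caricatured like this: an aggregate of parts grows and reorganizes whenever the environment presents a perturbation it cannot yet handle, until it has accumulated enough requisite variety to cope.

```python
import random

# Toy sketch of the four ingredients: parts that multiply, associate into an
# aggregate, reorganize internally, and are reorganized by environmental triggers.

ENV_SYMBOLS = "ABCDEF"          # kinds of perturbation the environment can produce

class Part:
    """A unit with one fixed response; variety comes from having many of them."""
    def __init__(self):
        self.handles = random.choice(ENV_SYMBOLS)

aggregate = [Part()]            # aspect 1: start from a single part that can be copied

for step in range(200):
    perturbation = random.choice(ENV_SYMBOLS)
    if any(p.handles == perturbation for p in aggregate):
        continue                # current configuration copes: a stable state persists
    # aspect 4: the environment triggers a reorganization (external selection)
    aggregate.append(Part())    # aspects 1 and 2: grow a new part and associate it
    random.shuffle(aggregate)   # aspect 3 (very crudely): internal reconfiguration

print(len(aggregate), sorted({p.handles for p in aggregate}))
```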

Incorporating these four aspects into a real robot system is however a very demanding task, and not one that has yet been attempted. Firstly we cannot grow artificial parts; they do not reproduce in any sense. The

    best we can do is either to simulate such systems (perhaps eventually implementing the end result, e.g.


Bentley, 2004), or to have our robot (somehow) pick up and incorporate extra parts that happen to be lying around in the environment. Secondly it is unclear how we should have the parts interact in a way suitable for stimulating development, such that it can possibly start to self-organize itself. Thirdly we have the

problem of how to allow the environment to change the configuration, to disturb the robot in some way, in such a manner as to force it to re-design itself. Even if these three major obstacles are overcome, then we still do not know how to use our parts to self-assemble critters with specific functions - but of course we still don't know how nature does that either (Raff, R.A., 1996). This relates to understanding how the three

processes (phylogeny, ontogeny and epigenesis) interact, but once we can do this then we have the potential to build what have been called POEtic machines (Teuscher, C., 2001). For the future, perhaps all we can say is that we live in interesting times...

References

Ackley, D. H. and Littman, M. L. (1992). Interactions between learning and evolution. In Langton, C., Farmer, J., Rasmussen, S. and Taylor, C. (Eds.), Artificial Life II: Proceedings Volume of Santa Fe Conference, Vol. XI. Addison Wesley.
Astor, J.C. and Adami, C. (2000). A Developmental Model for the Evolution of Artificial Neural Networks. Artificial Life 6: 189-218. http://www.krl.caltech.edu/~adami/Alife2000.pdf
Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality. Copernicus.
Bard, J. (1990). Morphogenesis. Cambridge University Press.
Bentley, P. J. (2004). Controlling Robots with Fractal Gene Regulatory Networks. In de Castro, L. & von Zuben, F. (Eds.), Recent Developments in Biologically Inspired Computing. Idea Group Inc. http://www.cs.ucl.ac.uk/staff/P.Bentley/BECH5.pdf
Bentley, P. J. and Kumar, S. (1999). Three Ways to Grow Designs: A Comparison of Embryogenies for an Evolutionary Design Problem. Genetic and Evolutionary Computation Conference (GECCO '99), July 14-17, 1999, Orlando, Florida USA, pp. 35-43. RN/99/2. http://www.cs.ucl.ac.uk/staff/P.Bentley/BEKUC1.pdf
Bonabeau, E., Dorigo, M. & Theraulaz, G. (2002). Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press.
Brooks, R.A. (1990). Elephants Don't Play Chess. Robotics and Autonomous Systems, Vol. 6, pp. 3-15. http://www.ai.mit.edu/people/brooks/papers/elephants.ps.Z
Campbell, D.T. (1974). Downward Causation in Hierarchically Organized Biological Systems. In Ayala, F.J. & Dobzhansky, T. (Eds.), Studies in the Philosophy of Biology. Macmillan Press.
Corning, P. (1995). Synergy and Self-organization in the Evolution of Complex Systems. Systems Research 12(2): 89-121. John Wiley & Sons Ltd.
Edelman, G. (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. Basic Books.
Eggenberger, P. (1997). Evolving Morphologies of Simulated 3D Organisms Based on Differential Gene Expression. In Husbands, P. and Harvey, I. (Eds.), Proceedings of the 4th European Conference on Artificial Life (ECAL97). MIT Press. ftp://ftp.ifi.unizh.ch/pub/institute/ailab/techreports/97.20.ps.gz
Epstein, J.M. and Axtell, R. (1996). Growing Artificial Societies. Brookings Institution Press.
Fritzke, B. (1996). Growing Self-organizing Networks - Why? In Verleysen, M. (Ed.), ESANN'96: European Symposium on Artificial Neural Networks, pp. 61-72. D-Facto Publishers, Brussels. http://pikas.inf.tu-dresden.de/~fritzke/ftppapers/fritzke.esann96.ps.gz
Futuyma, D.J. (1986). Evolutionary Biology, 2nd Edition. Sinauer Associates, Inc.
Heylighen, F. (1999). The Science of Self-Organization and Adaptivity. Encyclopedia of Life Support Systems. http://pespmc1.vub.ac.be/Papers/EOLSS-Self-Organiz.pdf
Hillis, W.D. (1991). Co-Evolving Parasites Improve Simulated Evolution as an Optimization Procedure. In Langton, C., Farmer, J., Rasmussen, S. & Taylor, C. (Eds.), Artificial Life II: Proceedings Volume of Santa Fe Conference, Vol. XI. Addison Wesley.
Holland, J. (1992). Adaptation in Natural and Artificial Systems. MIT Press.
Holland, O. and Melhuish, C. (1999). Stigmergy, Self-Organization, and Sorting in Collective Robotics. Artificial Life 5: 173-202. http://robotics.usc.edu/~maja/teaching/stigmergy.pdf
Jakobi, N. (1995). Harnessing Morphogenesis. International Conference on Information Processing in Cells and Tissues, Liverpool. ftp://ftp.informatics.sussex.ac.uk/pub/reports/csrp/csrp423.ps.Z


Kanada, Y. and Hirokawa, M. (1994). Stochastic Problem Solving by Local Computation based on Self-organization Paradigm. IEEE 27th Hawaii International Conference on System Sciences, pp. 82-91. http://www.rwcp.or.jp/people/yk/CCM/HICSS27/paper/CCM-ProblemSolving.html
Kauffman, S. (1993). The Origins of Order - Self-Organization and Selection in Evolution. Oxford University Press.
Kim, D.H. (2004). Self-Organization for Multi-Agent Groups. International Journal of Control, Automation, and Systems, vol. 2, no. 3, pp. 333-342, September 2004. http://ijcas.com/admin/paper/files/IJCAS_v2_n3_pp333-342.pdf
Lucas, C. (1999). Complexity Philosophy as a Computing Paradigm. Self-Organising Systems - Future Prospects for Computing Workshop, UMIST, 28/29 October 1999. http://www.calresco.org/lucas/compute.htm
Lucas, C. (2000a). Genetic Algorithms - Nature's Way. CALResCo introduction. Online WWW. http://www.calresco.org/genetic.htm
Lucas, C. (2000b). Perturbation & Transients - The Edge of Chaos. CALResCo introduction. Online WWW. http://www.calresco.org/perturb.htm
Lucas, C. (2002a). Attractors Everywhere - Order from Chaos. CALResCo introduction. Online WWW. http://www.calresco.org/attract.htm
Lucas, C. (2002b). Boolean Networks - Dynamic Organisms. CALResCo introduction. Online WWW. http://www.calresco.org/boolean.htm
Lucas, C. (2004a). Autopoiesis and Coevolution. CALResCo introduction. Online WWW. http://www.calresco.org/lucas/auto.htm
Lucas, C. (2004b). Emergence and Evolution - Constraints on Form. CALResCo introduction. Online WWW. http://www.calresco.org/lucas/quantify.htm
Lucas, C. (2004c). Fitness and Synergy. CALResCo introduction. Online WWW. http://www.calresco.org/lucas/fitness.htm
Lucas, C. (Ed.) (2004d). The Self-Organizing Systems FAQ. Usenet comp.theory.self-org-sys. Online WWW. http://www.calresco.org/sos/sosfaq.htm
Nolfi, S. and Floreano, D. (2000). Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines. MIT Press.
Quick, T., Dautenhahn, K., Nehaniv, C.L. & Roberts, G. (1999). The Essence of Embodiment: A Framework for Understanding and Exploiting Structural Coupling Between System and Environment. Proc. Third International Conference on Computing Anticipatory Systems, Liège, Belgium. http://www.cs.ucl.ac.uk/staff/t.quick/papers/quick_casys99.ps.gz
Raff, R.A. (1996). The Shape of Life: Genes, Development and the Evolution of Animal Form. The University of Chicago Press.
Slack, J.M. (1991). From Egg to Embryo. Cambridge University Press.
Sloman, A. and Chrisley, R. (2003). Virtual Machines and Consciousness. Journal of Consciousness Studies, 10, No. 4-5. http://www.cs.bham.ac.uk/research/cogaff/sloman-chrisley-jcs03.pdf
Teuscher, C. (2001). On the State of the Art of POEtic Machines. Technical Report 01/375, Swiss Federal Institute of Technology. http://www.teuscherresearch.ch/download/christof/papers/teuscher_techrep01.pdf
Ünsal, C. (1993). Self-Organisation in Large Populations of Mobile Robots. Master's Thesis, Virginia Polytechnic Institute and State University. http://armyant.ee.vt.edu/unsalWWW/cemsthesis.html
Wilson, D.S. and Sober, E. (1994). Reintroducing group selection to the human behavioral sciences. Behavioral and Brain Sciences 17(4): 585-654. http://www.bbsonline.org/documents/a/00/00/04/60/bbs00000460-00/bbs.wilson.html
Wolpert, L. (1998). Principles of Development. Oxford University Press.