
Mind Driven Environments

Matas Ubarevicius

Theory Essay


Contents

Summary

Introduction

Brain-Machine Interface

Extended Reach of Evolving Species

Human Nature in Motion

Synthetic Matter

Architectural Cyborg

By Way of Conclusion

Bibliography


Summary

While arguing that architecture is an extension of the human, I point to two underpinning revolutions, in the science of the human body and in synthetic matter, to stress the importance of brain-machine interface technologies in the field of architecture. It was necessary to show how neuroscience and biotechnology can intervene in the slow biological evolution of human nature. Moreover, programmable matter was identified as the primary candidate for forming advanced hybrid spaces. The enhanced human body and technological, man-made reality were shown to form a coherent system in which a cyborg of a changed human condition can emerge. I speculate that such a cyborg will be able to exercise thought control over dependent and semi-dependent categories of architecture across all scales of scientifically mastered physical and virtual reality.

In my main argument I develop a notion of technologically enriched architecture that can be directly or partially affected through a brain-machine interface. An analysis of advanced material science within the fields of robotics and biotechnology was made, touching on themes of pario media, artificial life and synthetic reality. These topics sketch an important character of today's physical environment which is not yet fully understood but is, through science, already conceptually empowered to serve as a framework for building high-tech hybrid spaces. The history, present and future of brain-machine interface research were traced to identify the initial and ultimate goals, current state and later prospects of such technology in relation to architecture.

Mind driven environments are suggested to be a natural outcome of advances in science and interactive architecture. These environments were conceptually interpreted as belonging to a unified human body–brain circuitry. Various perspectives on technological progress were given by following the ideas of scientists, futurists, philosophers and architects, to stress the fact that humans have sought to increase their influence over the environment throughout history and prehistory. This allowed mind driven environments and the idea of the architectural cyborg to be placed in a wider evolutionary context.

Keywords: Mind Driven Environments, Brain-Machine Interface, Synthetic Matter, Architecture,

Cyborg, Human Body, Hybrid Space, Robotics, Neuroscience, Biotechnology, Environment.


Introduction

In the famous BBC Horizon documentary “Human Version 2.0”, first broadcast in 2006, highly controversial perspectives on our evolution were discussed by leading scientists. Despite the clash of opinions between optimists, pessimists and even fatalists, everyone agreed on one thing – man is being continuously changed and even enhanced through modern technology. In architecture we can speak of a very similar process happening to space. Elizabeth Sikiaridi and Frans Vogelaar, in their article ‘Idensity’, write that ‘Hybrid space is the product of alliances between physical objects and information-communication networks, between architectural and media space.’1 It is important to recognize that human and space are both being transformed by the same process of technological interference; as we speak about “Human v2.0”, a “Space v2.0”2 is being born.

Architectural techniques in this context incorporate ideas of “software” that let us manipulate our “hardware” environments from within, using our own influence and intelligence. Architecture3, being part of the technological environment, thus becomes susceptible to multidimensional control and behavior. Just as computers can incorporate different kinds of operating systems, not to mention programs, our buildings4 can be preprogrammed with various forms of logic and interfaces. In this essay I will argue that by giving this sort of multifunctional plasticity to architecture we will be able to directly influence its output of spatial, structural, functional or even aesthetic behaviors to suit our comfort and purpose, through a brain-machine interface (BMI).

It is natural that architects, while speculating about the future of the discipline, mostly focus on the evolution of built material environments, methods of production and socio-economic variables, but in this essay I will also attend to the conceptual and physical changes happening to the human body, which, according to my main argument, is becoming a very important part of the architectural discourse. It will be seen that as technologies in medical science advance, we obtain increasingly accurate and meaningful information about the inner workings of the human brain. Today’s BMI technologies are a good example of this – extracting data about thought processes allows the human mind to be externalized from within the body towards its surroundings. Input of signals to the brain can also be used to sense information embedded in the surroundings. This mechanically imposed loop of communication between human and environment illustrates one of the most important outcomes of the new reality brought by “Human v2.0” and “Space v2.0” – it is no longer possible to say that the limits of direct brain influence end at the boundaries of our body; the human brain is now free to reach and influence architectural content. Empowered architecture extends its previous capabilities and categories by becoming more exo-human. Mind driven environments indicate how the technologically advanced human embodies space by externalizing inner processes and integrating them into wider virtual, synthetic or natural systems. This kind of integral physical operation will give us control over synthetic, man-made reality and at the same time will change our nature by driving human techno-evolution.

1 Elizabeth Sikiaridi and Frans Vogelaar, ‘Idensity,’ in Cognitive Architecture, ed. Deborah Hauptmann and Warren Neidich (Rotterdam: 010 Publishers, 2010), 523.

2 Space v2.0 should be understood as hybrid space, as explained by Elizabeth Sikiaridi and Frans Vogelaar.

3 In this article “architecture” is understood as our built, virtual or augmented environment.

4 I suggest thinking of robotic buildings, or at least smart ones.


Architect Philippe Rahm indicates that the ‘primary reason that architecture exists is related to the

enzymes necessary for the biochemical reactions of the body’s metabolism. Thus, if we want to

know the essence of architecture, we have to return to our endothermic condition, which carries the

necessity of maintaining the human body temperature between 35 and 37.6°C. (…) In this sense

architecture is not autonomous, as it must address a range of means to maintain our endothermic

condition close to the 37°C necessary for biophysical survival.’5 Were it not for this reason, buildings would look very different indeed. We see Rahm locating the primary purpose of architecture in its deep relation to the human body; through BMI technologies this relationship deepens further, to the point where some categories of architecture can become an integral part of the human body–brain circuitry.

Rahm further writes about the need to recognize the condition and relevance of the human body in

architecture: ‘the contemporary forms of architecture should conceptually accept the participation

of the body by acting, for instance on the neurons, on the neurotransmitters, by chemically

stimulating desire and mood. Modern biology draws a distinction between corporeal and

extracorporeal space. The first is the communication space managed by neurons and hormones,

inter alia. The second is the space outside the body, that which informs us through the senses.’6

Throughout the essay I will argue that the participation of the body in architecture extends beyond the architectural impact on the body. The reverse is also true, and although modern biology draws this boundary between corporeal and extracorporeal space, the boundary is becoming increasingly dynamic and unstable. BMI technologies allow the human body to be integrated into architecture not only conceptually but literally. Biotechnologies and neuroscience open up completely new domains of communication between hybrid space and altered human, thus between “Space v2.0” and “Human v2.0”, creating a matrix of possibilities with many dimensions that can lead architecture in various directions. These notions deserve analysis and reflection.

Brain-Machine Interface

The human body performs and exists in space, but it is naturally limited in size and in its ability to influence the extracorporeal environment. Tools were invented to operate beyond these limits. Today’s digital technologies relate humans and machines more than ever, allowing us to transcend spatial borders by providing shortcuts for communication. Information technologies (IT) are mostly used for this purpose. Nicholas Negroponte writes that ‘In the same ways that hypertext removes the limitations of the printed page, the post-information age will remove the limitations of geography. Digital living will include less and less dependence upon being in a specific place at a specific time, and the transmission of place itself will start to become possible.’7 It seems that this revolution has, to some extent, already happened, but there is more to this wormhole of space-time. When the digital medium of IT is used to perform a physical action at a distance, creating a bridge of communication between operator and machine, the human extends his body by broadcasting nervous impulses through various interfaces and networks to the robot. In this cooperation of biology and technology, different material makeup does not play a decisive role in communication.

5 Philippe Rahm, ‘Edible Architecture,’ in Cognitive Architecture, ed. Deborah Hauptmann and Warren Neidich (Rotterdam: 010 Publishers, 2010), 387.

6 Rahm, ‘Edible Architecture,’ 389.

7 Nicholas Negroponte, Being Digital (New York: Alfred A. Knopf, 1995), 165.


Marvin Minsky indicates that ‘today it is widely recognized that behavior of a complex machine depends only on how its parts interact, but not on the “stuff” of which they are made (except for matters of speed and strength). In other words, all that matters is the manner in which each part reacts to the other parts to which it is connected. For example, we can build computers that behave in identical ways, no matter if they consist of electronic chips or of wood and paper clips – provided that their parts perform the same processes, so far as the other parts can see.’8 The system joining a biological human operator and a machine made of steel can thus already be viewed as a coherent entity. BMI steps into this cooperation and brings the exchange of information to a more natural level, allowing nervous signals to be transferred directly from the brain to the robot.

BMI research was initiated in 1970 at the University of California, Los Angeles (UCLA). In the article

‘Toward Direct Brain-Computer Communication’, published in 1973, Jacques J. Vidal wrote: ‘The

Brain Computer Interface project (…) was meant to be a first attempt to evaluate the feasibility and

practicality of utilizing the brain signals in a man-computer dialogue while at the same time

developing a novel tool for the study of the neurophysiological phenomena that govern the

production and the control of observable neuroelectric events.’9 The scientist explained that: ‘The long-

range implications of systems of that type can only be speculated upon at present. To provide a

direct link between the inductive mental process used in solving problems and the symbol-

manipulating, deductive capabilities of the computer, is, in a sense, the ultimate goal in man-

machine communication. It would indeed elevate the computer to a genuine prosthetic extension of

the brain. To achieve that goal with adequate generality is a formidable task that will require

considerable advances in neurophysiology (to identify appropriate correlates of mental states and

decisions in external signals), in signal analysis techniques (to sort and identify the relevant

information carriers from the garbled and diffuse mixture that reaches the scalp), and in computer

science (to develop appropriate software within the constraints introduced by the nature of brain

messages).’10 The ultimate goal of BMI research remains unachieved, but extensive research is already yielding results that can be applied in practice.

Of course, BMI followed earlier ideas related to biofeedback technologies. The term biofeedback implies that a biological system is sending information to, and receiving it from, external mechanical objects. This information loop is used to influence both parties through constant processing and re-interpretation of input signals; in the case of BMI those two parties are the human brain and a machine. The first scientific experiments in biofeedback can be traced to cybernetics. Hybrid spaces, human-machine interaction and all sorts of chimera systems relate deeply to this field. According to Andrew Pickering, in his book The Cybernetic Brain, ‘The first meeting of biofeedback professionals took place at Snowmass, Colorado, in 1968.’11 This meeting was to a great extent inspired by the work of Grey Walter in the field of electroencephalography (EEG). One of the first biofeedback experiments was built by Harold Shipton, working under Walter’s leadership, in 1945. A flickering light bulb was introduced into EEG experiments as a feedback loop to induce epileptic symptoms in patients. ‘The strobe stimulated the brain, the emergent brainwaves stimulated the feedback circuit, the circuit

8 Marvin Minsky, The Emotion Machine (New York: Simon & Schuster, 2006), 22.

9 Jacques J. Vidal, ‘Toward Direct Brain-Computer Communication,’ Annual Review of Biophysics and Bioengineering 2 (1973): 157-158.

10 Vidal, ‘Toward Direct Brain-Computer Communication,’ 158.

11 Andrew Pickering, The Cybernetic Brain (Chicago: The University of Chicago Press, 2010), 83.


controlled the strobe, which stimulated the brain, and so on around the loop. We could say that the

brain explored the performative potential of the material technology (in an entirely nonvoluntary,

nonmodern fashion), while the technology explored the space of brain performance.’12 This led

Pickering to conclude that this kind of loop ‘offers us a more symmetric ontological spectacle, lively

on both sides – dance of agency between the human and the nonhuman. What acted in these

experiments was genuinely a cyborg, a lively decentered combination of human and machine.’13

Biofeedback is, to Pickering, a philosophically important phenomenon: it allows the human to develop a new kind of relationship with the machines he himself produces.
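To make the closed character of this loop concrete, the sketch below simulates, in deliberately simplified Python, the circular causality Pickering describes: the strobe drives a toy brainwave model, the feedback circuit reads the resulting rhythm, and that reading retunes the strobe. The response function, the retuning rule and the frequencies are my own illustrative assumptions, not a reconstruction of Shipton’s actual circuit.

```python
import random

def brain_response(strobe_hz):
    """Toy model of photic driving: the dominant brainwave rhythm drifts
    toward the flicker frequency, with some noise. Purely illustrative."""
    return strobe_hz + random.uniform(-0.5, 0.5)

def feedback_circuit(brainwave_hz):
    """Toy feedback rule: retune the strobe toward the measured rhythm,
    nudged toward the alpha band (~10 Hz). An assumed behaviour."""
    return 0.7 * brainwave_hz + 0.3 * 10.0

strobe_hz = 18.0                      # initial flicker rate
for step in range(10):                # strobe -> brain -> circuit -> strobe ...
    brainwave_hz = brain_response(strobe_hz)
    strobe_hz = feedback_circuit(brainwave_hz)
    print(f"step {step}: brain {brainwave_hz:.1f} Hz, strobe {strobe_hz:.1f} Hz")
```

Where the coupled pair settles is a property of the loop rather than of either side alone, which is precisely the point of Pickering’s ‘dance of agency’.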

We see from these historical examples that fascination with a cyborg future for the human was seriously, which is to say scientifically, considered throughout the second half of the last century. This fascination is still growing and BMI technologies are attracting more interest. In 2002, Rodolfo R. Llinas and Valeri A. Makarov, in an article called ‘Brain-Machine Interface via a Neurovascular Approach’, indicated that ‘The issue of brain-machine (computer) interface is, without doubt, one of the central problems to be addressed in the next two decades when considering the role of neuroscience in modern society. Indeed, our ability to design and build new information analysis and storage systems that are sufficiently light to be easily carried by a human, will serve as a strong impetus to develop such peripherals.’14 The potential of BMI is drawing more attention from scientists. The authors make it clear that this technology is relevant to the wider public and even to the future of society.

For architects, who belong to this wider public and are active members of society, the pressure is rising to speculate about the relevance and implications of these technologies. Current tensions and fluctuations in the field indicate that critical reflection is needed to deal with high-tech ideas. As will be seen from later examples, provided by leading BMI experts, spaces and environments (the objects of architecture) are not immune to developments in neuroscience, robotics or virtual reality. Scientists themselves are taking a position and suggesting new concepts for architects to contextualize and explore: neurobiologist Miguel A.L. Nicolelis (Duke University Medical Center) and mechanical engineer Mandayam A. Srinivasan (MIT) published an article together in the book Converging Technologies for Improving Human Performance where they discuss how BMI will transform our ability to affect spaces and environments: ‘BMI could also lead to a major paradigm shift in the way normal healthy subjects can interact with their environment. Indeed, one can envision a series of applications that may lead to unprecedented ability to augment perception and performance in almost all human activities. These applications would involve interactions with either real or virtual environments.’15 Nicolelis and Srinivasan go on to give four categories of environments which could be affected by the human subject:

1 - Local, real environment: Restoration of the motor function in a quadriplegic patient. Using a neurochip implanted in the subject’s brain, neural signals from healthy motor brain areas can be used to control an exoskeleton or prosthetic robotic arm used to restore fundamental motor functions such as reaching, grabbing, and walking.

12 Pickering, The Cybernetic Brain, 77-78.

13 Pickering, The Cybernetic Brain, 78.

14 Rodolfo R. Llinas and Valeri A. Makarov, ‘Brain-Machine Interface via a Neurovascular Approach,’ in Converging Technologies for Improving Human Performance, ed. Michael C. Roco and William Sims Bainbridge (Dordrecht: Kluwer Academic Publishers, 2003), 244.

15 Miguel A.L. Nicolelis and Mandayam A. Srinivasan, ‘Human-Machine Interaction: Potential Impact of Nanotechnology in the Design of Neuroprosthetic Devices Aimed at Restoring or Augmenting Human Performance,’ in Converging Technologies for Improving Human Performance, ed. Michael C. Roco and William Sims Bainbridge (Dordrecht: Kluwer Academic Publishers, 2003), 253.

2 - Remote, real environment: Superhuman performance, such as clearing heavy debris by a

robot, controlled by the brain signals of a human operator located far away from the danger

zone. Recent results by the P.I. and his collaborators have demonstrated that such remote

control could be achieved even across the internet.

3 - Realistic virtual environment: Training to learn a complex sequence of repair operations

by the trainee’s brain directly interacting with a virtual reality program, with or without the

involvement of the trainee’s peripheral sensorimotor system.

4 - Unrealistic virtual environment: Experiencing unrealistic physics through a virtual reality

system for a “what if” scenario, in order to understand deeply the consequences of

terrestrial physics.16

The scientists suggest that our body could be extended into various forms of environments. That would

give rise to another hybrid of “Human v2.0” and “Space v2.0”. ‘By establishing direct links between

neuronal tissue and machines, these devices could significantly enhance our ability to use voluntary

neuronal activity to directly control mechanical, electronic, and even virtual objects as if they were

extensions of our own bodies.’ 17

Around the time Nicolelis and Srinivasan were publishing their article, Emotiv Systems was founded, a company that develops brain-computer interfaces based on the same EEG technology that Walter used in his experiments. By now it has successfully formed a market for its products and ships relatively cheap devices to customers around the world. Nicolelis and Srinivasan were talking about future nanoscale implants in the brain, but EEG technology is already available and lets us create BMI applications that are functionally similar to those discussed in their article without anything needing to be implanted under the skull. Of course, an invasive method of establishing a BMI connection to the brain is also possible. The scientists Llinas and Makarov indicate:

One of the most attractive possibilities that come to mind in trying to solve the hardware

problem concerns the development of a vascular approach. The fact that the nervous

system parenchyma is totally permeated by a very rich vascular bed that supplies blood gas

exchange and nurturing to the brain mass makes this space a very attractive candidate for

our interface. The capillary bed consists of 25,000 meters of arterio-venous capillary

connections with a gauge of approximately 10 microns. At distance more proximal to the

heart, the vessels increase rapidly in diameter, with a final dimension of over 20

millimeters. Concerning the acquisition of brain activity through the vascular system, the use

of n-wire18 technology coupled with n-technology electronics seems very attractive. It would

allow the nervous system to be addressed by an extraordinary large number of isolated n-probes via the vascular bed, utilizing the catheter-type technology used extensively in medicine and in particular in interventional neuro-radiology.19

16 Nicolelis and Srinivasan, ‘Human-Machine Interaction,’ 253.

17 Nicolelis and Srinivasan, ‘Human-Machine Interaction,’ 251.

18 “n” stands for nano.

Such combinations of brain and machine would change the field of architecture profoundly, because

both the architectural object (environment) and the subject (human) would transcend each other.

Deborah Hauptmann, reflecting on similar kinds of hybrid creatures conceptualized by

Donna Haraway, concludes that ‘Such theories are about collectivity and individuality, but at the

same time deal with biotechnology, microelectronics and the human body. Such thought models in

architecture and urbanism have yet to be explored critically.’20

BMI will allow humans to communicate with advanced architectural environments and various

mechanical objects simply by thinking. The limit of our mental and physical grasp in this case will be

precisely the limit of the hybrid space to which we have access, while the remaining part will function as an independent environment. Following this model of influence, there should be three possible categories of architecture:

Independent – Environments that do not hold the qualities of hybrid spaces; they are not enhanced through information-communication networks. Also spaces that can be considered hybrid, but to which we have no access or which function entirely on their own artificial intelligence systems.

Semi-dependent – Environments that interpret our commands by following their own logic and intelligence, evaluating other factors and responding critically to our influence.

Dependent – Environments or objects that are actuated directly, as our body parts are. They process signals reflexively.
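To make the distinction operational, the sketch below expresses the three categories as a toy software model. The class names, the ‘confidence’ field and the dispatch rules are hypothetical illustrations of the idea rather than a description of any existing system: a dependent environment actuates a decoded intention reflexively, a semi-dependent one weighs it against its own logic, and an independent one ignores it.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    """A decoded BMI command, e.g. 'open partition', with decoder confidence."""
    action: str
    confidence: float  # 0.0 .. 1.0, assumed output of some BMI decoder

class IndependentEnvironment:
    def receive(self, intention: Intention) -> str:
        # No BMI access: the space runs entirely on its own systems.
        return "ignored"

class SemiDependentEnvironment:
    def __init__(self, occupancy: int, safety_limit: int = 50):
        self.occupancy = occupancy
        self.safety_limit = safety_limit

    def receive(self, intention: Intention) -> str:
        # Interprets the command but weighs it against its own criteria.
        if intention.confidence < 0.6 or self.occupancy > self.safety_limit:
            return f"declined: {intention.action}"
        return f"negotiated: {intention.action}"

class DependentEnvironment:
    def receive(self, intention: Intention) -> str:
        # Actuated directly, like a body part: no deliberation of its own.
        return f"executed: {intention.action}"

thought = Intention(action="dim the ceiling", confidence=0.8)
for space in (IndependentEnvironment(), SemiDependentEnvironment(occupancy=12), DependentEnvironment()):
    print(type(space).__name__, "->", space.receive(thought))
```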

As we see in the last two categories, there is only a fine line between what we consider to be the human body and the surrounding environment. The two systems can join to form one and split into several. Advanced architectural systems can also accommodate various forms of BMI technology in many different ways, allowing for a broad spectrum of applications. It needs to be stressed here that BMI technologies in architecture should be seen as an enriching factor that does not diminish the diversity and wide range of existing ideas and practices. This is reflected in the new dependent and semi-dependent categories of architecture, which include previously non-existent possibilities for mind driven environments.

Extended Reach of Evolving Species

The purpose of biofeedback is to increase the potential of both biological and technological systems. Such an organization allows for greater performance than the bio and techno components could achieve individually. A shared medium of communication allows the two worlds of human and robot to influence each other through bidirectional interactions. Biofeedback is thus a wide concept and

19 Llinas and Makarov, ‘Brain-Machine Interface,’ 244-245.

20 Deborah Hauptmann, introduction to The Body in Architecture, ed. Deborah Hauptmann (Rotterdam: 010 Publishers, 2006), 11.


BMI is only one of its implementations, but, at the same time, it is a very important one, extending the capabilities of the human being far beyond the limits of his natural condition.

One of the most important qualities of the human brain that allowed BMI to be created is its ability to adapt and learn. Pickering explains that ‘In the 1960s, biofeedback came to refer to a species of self-training, in which subjects learned to control aspects of their EEG spectrum (without ever being able to articulate how they did it).’21 The same holds today. Subjects wearing EEG helmets22 control virtual and real objects and machines without knowing exactly how. It seems that the brain adapts easily to its new body parts.
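How such a helmet-mediated act of control might look at the software level is sketched below. The mental-command labels, the confidence threshold and the actuator names are invented for illustration; consumer EEG kits of the kind discussed here typically expose trained ‘mental commands’ through a vendor API rather than raw intention, and the wearer never needs to know which cortical patterns produce them.

```python
# Hypothetical glue between an EEG headset's classified "mental commands"
# and a dependent architectural actuator. Names and thresholds are assumptions.

COMMAND_TO_ACTUATOR = {
    "push": ("partition_wall", "open"),
    "pull": ("partition_wall", "close"),
    "lift": ("ceiling_panel", "raise"),
}

def handle_sample(command: str, confidence: float, actuators: dict) -> None:
    """Forward a classified mental command to the mapped building actuator."""
    if confidence < 0.7:            # ignore weak, uncertain detections
        return
    target, action = COMMAND_TO_ACTUATOR.get(command, (None, None))
    if target is not None:
        actuators[target](action)   # reflexive actuation, as in a 'dependent' space

if __name__ == "__main__":
    log = []
    actuators = {
        "partition_wall": lambda act: log.append(("partition_wall", act)),
        "ceiling_panel": lambda act: log.append(("ceiling_panel", act)),
    }
    # Simulated stream of (command, confidence) pairs from the headset.
    for sample in [("push", 0.85), ("lift", 0.55), ("pull", 0.9)]:
        handle_sample(*sample, actuators=actuators)
    print(log)   # [('partition_wall', 'open'), ('partition_wall', 'close')]
```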

Natural human motor control acts in a similar way. Without thinking about all the subtleties going on in the nervous system and the body itself, we can perform purposeful tasks. In the article ‘Behavior, Purpose and Teleology’, written in 1943 by Norbert Wiener, Arturo Rosenblueth and Julian Bigelow, we see this same point being made: ’When we perform a voluntary action what we select voluntarily is a specific purpose, not a specific movement. Thus, if we decide to take a glass containing water and carry it to our mouth we do not command certain muscles to contract to a certain degree and in a certain sequence; we merely trip the purpose and the reaction follows automatically.’23 The skill of performing a task unconsciously helps, to some extent, when one is learning to control a robot by thought alone. The brain’s ability to do this was unexpected to scientists, but it seems that these effects are mostly caused by the neuroplasticity of the human cerebrum. It was explained by Hauptmann in the introduction to the book Cognitive Architecture that:

‘Neuroplasticity, accordingly, provides a key function with respect to the evolution of the human

brain both within an individual lifetime and during the evolution of the species over time. The

potential of neuroplasticity (and the neurochemical mechanisms that support it) also indicates the

adaptive function of neurons to supplement selective processes (for instance the rerouting of visual

with auditory input in the auditory cortex).’24

Although the effects of neuroplasticity and existing EEG technologies already allow human reach to be extended, it is still hard to imagine what it would mean for a brain to be directly connected to a robotic building or machine, to communicate with its systems or actuators and to sense its environment. The scale of the body’s physical influence in this case begins to vary, because the brain can be detached from one system and attached to another: a bigger or smaller one, a more complex one or not. If this environmentally extended body is to be understood as one system, then the size and inner qualities of the human physique, together with the scale and parameters of the independent space (extracorporeal environment) that surrounds it, should be seen as dynamic. By allowing brains to reach previously inaccessible spatial domains through BMI we increase human abilities by extending the body and thus shrinking the extracorporeal (unreachable) environment. Here, again, we are reminded of the terms “Human v2.0” and “Space v2.0” mentioned earlier. But as those terms do not imply an aspect of transcendence between an architectural object and a subject, we lose an important quality of unity when dealing with human-environment interactions. Futurist Ray Kurzweil has a name for this

21 Pickering, The Cybernetic Brain, 83.

22 Devices produced by Emotiv Systems or other companies.

23 Norbert Wiener et al., ‘Behavior, Purpose and Teleology,’ Philosophy of Science 10 (1943): 19. Note: this article is considered foundational for cybernetics as a science.

24 Deborah Hauptmann, ‘Introduction: Architecture & Mind in the Age of Communication and Information,’ in Cognitive Architecture, 20-21.


twist; he calls it the “Singularity”. When talking about technological evolution in one of his bestselling books, The Singularity Is Near, he states that ‘The Singularity will represent the

culmination of the merger of our biological thinking and existence with our technology, resulting in a

world that is still human, but that transcends our biological roots. There will be no distinction, post-

Singularity, between human and machine or between physical and virtual reality. If you wonder

what will remain unequivocally human in such a world, it’s simply this quality: ours is the species

that inherently seeks to extend its physical and mental reach beyond current limitations.’25 To Kurzweil, unity between human and environment means that our physical and mental reach will be extended at will. Thus, in principle, there is no difference between manipulating a robotic building and controlling, through BMI technologies, the infrastructure that constitutes an entire city. We can already see implications of this interaction not only at the scale of architecture, but also of urbanism, geography, astrophysics or even the atom.

The last statement needs further explanation: the parameters of urban spaces might be influenced by the collective BMI inputs of their users, while software would arbitrate between public and individual demands. Various public environments in cities could be changed according to individual or collective needs and commands. Surgery could be performed by remotely controlled robots through BMI, making geographic borders irrelevant to a doctor’s mind. One can imagine a nanobot with robotic actuators and some simple sensory apparatus being influenced by thought alone while traveling, orienting or performing tasks in micro-scale environments. Controlling a space probe directly might also be beneficial to some extent. Brains in these cases would influence a world that was never before accessible because of the natural human condition. Nicolelis and Srinivasan write:

What real advantages might we obtain from future BMI based devices, compared to more

conventional interfaces such as joysticks, mice, keyboards, voice recognition systems and so

forth? Three possible application domains emerge:

1. Scaling of position and motion, so that a “slave” actuator, being controlled directly by

the subject’s voluntary brain activity, can operate within workspaces that are either far

smaller (e.g., nanoscale) or far bigger (e.g., space robots; industrial robots, cranes, etc.)

than our normal reach

2. Scaling of forces and power, so that extremely delicate (e.g., microsurgery) or high-force

tasks (e.g., lifting and displacing a tank) can be accomplished

3. Scaling of time, so that tasks can be accomplished much more rapidly than normal

human reaction time, and normally impossible tasks become possible (e.g., braking a

vehicle to a stop after seeing brake lights ahead; catching a fly in your hand, catching

something you dropped; responding in hand-to-hand combat at a rate far exceeding that

of an opponent)26
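The three ‘scalings’ in this list are, at bottom, changes of gain between the operator’s decoded movement and the slave actuator. The toy functions below make that explicit; the specific gain values, and the framing of time-scaling as a shortened reaction latency, are my own illustrative assumptions rather than figures from Nicolelis and Srinivasan.

```python
def scale_position(brain_displacement_m: float, gain: float) -> float:
    """Map a decoded hand displacement onto a slave actuator's workspace.
    gain < 1 shrinks reach (nanoscale work); gain > 1 extends it (cranes)."""
    return brain_displacement_m * gain

def scale_force(intended_force_n: float, gain: float) -> float:
    """Map intended grip force onto the actuator: delicate or superhuman."""
    return intended_force_n * gain

def effective_reaction_time(neural_latency_s: float, actuation_s: float) -> float:
    """Time scaling: the loop skips the peripheral muscles, so total latency is
    roughly decoding latency plus machine actuation (assumed values)."""
    return neural_latency_s + actuation_s

print(scale_position(0.10, gain=1e-6))        # 10 cm of arm motion -> 100 nm tool motion
print(scale_force(20.0, gain=500))            # 20 N grip -> 10 kN lift
print(effective_reaction_time(0.05, 0.02))    # ~70 ms, versus ~200 ms visuomotor reaction
```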

Interestingly, as Bruce Wexler states in his paper ‘Shaping the Environments that Shape Our Brains’, our influence over the environment has been increasing throughout biological evolution:

25 Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (London: Penguin Books Ltd., 2006), 25.

26 Nicolelis and Srinivasan, ‘Human-Machine Interaction,’ 253-254.


‘The shaping of the environment was a long process throughout human prehistory and history, with

long periods of limited and other periods of marked increase in the impact of human activity.’27 This activity of increasing impact is exactly the quality of extended reach that Kurzweil defines as unequivocally human. Merging the enhanced human with enhanced space is thus only another step in this process. It also seems that technological innovation has always been behind the scenes of this progress. ‘The wide tool set implied a general view that the environment can be altered and

increased the time and variety of ways in which individuals acted on the environment in a manner

that would not have been possible without the tools they created.’28

Although we know from Wexler that environments are able to wire our brains in remarkable ways, it is hard to predict how unenhanced human intelligence would cope with controlling the varying degrees of freedom of its extended body. Certainly an enhanced dependent environment, as well as an independent one, acting through neuroplasticity, would train one’s mind to appreciate its qualities and behavior, especially if this learning process were to start at an early age: ‘Intensive practice of

string instruments leads to selective increase in volume of the right somatosensory and motor areas

associated with the rapid, fine motor movements of the fingers of the left hand that provide

intricate and fast moving sequences of pressure to the strings. The changes in the brain are greater

in adults who practiced more hours and began practicing at younger ages.’29

Some scientists suggest that enhancements in human cognition can also be made. Brian M. Pierce

writes that: ‘Improvements in human cognition and communication will also follow a path of higher

integration and increased functionality. The exciting prospect is that the convergent technologies

encompass the three major improvement paths: external, human-machine interface, and internal.

This breadth should make it possible to pursue a more complete system solution to a particular

need. (…) Memory enhancement is an important element of improving human cognition, and

perhaps convergent technologies could be used to build on work that reports using external

electrical stimulation or infusion of nerve growth factor to improve/restore memory in aged rats.’30

The possibility of seriously increasing the human ability to learn is still largely speculation, but initial ideas have already been developed and intensive research is taking place. By positively manipulating cognitive abilities, human reach would be extended even further, because it would be possible to comprehend the extended body more accurately and to increase the overall performance of the biofeedback circuitry.

Kurzweil, while discussing the brain’s ability to act on extended body parts, points to a very important experiment made by the aforementioned Nicolelis:

Miguel Nicolelis and his colleagues at Duke University implanted sensors in the brains of

monkeys, enabling the animals to control a robot through thought alone. The first step in

the experiment involved teaching the monkeys to control a cursor on a screen with a

joystick. The scientists collected a pattern of signals from EEGs (brain sensors) and

27 Bruce Wexler, ‘Shaping the Environments that Shape Our Brains,’ in Cognitive Architecture, ed. Deborah Hauptmann and Warren Neidich (Rotterdam: 010 Publishers, 2010), 158.

28 Wexler, ‘Shaping the Environments,’ 161.

29 Wexler, ‘Shaping the Environments,’ 157.

30 Brian M. Pierce, ‘Sensor System Engineering Insights on Improving Human Cognition and Communication,’ in Converging Technologies for Improving Human Performance, ed. Michael C. Roco and William Sims Bainbridge (Dordrecht: Kluwer Academic Publishers, 2003), 119.


subsequently caused the cursor to respond to the appropriate patterns rather than physical

movements of the joystick. The monkeys quickly learned that the joystick was no longer

operative and that they could control the cursor just by thinking. This “thought direction”

system was then hooked up to a robot, and the monkeys were able to learn how to control

the robot’s movements with their thoughts alone. By getting visual feedback on the robot’s

performance, the monkeys were able to perfect their thought control over the robot. The

goal of this research is to provide a similar system for paralyzed humans that will enable

them to control their limbs and environment.31
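The core computational idea behind the experiment Kurzweil describes can be stated very compactly: neural population activity recorded during joystick use is regressed against the observed cursor movement, and the fitted mapping is then reused to drive the cursor (or robot) from brain activity alone. The sketch below illustrates that logic with a plain least-squares decoder on synthetic data; it is a pedagogical stand-in, not the actual algorithm used in Nicolelis’s laboratory.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase: record firing rates while the monkey uses the joystick ---
n_samples, n_neurons = 500, 30
true_weights = rng.normal(size=(n_neurons, 2))           # unknown neural tuning
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
cursor_velocity = firing_rates @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit a linear decoder: firing rates -> 2-D cursor velocity (least squares).
decoder, *_ = np.linalg.lstsq(firing_rates, cursor_velocity, rcond=None)

# --- Closed-loop phase: the joystick is disconnected, thought alone drives it ---
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
predicted_velocity = new_rates @ decoder                  # command sent to robot/cursor
print("decoded velocity command:", predicted_velocity.round(2))
```

The visual feedback mentioned in the quote closes the loop: the animal sees the decoded output and, through neuroplasticity, gradually improves the fit between intention and machine behavior.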

Already, a company called Touch Bionics produces one of the most advanced active prosthesis solutions, named “i-limb”,32 which is, at least in part, a byproduct of this experiment. Car manufacturer Honda is investing large amounts of money to create exoskeletons that would help people walk and increase their natural performance in the workplace or in everyday life.33 Kurzweil suggests that eventually we will move on from prostheses to extending our influence towards all sorts of machines, including hybrid spaces. Full integration between the human and the technological environment is the moment of “Singularity” for him.

In a way, Marcos Novak shares views with Kurzweil on the issue of the “Singularity”, stating that ‘The death of Man does not suggest some sort of literal, alarmist and paranoid apocalyptic fear. Rather, it implies that Man is an ongoing project and, moreover, that the cladogenetic speciation of Man necessarily leads to cladogenetic specialization of all of Man’s categories and taxonomies’34. If architecture here is just one of Man’s categories, the transformation of Man seamlessly implies the transformation of architecture. One can easily find parallels between Novak’s notions and Nietzsche’s Übermensch: ‘I teach you the Superman. Man is something that is to be surpassed.’35 For Nietzsche

‘the man is a bridge and not a goal.’36 Zarathustra speaks: ‘All beings hitherto have created

something beyond themselves: and ye want to be the ebb of that great tide, and would rather go

back to the beast than surpass man? What is the ape to man? A laughing-stock, a thing of shame.

And just the same shall man be to the Superman: a laughing stock, a thing of shame. Ye have made

your way from the worm to man, and much within you is still worm. Once were ye apes, and even

yet man is more of an ape than any of the apes. Even the wisest among you is only a disharmony and

hybrid of plant and phantom. But do I bid you become phantoms or plants? Lo, I teach you the

Superman!’37 It is philosophically clear to Nietzsche and Novak that the nature of Man changes over time, and thus the ongoing project of the future Man will be very different – a Superman (Übermensch) for Nietzsche or an Alien for Novak: ‘The birth of Man eventually led to the collapse of theocentrism, which Nietzsche characterized as the ‘Death of God’, thus, I suggest, beginning a series: the production of God (PoG) is followed by the production of Man (PoM); the production of Man leads to the death of God (DoG); the production of Man is followed by the production of the Alien (PoA), which, in turn

31 Kurzweil, The Singularity Is Near, 164.

32 Read more at http://www.touchbionics.com/products/active-prostheses/.

33 Read more at http://corporate.honda.com/innovation/walk-assist/.

34 Marcos Novak, ‘Speciation, Transvergence, Allogenesis: Notes on the Production of the Alien,’ Architectural Design 72 (2002): 67.

35 Friedrich Nietzsche, Thus Spake Zarathustra (New York: Random House, 1928), 6.

36 Nietzsche, Thus Spake Zarathustra, 220.

37 Nietzsche, Thus Spake Zarathustra, 6.


leads to the death of Man (DoM).’38 Novak’s Alien might still be seen as conceptually different from the Nietzschean Superman in that it does not suggest a linear, perfection-oriented evolution, but simply recognizes and appreciates the natural diversity and change of the human condition.

Novak explains human technological evolution and progress further: ‘As technology opens new or

previously inaccessible spatial domains to traversal, inhabitation and dwelling, the scope of

investigation of architecture and the spatial arts is expanded far beyond the purview of ordinary

theories and practices. New transgressions demand new architecture.’39 Thus the meaning of architecture, according to Novak, shifts together with human techno-evolution. It is easy to recognize a two-way relationship here between human and architecture: man transforms his environment, which in turn changes his own nature, by purpose or by accident. The broken imaginary line that once separated our biology from the machine is very important in this “ongoing project”. Biotechnology is already here, with all the implications that follow. Jim Spohrer writes that:

In the past million years, human performance has primarily been improved in two ways:

evolution (physical-cognitive-social changes to people) and technology (human-made

artifacts and other changes to the environment). For example, approximately one hundred

thousand generations ago, a physical-cognitive-social evolution resulted in widespread

spoken language communication among our ancestors. About 500 generations ago, early

evidence of written language existed. Then the pace of technological progress picked up:

400 generations ago, libraries existed; 40 generations ago, universities appeared; and 24

generations ago, printing of language began to spread. Again, the pace of technological

advancements picked up: 16 generations ago, accurate clocks appeared that were suitable

for accurate global navigation; five generations ago, telephones were in use; four, radios;

three, television; two, computers; and one generation ago, the Internet.

In the next century (or in about five more generations), breakthroughs in nanotechnology

(blurring the boundaries between natural and human-made molecular systems), information

sciences (leading to more autonomous, intelligent machines), biosciences or life sciences

(extending human life with genomics and proteomics), cognitive and neural sciences

(creating artificial neural nets and decoding the human cognome), and social sciences

(understanding “memes” and harnessing collective IQ) are poised to further pick up the pace

of technological progress and perhaps change our species again in as profound a way as the

first spoken language learning did some one hundred thousand generations ago.’40

The ideas of Kurzweil, Novak, Nietzsche, Spohrer and Wexler show that an evolving species constantly extends its physical and mental reach throughout its history. This brings the advantage of being better prepared for the various possible threats facing humanity, so one can predict that the trend will continue. BMI technologies will, in this case, be an important part of the next step in human technological evolution.

38 Novak, ‘Production of the Alien,’ 67.

39 Novak, ‘Production of the Alien,’ 66.

40 Jim Spohrer, ‘NBICS (Nano-Bio-Info-Cogno-Socio) Convergence to Improve Human Performance: Opportunities and Challenges,’ in Converging Technologies for Improving Human Performance, ed. Michael C. Roco and William Sims Bainbridge (Dordrecht: Kluwer Academic Publishers, 2003), 101-102.


Human Nature in Motion

The extended reach of humans, in evolutionary terms, in many cases resonates with changes in the biological body. Anthropologists see a relationship between the evolutionary cognitive improvements of humans and their ability to affect the environment, although Thomas Wynn indicates that: ‘It is heartening

that some correlation exists between changes in brains and changes in cognition as revealed in stone

tools. But it is also not surprising that there is not a tight fit, given our current limited ability to relate

changes in gross brain anatomy to changes in behavior. Together, cognitive archeology and human

paleontology may eventually be able to paint a coherent picture of the evolution of the brain and

cognition, but there is still a very long way to go.’41 One can nevertheless see differences between a healthy and a damaged brain’s ability to perform cognition-related tasks. Human behavioral changes can also be linked to deterioration in brain structures. It is thus possible to reason that changes in the anatomy of the human brain can lead to improved or reduced cognitive abilities in the individual.

“New Scientist” recently published an article called ‘Rat Cyborg Gets Digital Cerebellum’. The name

speaks for itself, but a few lines are of interest here: ‘Cochlear implants and prosthetic limbs have

already proved that it is possible to wire electrical devices into the brain and make sense of them,

but such devices involve one-way communication, either from the device to the brain or vice-versa.

Now Matti Mintz of Tel Aviv University in Israel and his colleagues have created a synthetic

cerebellum which can receive sensory inputs from the brainstem – a region that acts as a conduit for

neuronal information from the rest of the body. Their device can interpret these inputs, and send a

signal to a different region of the brainstem that prompts motor neurons to execute the appropriate

movement.’42 The scientist Francesco Sepulveda, working at the University of Essex in Colchester, UK, was quoted in the same article commenting on the achievements of Mintz’s team: ‘This demonstrates how far we have come towards creating circuitry that could one day replace damaged brain areas and even enhance the power of the healthy brain (…) The circuitry mimics functionality that is very basic. Nonetheless, this is an exciting step towards enormous possibilities.’43 The EEG helmets discussed earlier make it possible to control objects by using our brains, but, as mentioned in this article, that is still one-way communication. The research by Mintz and his team opens up a path towards complete interaction between human and environment, with possibilities of additional sensations, enhanced cognitive capabilities, and so on.

Rudy Burger, in the article ‘Enhancing Personal Area Sensory and Social Communication through Converging Technologies’, writes: ‘We understand the input systems to the brain – the sensory systems – better than the rest of the brain at this time. Therefore, we start with ways of fooling the senses by means of electronic media, which can be done now, using our present understanding of senses.’44 He gives a few examples:

Non-invasive, removable sensory enhancements (eyeglasses and contact lenses) are used

now and are a useful first step. But why not go the second step and surgically correct the

41 Thomas Wynn, ‘Archeology and Cognitive Evolution,’ Behavioral and Brain Sciences 25 (2002): 431.

42 Linda Geddes, ‘Rat Cyborg Gets Digital Cerebellum,’ New Scientist, September 24, 2011, 25.

43 Geddes, ‘Rat Cyborg Gets Digital Cerebellum,’ 25.

44 Rudy Burger, ‘Enhancing Personal Area Sensory and Social Communication through Converging Technologies,’ in Converging Technologies for Improving Human Performance, ed. Michael C. Roco and William Sims Bainbridge (Dordrecht: Kluwer Academic Publishers, 2003), 167.


eyeball? Even better, replace the eyeball. As with artificial hips and artificial hearts, people

are happy to get a new, better component; artificial sensory organs will follow. We can look

at binoculars, night-vision goggles, and Geiger counters (all currently external to the body) to

get an idea of what is possible: better resolution, better sensitivity, and the ability to see

phenomena (such as radioactivity) that are normally imperceptible to humans. Electronic

technology can be expected to provide artificial sensory organs that are small, lightweight,

and self-powered. An understanding of the sensory systems and neural channels will enable,

for example, hooking up the high-resolution electronic eyeball to the optic nerve. By the

time we have a full understanding of all human sensory systems, it is likely we will have a

means of performing the necessary microsurgery to link electronic signals to nerves.45

Aesthetics and the wider concepts of architecture have always been related to a stable human sensory apparatus, so the enhancements mentioned above can cause serious changes in the perception of existing architectural environments. Nothing is stable in this case, because cognitive improvements may also affect architectural design and realization methods. The BMI augmentation discussed here can reshape the ways in which we interact with buildings and with virtual or augmented environments. Deeper implications may follow if fundamental changes in the human body take place.

If, in this context, the transformation of man still implies an immediate transformation of architecture, the outcomes become hardly predictable. Biology interacts with technology fully, as part of the same “software” and “hardware” (the domain of bioinformatics). At the same time our virtual worlds exist as extremes46 of hybrid spaces; they are fully simulated in computers and interpreted in our brains. These computational environments, according to N. Katherine Hayles in her article ‘Virtual Bodies and Flickering Signifiers’, also interact with human nature in affective ways: ‘Working with a

VR47 simulation, the user learns to move her hand in stylized gestures that the computer can

accommodate. In the process, changes take place in the neural configuration of the user’s brain,

some of which can be long-lasting. The computer molds the human even as the human builds the

computer.’48 The last statement is true, but the effects that virtual or augmented environments impose on us have always been limited by our biological ability to adapt – in this case, by cognitive, neuronal configurations. Now we see, through the research of Mintz, that these limitations are about to be broken by advances in biotechnology, which make direct physical interference in our own bodies, senses and brain circuitry, and thus in overall human nature, possible.

It was comfortable to think that virtual reality is somehow less real than the physical one. It was thought to be “detached” from the material environment, and from Hayles’s insight we see how such thinking might be misleading. Furthermore, augmented reality has crossed the line of the virtual completely: the simulated now sits on top of the physical as an extra quality. It is important, at this moment, to mention three more research projects in neuroscience that are relevant to this controversy of virtuality.

45 Burger, ‘Enhancing Personal Area Sensory and Social Communication,’ 167-168.

46 Although virtual environments could already be considered as having a separate spatial dimension in our physical world, the simulation is still run on physical hardware; thus in this essay I suggest that it does not cross the borders of hybrid space.

47 Virtual Reality.

48 N. Katherine Hayles, ‘Virtual Bodies and Flickering Signifiers,’ October 66 (1993): 90.


The first was made by a team of scientists: Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu and Jack L. Gallant. This group recently published an article in Current Biology titled ‘Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies’. The team explains what they have been able to achieve: ‘In this study, we developed an encoding model that predicts

BOLD49 signals in early visual areas with unprecedented accuracy. By using this model in a Bayesian

framework, we provide the first reconstructions of natural movies from human brain activity. This is

a critical step toward the creation of brain reading devices that can reconstruct dynamic perceptual

experiences. Our solution of this problem rests on two key innovations. The first is a new motion-

energy encoding model that is optimized for use with fMRI50 and that aims to reflect the separate

contributions of the underlying neuronal population and hemodynamic coupling. This encoding

model recovers fine temporal information from relatively slow BOLD signals. The second is a

sampled natural movie prior that is embedded within a Bayesian decoding framework. This

approach provides a simple method for reconstructing spatio-temporal stimuli from the sparsely

sampled and slow BOLD signals.’51
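The decoding logic the authors describe – an encoding model that predicts BOLD responses, combined with a prior built from a large library of natural movie clips – amounts to Bayes’ rule: the reconstruction is the clip (or weighted average of clips) whose predicted brain response best matches the measured one. The sketch below caricatures that pipeline with random vectors standing in for both the clip library and the encoding model; every quantity in it is invented for illustration and none of it reproduces the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_clips, n_voxels = 1000, 200
clip_library = rng.normal(size=(n_clips, 64))              # stand-in for a movie-clip prior
encoding = rng.normal(size=(64, n_voxels)) / 8.0            # stand-in encoding model: clip -> BOLD

def predicted_bold(clip_features):
    return clip_features @ encoding                          # model's predicted voxel responses

# A "measured" BOLD pattern, generated from a hidden true clip plus noise.
true_clip = clip_library[123]
measured = predicted_bold(true_clip) + rng.normal(scale=0.5, size=n_voxels)

# Posterior over library clips: Gaussian likelihood of the measurement, flat prior.
log_likelihood = -0.5 * np.sum((predicted_bold(clip_library) - measured) ** 2, axis=1)
posterior = np.exp(log_likelihood - log_likelihood.max())
posterior /= posterior.sum()

# Reconstruction = posterior-weighted average of the clip library.
reconstruction = posterior @ clip_library
print("most probable clip index:", int(np.argmax(posterior)))   # ideally 123
```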

This makes visual mind reading possible and creates further opportunities for brain-machine

interaction: ‘Quantitative models of dynamic mental events could also have important applications

as tools for psychiatric diagnosis and as the foundation of brain machine interface devices.’52

The scientists also say that ‘this modeling framework might also permit reconstruction of dynamic

mental content such as continuous natural visual imagery. In contrast to earlier studies that

reconstruct visual patterns defined by checkerboard contrast, our framework could potentially be

used to decode involuntary subjective mental states (e.g., dreaming or hallucination), though it

would be difficult to determine whether the decoded content was accurate. One recent study

showed that BOLD signals elicited by visual imagery are more prominent in ventral-temporal visual

areas than in early visual areas. This finding suggests that a hybrid encoding model that combines

the structural motion-energy model developed here with a semantic model of the form developed in

previous studies could provide even better reconstruction of subjective mental experiences.’53

This research raises serious ethical considerations, but the idea of mind reading is starting to look very realistic.

The second research project was co-authored by Edward S. Boyden, who, in his TED talk “A Light Switch for Neurons”, while talking about targeted brain manipulation to cure disease, asks: ‘Can we dial-in information precisely where we want it to go?’54 The answer to this question seems to be positive. The individual switching of neurons can be achieved using the techniques of optogenetics. Boyden and his colleagues wrote an article called ‘A Wirelessly Powered and Controlled Device for Optical Neural Control of Freely-Behaving Animals’, where they present the technology needed to manipulate an animal’s behavior by intervening in the circuitry of the mouse brain: ‘We demonstrate use of the technology to wirelessly drive cortical control of movement in mice. These devices may serve as prototypes for clinical ultra-precise neural prosthetics that use light as the modality for biological

49 Blood Oxygen Level-Dependent.

50 Functional Magnetic Resonance Imaging.

51 Nishimoto et al., ‘Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies,’ Current Biology 21 (2011): 3-4, http://www.cell.com/current-biology/abstract/S0960-9822(11)00937-7.

52 Nishimoto et al., ‘Reconstructing Visual Experiences,’ 1.

53 Nishimoto et al., ‘Reconstructing Visual Experiences,’ 5.

54 http://www.ted.com/talks/lang/eng/ed_boyden.html at 4:35.


control.’55 At the beginning of the article the scientists explain that ‘Optogenetics, the ability to use light to activate and silence specific neuron types within neural networks in vivo and in vitro, is revolutionizing neuroscientists’ capacity to understand how defined neural circuit elements contribute to normal and pathological brain functions.’56 It is useful to see more precisely how their technique works:

In order to satisfy the power and control requirements for freely-behaving optogenetic

experiments, we have developed a supercapacitor-based headborne device which can

control multiple headborne LEDs, receiving power (and optionally, real-time delivered

stimulation protocol information) in a fully wireless fashion. The device is small (<1 cm3), and

weighs approximately 2 g when operated autonomously with preprogrammed modulation

protocols, or 3 g when equipped with optional wireless telemetry, both implementations

being appropriate for use in small animals such as mice. In this paper we present the design,

which centers around a high-efficiency resonant wireless power transfer system and

headborne supercapacitor-based energy storage, appropriate to support the reliability and

high-power operation requirements of the optogenetic research. The power transmitters are

low-profile devices that can fit under behavioral arenas or cages. We show that such

systems can sustain multi-watt power delivery both continually and in burst mode, and

demonstrate control of behavior in untethered mice expressing ChR257 in motor cortex

pyramidal cells. Such systems will not only enable a number of fundamentally new kinds of

experiment, but may also serve as prototypes for a new generation of clinical neural

prosthetics that achieve great precision through the use of light targetable molecules as

transducers of cell-type-specific neural control.58

Changes in behavior of mice were obvious: ‘We demonstrate the applicability of this technology to

untethered freely-behaving animal optogenetics, using a well-validated and easily quantified

behavioral paradigm of cortically-driven unilateral motor control in the mouse. A simple paradigm in

which 470 nm light was delivered unilaterally to M1 motor cortex at 30 Hz with 15 ms pulse width at

250 mW LED input power in 30 s epochs followed by 90 s periods of rest resulted in increased

rotation to the contralateral side during stimulation as compared to the epoch before stimulation.

Thus, we can drive behaviorally-relevant protocols at sufficiently high power levels to elicit robust

behavioral changes.’59
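The quoted stimulation paradigm is, at its core, simple timing arithmetic. The sketch below, whose parameter names and structure are my own assumptions rather than the authors' firmware, generates the onset and offset times of such a pulse train (30 Hz, 15 ms pulses, 30 s stimulation epochs separated by 90 s of rest):

def pulse_onsets(freq_hz=30.0, pulse_ms=15.0, stim_s=30.0, rest_s=90.0, n_epochs=3):
    """Return (onset_s, offset_s) pairs for an epoch-based pulse train.

    Mirrors the quoted paradigm: pulses at `freq_hz` with `pulse_ms` width
    during `stim_s`-long epochs, separated by `rest_s` of rest.
    """
    period = 1.0 / freq_hz
    width = pulse_ms / 1000.0
    pulses = []
    for epoch in range(n_epochs):
        epoch_start = epoch * (stim_s + rest_s)
        n_pulses = int(stim_s * freq_hz)          # 900 pulses per 30 s epoch
        for i in range(n_pulses):
            on = epoch_start + i * period
            pulses.append((on, on + width))
    return pulses

train = pulse_onsets()
print(len(train), train[:2])   # 2700 pulses; first two onset/offset pairs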

A third line of research concentrates on visual prostheses intended to restore sight to blind people. In 2009 John S. Pezaris and Emad N. Eskandar published an article ‘Getting Signals Into the

Brain: Visual Prosthetics Through Thalamic Microstimulation’ where they discuss and compare

various intrusive methods that can be applied to restore visual sensation to people with damaged

eyes: ‘Common causes of blindness are diseases that affect the ocular structures, such as glaucoma,

retinitis pigmentosa, and macular degeneration, rendering the eyes no longer sensitive to light. The

visual pathway, however, as a predominantly central structure, is largely spared in these cases. It

55 Edward S. Boyden et al., ‘A Wirelessly Powered and Controlled Device for Optical Neural Control of Freely-Behaving Animals,’ Journal of Neural Engineering 8 (2011): 1, http://stacks.iop.org/JNE/8/046021.
56 Boyden et al., A Wirelessly Powered and Controlled Device for Optical Neural Control, 1.
57 Light-gated ion transport protein “Channelrhodopsin-2”.
58 Boyden et al., A Wirelessly Powered and Controlled Device for Optical Neural Control, 2.
59 Boyden et al., A Wirelessly Powered and Controlled Device for Optical Neural Control, 8.


is thus widely thought that a device-based prosthetic approach to restoration of visual function will

be effective and will enjoy similar success as cochlear implants have for restoration of auditory

function.’60 Scientists go on to present how visual prosthetics work: ‘The fundamental idea

underpinning visual prosthetics is to create an imaging device that, through some artificial means,

injects appropriately processed signals into the visual stream. While some of the retinal approaches

seek to create a device that does little or no image processing or to have no device at all by

photosensitizing normal cells, most projects have a device that performs a function that is akin to

normal retinal image processing. As such, visual prosthetic devices are not unlike bionic eyes: they

focus photons onto a light-sensitive surface to create an image, extract salient features from that

image, and transmit those features to the brain.’61 At the moment the main targets for implants are as

follows: ‘In cases in which the early visual system is intact, 6 distinct structures along the pathway

from retina to primary visual cortex provide potential targets for a device-based approach: the

retina, the optic nerve, the optic tract, the LGN62, the optic radiation, and the primary visual

cortex.’63 Without going into precise technical details, it is enough to cite a few lines here about this

technology:

In a visual prosthesis device, each electrode contact is typically intended to generate 1

phosphene. If the phosphenes are small and tightly focused, they can be thought of as

pixels, although they will likely not be close-packed like in a computer or camera display, but

more probably separated by an unstimulated background. Mapping the visual scene to these

pixels can be thought of as looking through an opaque screen through which holes have

been punched, somewhat like looking through a kitchen colander, although each pixel in a

prosthesis will be solid in appearance, or nearly so, and each hole in an opaque screen will

show some detail of the scene beyond within the diameter of the hole. Nevertheless, a

prosthetic image can be constructed from a collection of pixels where each has been

adjusted according to the brightness of the original image, even if there are far fewer pixels

in the prosthetic image than in the original, and even if the prosthetic pixels are not

arranged in a perfect grid.64
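The colander analogy in the quotation can be illustrated with a small sketch. Assuming, purely for illustration, a grayscale image stored as a NumPy array and a set of hypothetical electrode positions, each electrode yields one solid phosphene whose brightness follows the local brightness of the original image:

import numpy as np

def phosphene_image(image, electrode_xy, spot_radius=2):
    """Render a crude 'prosthetic' view of a grayscale image.

    image        : (H, W) array with values in [0, 1]
    electrode_xy : (N, 2) row/col coordinates of phosphene centres
    Each electrode produces one solid phosphene whose brightness is the
    mean image brightness in a small patch around its centre.
    """
    H, W = image.shape
    out = np.zeros_like(image)
    for r, c in electrode_xy:
        r0, r1 = max(r - spot_radius, 0), min(r + spot_radius + 1, H)
        c0, c1 = max(c - spot_radius, 0), min(c + spot_radius + 1, W)
        out[r0:r1, c0:c1] = image[r0:r1, c0:c1].mean()   # one brightness per phosphene
    return out

rng = np.random.default_rng(1)
img = rng.uniform(size=(64, 64))
electrodes = rng.integers(0, 64, size=(60, 2))           # 60 electrodes, not a perfect grid
prosthetic_view = phosphene_image(img, electrodes)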

These comments outline an important concept of the brain: it is the place where our world exists. Tweaking the brain can change the world; in this case it can shed light onto darkness for blind people. Theoretically, similar methods could also provide improved vision for healthy individuals. Although scientists do not suggest this, as current research is at an early stage, digitally augmented prosthetics can be created in the future, as stated by Burger.

Now it is time to return to the theme of the virtual controversy. By looking at selected research projects we see that scientists are now starting to understand how to decode perception directly from the brain’s spatial changes in BOLD fluctuations using fMRI, and how it is possible, more precisely than ever, to manipulate neurons and change the behavior of animals by using optogenetics instead of

60 John S. Pezaris and Emad N. Eskandar, ‘Getting Signals into the Brain: Visual Prosthetics through Thalamic Microstimulation,’ Neurosurgical Focus 27, no. 1 (2009): 1, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848996.
61 Pezaris and Eskandar, Getting Signals into the Brain, 5.
62 Lateral Geniculate Nucleus.
63 Pezaris and Eskandar, Getting Signals into the Brain, 10.
64 Pezaris and Eskandar, Getting Signals into the Brain, 5.


DBS65 implants. It is also possible that, through advances in visual prostheses, interference in the visual stream processing structures between the retina and the primary visual cortex can be used to restore sight to blind people. Add up the techniques that allow scientists to intervene in the brain’s circuitry by using man-made systems like a digital cerebellum, and we get a full spectrum of developing ideas which will be needed to attach a virtual environment directly to the brain. It is possible to speculate that the knowledge needed to read high-resolution dreams will be used to precisely stimulate neuronal structures by imposing visual imagery (simulated dreaming) directly on the human brain. This will be used in technologies of augmented and virtual reality, but at that point reality will be enhanced not only through a visual layer of information. Additional senses like ultrasonic hearing, infrared vision, more delicate smell, taste, etc., can be included. The virtual has the potential to become a richer environment for our experience than the natural-physical one by opening up wider frequencies of reality. And this is the controversy of being “less” real.

Augmentation of perceivable reality through brain stimulation provides a risky framework for our future, and there already exist many difficult questions to be addressed by humanity, but we should recognize from what was said about virtuality and neuroscience that, although human nature has been slowly changing throughout our evolution, technological advances can increase the rate of this process. This also affects the field of architecture. Internal and external changes in the human body can influence not only the perception of architecture or the methods of building, but also the needs of the user and thus the values of the architect. It is clear that the consequences for architecture in this case are fundamental and need to be analyzed further.

Synthetic Matter

Neri Oxman together with colleagues in the article ‘Programming Reality: From Transitive Materials

to Organic User Interfaces’ indicates that: ‘Computation today still remains an entity of information

which is overlaid on top of the passive physical world, with little regard for how material and

computationally-driven behaviors can operate together. Transitive materials attempt to blur such

boundaries by supporting the design of integrated structures that are themselves capable of

input/output, computation, and ultimately of interactivity and personalization.’66 As of today this potential is still largely unexplored, but even existing transitive materials allow the creation of the first hybrid spaces. The authors of the article explain computationally enhanced physical reality as follows:

‘Transitive materials combine the transient qualities of smart, composite, and computational

materials, and encapsulate the ability to function as frame, skeleton, sensor, actuator and/or

processor. The multifaceted nature of transitive materials provides a link between computational

devices and physical material elements.’67 Sophisticated hybrid spaces would revolutionize our

material world: ‘In the future, we will observe increasing integration of computation and the physical

environment to the point where basic material properties will be computationally controlled. In this

brave new world, we will be programming not only computers or devices, but the fabric of reality

65 Deep Brain Stimulation.
66 Marcelo Coelho et al., ‘Programming Reality: From Transitive Materials to Organic User Interfaces,’ Ext. Abstracts CHI 2009 (Boston: ACM Press, 2009): 4760.
67 Marcelo Coelho et al., Programming Reality, 4760.


itself.’68 It is also important to recognize that interfaces for human-environment communication will

become increasingly organic: ‘Organic User Interfaces (OUI) explore future interactive designs as

computationally controlled materials become commonplace. The OUI vision is based on an

understanding that physical shape of display devices will become non-flat, potentially arbitrary and

even fluid or computationally controlled. This allows display devices and entire environments to take

on shapes that are flexible, dynamic, modifiable by users or self-actuated.’69 BMI should also be

considered organic as it holds some of the most important qualities of this vision by suggesting

direct biofeedback communication.
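As a toy illustration of a material element that is frame, sensor, processor and actuator at once, consider the sketch below. The class name, the single curvature state variable and the update rule are hypothetical simplifications of mine, not the system described in the cited article; the point is only that an environmental reading and a (possibly BMI-derived) user signal can jointly drive the element's physical state.

from dataclasses import dataclass

@dataclass
class TransitiveElement:
    """Toy model of one computational material element.

    The element is frame, sensor, processor and actuator at once:
    it reads its environment, computes a target state, and deforms itself.
    """
    curvature: float = 0.0          # current actuated state

    def sense(self, ambient_light: float, user_signal: float) -> float:
        # Combine an environmental reading with a (possibly BMI-derived) user signal
        return 0.5 * ambient_light + 0.5 * user_signal

    def actuate(self, target: float, gain: float = 0.2) -> None:
        # Move part of the way toward the computed target on each update step
        self.curvature += gain * (target - self.curvature)

element = TransitiveElement()
for step in range(50):
    element.actuate(element.sense(ambient_light=0.8, user_signal=0.3))
print(round(element.curvature, 3))   # settles near 0.55

After repeated updates the element settles near its computed target, which is the minimal form of the sense-compute-actuate loop that transitive materials and OUI generalize.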

Programming reality and building a bridge of communication through OUI will allow our brains and

bodies to communicate with their surroundings in many different ways, but it is still very important

to recognize current advances in material science to better understand future possibilities of such a

revolution. Two research projects will be discussed to explain the current reality of programmable matter.

Claytronics, a project developed at Carnegie Mellon University, aims at creating small modular robots (catoms acting like material voxels) that can orient and organize themselves in spatial reality to form

physical models. Seth C. Goldstein together with Todd C. Mowry in the article called ‘Claytronics: A

Scalable Basis for Future Robots’ states that: ‘One of the primary goals of claytronics is to form the

basis of a new media type, pario. Pario, a logical extension of audio and video is a media type used

to reproduce moving 3D objects in the real world. A direct result of our goal is that claytronics must

scale to millions of micron-scale units. Having scaling (both in number and size) as primary design

goal impacts the work significantly.’70 The new medium, called pario, should be understood as one type of

programmable matter that could shape future hybrid spaces. ‘The long term goal of our work is to

render physical artifacts with such high fidelity that our senses will easily accept the reproduction for

the original. When this goal is achieved we will be able to create an environment, which we call

synthetic reality, in which a user can interact with computer generated artifacts as if they were the

real thing. Synthetic reality has significant advantages over virtual reality or augmented reality.‘71

The authors indicate that synthetic objects could be sensed, experienced and even used more naturally without wearing additional glasses or haptic feedback devices. The technology is explained further:

‘Claytronics is our name for an instance of programmable matter whose primary function is to

organize itself into the shape of an object and render its outer surface to match the visual

appearance of that object. Claytronics is made up of individual components, called catoms – for

Claytronic atoms – that can move in three dimensions (in relation to other catoms), adhere to other

catoms to maintain a 3D shape, and compute state information (with possible assistance from other

catoms in the ensemble). Each catom is a self-contained unit with CPU, an energy store, a network

device, a video output device, one or more sensors, a means of locomotion, and a mechanism for

adhering to other catoms.’72 As fictional as it sounds, the authors say that ‘the core concepts in

claytronics are hardly new; from science fiction to realized reconfigurable robots, to proposed

modular robots, scientists and writers have contemplated the automatic synthesis of 3D objects.

68 Marcelo Coelho et al., Programming Reality, 4762.
69 Marcelo Coelho et al., Programming Reality, 4760.
70 Seth C. Goldstein and Todd C. Mowry, ‘Claytronics: A Scalable Basis For Future Robots,’ Computer Science Department Paper 770 (2004): 1, http://repository.cmu.edu/compsci/770.
71 Seth Goldstein and Todd Mowry, Claytronics: A Scalable Basis For Future Robots, 1.
72 Seth Goldstein and Todd Mowry, Claytronics: A Scalable Basis For Future Robots, 1.


However, technology has finally reached a point where we can for the first time realistically build a

system guided by design principles which will allow it to ultimately scale to millions of sub-millimeter

catoms. The resulting ensemble can be viewed as either a form of programmable matter suited for

implementing pario or as a swarm of modular robots.’73 Working prototypes, at the moment, have fairly low resolution, but they are a strong proof of concept and will be scaled down (in size) as research goes on. This is explained further: ‘In addition to capabilities, the different

regimes (macro, micro, and nano) have significantly different economics. Macro-scale catoms

require the assembly of multiple parts into a single unit. We expect that this will make the

realization of life-size synthetic reality prohibitive due to the cost per catom. The micro-scale catoms

may also require assembly, but with many fewer parts, e.g., photolithography. Just as VLSI-based

computers are commonplace (as opposed to vacuum tube based computers), in this regime, catoms

are inexpensive enough that synthetic reality, though expensive, becomes viable.’74 This means that higher-resolution claytronics might be cheaper than a low-resolution equivalent.
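The core claytronics idea, an ensemble of catoms reorganizing itself into a target shape, can be caricatured in a few lines. The sketch below is a centralized, greedy stand-in of my own; real catoms rely on local sensing, local motion constraints and distributed coordination, none of which is modeled here.

def form_shape(catoms, target):
    """Greedy toy reorganisation of a catom ensemble.

    catoms : set of (x, y, z) voxel positions currently occupied
    target : set of (x, y, z) voxel positions the ensemble should occupy
    Repeatedly moves one misplaced catom into an unfilled target voxel.
    (Real claytronics uses local, distributed motion rules instead.)
    """
    catoms = set(catoms)
    misplaced = sorted(catoms - target)
    empty = sorted(target - catoms)
    for src, dst in zip(misplaced, empty):
        catoms.remove(src)
        catoms.add(dst)       # 'move' the catom to its new lattice site
    return catoms

start = {(x, 0, 0) for x in range(8)}                    # a line of 8 catoms
goal = {(x, y, 0) for x in range(4) for y in range(2)}   # a 4 x 2 slab
print(form_shape(start, goal) == goal)                   # True: same count, new shape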

It needs to be stressed here that the term synthetic, which describes catomic reality, does not imply a low level of sophistication when the system is compared to nature. The synthetic can reach and even surpass the capabilities of biological systems, although a lot of work still needs to be done. J. Craig Venter and colleagues, in the article ‘Creation of a Bacterial Cell Controlled by a Chemically

Synthesized Genome’, writes: ‘… in 1995, our team was able to read the first complete genetic code

of a self-replicating bacterium, Haemophilus influenzae. Reading the genetic code of a wide range of

species has increased exponentially from these early studies. Our ability to rapidly digitize genomic

information has increased by more than eight orders of magnitude over the past 25 years. Efforts to

understand all this new genomic information have spawned numerous new computational and

experimental paradigms, yet our genomic knowledge remains very limited.’75 But even with this

limited knowledge scientists were able to create first synthetic life form: ‘We now have combined all

of our previously established procedures and report the synthesis, assembly, cloning, and successful

transplantation of the 1.08-Mbp76 M. mycoides JCVI-syn1.0 genome, to create a new cell controlled

by this synthetic genome.’77 Advances in biotechnology made this possible, and it is already obvious that 3D mimicry of virtual objects using claytronics is just one method of making synthetic reality among many. We can organize matter in order to create living creatures. This is a great example of how synthetic life forms can already extend biological ones:

In 1995, the quality standard for sequencing was considered to be one error in 10000 bp and

the sequencing of a microbial genome required months. Today, the accuracy is substantially

higher. Genome coverage of 30-50X is not unusual, and sequencing only requires a few days.

However, obtaining an error-free genome that could be transplanted into a recipient cell to

create a new cell controlled only by the synthetic genome was complicated and required

many quality control steps. Our success was thwarted for many weeks by a single base pair

deletion in the essential gene dnaA. One wrong base out of over one million in an essential

gene rendered the genome inactive, while major genome insertions and deletions in non-

73 Seth Goldstein and Todd Mowry, Claytronics: A Scalable Basis For Future Robots, 1.
74 Seth Goldstein and Todd Mowry, Claytronics: A Scalable Basis For Future Robots, 3.
75 J. Craig Venter et al., ‘Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome,’ Science Vol 329 no. 5987 (2010), http://www.sciencemag.org/content/329/5987/52.full.
76 Mega base pairs.
77 J. Craig Venter et al., http://www.sciencemag.org/content/329/5987/52.full.


essential parts of the genome had no observable impact on viability. The demonstration that

our synthetic genome gives rise to transplants with the characteristics of M. mycoides cells

implies that the DNA sequence upon which it is based is accurate enough to specify a living

cell with the appropriate properties.78

It is a huge leap towards programmable matter that can grow and perform life-like behaviors. Genetic engineering extends the possibilities of synthetic reality and pario media. Scientists explain that future synthetic life forms can be produced purely by coding genetic information, which would determine the rules for organizing matter to create artificial biology: ‘The properties of the cells controlled by the assembled genome are expected to be the same as if the whole cell had been produced synthetically (the DNA software builds its own hardware)’. The statement that DNA software builds its own hardware is of great importance here, as it relates bioengineering methods to hybrid spaces from a reversed angle. Hardware environments, in this context, are not only passively enriched with software layers of information79 to manipulate a pre-given set of geometrical possibilities of a system, but are actively constructed from material components, driven and affected by these data layers themselves. It seems, after all, that DNA-like mechanisms producing bio-like synthetic matter can shape our enhanced spaces.

Synthetic biology is defined as ‘the design and construction of new biological parts, devices, and

systems, and the re-design of existing, natural biological systems for useful purposes.’80 Considerable effort is put in by the BioBricks Foundation81 to collect, store and openly share DNA sequences (BioBricks) of known functions and effects that can be used to build synthetic reality. The main column on the index page of the foundation’s website states that ‘We envision a world in which scientists and

engineers work together using BioBrick™ parts — freely available standardized biological parts — to

create safe, ethical solutions to the problems facing humanity. We envision synthetic biology as a

force for good in the world. We see a future in which architecture, medicine, environmental

remediation, agriculture, and many other fields are using the technology of synthetic biology.’82

The bricks being shared are the coding blocks of biology. Having these freely available artificial genes makes biotechnology accessible to architectural research aimed at creating synthetic materials and environments. Of course, it might also be that DNA is not necessary to produce artificial biology or life. Good examples of this are protocells, explained by Martin Hanczyc in his TED talk83: they exhibit life-like behaviors without having any genes and are driven by elementary chemical interactions, although it is still thought that more complex organisms need some sort of information carrier containing genetic instructions (like DNA). Such physical systems that can build themselves and contain man-made functions fall under the concept of programmable matter.
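The compositional logic behind standardized parts can be sketched very simply: a device is a concatenation of parts in a fixed functional order (promoter, ribosome binding site, coding sequence, terminator). The part names and sequences below are placeholders of mine, not real entries from the BioBricks registry.

# Hypothetical part library; the sequences are placeholders, not real BioBricks.
PARTS = {
    "promoter_A":   "TTGACA" + "N" * 17 + "TATAAT",
    "rbs_A":        "AGGAGG",
    "cds_reporter": "ATG" + "GCT" * 20 + "TAA",
    "terminator_A": "GCGCCCGGTTTTTTT",
}

DEVICE_ORDER = ["promoter_A", "rbs_A", "cds_reporter", "terminator_A"]

def assemble(order, library=PARTS):
    """Concatenate standardized parts into one device sequence."""
    missing = [p for p in order if p not in library]
    if missing:
        raise KeyError(f"unknown parts: {missing}")
    return "".join(library[p] for p in order)

device = assemble(DEVICE_ORDER)
print(len(device), device[:12])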

Synthetic reality is the reality of hybrid environments. Both catomic and genetically driven self-organizing matter can be coded to interact with the human subject through various interfaces. Programmed matter in this context becomes as sophisticated as biological matter, thus human-environment interaction modes can be made simple and natural. Scientists experimenting with

78 J. Craig Venter et al., http://www.sciencemag.org/content/329/5987/52.full.
79 As discussed by Neri Oxman and colleagues.
80 http://syntheticbiology.org.
81 http://biobricks.org/.
82 http://biobricks.org/.
83 http://www.ted.com/talks/martin_hanczyc_the_line_between_life_and_not_life.html.


robots controlled through EEG helmets are just beginning to tackle the world of brain-controlled objects, but the outcomes already seem promising and far-reaching. It is obvious that developments in neuroscience, biotechnology and robotics affect architectural discourse; these scientific influences may have a deep theoretical and practical impact as the human mind becomes opened to and merged with its surroundings across various scales. Qualities and quantities of synthetic architecture, like form, function, aesthetics, structure, size, composition, organization, etc., can be transformed through robotic84 synthetic media. Plugging the brain into these media as a control mechanism through a two-way communication feedback loop will thus extend the potential of both space and human.
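A toy version of such a brain-to-environment loop, with every detail (the band-power feature, the threshold, the aperture model) an illustrative assumption of mine rather than any existing EEG system, might look as follows:

import random

def classify(alpha_power, threshold=0.5):
    """Map one (simulated) EEG band-power feature to a command."""
    return "open" if alpha_power > threshold else "close"

def step_environment(state, command, rate=0.1):
    """Move an aperture (0 = closed, 1 = open) toward the commanded value."""
    target = 1.0 if command == "open" else 0.0
    return state + rate * (target - state)

random.seed(0)
aperture = 0.0
for t in range(20):
    alpha = random.random()              # stand-in for a decoded brain signal
    command = classify(alpha)
    aperture = step_environment(aperture, command)
    # In a two-way loop the new aperture would in turn shape what the user perceives.
print(round(aperture, 2))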

Architectural Cyborg

In such a reality where humans are able to create their own domains of existence, be it virtual,

augmented or physical, there is a need to suggest integral topics for architectural research and

practice. At the epicenter of the merged “Human v2.0” and “Space v2.0” is the concept of interaction. Gordon Pask, in the article ‘The Architectural Relevance of Cybernetics’, published in 1969, said: ‘The high point of functionalism is the concept of a house as a “machine for living in.”

But the bias is towards a machine that acts as a tool serving the inhabitant. This notion will, I believe,

be refined into the concept of an environment with which the inhabitant cooperates and in which he

can externalize his mental processes.’85 It is clear from previous parts of the essay that there already

exists a possibility to externalize mental processes literally through brain-machine interaction. Thus

cooperation with the environment can be achieved not only through the usual sensors and visual output devices, but also through OUI that allow for two-way communication biofeedback loops like BMI.

The field of Interactive Architecture, although it exists as part of a very wide architectural and interdisciplinary scientific discourse, inherited its ideas mostly from cybernetics and is now deeply related to the fields of intelligent environments (IE) and embedded computing. The fact that it deals with various forms of communication between human, built, virtual, augmented and natural environments makes it well suited for further research into biofeedback mechanisms and applications, including BMI technologies.

In the book Interactive Architecture Michael Fox and Miles Kemp write that: ‘Advancement will only

be accomplished when interactive architectural systems are addressed not primarily or singularly,

but as an integral component of a larger vision that takes advantage of today’s pervasive, constantly

unfolding, and far-reaching technology.’86 The authors stress that today’s technologies deserve recognition by architects as they can have a significant influence on our built and simulated environments. They also advocate an architecture that is integrated into a wider scientific perspective. This is important when dealing with theories of biofeedback and synthetic reality in order to bring speculation to the level of research and application.

Fox in his article ‘Beyond Kinetic’ writes that ‘we are rapidly approaching a time where the

integration of embedded computation and kinetic function becomes a practical and feasible

84 Including various scales and kinds (like bio, nano, etc.).
85 Gordon Pask, ‘The Architectural Relevance of Cybernetics,’ Architectural Design 39 (1969): 496.
86 Michael Fox and Miles Kemp, Interactive Architecture (New York: Princeton Architectural Press, 2009), 12.


reality.’87 Embedded computation, deeply integrated with hybrid responsive spaces, forms an

important part of future brain driven environments. It is easy to see that an interdisciplinary approach is at the heart of such research: ‘From an architectural standpoint, embedded computation has

taken an interesting foothold. Work in embedded computation has arisen primarily out of the field

of computer science, reaching into the sub disciplines of both artificial intelligence and robotics. The

research has come out of both academia and the corporate world and there are currently numerous

precedent examples of embedded computation in that have begun to define a field now known as

intelligent environments.’88 In all cases IE are independent, semi-dependent or fully dependent. In one way or another IE interact with humans or other systems (natural or artificial). Such intelligent and interactive hybrid spaces, through embedded computing and active control mechanisms, will allow our brains to cooperate with the environment by externalizing human mental processes, as envisioned by Pask.

Architect Kas Oosterhuis writes about interactive architecture in one of his recent books Towards a

New Kind of Building: ‘The second paradigm shift leading architecture towards new horizons is the

step from static to interactive architecture. Exactly the same prerequisite that allows for customized

CNC production also allows for dynamic behavior of the constructs. Once the building components

possess their unique numbers, once they are tagged, they can be addressed as individuals. When the

individual components are continually addressed in a streaming mode in real time, and when the

building components are capable of making moves, then that component may be said to be

responsive, adaptive.’89 The tagged building components described by Oosterhuis are perhaps the first practical and feasible applications of programmable matter at the architectural scale. Oosterhuis

explains that: ‘From responsiveness to interaction is another step. Responding to incoming

information is based on information streaming in one direction from the sender to the receiver, then

the receiver responding back to the sender. But this is still far from the bi-directional dialogue that

characterizes the interactivity paradigm. To have interactivity, the receiver must send back new

information; it must process the received information and send it back in a slightly adjusted form.

Some parameters must have changed. A dialogue is a two-way communication in which each actor is

somewhat changed after having sent back its response.’90 We see that an interactive architectural system is initiated by a loop of communication between objects with integrated sending, receiving and processing functions. This description of communication resembles the biofeedback circuitry discussed earlier in this essay. Two-way BMI could thus be used to integrate architectural objects with the human body.
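The bi-directional dialogue Oosterhuis describes can be sketched as a toy message loop in which a tagged component receives a setpoint, moves part of the way, and replies with its adjusted state, which in turn modifies the sender's next message. All names and numbers below are hypothetical illustrations, not his actual system.

class TaggedComponent:
    """Toy addressable building component for a streaming dialogue."""
    def __init__(self, tag):
        self.tag = tag
        self.position = 0.0

    def receive(self, setpoint):
        # Respond: move part of the way, then send back adjusted information.
        self.position += 0.5 * (setpoint - self.position)
        return {"tag": self.tag, "actual": self.position}

components = {i: TaggedComponent(i) for i in range(4)}
setpoints = {i: 1.0 for i in range(4)}

for tick in range(5):                     # real-time stream of addressed messages
    for tag, comp in components.items():
        reply = comp.receive(setpoints[tag])
        # The sender adjusts its own parameter based on the reply (two-way dialogue).
        setpoints[tag] = 0.9 * setpoints[tag] + 0.1 * reply["actual"]

print({t: round(c.position, 2) for t, c in components.items()})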

Current robotic technologies in the building industry are slowly starting to be recognized, appreciated and implemented. Nevertheless, as seen from the discussed advances in the fields of material science, neuroscience, IT and biotechnology, there is huge potential for architectural application research and even fundamental experimentation at the scale of built environments or the scope of virtual ones. Thinking about the materiality of architecture, Fox and Kemp write that:

87 Michael Fox, ‘Beyond Kinetic,’ MIT Kinetic Design Group, 2004, 3, http://robotecture.com/Papers/Pdf/beyond.pdf.
88 Fox, Beyond Kinetic, 4.
89 Kas Oosterhuis, Towards a New Kind of Building (Rotterdam: NAi Publishers, 2011), 114.
90 Oosterhuis, Towards a New Kind of Building, 114.


The prevalence of the organic paradigm is beginning to alter the conceptual model that we

apply in order to comprehend our environment and, consequently, design within our

environment. Consequentially, the organic paradigm of kinetic adaptation has driven a

profound set of developments in both robotics and new materials whereby the adaptation

becomes much more holistic, and operates on a very small internal scale. Technology has

provided recent unprecedented insight into the workings of microscopic natural

mechanisms and advanced manufacturing of high-quality kinetic parts with new materials

such as fabrics, ceramics, polymers and gels, shape-memory alloy compounds, and

composites. In the same vein, we cannot ignore those structures and systems being explored

at even smaller scales, such as the nano. Nanocomposite materials are being developed that

are self-sensing and self-actuating to improve strength, reliability, and performance. The

combination of new materials and robotics at a very small scale opens up a fascinating area

that is relevant to interactive architecture in bio-nanotechnology. Interactive architecture

could greatly benefit from the integration of biological functions and nanoscale precision.91

We saw from various selected research projects that this bio-nanotechnologically driven revolution affects not only synthetic hybrid spaces, but can also change or even enhance human nature. This revolution ties together progress in the two spheres of “Human v2.0” and “Space v2.0”. Slowly it brings us to a reality of human-environment integration where the boundary between our biological bodies and created machines begins to vanish. When bidirectional BMI technologies are integrated into projects of biodigital interactive architecture, a new kind of architectural cyborg will emerge. This mode of human being will be very different from the ones imagined in most sci-fi movies. If the integration of human and synthetic environment increases to the degree where destruction of the biofeedback loop can have serious existential consequences for both communicating sides, it will be hard to distinguish between biological bodies and dependent or semi-dependent physical, virtual or augmented environments, as they would form a mutually continuous mental and physical experience of the individual. Nervous signals in such a world would communicate not only with human limbs, but with wider organizations of synthetic environments holding both physical and virtual dimensions. Mutual dependency between “Human v2.0” and “Space v2.0” creates a condition where the living organism is able to influence its own realms of performance between physical and simulated levels. At this point nothing can be taken for granted, as an upgrade in human cognition or an extended body can cause changes in perception, thinking, behavior or physical and mental needs. One can envisage that the future object of architecture, “Space v2.0”, will be able to merge with its subject, “Human v2.0”, and can thus become equal to the object of future medicine: a mind driven hybrid space, our new, extended, fluctuating and enhanced body. Architects, in this case, would be dealing with very different sets of issues than today, as the field would have changed and yet would still be moving forward.

By Way of Conclusion

Throughout this essay I discussed “Human v2.0” and “Space v2.0” by linking developments in

neuroscience, robotics, IT and biotechnology together with the philosophy of science to show how the human brain and synthetic reality can interact in the future. Mind driven environments are

91 Fox and Kemp, Interactive Architecture, 19.


supposed to be seen as indicators and catalysts of development in the ever-changing field of architecture. As stated in my main argument, BMI is going to join the human body with synthetic, robotically and computationally enhanced categories of architecture by redefining the borders of corporeal and extracorporeal environments.

Developments in BMI propose the idea of an architectural cyborg, which may cause the field to face serious transformations. Dependent and semi-dependent categories of architecture were shown to find themselves moving closer to the field of biotechnology, which will hold the methods and techniques needed to handle or avoid malfunctions in advanced biofeedback systems. This particular aspect of development directly, and thus technically, integrates the human body into architecture’s discourse.

The position of introducing a perspective of architecture that is deeply related to hard science was taken on purpose. As the essay suggestively proposed, there is a need to extend the theoretical, technologically-oriented background in order to promote interdisciplinary high-tech architectural research and practice. It was indeed necessary to speculate further into the future, where current disciplinary stereotypes become less relevant, to show that instead of searching for golden rules in architecture, the field’s dynamism, unpredictability and constant change should be recognized. Novak’s term “ongoing project” and ideas of the “Singularity” were used to describe such a variable condition of both human and architecture in order to stress a particular aspect of their nature while keeping its richness intact.

The critique addressed to “naïve”, “over-optimistic” and “pop-neuroscientific” neuroarchitectural approaches in the article ‘Designing the Lifeworld: Selfhood and Architecture from a Critical Neuroscience Perspective’, written by Lukas Ebensperger and colleagues, although it deserves a thorough and detailed response, cannot deny the importance of the factual effects brought by neuroscience and already existing BMI technologies for dependent and semi-dependent environments. In this context, advances in material science and biotechnology forming a basis of synthetic reality are also significant to the architectural domain. It is partly because the idea of mind driven environments goes beyond the range of neuroarchitecture that the speculative architectural cyborg, under the given critique, should in no way be seen as denying or “reducing” the wider scopes of human selfhood and architecture. Such themes were not developed in this essay.

Critics resist the idea that ‘cognitive neuroscience will fully be able to explain mental processes, emotions, moods, behavior and consciousness’92 and that such ‘knowledge can and will be fruitfully implemented in architecture.’93 All of this, for sure, is yet to be seen, but it can be recognized from the contents of this essay that a complete white-boxed understanding of the human brain and consciousness is not needed to create useful mind driven environments. I will leave the reader with a quote from Daniel L. Akins’s article ‘Mind over Matter in an Era of Convergent Technologies’:

Within the next 10 to 15 years, economically viable activities connected with nanoscience,

bioscience, information technology and cognitive science (NBIC) will have interlaced

themselves within ongoing successful technologies, resulting in new and improved

commercial endeavors. The impact of such eventualities would be enormous even if the

92 Lukas Ebensperger, Suparna Choudhury and Jan Slaby, ‘Designing the Lifeworld: Selfhood and Architecture from a Critical Neuroscience Perspective,’ in Cognitive architecture, ed. Deborah Hauptmann and Warren Neidich (Rotterdam: 010 Publishers, 2010), 245.
93 Ebensperger et al., Designing the Lifeworld, 245.


emerging activities were developing independently, but with a range of synergies, their

overlapping emergence and transitioning into the applied engineering arena promises to

result in industrial products and technologies that stretch our imaginations to the point that

they appear fanciful. Indeed, it is becoming more widely acknowledged that the potential of

the new convergent NBIC technologies for influencing and defining the future is unlimited

and likely unimaginable.94

94 Daniel L. Akins, ‘Mind Over Matter in an Era of Convergent Technologies,’ in Converging Technologies for Improving Human Performance, ed. Michael C. Roco and William Sims Bainbridge (Dordrecht: Kluwer Academic Publishers, 2003), 410.


Bibliography

Akins, Daniel L. ‘Mind Over Matter in an Era of Convergent Technologies.’ In Converging

Technologies for Improving Human Performance, edited by Michael C. Roco and William Sims

Bainbridge, 410-412. Dordrecht: Kluwer Academic Publishers, 2003.

Boyden, Edward S., Christian T. Wentz, Jacob G. Bernstein, Patrick Monahan, Alexander Guerra, Alex

Rodriguez. ‘A Wirelessly Powered and Controlled Device for Optical Neural Control of Freely-

Behaving Animals.’ Journal of Neural Engineering 8 (June 23, 2011),

http://stacks.iop.org/JNE/8/046021.

Burger, Rudy. ‘Enhancing Personal Area Sensory and Social Communication through Converging

Technologies.’ In Converging Technologies for Improving Human Performance, edited by Michael C.

Roco and William Sims Bainbridge, 164-166. Dordrecht: Kluwer Academic Publishers, 2003.

Coelho, Marcelo, Joanna Berzowska, Ivan Poupyrev, Leah Buechley, Sajid Sadi, Pattie Maes, Roel

Vertegaal, Neri Oxman. ‘Programming Reality: From Transitive Materials to Organic User Interfaces,’

Ext. Abstracts CHI 2009, Boston: ACM Press, 2009: 4759-4762.

Ebensperger, Lukas, Suparna Choudhury, Jan Slaby. ‘Designing the Lifeworld: Selfhood and

Architecture from a Critical Neuroscience Perspective.’ In Cognitive architecture, edited by Deborah

Hauptmann and Warren Neidich, 232-246. Rotterdam: 010 Publishers, 2010.

Fox, Michael. ‘Beyond Kinetic.’ MIT Kinetic Design Group, 2004, http://robotecture.com/Papers/Pdf/beyond.pdf.

Geddes, Linda. ‘Rat Cyborg Gets Digital Cerebellum.’ New Scientist, September 24, 2011.

Gibson, Daniel G., John I. Glass, Carole Lartigue, Vladimir N. Noskov, Ray-Yuan Chuang, Mikkel A.

Algire, Gwynedd A. Benders, Michael G. Montague, Li Ma, Monzia M. Moodie, Chuck Merryman,

Sanjay Vashee, Radha Krishnakumar, Nacyra Assad-Garcia, Cynthia Andrews Pfannkoch, Evgeniya A.

Denisova, Lei Young, Zhi-Qing Qi, Thomas H. Segall-Shapiro, Christopher H. Calvey, Prashanth P.

Parmar, Clyde A. Hutchison III, Hamilton O. Smith, J. Craig Venter. ‘Creation of a Bacterial Cell

Controlled by a Chemically Synthesized Genome.’ Science Vol 329 no. 5987 (2010),

http://www.sciencemag.org/content/329/5987/52.full.

Goldstein, Seth C. and Todd C. Mowry. ‘Claytronics: A Scalable Basis For Future Robots.’ Computer

Science Department Paper 770 (2004), http://repository.cmu.edu/compsci/770.

Hauptmann, Deborah. Introduction to Cognitive Architecture, by editors Deborah Hauptmann and Warren Neidich, 10-43. Rotterdam: 010 Publishers, 2010.

― Introduction to The Body in Architecture, by editor Deborah Hauptmann, 9-25. Rotterdam: 010

Publishers, 2006.

Hayles, N. Katherine. ‘Virtual Bodies and Flickering Signifiers.’ October 66 (1993): 69-91.

Pezaris, John S., and Emad N. Eskandar. ‘Getting Signals into the Brain: Visual Prosthetics through

Thalamic Microstimulation.’ Neurosurgical Focus 27, no. 1 (July, 2009),

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848996.

Page 30: Mind Driven Environments

28

Kemp, Miles and Michael Fox. Interactive Architecture. New York: Princeton Architectural Press,

2009.

Kurzweil, Ray. The Singularity Is Near - When Humans Transcend Biology, London: Penguin Books

Ltd., 2006.

Llinás, Rodolfo, and Valeri A. Makarov. ‘Brain-Machine Interface via a Neurovascular Approach.’ In

Converging Technologies for Improving Human Performance, edited by Michael C. Roco and William

Sims Bainbridge, 244-251. Dordrecht: Kluwer Academic Publishers, 2003.

Minsky, Marvin. The Emotion Machine. New York: Simon & Schuster, 2006.

Negroponte, Nicholas. Being Digital, New York: Alfred A. Knopf, Inc., 1995.

Nicolelis, A.L. Miguel and Mandayam A. Srinivasan. ‘Human-Machine Interaction: Potential Impact of

Nanotechnology in The Design of Neuroprosthetic Devices Aimed at Restoring or Augmenting

Human Performance.’ In Converging Technologies for Improving Human Performance, edited by

Michael C. Roco and William Sims Bainbridge, 251-255. Dordrecht: Kluwer Academic Publishers,

2003.

Nietzsche, Friedrich. Thus Spake Zarathustra. New York: Random House, 1928.

Nishimoto, Shinji, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant.

‘Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies.’ Current Biology 21

(September 22, 2011), http://www.cell.com/current-biology/abstract/S0960-9822(11)00937-7.

Novak, Marcos. ‘Speciation, Transvergence, Allogenesis: Notes on the Production of the Alien.’

Architectural Design 72: 63-71.

Oosterhuis, Kas. Towards a New Kind of Building. Rotterdam: NAi Publishers, 2011.

Pask, Gordon. ‘The Architectural Relevance of Cybernetics.’ Architectural Design 39 (1969): 494-6.

Pickering, Andrew. The Cybernetic Brain, Chicago: The University of Chicago Press, 2010.

Pierce, M. Brian. ‘Sensor System Engineering Insights on Improving Human Cognition and

Communication.’ In Converging Technologies for Improving Human Performance, edited by Michael

C. Roco and William Sims Bainbridge, 117-119. Dordrecht: Kluwer Academic Publishers, 2003.

Rahm, Philippe. ‘Edible Architecture.’ In Cognitive architecture, edited by Deborah Hauptmann and

Warren Neidich, 386-401. Rotterdam: 010 Publishers, 2010.

Sikiaridi, Elizabeth, and Frans Vogelaar. ‘Idensity.’ In Cognitive architecture, edited by Deborah

Hauptmann and Warren Neidich, 522-537. Rotterdam: 010 Publishers, 2010.

Spohrer, Jim. ‘NBICS (Nano-Bio-Info-Cogno-Socio) Convergence to Improve Human Performance:

Opportunities and Challenges.’ In Converging Technologies for Improving Human Performance,

edited by Michael C. Roco and William Sims Bainbridge, 101-117. Dordrecht: Kluwer Academic

Publishers, 2003.

Page 31: Mind Driven Environments

29

Vidal, J. Jacques. ‘Toward Direct Brain-Computer Communication.’ Annual Review of Biophysics and

Bioengineering Vol. 2 (1973): 157-180.

Wexler, Bruce. ‘Shaping the Environments that Shape Our Brains.’ In Cognitive architecture, edited

by Deborah Hauptmann and Warren Neidich, 142-167. Rotterdam: 010 Publishers, 2010.

Wiener, Norbert, Arturo Rosenblueth, and Julian Bigelow. ‘Behavior, Purpose and Teleology.’

Philosophy of Science 10 (1943): 18-24.

Wynn, Thomas. ‘Archeology and Cognitive Evolution.’ Behavioral and Brain Sciences 25 (2002): 389-

438.