
ORIGINAL PAPER

The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction

Maartje Schermer

Received: 5 November 2009 / Accepted: 5 November 2009 / Published online: 1 December 2009
© The Author(s) 2009. This article is published with open access at Springerlink.com

Abstract Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprostheses. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our ‘symbolic order’—on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding the distribution and attribution of responsibility.

Keywords Brain-machine interaction . Brain-computer interfaces . Converging technologies . Cyborg . Deep brain stimulation . Moral responsibility . Neuroethics

Introduction

Within two or three decades our brains will have been entirely unravelled and made technically accessible: nanobots will be able to immerse us totally in virtual reality and connect our brains directly to the Internet. Soon after that we will expand our intellect in a spectacular manner by merging our biological brains with non-biological intelligence. At least, that is the prophecy of Ray Kurzweil, futurist, transhumanist and successful inventor of, amongst other things, the

Nanoethics (2009) 3:217–230
DOI 10.1007/s11569-009-0076-9

M. Schermer (*)
Medical Ethics and Philosophy of Medicine, Erasmus MC,
Room AE 340, PO Box 2040, 3000 Rotterdam, The Netherlands
e-mail: [email protected]


electronic keyboard and the voice-recognition system.1

He is not the only one who foresees great possibilities and, what’s more, has the borders between biological and non-biological, real and virtual, and human and machine disappear with the greatest of ease. Some of these possibilities are actually already here. On 22 June 2004, a bundle of minuscule electrodes was implanted into the brain of the 25-year-old Matthew Nagel (who was completely paralysed due to a high spinal cord lesion) to enable him to operate a computer by means of his thoughts. This successful experiment seems to be an important step on the way to the blending of brains and computers, or humans and machines, that Kurzweil and others foresee. With regard to the actual developments in neuroscience, and the convergence of neurotechnology with information, communication and nanotechnology in particular, it is still unclear how realistic the promises are. The same applies to the moral and social implications of these developments. This article offers a preliminary exploration of this area. The hypothesis is that scientific and technological developments in neuroscience and brain-machine interfacing challenge—and may contribute to shifts in—some of the culturally determined categories and classification schemes (our ‘symbolic order’), such as body, mind, human, machine, free will and responsibility (see the Introduction to this issue: Converging Technologies, Shifting Boundaries).

Firstly, I will examine the expectations regarding the development of brain-machine interfaces and the forms of brain-machine interaction that already exist. Subsequently, I will briefly point out the moral issues raised by these new technologies, and argue that the debate on these issues will be influenced by the shifts that may take place in our symbolic order—that is, the popular categories that we use in our everyday dealings to make sense of our world—as a result of these developments. It is important to consider the consequences these technologies might have for our understanding of central organizing categories, moral concepts and important values. Section four then focuses on the categories of human and machine: are we all going to become cyborgs? Will the distinction between human and machine blur if more and more artificial components are built into the body and brain? I will argue that the answer depends partly on the context in which this question is asked, and that the concept of the person may be more suitable here than that of the human. Section five is about the distinction between body and mind. I argue that as a result of our growing neuroscientific knowledge and the mounting possibilities for technological manipulation, the mind is increasingly seen as a component of the body, and therefore also more and more in mechanical terms. This puts the concept of moral responsibility under pressure. I will illustrate the consequences of these shifts in concepts and in category-boundaries with some examples of the moral questions confronting us already.

Developments in Brain-Machine Interaction

Various publications and reports on converging technologies and brain-machine interaction speculate heatedly on future possibilities for the direct linkage of the human brain with machines, that is, with some form of computer or ICT technology. If the neurosciences provide further insight into the precise working of the brain, ICT becomes increasingly powerful, the electronics become more refined and the possibilities for uniting silicon with cells more advanced, then great things must lie ahead of us—or so it seems. The popular media, but also serious governmental reports and even the scientific literature, present scenarios that are suspiciously like science fiction as realistic prospects: the expansion of memory or intelligence by means of an implanted chip; the direct uploading of encyclopaedias, databases or dictionaries into the brain; a wireless connection from the brain to the Internet; thought reading or lie detection via the analysis of brain activity; direct brain-to-brain communication. A fine example comes from the report on converging technologies issued by the American National Science Foundation:

‘Fast, broadband interfaces directly between the human brain and machines will transform work in factories, control automobiles, ensure military superiority, and enable new sports, art forms and modes of interaction between people. [...] New communication paradigms (brain-to-brain, brain-machine-brain, group) could be realized in 10–20 years.’ [39]

It is not easy to tell which prospects are realistic, which are to a certain extent plausible and which are total

1 See his website www.kurzweilAI.net for these and other future forecasts.


nonsense. Some scientists make incredible claims whilst others contradict them. These claims often have utopian characteristics and seem to go beyond the border between science and science fiction. Incidentally, they are frequently presented in such a way as to create goodwill and attract financial resources. After all, impressive and perhaps, from the scientific point of view, exaggerated future scenarios have a political and ideological function too—they help to secure research funds2 and to create a certain image of these developments, either utopian or dystopian, thus steering public opinion.

Uncertainty about the facts—which expectations are realistic, which exaggerated and which altogether impossible—is great, even amongst serious scientists [12]. Whereas experts in cyberkinetic neurotechnology in the reputable medical journal The Lancet are seriously of the opinion that almost naturally functioning, brain-driven prostheses will be possible, the editorial department of the Dutch doctors’ journal Medisch Contact wonders sceptically how many light-years away they are [1, 23]. It is precisely the convergence of knowledge and technology from very different scientific areas that makes predictions so difficult. Although claims regarding future developments sometimes seem incredible, actual functioning forms of brain-machine interaction do in fact exist, and various applications are at an advanced stage of development. Next, I will look at what is currently possible, and at what is actually being researched and developed.

Existing Brain-Machine Interactions

The first category of existing brain-machine interaction is formed by the sensory prostheses. The earliest form of brain-machine interaction is the cochlear implant, also known as the bionic ear, which has been around for about 30 years. This technology enables deaf people to hear again by converting sound into electrical impulses that are transmitted to an electrode implanted in the inner ear, which stimulates the auditory nerve directly. While there have been fierce discussions about the desirability of the cochlear implant, it is nowadays largely accepted and included in the normal arsenal of medical technology (e.g. [5]). In this same category, various research lines are currently ongoing to develop an artificial retina or ‘bionic eye’ to enable blind people to see again.

A second form of brain-machine interaction is Deep Brain Stimulation (DBS). With this technique, small electrodes are surgically inserted directly into the brain. These are connected to a subcutaneously implanted neurostimulator, which sends out tiny electrical pulses to stimulate a specific brain area. This technology is used for the treatment of neurological diseases such as Parkinson’s disease and Gilles de la Tourette’s syndrome. Many new indications are being studied experimentally, ranging from severe obsessive-compulsive disorders, addictions, and obesity to Alzheimer’s disease and depression. The use of this technique raises a number of ethical issues, such as informed consent from vulnerable research subjects and the risks and side-effects, including effects on the patient’s mood and behaviour [38].

More spectacular, and at an even earlier stage of development, is the third form of brain-machine interaction, in which the brain controls a computer directly. This technology, called neuroprosthetics, enables people to use thought to control objects in the outside world, such as the cursor of a computer or a robotic arm. It is being developed so that people with a high spinal cord lesion, like Matt Nagel mentioned in the introduction, can act and communicate again. An electrode in the brain receives the electrical impulses that the neurons in the motor cortex give off when the patient wants to make a specific movement. It then sends these impulses to a computer, where they are translated into instructions to move the cursor or a robot that is connected to the computer. This technology offers the prospect that paraplegics or patients with locked-in syndrome could move their own wheelchair with the aid of thought-control, communicate with others through written text or voice synthesis, pick up things with the aid of artificial limbs, et cetera.

In future, the direct cortical control described above could also be used in the further development of artificial limbs (robotic arms or legs) for people who have had limbs amputated. It is already possible to receive the signals from other muscles and control a robotic arm with them (a myoelectrical prosthesis);

2 A lot of research in the field of brain-machine interaction and other converging technologies is carried out by DARPA, the research agency of the American Department of Defense. In 2003, for example, DARPA subsidized research into brain-machine interfaces to the tune of 24 million dollars [9, 35].


whether the patient’s own remaining nerves can be connected directly to a prosthesis to enable it to move as though it were the patient’s own arm is now being examined. Wireless control by the cortex would be a great step in prosthetics, further enabling patient rehabilitation. Next to motor control of the prosthesis, tactile sensors are being developed and placed in artificial hands to pass on feeling to the patient’s own remaining nerves, thus creating a sense of touch. It is claimed that this meeting of the (micro)technological and (neuro)biological sciences will in the future lead to a significant reduction in invalidity due to amputation, or even to its total elimination [23, 24].

In the fourth form of brain-machine interaction, use is made of neurofeedback. By detecting brain activity with the aid of electroencephalography (EEG) equipment, it can be made visible to the person involved. This principle is used, for instance, in a new method for preventing epileptic attacks with the aid of Vagal Nerve Stimulation (VNS). Changes in brainwaves can be detected and used to predict an oncoming epileptic attack. This ‘warning system’ can then generate an automatic reaction from the VNS system, which stimulates the vagal nerve to prevent the attack. In time, the detection electrodes could be implanted under the skull, and perhaps direct electrical stimulation of the cerebral cortex could be used instead of the vagal nerve [17]. Another type of feedback system is being developed by the American army and concerns a helmet with binoculars that can draw a soldier’s attention to a danger that his brain has subconsciously detected, enabling him to react faster and more adequately. The idea is that EEG can spot ‘neural signatures’ for target detection before the conscious mind becomes aware of a potential threat or target [50].

Finally, yet another technology that is currently making rapid advances is the so-called exoskeleton. Although this is not a form of brain-machine interaction in itself, it is a technology that will perhaps be eminently suitable for combination with said interaction in the future. An exoskeleton is an external structure that is worn around the body, or parts of it, to provide strength and power that the body does not intrinsically possess. It is chiefly being developed for applications in the army and in the health care sector.3 Theoretically, the movements of exoskeletons could also be controlled directly by thought if the technology of the aforementioned ‘neuroprostheses’ were to be developed further. If, in the future, the exoskeleton could also give feedback on feelings (touch, temperature and suchlike), the possibilities could be expanded still further.

Ethical Issues and Shifts in Our Symbolic Order

The developments described above raise various ethical questions, for instance about the safety, possible risks and side effects of new technologies. There are also speculations as to the moral problems or dangers that may arise in connection with further advances in this type of technology. The European Group on Ethics in Science and New Technologies (EGE), an influential European advisory body, warns of the risk that ICT implants will be used to control and locate people, or that they could provide third parties with access to information about the body and mind of those involved, without their permission [9]. There are also concerns about the position of vulnerable research subjects, patient selection and informed consent, the effects on personal identity, resource allocation, and the use of such technologies for human enhancement. Over the past few years, the neuroethical discussion on such topics has been booming (e.g. [2, 7, 10, 12, 14, 46, 20]).

It has been argued that while these ethical issues are real, they do not present anything really new [8]. The point of departure of this article, however, is that it is not so easy to deal adequately with the moral questions raised by these new technologies, because they also challenge some of the central concepts and categories that we use in understanding and answering moral questions. Hansson, for example, states that brain implants may be “reason to reconsider our criteria for personal identity and personality changes” [20; p. 523]. Moreover, these new technologies may also change some elements of our common morality itself, just like the birth control pill once helped to change sexual morality [28]. In brief: new technologies not only influence the ways we can act, but also the symbolic order: our organizing categories and the associated views on norms and values.

The concepts and categories that we, as ordinary people, use to classify our world to make it manageable and comprehensible are subject to change. These categories

3 In the former case, these applications might enable soldiers to carry heavy rucksacks more easily; in the latter they could, for example, help a nurse to lift a heavy patient on his or her own.


also play an important part in moral judgement, since they often have a normative as well as a descriptive dimension. Categories such as human and machine, body and mind, sick and healthy, nature and culture, real and unreal are difficult to define precisely, and the boundaries of such notions are always vague and movable. Time and again it takes ‘symbolic labour’ to reinterpret these categories, to re-conceptualise them and to make them manageable in new situations. In part, this symbolic labour is done by philosophers, who explicitly work with concepts and definitions, refining and adjusting them; in part it is also a diffuse socio-cultural process of adaptation and emerging changes in the symbolic order.4 Boundaries are repeatedly negotiated or won, and new concepts arise where old ones no longer fit the bill. An example is the concept of ‘brain death’, which arose a few decades ago as a consequence of the concurrent developments in electroencephalography, artificial respiration and organ transplantation. Here the complex interplay of technology, popular categories of life and death, and scientific and philosophical understandings of these concepts is clearly demonstrated [27; p. 16].

Morality, defined as our more or less shared system of norms, values and moral judgements, is also subject to change. It is not a static ‘tool’ that we can apply to all kinds of new, technologically induced moral problems. Technological and social developments influence and change our morality, although this does not apply equally to all its elements. Important values such as justice, well-being or personal autonomy are reasonably stable, but they are also such abstract notions that they are open to various and changing interpretations. The norms we observe in order to protect and promote our values depend on these interpretations and may require adjustment under new circumstances. Some norms are relatively fixed, others more contingent and changing [6]. The detailed and concrete moral rules of conduct derived from the general norms are the most contingent and changeable. The introduction of the notion of brain death, for example, led to adaptations in ethical norms and regulations. Likewise, the new developments in genomics research are now challenging and changing existing rules of informed consent as well as notions of privacy and rules for privacy protection [33].

In the field of brain-machine interaction we can therefore also expect that certain fixed categories with which we classify our world, and which structure our thinking, will evolve alongside the new technologies. This will have consequences for the ethical questions these technologies raise and for the way in which we handle both new and familiar moral issues. A first shift that can be expected concerns the distinction between human and machine. This distinction might fade as more parts of the body can be replaced with mechanical or artificial parts that become more and more ‘real’. Secondly, we might expect a blurring of the boundaries between our familiar concepts of body and mind as neuroscience and neurotechnologies increasingly present the brain as an ordinary part of our body and the mind as simply one of its ‘products’. The following sections analyse these possible shifts in the symbolic order, and the associated moral questions, in more detail.

Symbolic Order in Motion: The Human Machine

The blurring of the boundary between human and machine brought about by brain-machine interaction forms the first challenge to the familiar categories with which we think. The more artificial parts are inserted in and added to the human body, the more uncertainty there is about where the human stops and the machine begins. Instead of human or machine, we increasingly seem to be looking at cyborgs: human and machine in a single being.

For a long time it was easy to distinguish between people and the tools, machines or devices that they used. Gradually, however, our lives have become increasingly entangled with machines—or, in the broader sense, with technology—and we have become dependent on them for practically every facet of our daily lives. Increasingly, parts of the human body itself are replaced or supplemented by technology.5 Of

4 How these processes interact with one another, and how socio-cultural changes influence philosophical thinking and vice versa, is an interesting and complicated question that I cannot start to answer here.

5 Not that this is an entirely new phenomenon, seeing that all sorts of bodily prostheses have existed for centuries; the first artificial leg dates back to 300 BC. Other prostheses that we more or less attach to our bodies are, for instance, spectacles, hairpieces and hearing aids. But the insertion of external parts into the human body is more recent.


course, the notion that the human body works as a machine has been a leitmotiv in western culture since Descartes; this vision has enabled modern medicine, while the successes achieved substantiate the underlying beliefs about the body at the same time. The emergence of transplantation medicine was a clear step in the development of popular views of the body as a machine. Since the first kidney transplantation in 1954 and the first heart transplantation in 1967, lungs, liver, pancreas and even hands and faces have become transplantable, thus reinforcing the image of the human body as a collection of replaceable parts. Some have criticised transplantation medicine because of the ensuing mechanization and commodification of the human body.

Besides organs, more and more artefacts are now being implanted in the human body: artificial heart valves, pacemakers, knees, arterial stents and subcutaneous medicine pumps. Prostheses that are attached to the body, such as artificial limbs, are becoming increasingly advanced, and are no longer easy to detach—unlike the old-fashioned wooden leg. Experiences of patients who wear prostheses seem to indicate that people rapidly adapt to using them and fuse with them to the extent that they perceive them as natural parts of themselves. Artificial parts are rapidly included in the body scheme and come to be felt as ‘one’s own’.6

In a certain sense, then, we are familiar with the perception of the body as a sort of machine, and with the fact that fusing the human body with artificial parts is possible. Do technologies like neuroprostheses, artificial limbs and exoskeletons break through the boundary between human and machine in a fundamentally new, different fashion? Should the conceptual distinction between human and machine perhaps be revised? Many publications, both popular and more academic, suggest that the answer has to be yes. A notion that is often used in this connection is that of the cyborg: the human machine.

Cyborgs

The term ‘cyborg’—derived from ‘cybernetic organism’—was coined in 1960 by Manfred Clynes and Nathan Kline, American researchers who wrote about the ways in which the vulnerable human body could be technologically modified to meet the requirements of space travel and exploration. The figure of the cyborg appealed to the imagination and was introduced into popular culture by science fiction writers, filmmakers, cartoonists and game designers; famous cyborgs include The Six Million Dollar Man, Darth Vader and RoboCop. In the popular image, the cyborg thus stands for the merging of the human and the machine.

In recent literature, both popular and scientific, the cyborg has come to stand for all sorts of man-machine combinations and all manner of technological and biotechnological enhancements or modifications of the human body. With the publication of books like I, Cyborg or Cyborg Citizen, the concept now covers a whole area of biopolitical questions. Everything that is controversial around biotechnological interventions, that raises moral questions and controversy, that evokes simultaneous horror and admiration, is now clustered under the designation ‘cyborg’ [18, 25, 48].

The concept of the cyborg indicates that something is the matter, that boundaries are transgressed and familiar categorizations challenged, creating unease and uncertainty. For Donna Haraway, well known for her Cyborg Manifesto [21], the concept of the cyborg stands for all breaches of boundaries and disruptions of order, not merely for the specific breaking through of the distinction between human and machine which concerns me here. The term cyborg can thus be used to describe our inability to categorize some new forms of human life or human bodies. The use of the term compels us to delay categorization—in familiar terms of human or machine—at least for the moment, and so creates a space for further exploration.

Monsters

Following Mary Douglas, Martijntje Smits has called these kinds of entities, which defy categorization and challenge the familiar symbolic order, monsters [44]. Smits discusses four strategies for treating these

6 More attention has recently been paid to the importance of the body for the development and working of our consciousness. In the Embodied Mind model, body and mind are seen as far more interwoven than in the past (e.g. [15]). This can have even further implications for brain-machine interaction; if, for example, neuroprostheses change our physical, bodily dealings with the world, this may also have consequences for the development of the brain and for our consciousness. Neuroscientists have even claimed that: ‘It may sound like science fiction but if human brain regions involved in bodily self-consciousness were to be monitored and manipulated online via a machine, then not only will the boundary between user and robot become unclear, but human identity may change, as such bodily signals are crucial for the self and the “I” of conscious experience’ [4].


monsters, four ways of cultural adaptation to these new entities and the disruption they bring about.

The first strategy, embracing the monster, is clearly reflected in the pronouncements of adherents of the transhumanist movement. They welcome all manner of biotechnological enhancements of humans, believe in the exponential development of the possibilities to this end and place the cyborg, almost literally, on a pedestal. The second strategy is the opposite of the first and entails exorcizing the monster. Neo-Luddites or bioconservatives see biotechnology in general, and the biotechnological enhancement of people in particular, as a threat to the existing natural order. They frequently refer to human nature, traditional categories and values and norms when attacking and trying to curb the new possibilities. To them, the cyborg is a real monster that has to be stopped and exorcized.

The third strategy is that of adaptation of the monster. Endeavours are made to classify the new phenomenon in terms of existing categories after all. Adaptation seems to be what is happening with regard to existing brain-machine interaction. The conceptual framework here is largely formed by the familiar medical context of prostheses and aids. The designation of the electrodes and chips implanted in the brain as neuroprostheses places them in the ethical area of therapy, medical treatment, the healing of the sick and the support of the handicapped. As long as something that was naturally present but is now lost due to sickness or an accident is being replaced, brain-machine interaction can be understood as therapy and therefore accepted within the ethical limits normally assigned to medical treatments. However, for non-medical applications the problem of classification remains. Prostheses to replace functions that have been lost may be accepted relatively easily, but how are we going to regard enhancements or qualitative changes in functions, such as the addition of infrared vision to the human visual faculty? Are we only going to allow the creation of cyborgs for medical purposes, or also for military goals, or for relaxation and entertainment?

Finally, the fourth strategy is assimilation of the monster, whereby existing categories and concepts are adjusted or new ones introduced.7 In the following I will suggest that the concept of the person—in the sense in which it is used in ethics, rather than in common language—may be useful for this purpose.

Morality of Persons

In the empirical sense, cyborgs, or blends of human bodies with mechanical parts, are gradually becoming less exceptional. It therefore seems exaggerated to view people with prostheses or implants as something very exceptional, or to designate them as a separate class. This raises the question of why we should really worry about the blurring of the distinction between human and machine at all. It is not merely because the mixing of flesh with steel or silicone intuitively bothers us, or because the confusion about categories scares us. More fundamentally, I believe, it is because the distinction between the human and the machine also points to a significant moral distinction. The difference between the two concepts is important because it indicates a moral dividing line between two different normative categories. For most of our practices and everyday dealings, the normative distinction between human and machine matters. You simply treat people differently to machines—with more respect and care—and you expect something different from people than you expect from machines—responsibility and understanding, for example. Human beings can deserve praise and blame for their actions; machines cannot. The important question is therefore whether brain-machine interfaces will somehow affect the moral status of the people using them [13]. Do we still regard a paralysed patient with a brain implant and an exoskeleton as a human being, or do we see him as a machine? Will we consider someone with two bionic legs to be a human being or a machine?

I believe that in part this also depends on the context and the reasons for wanting to make the distinction. In the context of athletic competition, the bionic runner may be disqualified because of his supra-human capacities. In this context, he is ‘too much of a machine’ to allow fair competition with fully biological human beings. However, in the context of everyday interaction with others, a person with bionic legs is just as morally responsible for his actions as any other person. In this sense he clearly is human, and not a machine. This is because, with regard to moral status, the human being as an acting, responsible moral agent is identified more with the mind than with the body. The mind is what

7 The distinction between adaptation and assimilation is not very clear—it depends on what one would wish to call an ‘adaptation’ or ‘adjustment’ of a concept.

Nanoethics (2009) 3:217–230 223


matters in the moral sense. Whether this mind controls a wheelchair with the aid of hands, or with electrical brain-generated pulses, is irrelevant to the question of who controls the wheelchair: the answer in both cases is the person concerned. Whether someone is paralysed or not does not bear on whether he or she is a person; it will of course affect the kind of person he or she is, but whether he or she is a person depends on his or her mental capacities. Ethical theories consider the possession of some minimal set of cognitive, conative and affective capacities as a condition for personhood. This means that, ethically speaking, under certain conditions, intelligent primates or Martians could be considered persons, while human babies or severely demented old people would not. Whatever the exact criteria one applies, there is no reason to doubt that someone who is paralysed, someone who controls a robot remotely or someone who has a DBS electrode is a person. Certain moral entitlements, obligations and responsibilities are connected to that status of ‘being a person’. This notion therefore helps to resolve the confusion surrounding the cyborg. Rather than classifying him as either man or machine, we should be looking at personhood. Personhood is what really matters morally, and it is not necessarily affected by brain-machine interfaces. As long as they do not affect personhood, brain-machine interfaces are no more special than other types of prostheses, implants or instrumental aids that we have already grown used to.

New Views on Physical Integrity?

Nevertheless, brain-machine interfaces may in some cases raise new moral issues. A concrete example that can illustrate how shifting categories can affect concepts and ethics is that of physical integrity. How should this important ethical and legal principle be interpreted when applied to cyborgs? The principle itself is not under discussion; we want to continue to guard and protect physical integrity. The question, however, is how to define the concept ‘body’ now that biological human bodies are becoming increasingly fused with technology, and where to draw the line between those plastic, metal or silicone parts that do belong to that body and the parts that do not.

In the spring of 2007 the Dutch media paid attention to an asylum seeker who had lost an arm as a result of torture in his native country and had received a new myoelectrical prosthesis in the Netherlands. He had just got used to the arm and was trained in using it naturally when it became apparent that there were problems with the insurance and he would have to return the prosthesis. Evidently, according to the regulations, a prosthesis does not belong to the body of the person in question and it does not enjoy the protection of physical integrity. However, the loss of an arm causes a great deal of damage to the person, whether the arm is natural or a well-functioning prosthesis. If prostheses become more intimately connected to and integrated with the body (also through tactile sensors), such that they become incorporated in the body scheme and are deemed a natural part of the body by the person concerned, it seems there must come a point at which such a prosthesis should be seen as belonging to the (body of the) person concerned from the moral and legal point of view. It has even been questioned whether the interception of signals that are transmitted by a wireless link from the brain to a computer or artificial limb should perhaps also fall under the protection of physical integrity [29].

Symbolic Order in Motion: Body-Mind

In the previous section I assumed the distinction between body and mind to be clear-cut. The common view is that the mind controls the body (whether this body is natural or artificial) and that the mind is the seat of our personhood, and of consciousness, freedom and responsibility. In this section I examine how this view might change under the influence of new brain-machine interactions and neuroscientific developments in general, and what implications this may have for ethics. I will concentrate on DBS, since this brain-machine technique has at present the clearest impact on the human mind and behaviour.8 Of course, our categories and common views will not change because of one single new technique; rather, it is the whole constellation of neuroscientific research and (emerging) applications that may change the ways in which we understand our minds and important related concepts.

8 By contrast, in the case of the neuroprosthesis discussed in the previous section, it is mainly the mind that influences the body, through the interface.



The Mind as Machine

Neuroprostheses and other brain-machine interactions call into question the demarcation between body and mind, at least in the popular perception. Technologies such as neuroprosthetics and DBS make very clear that physical intervention in the brain has a direct effect on the mind of the person in question. By switching the DBS electrode on or off, the behaviour, feelings and thoughts of the patient can be changed instantly. Thoughts of a paralysed person can be translated directly into electrical pulses and physical processes. As a result of neuroscience and its applications, the human mind comes to be seen more and more as a collection of neurones, a system of synapses, neurotransmitters and electrical conductors. A very complex system, perhaps, but a physical system nonetheless, one that can be connected directly to other systems.

For some, this causes moral concern, since it may lead us to see ourselves merely in mechanical terms:

‘The obvious temptation will be to see advances in neuroelectronics as final evidence that man is just a complex machine after all, that the brain is just a computer, that our thoughts and identity are just software. But in reality, our new powers should lead us to a different conclusion: even though we can make the brain compatible with machines to serve specific functions, the thinking being is a being of very different sorts.’ [26; p. 40–41]

I believe this change in our popular view of the mind that Keiper fears is actually already taking place. Neuroscientific knowledge and understanding penetrate increasingly into our everyday lives, and it is becoming more normal to understand our behaviour and ourselves in neurobiological terms. This shift is noticeable, for example, in the rise of biological psychiatry. Many psychiatric syndromes that were still understood in psychoanalytical or psychodynamic terms until well into the second half of the twentieth century are now deemed biological brain diseases. The shift is also noticeable in the discussion on the biological determinants of criminal behaviour (and opportunities to change such behaviour by intervening in the brain), and in the increased attention to the biological and evolutionary roots of morality. In popular magazines and books, too, our behaviour and ourselves are increasingly presented as the direct result of our brains’ anatomy and physiology.

Scientific and technological developments have contributed to this shift. The development of EEG in the first half of the last century revealed the electrical activity of the brain for the first time, thus creating the vision of the brain as the wiring of the mind. The development of psychiatric drugs in the second half of the last century also helped naturalize our vision of the mind, picturing the brain as a neurochemical ‘soup’, a collection of synapses, neurotransmitters and receptors [22]. More recently, the PET scan and fMRI have made it possible to look, as it were, inside the active brain. The fact that fMRI produces such wonderful pictures of brains ‘in action’ contributes to our mechanical view of the relation between brain and behaviour. Certain areas of the brain light up when we make plans, others when an emotional memory is evoked; damage in one area explains why the psychopath has no empathy, while a lesion in another correlates with poor impulse control or hot-headedness. While neurophilosophers have warned against the oversimplified idea that these images are just like photographs that show us directly how the brain works, these beautiful, colourful images appeal to scientists and laymen alike [41].

According to Nikolas Rose, we have come to understand ourselves increasingly in terms of a biomedical body, and our personalities and behaviour increasingly in terms of the brain. He says that a new way of thinking has taken shape: ‘In this way of thinking, all explanations of mental pathology must ‘pass through’ the brain and its neurochemistry—neurones, synapses, membranes, receptors, ion channels, neurotransmitters, enzymes, etc.’ [40; p. 57]

We are experiencing what he calls a ‘neurochemical reshaping of personhood’ [40; p. 59]. Likewise, Mooij has argued that the naturalistic determinism of the neurosciences is also catching on in philosophy and has now spread broadly in the current culture, ‘that is to a large extent steeped in this biological thinking, in which brain and person more or less correspond’ [34; p. 77].

The mind is being seen more and more as a physical, bodily object (the ‘mind = brain’ idea), and given that the human body, as described above, has long been understood in mechanical terms, the equation of mind and brain means that the mind can also be understood in mechanical terms. As the basic distinction between mind and machine seems to drop away, the distinction between human and machine once more raises its head, but now on



a more fundamental and extremely relevant level, morally speaking. If in fact our mind, the seat of our humanity, is also a machine, how should we understand personhood in the morally relevant sense? How can we hold on to notions such as free will and moral responsibility?

Neuroscientific Revisionism

A recent notion amongst many neuroscientists and some neurophilosophers is that our experience of having a self, a free will or agency is based on a misconception. The self as a regulating, controlling authority does not exist, but is only an illusion produced by the brain.9 From this notion it seems to follow that there is no such thing as free will and that there can therefore be no real moral responsibility. Within philosophy, revisionists, who hold that our retributive intuitions and practices are unwarranted under determinism, claim that this view obliges us to revise our responsibility-attributing practices, including our legal system. Revisionism implies the need to replace some of our ordinary concepts with new ones. It has, for example, been suggested to substitute blame with ‘dispraise’ [43], or to eliminate concepts connected to desert, like blame, moral praise, guilt and remorse, altogether [47]. On a revisionist account, praise, blame and punishment are just devices that modify conduct, and that can be more or less effective, but not more or less deserved.

Greene and Cohen assume that, because of the visible advances in the neurosciences (and I take brain-machine interfaces to be part of those), the naturalistic deterministic view on human behaviour will by degrees be accepted by more and more people, and revisionism will catch on [19]. To their way of thinking, our moral intuitions and our folk psychology will slowly adapt to the overwhelming evidence the neurosciences present us with. The technologies enabled on the basis of neuroscientific understanding, such as DBS, neurofeedback, psychiatric drugs, and

perhaps also intelligent systems or intelligent robots, can contribute to this. Little by little, according to Greene and Cohen, we will hold people less responsible and liable for their actions, and will see them increasingly as determined beings who can be regulated, more or less effectively, by sanctions or rewards. They allege that questions concerning free will and responsibility will lose their power in an age in which the mechanistic nature of the human decision process is totally understood. This will also have consequences for the legal system: ‘The law will continue to punish misdeeds, as it must for practical reasons, but the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstances will, we submit, seem pointless.’ [19; p. 1781]

Greene and Cohen, like other revisionists, advocate a shift in the nature of our criminal justice system, from a retributive to a consequentialist system. This means a shift from a system based on liability and retribution to one based on the effects and effectiveness of punishment. A consequentialist system of this kind is, in their opinion, in keeping with the true scientific vision of hard determinism and the non-existence of free will. Greene and Cohen recognize that many people will intuitively continue to think in terms of free will and responsibility. What is more, they think that this intuitive reflex has arisen through evolution and is deeply rooted in our brains. We can hardly help thinking in these sorts of terms, despite the fact that we know better, scientifically speaking. Nonetheless, Greene and Cohen insist that we should base important, complex matters such as the criminal justice system10 on the scientific truth about ourselves and not allow ourselves to be controlled by persistent, but incorrect, intuitions.

9 ‘Obviously we have thoughts. Ad nauseam, one might say. What is deceptive is the idea that these thoughts control our behaviour. In my opinion, that idea is no more than a side effect of our social behaviour. […] The idea that we control our deeds with our thoughts, that is an illusion’, says cognitive neuroscientist Victor Lamme, echoing his colleague Wegner [30; p. 22, 49].

10 Likewise, moral views on responsibility may change. An instrumental, neo-behaviouristic vision of morality and the moral practice of holding one another responsible might arise. Holding one another responsible may still prove a very effective way of regulating behaviour, even if it is not based on the actual existence of responsibility and free will. As long as people change their behaviour under the influence of moral praise and blame, there is no reason to throw the concept of responsibility overboard. From this point of view, there would no longer be any relevant difference between a human being and any other system that is sensitive to praise and blame, such as an intelligent robot or computer system. If such a system were sensitive to moral judgements and responded to them with the desired behaviour, on this view it would qualify as a moral actor just as much as human beings do.



Moral Responsibility Reconsidered

A whole body of literature has accumulated refuting this thesis and arguing that new neuroscientific evidence need not influence our moral and legal notions of responsibility (e.g. [37]). This literature reflects the dominant position in the determinism debate nowadays, that of compatibilism. According to compatibilism, determinism is reconcilable with the existence of a free will, and with responsibility. As long as we can act on the basis of reasons and as long as we are not coerced, we are sufficiently free to carry responsibility; the naturalistic neuroscientific explanatory model of behaviour is therefore not necessarily a threat to our free will and responsibility, according to the compatibilist. The question, however, is whether the compatibilist’s philosophical argumentation also convinces the average layman or neuroscientist, certainly in the light of new experimental findings and technical possibilities. How popular views on this topic will develop remains to be seen.

At the moment, even adherents of the revisionist view seem convinced that we will never be able to stop thinking, or even think less, in terms of intentions, reasons, free will and responsibility. It seems almost inconceivable not to hold one another responsible for deeds and behaviour [45].

Nevertheless, neuroscientific research does challenge our view of ourselves as rational, autonomous and moral beings [31]. Research shows us, for example, that many if not most of our actions are automatic and unconsciously initiated, and that only some of our actions are deliberate and consciously based on reasons. Our rationality, moreover, is limited by various biases, like confirmation bias, hyperbolic discounting and false memories. New findings in neuroscience, such as the fact that immaturity of the frontal lobes impedes the capacities for reasoning, decision making and impulse control in adolescents, or that the exercise of self-restraint eventually leads to exhaustion of the capacity for self-control (ego-depletion), do force us to rethink the ways in which, or the degrees to which, we are actually morally responsible in specific situations and circumstances (see, for example, the series of articles on addiction and responsibility in AJOB Neuroscience 2007).

A more naturalized view of the human mind could thus still have important consequences, even if we do not jettison the notion of moral responsibility

altogether. More grounds for ‘absence of criminal responsibility’ could, for example, be acknowledged in criminal law, whereby new technologies could play a role. Functional brain scans might provide more clarity on the degree to which an individual has control over his or her own behaviour.

Prosthetic Responsibility?

Due to their ability to directly influence complex human behaviour by intervening in the brain, brain-machine interfaces may raise interesting issues of responsibility even when we reject revisionism, as can be illustrated by the following case of a 62-year-old Parkinson’s patient treated with DBS.11

After implantation of the electrodes, this patient became euphoric and demonstrated unrestrained behaviour: he bought several houses that he could not really pay for; he bought various cars and got involved in traffic accidents; he started a relationship with a married woman and showed unacceptable and deviant sexual behaviour towards nurses; he suffered from megalomania and, furthermore, did not understand his illness at all. He was totally unaware of any problem. Attempts to improve his condition by changing the settings of the DBS failed: the manic characteristics disappeared, but the patient’s severe Parkinson’s symptoms reappeared. The patient was either in a reasonable motor state but in a manic condition, lacking any self-reflection and understanding of his illness, or bedridden in a non-deviant mental state. The mania could not be treated by medication [32].

Who was responsible for the uninhibited behaviour of the patient in this case? Was it still the patient himself, was it the stimulator, or was it the neurosurgeon who implanted and adjusted the device? In a sense, the patient was ‘not himself’ during the stimulation; he behaved in a way that he never would have done without the stimulator.12 That behaviour was neither

11 This case has also been discussed by [3, 16, 42].
12 Of course, this problem is not exclusive to DBS; some medications can have similar effects. However, with DBS the changes are more rapid and more specific, and can be controlled literally by a remote control (theoretically, the patient’s behaviour can thus be influenced without the patient’s approval once the electrode is in his brain). These characteristics do make DBS different from more traditional means of influencing behaviour, though I agree with an anonymous reviewer that this is more a matter of degree than an absolute qualitative difference.



the intended nor the predicted result of the stimulation, and it therefore looks as though no one can be held morally responsible for it. However, in his non-manic state, when, according to his doctors, he was competent to express his will and had a good grasp of the situation, the patient chose to have the stimulator switched on again. After lengthy deliberations the doctors complied with his wishes. To what extent were his doctors also responsible for his manic behaviour? After all, they knew the consequences of switching on the stimulator again. To what extent was the patient himself subsequently to blame for getting into debt and bothering nurses?

For such decisions, the notion of ‘diachronic responsibility’ [36] can be of use, indicating that a person can take responsibility for his future behaviour by taking certain actions. Suppose, for example, that DBS proved an effective treatment for addiction, helping people to stay off drugs, alcohol or gambling; could it then rightly be considered a ‘prosthesis for willpower’ [11], or even a prosthesis for responsibility? I believe that technologies that enable us to control our own behaviour better, as DBS might do in the case of addiction or in the treatment of Obsessive Compulsive Disorder, can be understood in terms of diachronic responsibility and self-control, and can thus enhance autonomy and responsibility [42].

Future applications of brain-machine interaction may raise further questions. Suppose a doctor adjusted the settings of DBS without consent from the patient and caused behaviour the patient claimed not to identify with: who would then be responsible? As Clausen has pointed out, neuroprostheses may challenge our traditional concept of responsibility when imperfections in the system lead to involuntary actions of the patient [7]. Likewise, if the wireless signals of a neuroprosthesis were incidentally or deliberately disrupted, it would be questionable who would be responsible for the ensuing ‘actions’ of the patient.

Clearly, even without major shifts in our views on free will and responsibility, brain-machine interfaces will require us to consider questions of responsibility.

Conclusion

The convergence of neuroscientific knowledge with bio-, nano-, and information technology is already

beginning to be fruitful in the field of brain-machine interaction, with applications like DBS, neuroprostheses and neurofeedback. It is hard to predict the specific applications awaiting us in the future, although there is no shortage of wild speculations. The emergence of new technical possibilities also gives rise to shifts in our popular understanding of basic categories, and to some new moral issues. The boundaries of the human body are blurring and must be laid down anew; our views on what it is to be a person, to have a free will and to have responsibility are once more up for discussion. In this article I have explored how these shifts in categories and concepts might work out.

I have argued that the distinction between human and machine, insofar as it concerns a morally relevant distinction, does not have to be given up simply because increasingly far-reaching physical combinations are now being made between human and mechanical parts. Depending on the context and the reasons we have for wanting to make a distinction, we will draw the line between human and machine differently. In the context of sports, a bionic limb may disqualify its user for being too much of a ‘machine’, while in another context such a limb may be qualified as an integral part of a human being and be protected under the right to physical integrity. Important general moral questions that lie behind the confusion about the categories of human and machine concern moral responsibility and moral status. The concept of a person, as used in ethical theory to designate moral actors, is more precise and more useful in this context than the general category of the ‘human’ or the poly-interpretable notion of the ‘cyborg’.

In the most radical scenario of shifts in our symbolic order, the concept of ‘person’ may also come under pressure. As I have shown on the basis of Greene and Cohen’s vision, the person as a being with a free will and moral responsibility, and as a moral actor, should, according to some, disappear from the stage altogether. Implementing such a neuroreductionistic vision of the mind and free will would have clear consequences for criminal law: it would have to be revised into a consequentialist, neo-behaviouristic system. People would then barely be considered to be morally responsible beings, but would be seen as systems that respond to praise and blame in a mechanical fashion. I believe it is unlikely that such a shift in our popular views will come about, because of the intuitive appeal of



the notion of responsibility, and because there are many good arguments to resist this shift. Even if we do not jettison responsibility altogether, however, brain-machine interactions raise many interesting questions regarding the distribution and attribution of responsibility.

A general lesson for the ethics of emerging technologies is that such technologies necessitate renewed consideration and reinterpretation of important organizing concepts and distinctions that are crucial to moral judgement. The symbolic labour required to answer such conceptual and normative questions is at least as important for the development of converging technologies as the technical-scientific labour involved.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

1. Anonymous (2007) Kunstarm met gevoel. Med Contact 62:246 [Artificial arm with feeling]

2. Bell EW, Mathieu G, Racine E (2009) Preparing the ethical future of deep brain stimulation. Surg Neurol. doi:10.1016/j.surneu.2009.03.029

3. Berghmans R, de Wert G (2004) Wilsbekwaamheid in de context van elektrostimulatie van de hersenen. Ned Tijdschr Geneeskd 148:1373–75 [Competence in the context of DBS]

4. Blanke O, Aspell JE (2009) Brain technologies raise unprecedented ethical challenges. Nature 458:703

5. Blume S (1999) Histories of cochlear implantation. Soc Sci Med 49:1257–68

6. van den Burg W (2003) Dynamic Ethics. J Value Inq 37:13–34

7. Clausen J (2008) Moving minds: ethical aspects of neural motor prostheses. Biotechnol J 3:1493–1501

8. Clausen J (2009) Man, machine and in between. Nature 457:1080–1081

9. EGE—European Group on Ethics in Science and New Technologies (2005) Ethical aspects of ICT implants in the human body. Opinion no. 20. Retrieved from http://ec.europa.eu/european_group_ethics/docs/avis20_en.pdf

10. Ford PJ (2007) Neurosurgical implants: clinical protocol considerations. Camb Q Healthc Ethics 16:308–311

11. Ford P, Kubu C (2007) Ameliorating or exacerbating: surgical ‘prosthesis’ in addiction. Am J Bioeth 7:29–32

12. Foster KR (2006) Engineering the brain. In: Illes J (ed) Neuroethics. Defining issues in theory, practice and policy. Oxford University, Oxford, pp 185–200

13. Gillet G (2006) Cyborgs and moral identity. J Med Ethics 32:79–83

14. Glannon W (2007) Bioethics and the brain. Oxford University, Oxford

15. Glannon W (2009) Our brains are not us. Bioethics 23:321–329

16. Glannon W (2009) Stimulating brains, altering minds. J Med Ethics 35:289–292

17. Graham-Rowe D (2006) Catching seizures before they occur. Retrieved September 15, 2009, from http://www.technologyreview.com/biotech/17124/

18. Gray CH (2001) Cyborg citizen: politics in the posthuman age. Routledge, New York

19. Greene J, Cohen J (2004) For the law, neuroscience changes nothing and everything. Phil Trans Roy Soc Lond B 359:1775–1785

20. Hansson SO (2005) Implant ethics. J Med Ethics 31:519–525

21. Haraway D (1991) A cyborg manifesto: science, technology, and socialist-feminism in the late twentieth century. In: Haraway D (ed) Simians, cyborgs and women: the reinvention of nature. Routledge, New York, pp 149–181

22. Healy D (2000) The creation of psychopharmacology. Harvard University, Cambridge

23. Hochberg L, Taylor D (2007) Intuitive prosthetic limb control. Lancet 369:345–346

24. Hochberg L et al (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164–171

25. Hughes J (2004) Citizen cyborg: why democratic societies must respond to the redesigned human of the future. Westview, Cambridge

26. Keiper A (2006) The age of neuroelectronics. The New Atlantis Winter 2006:4–41

27. Keulartz JM, Korthals MS, Swierstra T (eds) (2002) Pragmatist ethics for a technological culture. Kluwer Academic, Dordrecht

28. Keulartz J, Schermer M, Korthals M, Swierstra T (2004) Ethics in technological culture: a programmatic proposal for a pragmatist approach. Sci Technol Human Values 29(1):3–29

29. Koops B, van Schooten H, Prinsen B (2004) Recht naar binnen kijken: een toekomstverkenning van huisrecht, lichamelijke integriteit en nieuwe opsporingstechnieken. eJure, ITER series 70. Retrieved from http://www.ejure.nl/mode=display/downloads/dossier_id=296/id=301/Deel_70_Koops.pdf [Looking inside: exploring the future of the inviolability of the home, physical integrity and new investigative techniques]

30. Lamme V (2006) De geest uit de fles. Dies rede. University of Amsterdam, Amsterdam [The genie out of the bottle]

31. Levy N (2007) Neuroethics. Cambridge University, Cambridge
32. Leentjes AFG et al (2004) Manipuleerbare wilsbekwaamheid: een ethisch probleem bij elektrostimulatie van de nucleus subthalamicus voor een ernstige ziekte van Parkinson. Ned Tijdschr Geneeskd 148:1394–1397 [Manipulable competence]

33. Lunshof JE, Chadwick R, Vorhaus DB, Church GB (2008) From genetic privacy to open consent. Nat Rev Genet 9:406–411

34. Mooij A (2004) Toerekeningsvatbaarheid. Over handelingsvrijheid. Boom, Amsterdam [Accountability. On the freedom to act]

35. Moreno JD (2004) DARPA on your mind. Cerebrum 6:91–99



36. Morse S (2007) Voluntary control of behavior and responsibility. Am J Bioeth 7:12–13

37. Morse SJ (2008) Determinism and the death of folk psychology: two challenges to responsibility from neuroscience. Minnesota Journal of Law, Science and Technology 9:1–35

38. Rabins P et al (2009) Scientific and ethical issues related to deep brain stimulation for disorders of mood, behavior and thought. Arch Gen Psychiatry 66:931–37

39. Roco MC, Bainbridge WS (2002) Converging technologies for improving human performance. National Science Foundation, Arlington

40. Rose N (2003) Neurochemical selves. Society 41:46–59
41. Roskies A (2008) Neuroimaging and inferential distance. Neuroethics 1:19–31
42. Schermer M (2007) Gedraag Je! Ethische aspecten van gedragsbeïnvloeding door nieuwe technologie in de gezondheidszorg. NVBE, Rotterdam [Behave yourself! Ethical aspects of behaviour-influencing new technologies]

43. Smart JJC (2003) Free will, praise and blame. In: Watson G (ed) Free will, 2nd edn. Oxford University, New York, pp 58–71

44. Smits M (2006) Taming monsters: the cultural domestication of new technology. Tech Soc 28:489–504

45. Strawson P (2003) Freedom and resentment. In: Watson G (ed) Free will, 2nd edn. Oxford University, New York, pp 72–93

46. Synofzik M, Schlaepfer TE (2008) Stimulating personality: ethical criteria for deep brain stimulation in psychiatric patients for enhancement purposes. Biotechnol J 3:1511–1520

47. Vargas M (2005) The revisionist’s guide to responsibility. Philos Stud 125:399–429

48. Warwick K (2004) I, cyborg. Century, London
49. Wegner DM (2002) The illusion of conscious will. MIT, Cambridge
50. Weinberger S (2007) Pentagon to merge the next-gen binoculars with soldiers’ brain. Retrieved September 15, 2009 from http://www.wired.com/gadgets/miscellaneous/news/2007/05/binoculars


