CRUCIAL CONSIDERATIONS: ESSAYS ON THE ETHICS OF EMERGING TECHNOLOGIES

KARIM A. JEBARI

Licentiate Thesis Stockholm, Sweden 2012


Abstract. Jebari, Karim A. 2012. Crucial Considerations: Essays on the Ethics of Emerging Technologies. Theses in Philosophy from the Royal Institute of Technology 42. 79 + vi pp. Stockholm. ISBN 978-91-637-2006-2.

Essay I explores brain machine interface (BMI) technologies. These make direct communication between the brain and a machine possible by means of electrical stimuli. This essay reviews the existing and emerging technologies in this field and offers a systematic inquiry into the relevant ethical problems that are likely to emerge in the following decades.

Essay II, co-written with professor Sven-Ove Hansson, presents a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a Convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of the five seminars are presented here.

Essay III discusses moral enhancement, an instance of human enhancement that alters a person’s dispositions, emotions or behavior in order to make that person more moral. Moral enhancement could be carried out in three different ways. The first strategy is behavioral enhancement. The second strategy, favored by prominent defenders of moral enhancement, is emotional enhancement. The third strategy is the enhancement of moral dispositions, such as empathy and inequity aversion. I argue that we ought to implement a combination of the second and third strategies.

Keywords: neuroethics, brain machine interface, convergence seminars, moral enhancement, human enhancement, privacy, autonomy

Karim A. Jebari, Division of Philosophy, Department of Philosophy and the History of Technology, Royal Institute of Technology (KTH) SE-100 44 Stockholm, Sweden


This licentiate thesis consists of an introduction and the following essays:

I. Jebari, Karim A., “Brain Machine Interface and Human Enhancement. An Ethical Review” Forthcoming in Neuroethics

II. Jebari, Karim A. and Hansson, Sven-Ove, “European Public Deliberation on Brain Machine Interface Technology” Submitted manuscript

III. Jebari, Karim A., “Three Kinds of Moral Enhancement” Submitted manuscript

© 2012 by Karim A. Jebari
ISSN 1650-8831
ISBN 978-91-637-2006-2
Printed in Stockholm, Sweden by E-print AB 2012


CONTENTS

ACKNOWLEDGMENTS

INTRODUCTION
I. Previous research on human enhancement
II. Human enhancement and public discourse
III. Human dignity
IV. Summary of the essays
V. Sammanfattning på svenska

ESSAYS

I. BRAIN MACHINE INTERFACE AND HUMAN ENHANCEMENT. AN ETHICAL REVIEW
1. Introduction
2. Existing Brain Machine Interfaces
3. Experimental technologies
4. Ethical considerations
5. Concluding remarks

II. EUROPEAN PUBLIC DELIBERATION ON BRAIN MACHINE INTERFACE TECHNOLOGY
1. Introduction
2. Theoretical background
3. Method
4. Results
5. Conclusions
6. Appendix

III. THREE KINDS OF MORAL ENHANCEMENT
1. Introduction
2. Why moral enhancement?
3. Three kinds of moral enhancement
4. Empathy
5. A sense of fairness
6. Objections against moral enhancement
7. Concluding remarks


ACKNOWLEDGMENTS

I would like to thank my supervisors, Sven-Ove Hansson and Barbro Fröding. I would also like to thank the participants at the philosophy seminar at the Royal Institute of Technology as well as the participants at the CHE-seminar. In particular I would like to thank Niklas Juth for his extensive commentary. I would also like to thank Anders Sandberg and Nick Bostrom for being great sources of inspiration for my work. My greatest thanks are to the many anonymous reviewers who have helped me more than anyone in clarifying my thoughts on these topics.


INTRODUCTION

The topic of these essays is the ethical issues raised by a range of technological developments that have the potential to redefine the human condition: Brain Machine Interface (BMI) technology. The ability to connect the nervous system to a computer may surpass the impact of both nanotechnology and biotechnology. It is in the nature of technological development that it is difficult to predict and that it has the potential to revolutionize the human condition. Due to the high impact of possible applications of BMI technology, we do not have the luxury of waiting for this technology to emerge before thinking about the ethical consequences of its widespread adoption. The three essays in this thesis form part of a research effort aimed at thinking about the impact of BMI technology from a broad perspective. In particular, they seek to explore the ethical and social ramifications associated with present and future technology. This introduction consists of the following five parts. The first part is a short review of the literature on human enhancement. The second part explores three difficulties that I have encountered in discussions with the public when probing the risks and possibilities of BMI and human enhancement technology. The third part is a detailed discussion of a more daunting aspect of human enhancement: the issue of whether it threatens human dignity. The fourth part of this introduction is a short summary of each essay. The fifth part is a popular summary of the three essays in Swedish.

I. Previous research on human enhancement

Although the term "transhumanism" was first coined by the biologist Julian Huxley in 1957, systematic philosophical enquiries into transhumanism and human enhancement were first formulated in the last decade of the twentieth century.1 The British philosopher Max More is credited by James Hughes, former executive director of the World Transhumanist Association, with having presented the first modern systematic philosophical defense of human enhancement and transhumanism in the early 1990s.2 The early defenders of human enhancement have often argued from the radically individualist position of "morphological freedom", the notion that an individual's desire to enhance is a private concern.3 These radical pro-enhancement ideas are often combined with "the proactionary principle", which states that

“Our freedom to innovate technologically is valuable to humanity. The burden of proof therefore belongs to those who propose restrictive measures”.4

By the beginning of the twenty-first century, the debate was transformed by the creation of several academic institutions that have been actively promoting these ideas, of which the Future of Humanity Institute under its director Nick Bostrom is the most prominent.5 These academic institutions have lifted the profile of human enhancement and transhumanism from the fringes of academia to a topic taken seriously in prominent peer-reviewed bioethics journals and in bioethics courses at major universities.

Human enhancement and transhumanism entered the broader public debate after rigorous criticism by leading intellectuals such as Francis Fukuyama, Leon Kass, Michael Sandel, Bill McKibben and Jürgen Habermas. These critics have been labeled "bioconservatives" by proponents of human enhancement, as they all seem to subscribe to the "bioconservative thesis":

“Even if it were technically possible and legally permissible for people to engage in biomedical enhancement, it would not be morally permissible for them to do so”.6

These intellectuals, although from radically different ideological camps, have argued in favor of a general restriction of human enhancement with remarkably similar arguments. Michael Sandel argues that human enhancement epitomizes the modern striving for mastery and control, and that human enhancement is contrary to traditional virtues such as humility and openness to the unbidden.7 Another potential consequence of human enhancement is the further entrenchment of inequalities. Michael Sandel and Bill McKibben argue that if, for example, cognitive enhancement were possible, this would create a "genetic divide" between the enhanced and the "naturals".8, 9 The 1997 film Gattaca's depiction of a dystopian society, in which one's social class depends entirely on genetic modification, is often cited by critics in support of a combination of these views. Francis Fukuyama suggests that human enhancement threatens to undermine the idea that all humans have equal moral worth, by undermining the idea of a "human essence".10 Jürgen Habermas argues that moral autonomy depends on not being subject to another person's specifications, and that this autonomy is threatened by the genetic enhancement of embryos.11 Leon Kass' main objection to human enhancement is that it undermines human dignity.12 This objection will be discussed in greater detail in section III of this introduction.

In the last few years, more moderate proponents of human enhancement have made the debate more mature. Philosopher Nicholas Agar argues that although some enhancement is permissible and even desirable, radical enhancement risks undermining the values that we hold dear today.13 Julian Savulescu and Ingmar Persson have warned about the dangers of cognitive enhancement and proposed moral enhancement to reduce some of the risks that technological development entails.14 Allen Buchanan is another philosopher with a moderate and cautious approach to human enhancement. He argues that we have to take a more fine-grained approach, because there is no general answer to what we should do with regard to human enhancement: different modes of enhancement in different contexts are going to have different risk-benefit profiles.15 The debate on human enhancement has often focused on genetic engineering and the use of performance-enhancing or psychoactive drugs. As BMI technology has developed, its enhancement potential has been recognized by the authors in this debate. However, although some ethical concerns regarding BMI are familiar from the human enhancement debate, others are not. These essays will try to explore some of the novel ethical problems that BMI poses.

II. Human enhancement and public discourse

In my discussions with the general public, three broad reactions to human enhancement were prominent in driving intuitions and affecting the judgment of those with whom I engaged. During the writing of these essays, and in particular essay II, which describes the convergence seminars, I have encountered these reactions consistently. The convergence seminars (described in more detail in essay II) are a form of scenario-based discussion technique. One of the ideas behind this technique is to bring out moral intuitions and allow the participants to reflect critically on them.

Although disgust may be as valid a starting point in an ethical discussion as any other emotion, it ought not to be the final word. However, it often is. In my experience, no other emotional reaction is as powerful as disgust when it comes to distorting rational deliberation. Individuals who experience this "yuk-effect" tend to refuse to engage critically with their intuitions, or to rationalize their initial reactions ad absurdum. Jonathan Haidt describes this effect and how it affects our moral deliberations and risk perceptions in some detail.16 According to Haidt, much of our moral reasoning is a post-hoc construction that justifies our initial gut reaction. Many applications of brain machine interface technology have, in my experience, aroused such reactions and such reasoning when framed in science fiction contexts, for example by referring to people with brain implants as "cyborgs". When the medical and therapeutic uses of this technology were mentioned, far less adverse emotional reactions were evoked. Whereas some bioethicists argue that there is wisdom in repugnance, the arguments against this view are compelling.17 Martha Nussbaum has, for example, noted that disgust has been used as a justification for persecution.18 Paleontologist Stephen Jay Gould has also remarked that reactions of disgust undermine critical and rational reflection, and are thus contrary to wisdom.19 The philosopher John Harris has likewise rejected the notion that there is wisdom in repugnance, arguing that "there is no necessary connection between phenomena, attitudes, or actions that make us uneasy, or even those that disgust us, and those phenomena, attitudes, and actions that there are good reasons for judging unethical" (p. 37).20 My advice to researchers, scientists and policy makers who wish to engage the public on this issue in an informed and analytical manner is thus to avoid the temptation to frame these technologies in a science-fiction context.

A second, related reaction drew on the narrative conventions of science fiction films and novels. While thought experiments and scenarios are widely used, both among philosophers and the public, to better understand risks, fiction is different, as it has an inbuilt narrative bias. A scenario has to be credible and plausible. In contrast, a novel must tell a good story, even if the unfolding of events in that story is very implausible. As Eliezer Yudkowsky argues, dramatic "logic" is not logic.21 Storytellers are routinely warned that an event is not necessarily dramatically credible just because it happened in real life. Conversely, we ought to reject the notion that an event is likely just because it has been vividly portrayed in a fictional work. The availability of fiction in our thinking about the ethics of technology may lead us both to underestimate and to overestimate risks. In the context of brain machine interfaces, the narratives of dystopian fiction are particularly apt at evoking images of totalitarian governments using this technology to control their citizens, or of the mindless drones of Star Trek's Borg collective. Less attention is directed to the risks of unsafe medical practices, rejection of brain implants and other long-term adverse health effects. As risks in movies and books are often tied to specific agents, not being aware that one's risk assessment is anchored by narratives may lead us to overlook risks created by mistakes or random events. Thus fictional representations combine availability bias, agency bias and narrative bias in a powerful mix that distorts clear thinking on these issues. When communicating the risks of technological innovations that resemble technology used in science fiction narratives, this is worth pointing out. Not all science fiction scenarios are dystopian, however, and the allure of fantastic future projections may be just as misleading, making us believe that we can really predict the future with great accuracy.

The third and most prevalent heuristic that seemed available to many of those I engaged with was based on the myth of Icarus. This story and its moral, about technological hubris and the inevitable punishment of the gods, resonates deeply in our heuristic toolbox. The idea that technology is something inherently dangerous has since been replicated in countless narratives, from Mary Shelley's Frankenstein to Aldous Huxley's Brave New World and modern works such as the Terminator franchise. While it is of course reasonable to be cautious when making important decisions about technology, no story about the opportunity costs of delayed technological development seems available to us. Although we tend to appreciate (some) well-known technologies, future technologies are mostly portrayed as menacing, dehumanizing and alien. Some technologies may in fact be very risky if not implemented, but this is much more difficult to imagine and to tell a compelling story about. I believe that philosophers of risk and applied ethics may engage fruitfully in this discussion by trying to make these biases and heuristics clear.

III. Human dignity

Human dignity is generally recognized as a fundamental value. In the debate on human enhancement, detractors of transhumanism, most notably Leon Kass in his book Life, Liberty and the Defense of Dignity: The Challenge for Bioethics, have claimed that human dignity is threatened by enhancement.22 As there is a clear enhancement potential in possible future applications of BMIs, there is an obvious concern that linking our brains to computers may undermine our dignity.23 But what does "dignity" mean? According to Immanuel Kant, all persons have dignity, a kind of absolute value that ought to protect the individual from being used as a mere means to further the objectives of another person or collective of persons. Kant held that dignity is not a quantitative notion; either we have it, or we lack it. To have dignity is, on this view, to be a moral subject. According to Kant, to have dignity an individual must possess a rational nature. Therefore, any attempt to eliminate a person's rationality is a great offence against that person.24

An alternative view holds that dignity is a quality of some individuals who are honorable, noble and worthy. This quality can be taught or cultivated. An otherwise dignified person would become less so if that person momentarily lost his or her composure, or if he or she were subjected to degrading treatment. This view is sometimes referred to as "aristocratic dignity". A third possible interpretation of the notion of dignity is that dignity is a quality that all members, and only members, of the species Homo sapiens have, and that this quality grounds moral obligations toward those who possess it. Presumably, we would diminish an individual's dignity if we subjected that person to something that made him or her non-human. Although this is quite fantastical, we might imagine a procedure in which the genetic code of a human is, by some imaginable future technology, reprogrammed so as to differ radically from that of other humans. These three interpretations seem to be the only possible ways of understanding "dignity". But what does Kass mean by dignity? It is far from clear. He writes:

Yet contemplating present and projected advances in genetic and reproductive technologies, in neuroscience and psychopharmacology, and in the development of artificial organs and computer-chip implants for human brains, we now clearly recognize new uses for biotechnical power that soar beyond the traditional medical goals of healing disease and relieving suffering. Human nature itself lies on the operating table, ready for alteration, for eugenic and psychic "enhancement," for wholesale re-design. In leading laboratories, academic and industrial, new creators are confidently amassing their powers and quietly honing their skills, while on the street their evangelists are zealously prophesying a post-human future. For anyone who cares about preserving our humanity, the time has come to pay attention. [...]

Our immediate ancestors, taking up the challenge of their time, rose to the occasion and rescued the human future from the cruel dehumanizations of Nazi and Soviet tyranny. It is our more difficult task to find ways to preserve it from the soft dehumanizations of well-meaning but hubristic biotechnical "re-creationism"--and to do it without undermining biomedical science or rejecting its genuine contributions to human welfare.25 (p. 2-3)


In this passage, Kass worries about reproductive technologies, neuroscientific advances, brain-machine interfaces and other possible ways to engage in human enhancement. But it is far from clear why any of these advances must threaten human dignity. Although it is perfectly imaginable that some brain-machine interface may be used to override the rational faculties of an agent, or to remote-control people with brain implants, that possibility does not make human enhancement morally wrong, as Kass seems to suggest, if we accept the Kantian notion of dignity. However, I can certainly agree that any comprehensive regulation of BMI should consider how to avoid applications that violate Kantian dignity. In particular, coerced implantation of devices that can reduce rationality, emotional depth or intensity of desires should be, if not outright banned, at least heavily restricted. It should be noted that these non-dignified states of being can also be achieved by many drugs that are used in modern medicine, such as morphine. This is a reason to restrict access to, and the coercive use of, these drugs. But surely we would not want to ban morphine altogether just because it could be used to enslave or manipulate people? Further, if we value Kantian dignity, we should conclude that it would be highly desirable to enhance people's ability to retain this dignity in old age, or to extend this dignity to people who have the misfortune to lack it. If it matters to us to make the mentally disabled dignified in the Kantian sense, we ought to use technology to produce such dignity in these people. I believe that if Kass takes Kantian dignity seriously, he should concur with this notion.

According to the second notion, aristocratic dignity, it is also possible to imagine some device that is deeply degrading. Perhaps a BMI implant could be used to torture people, or to make them perform humiliating acts. Again, this does not imply that cognitive or sensory enhancement with the help of BMI would be degrading or violate dignity. And we might just as easily imagine people being spared humiliating and degrading physical conditions with the help of BMI.

What about the third interpretation of "dignity"? Does cognitive enhancement make us less human? It depends on what we mean by human. Leon Kass does not offer us a definition. If "human" refers to a member of the species Homo sapiens, then we would have to engage in very speculative science fiction before brain-machine interfaces would risk undermining that membership. However, perhaps being human in Kass' sense is not a definitive state that is easily demarcated. Perhaps what Kass is after is some general characteristic of our human nature that could be erased by radical enhancement, i.e. by bringing about posthuman people, and the claim is that posthumans lack human dignity. What is a posthuman person? According to the transhumanist account, a posthuman is a person who was either born as a human or has human antecedents, but could not reasonably be characterized as human. A posthuman differs from a transhuman in that a transhuman is commonly defined as an enhanced human, but one whose abilities are within the realm of what can be considered human. In contrast, a posthuman has abilities that differ radically from human ones. For example: a person who, after being enhanced, displays the general level of intelligence of John von Neumann would be classified as a transhuman. A person able to make, in one year, contributions equivalent to those produced by von Neumann over the course of his life would be posthuman. This characterization, however vague, serves to illustrate how alien a posthuman person would be. Although it is logically possible to bring about the existence of such people with enhancement technologies, it remains science fiction of the most speculative kind. Although the creation of posthumans in the far future is not morally uncontroversial, it is not a prospect relevant enough to take seriously at this moment.

IV. Summary of the essays

The first essay reviews existing and experimental applications of brain machine interface (BMI) technology. This technology essentially consists of various ways of connecting the nervous system via electrodes to a machine, thereby making the direct exchange of information between the two possible. BMI technology can extract information from the brain or spinal cord and direct prostheses, computers and accessories, making this technology very promising for people with disabilities such as paralysis. Although steering prostheses currently requires advanced arrays of microelectrodes, cheaper and simpler BMIs suffice to direct characters in computer games. Simple EEG-based BMIs have also been used to analyze reactions to marketing input. Thus this technology has already entered the commercial realm. The breakthrough of EEG is important because it is, in contrast with earlier invasive BMIs, non-invasive and relatively cheap. Thus it has a potential for commercial use that intracranial electrocorticography (ECoG) interfaces lacked.

Brain-machine interface devices can also be used to feed information to the brain. This allows implants to provide hearing for deaf people or rudimentary visual orientation for the blind. Deep brain stimulation is routinely used to reduce motor symptoms in patients with Parkinson's disease and other neurodegenerative diseases, as well as to treat chronic pain and major depressive disorder. However, as long as this treatment relies on an invasive procedure, it is likely to be restricted to use in cases of serious disease.

The future development of BMI is of moral concern. While this technology provides help for disabled and sick people, its commercialization can potentially undermine both privacy and autonomy. Advertising agencies, employers and the government are all interested in effective ways of knowing how we feel, think and respond to stimuli. If privacy is important to preserve, as is generally accepted, the kind of information that could theoretically be extracted through brain-machine interface devices is of concern. Here there is work to be done, both for philosophers and lawmakers, to hammer out a plausible definition of privacy and to formulate a comprehensive and transparent regulatory framework. Threats to autonomy may arise in the future if less risky technology for deep brain stimulation (DBS) is developed.26 Since DBS can powerfully alter emotional states and change our desires and dispositions, such a technology clearly has an enormous potential for abuse. However, it could in theory also be used as a tool for moral enhancement, and to enhance autonomy in some cases.

The second essay, co-written with Professor Sven-Ove Hansson, describes a novel procedure to engage lay people in deliberations on risk, ethics and technology. This procedure, which we call "convergence seminars", is a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. This idea is the systematic application of a pattern of argumentation that is prevalent in non-philosophical discussions. One of the most common types of arguments about future possibilities consists of referring to how, in the future, one might come to evaluate the actions one takes now. These arguments are often stated in terms of predicted regret: "Do not do that. You may come to regret it." Just as we can improve our decisions by considering them from the perspectives of other concerned individuals, we can also improve them by considering them from alternative future perspectives, i.e. by hypothetically seeing them as we will see them retrospectively in the future. We consider this methodology to be particularly useful in areas where considerable uncertainty exists and where standard quantitative methods for risk assessment are less suitable, such as the future development of brain machine interface (BMI) technology. A set of concrete scenarios was developed for the discussions. Each scenario leads to some future point in time, but along a different branch of future development. The focus was on a decision in the present or near-present time that the participants were asked to evaluate from the viewpoint of their scenario. The different scenarios were also constructed so that they represented branches in which different alternative decisions gave rise to problems that made them difficult to defend in hypothetical retrospection. To make the procedure easy to apply within a few hours, only three scenarios were used. This was the first use of convergence seminars on this topic. The method functioned well, both logistically and, more importantly, by giving rise to the type of discussions that we aimed for, namely discussions on how today's decisions might be influenced by different possible future developments. As expected, the methodology was well suited for discussions on the future of BMI, with its many uncertainties. The responses provided by the participants in discussions and questionnaires indicated that their advice regarding what decisions should be made about BMI development was influenced both by different possible future developments and by the points of view of their co-participants. It is also worth noting that most participants agreed that BMI technology was beneficial when and if used for medical purposes, whereas some were skeptical of the use of this technology for commercial and military purposes.

The third essay explores the idea of moral enhancement, a controversial instance of human enhancement that has been widely discussed recently. Here I distinguish between three kinds of moral enhancement. Behavioral enhancement, known from science-fiction films such as A Clockwork Orange, consists in restricting or promoting certain behavior. This could be done, for example, by drug-induced hypersensitivity to alcohol or by some implant that could modulate behavior by electrical stimulation. Emotional enhancement, as proposed by Thomas Douglas in his article "Moral Enhancement", consists in promoting or restricting specific emotions.27 Aggression and xenophobia are likely candidates on the list of problematic emotions that could be reduced. The third possibility is to enhance dispositions to feel in certain ways in certain contexts. The most plausible candidates here are empathy and inequity aversion. I argue that some arguments made against moral enhancement are only relevant against behavioral enhancement, and that general arguments against moral enhancement are less powerful when directed against the second and third kinds. I also argue that we ought to adopt a combined strategy, in which dispositional enhancement is supplemented with some emotional enhancement.

V. Sammanfattning på svenska

The articles in this licentiate thesis explore questions concerning the ethical aspects of the use of brain-machine interfaces. A brain-machine interface makes direct exchange of information between a brain and a machine possible. Machines with this function have admittedly existed for some time, ever since the first electroencephalography (EEG) began to be used to record and analyze patients' brain activity. Over the last few decades, however, this technology has developed considerably, above all in health care, and it has also begun to be used in consumer electronics and marketing. Since brain-machine interfaces give us a new way of interacting with computers and machines, this technology may come to have a major impact on how modern society and the human condition develop. Technological development is by its nature difficult to predict, and technologies with the potential to revolutionize our existence must be discussed and scrutinized before they have changed society in an irreversible way.

In the first article, "Brain Machine Interface and Human Enhancement - An Ethical Review", I argue that brain-machine interfaces belong to the category of technologies that have this potential. Two trends in technological development can be said to accelerate this process. The first is that advanced computers are becoming ever smaller, more powerful and cheaper. With the help of sheer computing power, much more information from the brain can be processed efficiently. This has made it possible to use non-invasive interfaces, that is, interfaces that can be used without exposing the user to medical risk, in a growing number of areas. The first thought-controlled prostheses required a risky and expensive operation in which microelectrodes were attached directly to the cerebral cortex. Modern prostheses can make use of the information that external electrodes can pick up. The latest prostheses also have software that learns the user's body movements and can thereby be more responsive and comfortable. The second trend is our ever greater knowledge of the brain, which makes far more effective use of brain-machine interfaces possible. Cochlear implants, hearing aids that through electrical stimulation of the auditory nerve give severely hearing-impaired and deaf children the ability to perceive sound, are probably the best-known application of this technology. These implants have improved steadily since they were introduced more than a decade ago. Much of this improvement has been due precisely to increased insight into the complex anatomy of the auditory nerve. Brain pacemakers, implants that stimulate certain brain regions with electrical impulses, are another technology that has developed rapidly over the last decade. Initially these brain implants were used only to counteract the symptoms of severe neurodegenerative diseases such as Parkinson's disease and dystonia. Nowadays patients with depression, severe obsessive-compulsive disorder and chronic pain also benefit from these implants. The hope in this area of research is to be able to stimulate the brain in a similar way without having to subject the patient to an invasive operation. If this were to become reality, brain pacemakers could acquire a host of new applications, since brain stimulation can influence brain activity with a precision and effectiveness that far surpass those of psychoactive drugs.

Brain-machine interfaces have gone from being a purely medical technology, where questions of patient safety and animal testing dominated the ethical debate, to also becoming a consumer product. EEG-based computer game accessories are today sold by at least two competing actors on the market. EEG is also used for market research, among other things to map test audiences' reactions to commercials. This turn in technological development makes new ethical problems relevant. This article examines two areas where potential misuse of brain-machine interfaces should lead to further scrutiny and regulation of this technology. Since brain-machine interfaces can extract potentially very private information about the user, privacy is a highly relevant issue. It should be pointed out that it is not currently possible to "read thoughts" with this type of technology, and that this possibility lies far in the future. Nevertheless, there is today a trend whereby employers, information companies such as Google and Amazon, and PR experts gain ever greater insight into people's behavior and how it can be influenced. Smart cameras with facial recognition, software that records people's computer use, and statistical analysis tools can already generate large amounts of information about ordinary people, information that is often inaccessible to the public and that is sometimes used for purposes that do not benefit the individual. An increased use of brain-machine interfaces would mean that more such information is produced without the users' knowledge, or in ways that are not sufficiently transparent.

Brain-machine interfaces have great potential to give people who are today paralyzed, blind, deaf, severely depressed or chronically ill a chance to regain some of their lost abilities. In this respect, this type of technology can make a great difference to these people's autonomy. By autonomy is meant here an individual's ability to shape his or her life in accordance with what the individual himself or herself finds to be good and proper ends. But it is not only illness and injury that can undermine a person's autonomy. Some philosophers have argued that advertising also affects our autonomy negatively, insofar as its purpose is to influence our desires in a way that bypasses our rational nature. To the extent that advertising companies learn ever more about individuals' habits, preferences and desires, their ability to influence these presumably increases. Given that advertising constitutes a threat to our autonomy, sophisticated brain-machine interfaces could aggravate this problem. But brain-machine interfaces can also influence our preferences and desires directly, by means of electrical stimulation of the brain. Note that as long as electrical stimulation requires an implant, this type of brain-machine interface will probably be used only by severely ill people. Should a non-invasive alternative be developed, however, there is some possibility that such devices could be used to let people alter their own preferences and desires. To the extent that we ourselves have control over these devices, our autonomy could increase considerably. But given how potent brain stimulation is, this possibility should be reserved for the individual him- or herself; otherwise we risk a society in which one or a few actors gain undue power over other people's emotions and attitudes. From the perspective of autonomy, this would be regrettable.

My second article, "European Public Deliberation on Brain Machine Interface Technology", is the result of a year-long fieldwork in which I visited four different European cities (including Stockholm) and tested a method for engaging the public in ethical discussions about brain-machine interfaces and related technologies. The method tested is called "convergence seminars" and was developed by Professor Sven-Ove Hansson. A convergence seminar is a group discussion that starts from specific scenarios about possible future developments. The idea behind this method is a systematic application of a way of arguing found in non-philosophical contexts. A line of reasoning is often expressed in terms of how an assessor would view one's present actions from a future perspective. "Do you now have reason to believe that you will come to regret your choice in the future?" is the central question in this type of reflection. The scenarios that were developed all depict a possible development resulting from certain political decisions taken at a point in the near future. All the scenarios show the worst conceivable (within the bounds of plausibility) consequences of different possible decisions. In one of the scenarios, brain-machine interfaces were strictly regulated, and this led the EU to miss out on large potential productivity gains, which benefited other economies at the EU's expense. In two other scenarios the regulation of the interfaces was next to non-existent, but the consequences differed. In scenario two, the lack of regulation led to medical malpractice and many injuries and medical problems. In scenario three, the lack of regulation led to a very abrupt societal development that created a large gap between generations and a breakdown of the social contract.


For resource reasons the selection of participants could not be representative, but our ambition was to recruit participants from widely different regions and interest groups in order to at least achieve some spread. The five groups that took part were teachers from a small village in southern Spain, members of the Parkinson's association in Stockholm, theology students in London, medical staff from Warsaw, and philosophy students at Stockholm University.

The seminar participants were divided into three smaller groups, each of which read one scenario. The participants then discussed their scenario within their group. After a while the groups were rearranged so that each scenario was represented in every new group. Here the participants were asked to present their scenario, and their thoughts about it, to the other group members. Finally, we had a discussion in one large group with all participants. The discussions that this exercise produced were, in my view, of a very high standard, and focused on how and to what extent we can today influence future societal development. Even though the participants' views cannot be regarded as representative of the European public, it is worth noting that the great majority agreed that brain-machine interfaces were a good thing insofar as the technology was used for medical purposes. More studies of this kind are needed, partly to engage citizens in these important discussions, and partly to strengthen the dialogue between citizens, legislators and the scientific community.

My third article, "Three Kinds of Moral Enhancement", discusses the possibility of using electrical stimulation or psychoactive drugs to give people an opportunity to improve their morality. By enhancement in this context is meant not an improvement of individuals with abnormal morality, but a general improvement of the average person. Moral enhancement has often been considered more controversial than, for example, an enhancement of individuals' cognitive abilities. Despite this, I argue, we have good reasons to permit and encourage moral enhancement. We live in a time when our actions can affect people at great distances, in both time and space. Unfortunately, we lack the moral capacity to take these people's needs seriously. Our lack of consideration and our clan morality mean that we allow a billion people to live in a state of acute poverty, while our focus on the near future leads us to expose future generations to unacceptable risks.

I argue that some of the objections raised against moral enhancement show that an important distinction between different possible enhancement strategies has been missed. I further argue that moral enhancement can be carried out in the following three ways. (1) Behavioral enhancement, in which certain behaviors are blocked or induced, for example by creating a strong feeling of discomfort when the individual engages in the behavior. An example of this is depicted in the film A Clockwork Orange, where the protagonist Alex is subjected to a treatment that conditions him to feel ill when he comes into contact with violent emotions. (2) Emotional enhancement consists in reducing or strengthening the force of certain emotions. This might involve reducing aggression or xenophobia. Unlike behavioral enhancement, emotional enhancement affects the personality directly, for better or worse. (3) Dispositional enhancement consists in improving people's capacity for empathy and aversion to unequal outcomes. These are not emotions in themselves, but rather propensities to react to certain situations in particular ways. Empathy means, in this context, (a) the ability to experience an emotion, (b) which is similar to another person's emotion, (c) which was aroused by identifying or imagining the other person's emotion, and (d) where we know that our emotion is caused by the other person's emotion. A sense of fairness is, like empathy, a propensity to feel certain emotions in certain specific situations. This might be anger in a situation of perceived injustice, or satisfaction when an outcome is perceived as fair.

The main objection raised against moral enhancement is that such enhancement would involve a restriction of people's freedom and capacity for self-determination. The distinction I present shows that this objection applies only to the first kind of moral enhancement. Only behavioral enhancement reduces the individual's options for action. Other forms of moral enhancement do not affect the space of possible actions, but rather the individual's personality and propensity to act in certain ways. Emotional enhancement, however, is more complicated and harder to realize than one might first think. Our emotions are very complex and affect social interactions in ways that are not fully mapped. Aggression, for example, plays an important role in strengthening prosocial behavior by punishing antisocial acts. Xenophobia appears to be connected with solidarity with one's close community. Emotional enhancement seems best suited to reducing the most harmful emotions, rather than bringing about a more general improvement of the average individual's morality. Emotional enhancement can therefore only be a complement to dispositional enhancement. Traditional arguments against other forms of enhancement, that they would worsen economic inequality or that they would amount to a zero-sum game, do not seem to apply to moral enhancement. It is not the individual himself or herself who is the greatest beneficiary of moral enhancement, but those with whom this individual interacts. Nor does it become harder for me to be moral if you are more moral. Some virtue ethicists, mainly Catholic, worry that the right moral character must be built up through effort and dedication. But this line of reasoning seems to imply that those for whom moral behavior comes easily ought to impair their empathy in order to be able, through effort, to become more moral.

1. Huxley, J. (1957) "Transhumanism", in New Bottles for New Wine, London: Chatto & Windus, pp. 13-17.
2. James Hughes, Citizen Cyborg: Why Democratic Societies Must Respond To The Redesigned Human Of The Future, Basic Books, 2004.
3. Max More, "The Extropian Principles - A Transhumanist Declaration", version 3.0, http://www.maxmore.com/extprn3.htm


4. Max More, "The Proactionary Principle", version 1.2, July 29, 2005, http://www.maxmore.com/proactionary.htm
5. Nick Bostrom, "In Defense of Posthuman Dignity", Bioethics, Vol. 19, No. 3 (2005), pp. 202-214.
6. As formulated by Tom Douglas in Douglas, T. (2008) "Moral Enhancement", Journal of Applied Philosophy, Volume 25, Issue 3, August, pp. 228-245.
7. Michael Sandel, "The Case Against Perfection", The Atlantic, April 2004.
8. Michael Sandel, The Case against Perfection: Ethics in the Age of Genetic Engineering, The Belknap Press, 2009.
9. Bill McKibben, Enough: Staying Human in an Engineered Age, Times Books, 2003.
10. Francis Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution, New York: Farrar, Straus and Giroux, 2002, p. 149.
11. Jürgen Habermas, The Future of Human Nature, Polity Press, 2004.
12. Leon Kass, "Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection", The New Atlantis, 2003.
13. Nicholas Agar, Humanity's End: Why We Should Reject Radical Enhancement, Cambridge, MA: MIT Press, 2010.
14. Ingmar Persson & Julian Savulescu, "The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity", Journal of Applied Philosophy, 25, 3 (2008): 162-177.
15. Allen Buchanan, Better than Human: The Promise and Perils of Enhancing Ourselves, Oxford University Press, 2011.
16. Haidt, J. (2001) "The emotional dog and its rational tail: A social intuitionist approach to moral judgment", Psychological Review, 108, 814-834.
17. Kass, Leon R. (2002) Life, Liberty, and the Defense of Dignity, Encounter Books. ISBN 1-893554-55-4.
18. Nussbaum, Martha C. (August 6, 2004) "Danger to Human Dignity: The Revival of Disgust and Shame in the Law", The Chronicle of Higher Education, Washington, DC. Retrieved 2007-11-24.
19. Gould, Stephen Jay (1997) Full House: The Spread of Excellence From Plato to Darwin, Harmony. ISBN 0-517-70849-3.
20. Harris, John (1998) Clones, Genes, and Immortality: Ethics and the Genetic Revolution, Oxford: Oxford University Press, p. 37. ISBN 0-19-288080-2.
21. Eliezer Yudkowsky, "Cognitive biases potentially affecting judgment of global risks", in Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic, Oxford University Press, 2008.
22. Kass, L., Life, Liberty, and the Defense of Dignity: The Challenge for Bioethics, Encounter Books, 2002.
23. Roi Cohen Kadosh, Neil Levy, Jacinta O'Shea, Nicholas Shea and Julian Savulescu (2012) "The neuroethics of non-invasive brain stimulation", Current Biology, Vol. 22, No. 4.


24. Kant, I. (1785/1996) Grundlegung zur Metaphysik der Sitten, translated as "Groundwork of the Metaphysics of Morals", in Immanuel Kant, Practical Philosophy, Mary Gregor (trans. and ed.), New York: Cambridge University Press, p. 429.
25. Kass, L., "Preventing A Brave New World", The New Republic Online, June 21, 2001, http://www.stanford.edu/~mvr2j/sfsu09/extra/Kass3.pdf
26. One such possible technology could be transcranial direct-current stimulation (tDCS). See Cohen Kadosh, R. et al., "The neuroethics of non-invasive brain stimulation", Current Biology, Volume 22, Issue 4, R108-R111, 21 February 2012.
27. Douglas, T. (2008) "Moral Enhancement", Journal of Applied Philosophy, Volume 25, Issue 3, August, pp. 228-245.