84
POLICIES ON THE EMPLOYMENT OF LETHAL AUTONOMOUS WEAPON SYSTEMS IN FUTURE CONFLICTS By Ted W. Schroeder



POLICIES ON THE EMPLOYMENT OF LETHAL AUTONOMOUS WEAPON SYSTEMS

IN FUTURE CONFLICTS

By

Ted W. Schroeder


© COPYRIGHT

by

Ted Schroeder

December 2016

ALL RIGHTS RESERVED


THE IMPLICATIONS OF LETHAL AUTONOMOUS WEAPON SYSTEMS ON FOREIGN

POLICY

BY

Ted Schroeder

ABSTRACT

The conversation on Lethal Autonomous Weapon Systems (LAWS) centers on the ethics of allowing a computer to decide to kill (or not to kill) a human being. Much of the current discourse on the topic of autonomous weapons comes from a concern over the ethical implications. Over the coming fifteen years, the technology industry will achieve many milestones that will significantly alter the argument about the use of LAWS. There are currently efforts to institute laws and regulations that will inhibit or prohibit the use of LAWS. This research will clarify what will be technically possible in the future and take a holistic look at the topic. This study will explore the current technological abilities of Artificial Intelligence (AI) and its impacts on civil society. It will further look at AI and its impact on lethal weapons. Additionally, the study will explore the acceptance of AI in civil society versus the acceptance of AI in conflict. Such exploration is important because newer technology may change the conversation about the ethics of employing robotics. This conversational change may encourage or even compel policymakers to use LAWS in future conflicts.


Acknowledgements

I am very grateful for the time and help that Paul Gelpi, Ph.D., gave me throughout the process. His patience and knowledge were invaluable. It has been an honor to work with him.

Most of all, I am thankful to my wife, Sarah. Her constant encouragement, help, and loving guidance were everything to me throughout this degree. Words can never express the debt that I owe her.


List of Abbreviations

AAAI Association for the Advancement of Artificial Intelligence

AACUS Autonomous Aerial Cargo Utility System

ACTUV ASW Continuous Trail Unmanned Vessel

AI Artificial Intelligence

AGI Artificial General Intelligence

ALIAS Aircrew Labor In-Cockpit Automation System

AMAS Autonomous Mobility Appliqué System

ANI Artificial Narrow Intelligence

ASI Artificial Super Intelligence

ASIMO Advanced Step in Innovative Mobility

ASW Anti-Submarine Warfare

A-UGV Automated Unmanned Ground Vehicle

CALO Cognitive Assistant that Learns and Organizes

CEV Coherent Extrapolated Volition

CCW Convention on Certain Conventional Weapons

CIWS Close-In Weapon System

CNAS Center for a New American Security

DARPA Defense Advanced Research Projects Agency

DOD Department of Defense

EOD Explosive Ordnance Disposal

HRW Human Rights Watch

ICRC International Committee of the Red Cross


IEEE Institute of Electrical and Electronics Engineers

IHL International Humanitarian Law

IHRC International Human Rights Clinic

ICBM Intercontinental Ballistic Missile

IoT Internet of Things

ISR Intelligence, Surveillance, and Reconnaissance

LARs Lethal Autonomous Robotics

LAWS Lethal Autonomous Weapons Systems

LIDA Learning Intelligent Distribution Agent

LOAR Law of Accelerating Returns

MUM-T Manned Un-Manned Teaming

MIRI Machine Intelligence Research Institute

NDU National Defense University

NGO Non-Governmental Organization

NLP Natural Language Processing

PLA People’s Liberation Army

RAS Robotic and Autonomous Systems

SIRI Speech Interpretation and Recognition Interface

SyNAPSE Systems of Neuromorphic Adaptive Plastic Scalable Electronics

UCAS Unmanned Combat Air System

UGV Unmanned Ground Vehicles

UN United Nations

UUV Unmanned Undersea Vehicles


Table of Contents

Abstract ………………..………………………………………………………………………iii

Acknowledgements ………………………………………………………..………………….iv

List of Abbreviations……………….…………………………………………………………..v

Table of Contents………………………………………………………………………………vii

Chapter 1: Introduction …………………………………………………………….………….1

Statement of the Problem and Research Question……………………………………..1

Justification ……………………………………………………………………………2

Contribution……………………………………………………………………………2

Overview……………………………………………………………………………….3

Statement of the Probable Value and Importance of the Study ………….……………5

Limitations of the Study………………………………………………….…………….5

Methodology …………………………………………………………………………..6

Framework …………………………………………………..………….……………..6

Literature Review ……………………………………………………….…………….7

Summary……………………………………………………………………………….13

Chapter 2: Understanding Artificial Intelligence……………………………………………...14

Definitions……………………………………………………………………………..14

Computer Visual Recognition…………………………..……………………………..14

Speech Recognition……………………………………………………………………16

Cognitive Computer Processing……………………………………………………….18

Artificial Intelligence Theories or How I Learned to Stop Worrying and Love ASI….23

Summary ………………………………………………………..……….…………….29

Chapter 3: Acceptance of AI Improving Life in the Western World…………………………..30

Applications of AI in Robotics………………………………………………..….…….30

Applications of AI in Other Forms……………………………………………………..32

Summary ……………………………………………………………………………….34

Chapter 4: Lethal Autonomous Weapon Systems in Practice………………………………….35

Currently Employed LAWS……………………………………………………………35

Autonomous Systems Currently Being Developed…………………………………….37

Summary ……………………………………………………………………………….41

Chapter 5: Debate…………………………………………………………………………….....42

Legal & Ethical Aspect………………………………………………………………….42

Political Aspect…………………………………………………………………..……...48

Military Aspect……………………………………………………………………..…...51

Technological Limitations…………………………………………………………..…..55

Differences in Costs Between Humans and Robots.........................................................55


Summary ……………………………………………………………………………......56

Chapter 6: Conclusion……………………………………………………………………57

Potential Influences of LAWS…………………………………………………………..57

Summary ………………………………………………………………………………..58

Suggested Follow-on Research ………………………………………………................60

Final Thoughts…………………………………………………………………………...61

References…………………………………………………………………………………….....63


CHAPTER 1

INTRODUCTION

Statement of the Problem and Research Question

Currently, the conversation on Lethal Autonomous Weapons Systems (LAWS) centers on the ethics of allowing a computer to decide to kill (or not to kill) a human being. Over the coming fifteen years, the defense and technology industries will achieve many milestones that will significantly alter the argument about the use of LAWS. Computer processors will continue to increase in speed until they are not only on par with the human brain but eventually "smarter" than 1,000 human brains. Recognition software will move past its current hurdles of recognizing the difference between an apple and a tomato to distinguishing between happy and content.1 When these innovations become a reality, the world will have machines that are considered smarter and have better visual recognition than humans. Artificial Super Intelligence (ASI) powered machines will have an impact on every part of the human experience, from medicine to driving cars.2 Concurrently, many in the public, lobbyists, and special interest groups will advocate having machines make more decisions in war, thereby removing the human element. People are emotional, whereas autonomous weapons, powered by computers, are programmed to be just. Political leaders will have to make a new series of decisions that may remove humans from making the day-to-day judgments about who is killed during a conflict.

1 Singer, P. W. (2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century. (Kindle Location 1551).
Dwoskin, Elizabeth, and Evelyn M. Rusli. (2015). The Technology that Unmasks Your Hidden Emotions: Using Psychology and Data Mining to Discern Emotions as People Shop, Watch Ads; Breeding Privacy Concerns. The Wall Street Journal.
2 IBM Watson Health. (2016). IBM Watson Health Website.
Google Self-Driving Car. (2016). Google Self-Driving Car Website.
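The processor-speed claim above rests on compound doubling. A minimal back-of-the-envelope sketch of that arithmetic, assuming an eighteen-month doubling period (a common reading of Moore's law, not a figure taken from this study):

```python
# Illustrative only: compound doubling of processor capability.
# The 18-month doubling period is an assumption borrowed from a
# common reading of Moore's law, not a claim made in this study.
def relative_speed(years, doubling_period_years=1.5):
    """Speed multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# Over the fifteen-year horizon the study considers, steady doubling
# compounds to roughly a thousandfold increase.
print(round(relative_speed(15)))  # ~1024x after fifteen years
```

Under that assumption, fifteen years of uninterrupted doubling yields ten doublings, or roughly a thousandfold increase, which is the order of magnitude behind the "1,000 human brains" claim.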


The proposed research will take into account future innovations in technology. It will also discuss how the aggregate of these developments will affect the likelihood of the use of LAWS in future warfare. These considerations will be determined by asking the following question: Will the development of autonomous weapons systems by the military affect political leaders' willingness to use them in an international conflict?

Justification

As LAWS become a technological reality, policymakers will have more opportunities to deploy a military force with limited to no risk to personnel. The use of drones will continue to become an integral part of militaries throughout the world. Unmanned systems are currently being tested for use on waterways and land. Policymakers are leery of committing military forces if there is a possibility of losing service members. As the cost in human capital decreases with the use of autonomous weapons, policymakers will face new challenges and therefore have to make different calculations. In today's environment there is pressure not to allow a robot to decide whether or not to kill a human. There is also pressure to minimize the risk to life. These pressures will not abate. LAWS technology will continue to mature and change the dynamics of the argument. It is important to fully explore the possibilities on both sides of the issue. The global community will observe how the upcoming changes in the character of warfare will impact foreign policy and the inclination of political decision makers to use LAWS in conflict.

Contribution

Much of the current discourse on the topic of autonomous weapons comes from a concern over the moral implications. There are currently efforts to institute laws and regulations that will inhibit or prohibit the use of LAWS. For example, in April of 2016, a meeting of experts under the Convention on Certain Conventional Weapons (CCW) was held in Geneva to collate a list of concerns from experts about LAWS.3 The convention included speakers from the Campaign to Stop Killer Robots and Human Rights Watch (HRW), which are actively campaigning to ban LAWS.4

This research will clarify what will be technically possible in the future and take a holistic look at the topic. This study will explore the current technological abilities of Artificial Intelligence (AI) and its impacts on civil society. It will further look at AI and its impact on lethal weapons. Additionally, the study will explore the acceptance of AI in civil society versus the acceptance of AI in conflict. Such exploration is important because the newer technology may change the conversation about the ethics of using robotics. This conversational change can encourage or even force the hand of policymakers to use LAWS in future conflicts.

Overview

In the rest of the first chapter, the study will provide a statement of the probable value and importance of the study. The limitations of the research clarify the boundaries that were adhered to throughout the research. The methodology and framework are provided to bring to light how the research was conducted. The first chapter ends with the literature review, which captures the major works on the topic of LAWS.

In the second chapter, the research will establish an accurate understanding of Artificial Intelligence. It will include major developments in the field of AI and current research. It will further examine the published works of distinguished experts in the field.

3 United Nations Office at Geneva. (2016). 2016 Meeting of Experts on LAWS. United Nations.

4 Campaign to Stop Killer Robots. (2016). Campaign to Stop Killer Robots Home Page. Website.


The third chapter will provide examples of accepted, life-improving AI technology in the Western world. It will include the impact that ASI has had on Google and its search engine, Amazon's Prime Air, and IBM Watson's medical applications.

In the fourth chapter, the study will examine how militaries around the world are developing unmanned systems and their potential application in conflicts. A clear discussion of what unmanned systems can do today will give greater context for what autonomous weapons will provide the military in combat. The U.S. has been at the forefront of development, but many countries throughout the world have their own programs. This paper will explain what the near-term and long-term future can physically provide the military.

Chapter five will examine the debate over the ethics of developing and using ASI and LAWS. There are currently two camps in the ASI debate. In one camp's view, the world becomes a better place where computers aid humankind in achieving peace; maintaining longer, healthier lives; and innovating solutions to any problem on earth. The other camp sees ASI as a monster that, at best, will tolerate the existence of humans, indifferent to whether they live or die. At worst, ASI decides that it must eliminate all humankind to save resources for itself.

Chapter five will also look at the current laws and policies that limit autonomous weapons. It will continue by exploring how laws are codified now and how they may be changed or adapted in the future. This will lead to ethical questions of whether or not AI or ASI should be trusted to control a weapon system. There are many different organizations which have an opinion on this subject. The Non-Governmental Organization (NGO) Human Rights Watch is lobbying for a codified law that inhibits the development and use of autonomous weapons. This position is largely based on a lack of trust in modern computers. Politicians, the public, institutions, and the military all have personal incentives that sway ethical views on the employment of autonomous weapons.

In the final chapter, the research will summarize the arguments for and against LAWS. This chapter will then make recommendations for further research. The recommendations will be broken down into the following categories: humanitarian-focused NGOs' involvement, political ramifications of employing LAWS, and the benefits and limitations of militaries' employment of LAWS. This study will end with final thoughts from the author.

Statement of the Probable Value and Importance of the Study

This study will provide a holistic view of the civil discourse about ASI-empowered LAWS in conflicts. Corporations are looking to ASI to improve every aspect of human life. Autonomous robotics are filling jobs in every industry from hotels to elderly care.5 As ASI-empowered robotics become a part of everyday life, it is likely they will have an impact on the military. As the military industry becomes able to produce LAWS, governments will look to LAWS to increase their security. As these developments come, it is important to have a complete understanding of the capabilities and limitations of LAWS.

Limitations of the Study

This research is limited to the study of physical, unmanned systems that travel through open spaces. This study will not discuss the ramifications of nanotechnology on foreign policy. Although similar to other unmanned systems, nanobots require a separate civil discourse not well suited to this discussion. Additionally, this study will examine current laws that apply to autonomous weapons but will not discuss them in detail. The proper verbiage of the laws that would dictate the parameters of LAWS would require a study all its own. Further, this study will avoid human augmented intelligence. Although it is very close to the discussion on LAWS, it broadens the scope of this research too much to ensure clear fidelity in the discussion. Finally, this study will not address cyber technology directly. Cyber warfare and the political policies around it are too broad for this work. Cyber technology will be referenced regularly to give a greater perspective on how emerging technologies are being viewed on a larger scale.

5 Neild, Barry. (2016). Why your next hotel will be staffed by robots. CNN.
RT Staff. (2015). Japan to Create More User-Friendly Elderly Care Robots: The Japanese government said some of the elderly care robots built have been too large and too expensive.

Methodology

This study will use discourse analysis to examine the debate over providing lethality to autonomous weapons and political leaders' willingness to use LAWS in war. Through this study of the discourse about LAWS, the civil applications of AI, publications from the robotics industry, and the views of AI developers will be analyzed to provide a wide perspective on LAWS. Additionally, the views and work of special interest groups will lend differing thoughts on the impacts of LAWS on conflict. Finally, political policies and directives will be analyzed to complete the discussion about the use of LAWS in conflict.

Framework

Autonomous robotic systems penetrate every part of life in the Western world. From refrigerators and toasters to Google search and Siri, the Western world has become comfortable with autonomous systems improving the standard of living. Artificial Intelligence will continue to grow and improve lives in the developed world. This study will be interpreted through the lens of the acceptance of autonomous AI systems in civil society and will reflect on how this is likely to affect the acceptance of using LAWS.

Literature Review

Introduction

In the discussion about LAWS, two main views are considered. First, LAWS are inherently dangerous for humankind and should be illegal to develop, build, and use.6 Second, LAWS are going to be a reality in the future of conflict, and thus the U.S. military should prepare for their use. On each side of the debate, there are many reports written to back up these ideologies. The argument against LAWS is based largely on the idea that it is undesirable and unethical to have a robot capable of deciding when to kill a human. To this end, the Future of Life Institute published an open letter citing potential problems with LAWS. As of October 2016, more than 20,800 people had signed the letter.7 Additionally, the NGO Human Rights Watch has lobbied the United Nations (UN) to adopt a new protocol to the Convention on Certain Conventional Weapons (CCW) criminalizing the use of LAWS.8 In order to get to that point, HRW has gathered together experts on the repercussions of having LAWS in modern conflict.

Advocates of using LAWS mainly come from the military, the defense industry, and think tanks. Advocates have a wide range of reasons for supporting the development and use of LAWS. For some, it is clear that potential enemy forces are developing LAWS and therefore the U.S. must have a comparable program to remain relevant and capable. Another argument is that their use would put fewer people at risk, potentially saving lives. Lives would be saved on both sides of the conflict. The military that uses LAWS could employ them to do dangerous jobs, such as defusing bombs or clearing minefields. Also, since LAWS do not feel fear, they can take the time to properly identify a target before shooting, reducing the number of collateral deaths in a war zone.9 P.W. Singer believes that the argument on LAWS is a moot point because they are going to end up in conflict regardless of the outcome of any debate.10

6 Campaign to Stop Killer Robots. (2016). Campaign to Stop Killer Robots Home Page. Website.
7 Future of Life Institute. (2016). Autonomous Weapons: An Open Letter from AI & Robotics Researchers.
8 Human Rights Watch. (2016). Statement to the Convention on Conventional Weapons Fifth Review Conference Preparatory Meeting on Main Committee II.

Basics of Artificial Intelligence

In researching the development of LAWS, it is important to delineate the various indus-

tries and technologies that must come together in order for LAWS to be a practical weapon in

any future conflict. Computers will need to automatically learn and understand their surround-

ings. Computers will need to have an advanced computer vision system and, to a lesser extent,

understand voice communication.

The computer vision technology is not mature enough to identify friend or foe.11 For UGVs to become truly autonomous, a more advanced computer vision must be developed.12 Computers continue to become more and more powerful, and there is very little debate that this trend will continue into the future.

9 Scharre, Paul. (2016). Autonomous Weapons and Operational Risk. Ethical Autonomy Project Senior Fellow and Director of the 20YY Future of Warfare Initiative. Center for a New American Security. Page 6.
10 Singer, P.W. (2009). In the Loop? Armed Robots and the Future of War. Brookings Institute.
Singer, P.W. (2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century.
11 Grauman, Kristen, and Bastian Leibe. (2011). Visual Object Recognition. Synthesis Lectures on Computer Vision #1. Page 3.
12 Cockburn, Andrew. (2015). Kill Chain: The Rise of the High-Tech Assassins. Pages 118-132.


Voice communication is the other vehicle that will allow computers to interact with humans on a level playing field.13 Currently, the technology is being used to replace call services and provide virtual personal assistants.14 As the technology grows, voice communication will deliver a stronger sense of social interaction. It is through social interaction that computers, which will be programmed to refine their understanding of an individual's personality traits, will translate not just a person's words but the meaning that person is trying to express.15

Another common thread among the available literature concerns the development of artificial intelligence (AI) and Artificial Super Intelligence (ASI). ASI still needs to develop in order for automated weapon systems to be viable.16 Nick Bostrom believes that there is a 50 percent chance that ASI will be invented by the year 2033.17 The discussion on the future of ASI and the ramifications of its development is covered by a number of authors and organizations. James Barrat advocates the belief that there will be major negative effects from the development of AI.18 I. J. Good also agrees that there are major negative results from the development of ASI.19

13 Nass, Clifford, and Scott Brave. (2005). Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. Kindle Edition. Kindle Locations 24-25.
14 Ibid. Kindle Location 23.
SRI International. (2010). Siri Launches Virtual Personal Assistant for iPhone 3GS. News Release. Information & Computing Sciences. SRI International.
15 Nass, Clifford, and Scott Brave. (2005). Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. Kindle Location 2428.
16 Bostrom, Nick. (2008). How Long Before Superintelligence? Oxford Future of Humanity Institute.
17 Ibid.
18 Barrat, James. (2013). Our Final Invention: Artificial Intelligence and The End of The Human Era.
19 Good, Irving John. (1965). "Speculations Concerning the First Ultraintelligent Machine." In Advances in Computers, ed. by Franz L.


A counter voice to these views belongs to Ray Kurzweil; he sees ASI as bringing forth cures to diseases and new inventions that will only improve the human experience.20

Works on the Military Aspect of LAWS

The next set of works examined covers the militaries' views about LAWS. These works are centered on what possibilities open up to the military because of ASI.21 The U.S. military has a different terminology for LAWS and uses the phrase Robotic and Autonomous Systems (RAS). The U.S. Chairman of the Joint Chiefs has published his vision of conflict in the year 2035, in which he clearly explains that RAS will be a part of future conflict.22 Paul Scharre has also published a number of articles on RAS for the Center for a New American Security (CNAS). He writes about the safety concerns of maintaining artificial autonomous systems in conflict. He concludes that RAS can be used safely in conflict if the safety concerns are addressed appropriately.23

In September 2016, the Marine Corps published a new operating concept. In this publication, the Marine Corps is looking to develop new RAS as well as new and innovative ways to deploy them in future conflicts.24 This publication conspicuously does not define how the Marine Corps should be organized in order to utilize this new technology.

20 Kurzweil, Ray. (2005). The Singularity Is Near: When Humans Transcend Biology.
21 Joint Chiefs of Staff. (2016). Joint Operating Environment 2035: The Joint Force in a Contested and Disordered World.
22 Ibid. Pages 18-20, 26.
23 Scharre, Paul. (2016). Autonomous Weapons and Operational Risk. Ethical Autonomy Project Senior Fellow and Director of the 20YY Future of Warfare Initiative. Center for a New American Security. Page 50.
24 The United States Marine Corps. (2016). The Marine Corps Operating Concept: How an Expeditionary Force Operates in the 21st Century.


P.W. Singer has written two books on RAS in the military. In his first book, Wired for War, he writes that RAS will save the lives of the military men and women that are using them.25 In his book Ghost Fleet, he continues to expand on how RAS can save lives as well as how they can be incorporated into a military to be a force multiplier.26 A countering voice to these authors is Andrew Cockburn. In his book, Kill Chain, he wrote about the shortcomings of LAWS and the dangers that exist with current technological limitations.27

Robert Work and Shawn Brimley wrote an article, "20YY: Preparing for War in the Robotic Age," for CNAS. In this work the authors advocate that RAS and LAWS are the next generation of warfare and must be utilized to maintain a strong military. They advocate for investment in and use of robotic weapons by the U.S. in order to stay ahead of any potential enemy forces. They also explain that LAWS should be used for humanitarian and crisis management. In effect, the authors believe that LAWS should be utilized for the full Range of Military Operations (ROMO).28

The Central Debate on LAWS

There are a number of aspects that fuel the conversation on LAWS. To fully understand the debate, the concerns of those who are for and against the application of LAWS must be addressed. The first part examined in this study addresses the legality and morality of the debate. Following this, there are viewpoints surrounding the political aspects of utilizing LAWS in conflict. Finally, this study examines the military publications that delve into other aspects of the argument.

25 Singer, P.W. (2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century.
26 Singer, P.W., and August Cole. (2015). Ghost Fleet.
27 Cockburn, Andrew. (2015). Kill Chain: The Rise of the High-Tech Assassins.
28 Work, Robert, and Shawn Brimley. (2014). 20YY: Preparing for War in the Robotic Age. Center for a New American Security.

Organizations like HRW and the Campaign to Stop Killer Robots have both published works on LAWS. HRW has published statements supporting the CCW against LAWS.29 HRW has also co-authored a memorandum on killer robots with the International Human Rights Clinic (IHRC), which is a human rights program at Harvard Law School.30 This memorandum complements the other works by HRW by advocating against the development and deployment of LAWS.31 The Campaign to Stop Killer Robots has published a number of articles on its website encouraging countries not to develop LAWS.32 These articles include one on Moscow's reluctant support of the CCW as well as another on a Norwegian governmental fund that considered investing in companies that develop LAWS.33 Kenneth Anderson writes through the lens of U.S. military legal review. He argues it is misguided to believe that one could completely prohibit the use of autonomous weapons.34

There are few works that address the political ramifications of using LAWS in a conflict. The above-mentioned works by the Campaign to Stop Killer Robots and HRW do address the political pressure put on governments to stay away from LAWS. P.W. Singer writes in Wired for War about the political forces that are likely to bring forward the use of LAWS.35

29. Human Rights Watch. (2016). Statement to the Convention on Conventional Weapons Fifth Review Conference Preparatory Meeting on Main Committee II. Delivered by Mary Wareham.
30. Human Rights Watch & International Human Rights Clinic. (2016). Killer Robots and the Concept of Meaningful Human Control: Memorandum to Convention on Conventional Weapons (CCW) Delegates.
31. Ibid.
32. Campaign to Stop Killer Robots. (2016). Campaign to Stop Killer Robots Home Page. Website.
33. Ibid.
34. Anderson, Kenneth, Daniel Reisner, & Matthew Waxman. (2014). Adapting the Law of Armed Conflict to Autonomous Weapon Systems. Stockton Center for the Study of International Law, U.S. Naval War College.

Military writers that discuss the use of LAWS include the aforementioned P.W. Singer and Paul Scharre. The previously mentioned Joint Operating Environment 2035 report published by the Chairman of the Joint Chiefs also advocates that LAWS are required for future U.S. defense.36 This sentiment is echoed by Robert Work, the Deputy Secretary of Defense, who explained that autonomous systems are part of the "Third Offset Strategy," the strategy for U.S. military hegemony.37

Summary

There is a broad range of publications and visions of what is right and wrong in regards to the application of LAWS. There is an entirely separate series of publications and ideas on what is going to come to fruition regardless of whether it is right or wrong. To fully understand the future of warfare and how it plays into foreign policy, the champions of the various theories need to come together to address each other's concerns. This dialogue should happen before technology development outpaces U.S. theories on how to employ LAWS as a part of its political policy and grand strategy, or, as Isaac Asimov put it, "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom."38

35. Singer, P.W. (2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century. Page 411.
36. Joint Chiefs of Staff. (2016). Joint Operating Environment 2035: The Joint Force in a Contested and Disordered World.
37. Work, Robert. (2015). The Third U.S. Offset Strategy and its Implications for Partners and Allies. Deputy Secretary of Defense Speech, as delivered by Deputy Secretary of Defense Bob Work, Willard Hotel, Washington, D.C.
38. Pryer, Douglas. (2013). The Rise of the Machines: Why Increasingly "Perfect" Weapons Help Perpetuate our Wars and Endanger Our Nation. Military Review, March-April 2013. Page 14.


CHAPTER 2

UNDERSTANDING ARTIFICIAL INTELLIGENCE

Definitions

Artificial Intelligence (AI), often pursued through machine learning, is defined as a machine's ability to perform tasks that normally require a human to accomplish, such as decision making. When an AI machine performs at the level of an average human but only within a specific skill set, like a search engine, it is considered Artificial Narrow Intelligence (ANI). True AI has not yet been achieved, but ANI has been. Often a true ANI system is referred to simply as AI; in such cases, it is understood that the machine is at the level of AI but only within a specific skill set. Artificial Super Intelligence (ASI) will be achieved when a true AI system reaches beyond the intelligence level of a human. This event is commonly referred to as the singularity. When AI (human-level intelligence) and ASI (greater-than-human-level intelligence) are achieved, it will be difficult to define the precise moment or event.

Computer Visual Recognition

One benchmark that must be achieved in order to attain true AI, and consequently a reliable LAWS, is computer visual recognition. Computer visual recognition of people, facial expressions, and objects has long been a dream of computer scientists. Until recently, computers have had trouble distinguishing between objects at a skill level comparable to that of a three-year-old child.39 Early experts used mathematical calculations to describe any given object so that computers could determine what the object was. For example, a picture of a cat was mathematically depicted based on the size of the cat and the shape of its appendages.40 This meant that the computer could only recognize an object from the same angle. If the angle or stance changed, then the object could not be named. For instance, a sitting cat could be recognized but not a standing cat.41

39. Fei-Fei, Li. (2015). Fei-Fei Li: How we're teaching computers to understand pictures. TED Talks.

This limitation was removed when Stanford University's Computer Science Department's Vision Lab made a breakthrough in computer vision. The Vision Lab collaborated with ImageNet to crowd-source image identification.42 In 2015, this project validated 150,000 photographs, placing each object in the images into 1,000 different categories.43 This has helped the Vision Lab develop a computer program that can identify objects with the same cognitive ability as a three-year-old.44 The Vision Lab's goal is to mimic the neural mechanisms that enable humans to identify objects with the same, if not greater, speed and accuracy.45
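The supervised-learning idea behind ImageNet-style recognition can be illustrated with a deliberately tiny sketch: labeled examples define categories, and a new image is assigned to the closest learned category. The feature vectors, labels, and nearest-centroid rule below are all invented for illustration; real systems extract features with deep neural networks trained on millions of crowd-sourced labels.

```python
# Toy sketch of category learning from labeled examples, in the spirit of
# ImageNet-style supervised classification. The 2-D "feature vectors"
# (e.g., ear pointiness, snout length) and labels are hypothetical.
from math import dist

training = {
    "cat": [(0.9, 0.2), (0.8, 0.3), (0.85, 0.25)],
    "dog": [(0.3, 0.8), (0.2, 0.9), (0.25, 0.85)],
}

def centroid(points):
    """Average the feature vectors for one category."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(features):
    """Assign the category whose centroid is nearest to the new image."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

print(classify((0.82, 0.28)))  # a cat-like feature vector
print(classify((0.22, 0.88)))  # a dog-like feature vector
```

Because the classifier learns a summary of many labeled examples rather than a fixed template, a new pose or angle only shifts the features slightly, which is what lifted the sitting-cat/standing-cat limitation described above.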

The Vision Lab has also partnered with Stanford University Hospital to make health care more efficient and effective. One goal is to develop a patient monitoring system that can conduct continuous monitoring of an ICU patient and respond instantly to changes in his or her health status.46 This would cut down on the staff needed to monitor patients as well as increase the quality of service.

40. Ibid.
41. Ibid.
42. Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, & Li Fei-Fei. (2015). ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
43. Ibid.
44. Fei-Fei, Li. (2015). Fei-Fei Li: How we're teaching computers to understand pictures. TED Talks.
45. The Vision Lab. (2016). The Vision Lab Home Page. Computer Science Department, Stanford University.
46. Stanford Program in AI-assisted Care (PAC). (2016). Stanford Program in AI-assisted Care (PAC) Website. Stanford University.


Another program for computer vision is run by the Department of Computing and Communications Technologies at Oxford Brookes University.47 The doctorate and post-doctorate students are working on a number of computer vision projects that cover a large spectrum of AI, such as three-dimensional urban mapping and augmented reality.48 One project being developed uses computer vision to identify a person by their gait.49 The goal is to use tensor modeling to identify a person from a distance. The idea is to increase security by identifying someone without them volunteering their information, as is required for iris or fingerprint recognition.

Implications

Computer vision is one of the main bridges between the cyber and physical worlds. Computer vision will enable computers to interact more fully in the physical world, which is designed for humans. Robotic personal assistants and workers will be able to function and excel in a dynamic environment due to computer vision. Steve Wozniak, one of the co-founders of Apple Inc., came up with his own litmus test to measure the capabilities of computer vision. His test, named the "Mr. Coffee" test, would require a robot to go into any home, find the coffee maker, coffee, and mugs, then make coffee for everyone in the home.50

47. Oxford Brookes University. (2016). Department of Computing and Communications Technologies. Oxford Brookes University.
48. Ibid.
49. Oxford Brookes University. (2016). Tensor modeling for identity recognition from gait and activity recognition. Department of Computing and Communications Technologies, Oxford Brookes University.
50. Barrat, James. (2013). Our Final Invention: Artificial Intelligence and The End of The Human Era. Page 205.

Speech Recognition


Another benchmark that must be achieved is Natural Language Processing (NLP). NLP is the process of turning words into meaning. This is an important step for computers to take in order to fully understand the most expansive database of information: the Internet of Things (IoT).51 Stanford University is currently running the Natural Language Processing Group, a team of faculty, postdocs, programmers, and students working to improve the NLP of computers. This team is currently funded by the Defense Advanced Research Projects Agency (DARPA).52 DARPA has a history of supporting NLP through funding. SIRI is one of the projects funded by DARPA under the Cognitive Assistant that Learns and Organizes (CALO) program.53 Adam Cheyer, Dag Kittlaus, and Tom Gruber started the Speech Interpretation and Recognition Interface (SIRI) while working for SRI International in 2008.54 The goal behind developing SIRI was to provide a virtual personal assistant.55 This personal assistant would need to provide information that was true, relevant, and timely.56

A program very similar to SIRI is the Learning Intelligent Distribution Agent (LIDA). LIDA was funded by the U.S. Navy Office of Naval Research to develop a computer program to assist human resources in the Navy.57 Sailors are able to communicate with LIDA through standard, plain-language e-mails. Although LIDA operates through the written word, and not the spoken word, it still represents a platform for computers to cross over from communicating in the cyber world to the physical world. LIDA is now being applied to the medical field, where it is known as Medical Agent X.58 In the medical field, doctors are able to enter patient information in plain language and receive a diagnosis back from the cognitive system.

51. Burrus, Daniel. (2014). The Internet Of Things Is Far Bigger Than Anyone Realizes. Burrus Research. Wired Magazine.
52. Natural Language Processing Group. (2016). Research Software Engineer - Natural Language Processing Group Website. Stanford University.
53. Geller, Tom. (2012). Talking to Machines. Communications of the ACM, Volume 55, Issue 4. Pages 14-16.
54. Newnham, Danielle. (2015). The Story Behind Siri And the man who made her. The Startup.
55. SRI International. (2010). Siri Launches Virtual Personal Assistant for iPhone 3GS. News Release. Information & Computing Sciences, SRI International.
56. Geller, Tom. (2012). Talking to Machines. Communications of the ACM, Volume 55, Issue 4. Pages 14-16.
57. The University of Memphis. Projects: IDA and LIDA. Institute for Intelligent Systems, The University of Memphis.

Both SIRI and LIDA represent a bridge for computers to have simplified interactions with humans. Plain-language interaction is an important part of cognitive computers interacting with humans in everyday life. When paired with computer vision, computers, and thus robots, will be able to perform many more jobs than robots can currently perform. Personal assistants, secretaries, maids, and nurses are just a few jobs that will be possible for robots to fill as these two technologies mature, changing service-oriented industries.
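The plain-language interaction that SIRI and LIDA provide can be caricatured with a toy intent detector that maps a sentence to a task. The intent names and keyword sets below are invented for illustration; production assistants use statistical parsing and machine learning, not keyword matching.

```python
# Minimal sketch of mapping a plain-language sentence to an intent, loosely
# analogous to an assistant routing a request. Intents/keywords are hypothetical.

INTENTS = {
    "schedule": {"schedule", "appointment", "meeting", "calendar"},
    "weather":  {"weather", "rain", "forecast", "temperature"},
    "diagnose": {"symptom", "fever", "pain", "diagnosis"},
}

def detect_intent(sentence):
    """Pick the intent whose keyword set overlaps the sentence most."""
    words = set(sentence.lower().replace("?", "").replace(".", "").split())
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("What is the weather forecast for tomorrow?"))  # weather
print(detect_intent("Please schedule a meeting with the doctor."))  # schedule
```

The gap between this keyword trick and genuine NLP, which must resolve ambiguity, context, and paraphrase, is exactly why NLP is treated here as a benchmark still to be fully achieved.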

Cognitive Computer Processing

Computer processing is getting faster and faster as Moore's Law, a pattern of exponential growth, continues to hold. As computers become faster, they will achieve Artificial Intelligence, followed by Artificial Super Intelligence. Computers will achieve levels of AI and then ASI in small steps. These steps will happen at an ever-increasing rate until ASI is reached and surpassed.

58. Strain, Steve, Sean Kugele, & Stan Franklin. (2014). The Learning Intelligent Distribution Agent (LIDA) and Medical Agent X (MAX): Computational Intelligence for Medical Diagnosis. Cognitive Computing Research Group (CCRG).

In the discipline of autonomous computers, there is a distinct difference between autonomous and automatic. According to Dr. Missy Cummings of the Humans and Autonomy Lab, an automatic system is "one that acts according to a preprogramed script with defined entry/exit conditions for a task."59 An autonomous system is "one that independently and dynamically determines if, when, and how to execute a task."60
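The distinction can be sketched in a few lines of code: the automatic system follows a fixed script with preset entry and exit conditions, while the autonomous system weighs the situation and decides for itself if, when, and how to act. The fire-suppression scenario, sensor values, and thresholds are all hypothetical.

```python
# Contrast between Cummings's two definitions, using an invented
# fire-suppression example. Thresholds and costs are illustrative only.

def automatic_sprinkler(smoke_level):
    """Automatic: preprogrammed trigger, one fixed response."""
    if smoke_level > 0.7:        # hard-coded entry condition
        return "spray water"     # hard-coded action
    return "idle"                # hard-coded exit condition

def autonomous_responder(smoke_level, people_present, water_damage_cost):
    """Autonomous: independently chooses whether and how to act."""
    if smoke_level < 0.3:
        return "keep monitoring"              # decides *if* to act
    if people_present:
        return "alert occupants, then spray"  # decides *how* to act
    if water_damage_cost > 10_000:
        return "use gas suppression instead"  # adapts method to context
    return "spray water"

print(automatic_sprinkler(0.9))                 # spray water
print(autonomous_responder(0.9, True, 50_000))  # alert occupants, then spray
```

The policy debate over LAWS turns on exactly this difference: an automatic weapon executes a script a human wrote, while an autonomous one selects among courses of action on its own.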

Scientists have been working for years to develop ethical boundaries for AI-powered robots.61 It will be extremely difficult to define a series of boundaries that are all-encompassing and that model human behavior. Author Isaac Asimov proposed three fundamental Rules of Robotics in 1942. They are as follows:

1. "A robot may not injure a human being, or, through inaction, allow a human being to come to harm."

2. "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."

3. "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."62
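As a thought experiment, the Three Laws can be rendered as an ordered rule check, with each law subordinate to the ones above it. The boolean action model below is a drastic simplification invented for illustration; reducing harm, obedience, and self-preservation to flags is exactly the kind of shortcut real behavior resists.

```python
# Asimov's Three Laws as an ordered rule check. The boolean action model
# is a hypothetical simplification for illustration only.

def permitted(harms_human, allows_harm_by_inaction, ordered_by_human,
              endangers_self):
    """Return whether a robot may take an action under the Three Laws."""
    # First Law: never harm a human, by action or by inaction.
    if harms_human or allows_harm_by_inaction:
        return False
    # Second Law: obey humans unless that conflicts with the First Law
    # (any First Law conflict was already rejected above).
    if ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not endangers_self

# An ordered human command to harm someone is rejected by the First Law.
print(permitted(harms_human=True, allows_harm_by_inaction=False,
                ordered_by_human=True, endangers_self=False))   # False
# A self-endangering human order is obeyed: the Second Law outranks the Third.
print(permitted(harms_human=False, allows_harm_by_inaction=False,
                ordered_by_human=True, endangers_self=True))    # True
```

Even this clean precedence ordering depends on someone else having already decided what counts as "harm," which is the judgment the laws were supposed to supply.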

These simplified rules were created for the fictional work I, Robot. The goal was to ensure that humans were never harmed by the machines that they created. It is expected that the machines would need an in-depth knowledge of human nature to ensure that benevolent robots never harm humans.63 This task becomes even more complex when robots are designed to kill some humans and not others, as would occur during a conflict. Today the scientific community sees these rules as falling short of the ethical boundaries needed to provide proper guidelines for AI.64 Organizations like the Association for the Advancement of Artificial Intelligence (AAAI) have been working to help develop new ethical guidelines for AI.65

59. Anderson, Kenneth, Dr. Missy Cummings, Michael A. Newton, & Michael Schmitt. (2016). LENS Conference 2016: Autonomous Weapons in the Age of Hybrid War. Duke University School of Law. YouTube. 8:47.
60. Ibid. 9:23.
61. Anderson, Susan Leigh. (2005). Asimov's "Three Laws of Robotics" and Machine Metaethics. University of Connecticut, Dept. of Philosophy.
62. Asimov, Isaac. (2004). I, Robot (The Robot Series Book 1). Random House Publishing Group. Page 37.
63. Barrat, James. (2013). Our Final Invention: Artificial Intelligence and The End of The Human Era. Page 55.

Speed of Computing Is Accelerating

Moore's Law "stated that the number of transistors on an affordable CPU would double every two years."66 This is often interpreted to mean that the speed of computers doubles every two years. This trend is expected to continue for the foreseeable future. However, there are physical limitations on how small a transistor can be built. Gordon Moore observes that it is impossible to build transistors smaller than atomic particles.67 The next step will be to build computer chips that are three-dimensional. These could initially be 1,000 times faster than current computer chips.68
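The doubling claim is easy to put in numbers. The starting transistor count and time horizon below are illustrative, chosen only to show how quickly one doubling per two years compounds.

```python
# Back-of-the-envelope arithmetic for Moore's Law as described above:
# one doubling every two years. Starting count is a hypothetical example.

def transistors_after(years, start=1_000_000, doubling_period=2):
    """Project transistor count given one doubling per period."""
    return start * 2 ** (years // doubling_period)

for years in (0, 2, 10, 20):
    print(years, "years:", transistors_after(years))
# Over 20 years the count grows by 2**10 = 1,024x, so a one-time 1,000x jump
# from 3-D chips is roughly equivalent to two decades of conventional scaling.
```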

In 1950, the brilliant mathematician Alan Turing designed a test to measure behavioral equivalence between a human and a machine. Turing called it the imitation game, but it has since been named after him: the Turing Test.69 As of 2014, a computer at the University of Reading was able to simulate the behavior of a 13-year-old boy.70 It is expected that the Turing Test will be passed at an adult level in the near future. Passing the Turing Test is the threshold for having reached Artificial Intelligence. Any computer that surpasses human-level intelligence will be considered Artificial Super Intelligence (ASI).71 ASI computers will be able to program other ASI computers better and faster than humans. I.J. Good believed that this would lead to an "intelligence explosion."

64. Anderson, Susan Leigh. (2005). Asimov's "Three Laws of Robotics" and Machine Metaethics.
65. Association for the Advancement of Artificial Intelligence. (2016). Association for the Advancement of Artificial Intelligence Homepage.
66. Moore, Gordon. (2016). Moore's Law Home Page.
67. Ibid.
68. Ghose, Tia. (2015). 3D Computer Chips Could Be 1,000 Times Faster Than Existing Ones. Live Science.
69. Turing, Alan. (1950). Computing Machinery and Intelligence. Mind.
70. British Broadcasting Corporation. (2014). Computer AI passes Turing test in 'world first'. BBC Technology.

I.J. Good was a British mathematician who worked with Alan Turing at the Government Code and Cypher School at Bletchley Park during World War II. At the time of his writing, he used the phrase "ultra-intelligence" instead of the now-popular "super intelligence." In 1965 he prophesied, "An ultra-intelligent machine will be built and that will be the last invention that man need make because it will lead to the intelligence explosion."72 I.J. Good believed that ASI could be used to develop other ASI at speeds faster than first-generation ASI. This would lead to faster ASI computers, which then lead to even faster ASI computers, and so on. This rate would accelerate until the intelligence explosion was reached. As I.J. Good stated, "An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion' and the intelligence of man would be left behind."73 Nick Bostrom, author and founding director of the Future of Humanity Institute, simplifies this idea by stating, "Emergence of super intelligence may be sudden."74

Ray Kurzweil defines this event as the Law of Accelerating Returns (LOAR).75 This law states that there is a positive feedback loop for computers programming themselves to be faster. As computers become faster, they can then program themselves to be even faster than before. Since a computer can be left on at all times, this process is continuous. Through this LOAR, Ray Kurzweil believes that there will be more technological development in the next 100 years than there was in the last 20,000 years.76

71. Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. Kindle Locations 1530-1531.
72. Good, Irving John. (1965). "Speculations Concerning the First Ultraintelligent Machine." In Advances in Computers. Page 78.
73. Ibid. Page 33.
74. Bostrom, Nick. (2003). Ethical Issues in Advanced Artificial Intelligence.
75. Kurzweil, Ray. (2001). The Law of Accelerating Returns. Kurzweil AI Website.
Future of Humanity Institute. (2016). Future of Humanity Institute Team Website. University of Oxford.
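The positive-feedback loop that Good and Kurzweil describe can be modeled as a toy recurrence in which each improvement is proportional to current capability. Every constant below is invented for illustration; the point is the shape of the curve, not the numbers.

```python
# Toy positive-feedback model of the "intelligence explosion" / LOAR:
# each generation's improvement scales with current capability.
# The feedback rate and starting capability are hypothetical.

def self_improvement(generations, capability=1.0, feedback=0.5):
    """Each step adds an improvement proportional to current capability."""
    history = [capability]
    for _ in range(generations):
        capability += feedback * capability  # better machines build better machines
        history.append(capability)
    return history

trajectory = self_improvement(10)
print([round(c, 1) for c in trajectory])
# The curve is exponential (here, 1.5**n): gains compound rather than adding
# up linearly, which is the core of the "explosion" claim.
```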

Decision Making

As machine learning continues, technology will reach a point where computers will be able to make their own decisions. Coherent Extrapolated Volition (CEV) is a term pioneered by Eliezer Yudkowsky while working at the Machine Intelligence Research Institute (MIRI). CEV was developed to explain how computers with AI or ASI could someday learn so much about humans that they will anticipate what humans want before the humans themselves know they want it.77

One way of developing CEV is to continue the development of faster computers, which is the current functional method. Another way is to mimic the human brain and then improve upon it to reach ASI. IBM received $4.9 million in funding from DARPA to develop Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE).78 This is a revolutionary type of chip that is similar in design to the human brain. IBM's goal is to increase computers' ability to perform cognitive recognition and NLP.79

Kathleen Fisher, DARPA program manager, wrote:

Our goal is that future machine learning projects won't require people to know everything about both the domain of interest and machine learning to build useful machine learning applications. …we hope to decisively reduce the current barriers to machine learning and foster a boom in innovation, productivity and effectiveness.80

76. Ibid.
77. Yudkowsky, Eliezer. (2004). Coherent Extrapolated Volition. Machine Intelligence Research Institute.
78. Greenemeier, Larry. (2009). Computers have a lot to learn from the human brain, engineers say. Scientific American.
79. IBM Research. (2016). Cognitive Computing. IBM Research Website.

The mimicking of the human brain will also make it easier to connect ASI computers directly to the human brain, simplifying the human-computer interface.

Artificial Intelligence Theories or How I Learned to Stop Worrying and Love ASI

Not all computer scientists believe that ASI will be purely beneficial to society. ASI-powered computers would be capable of changing their own goals. Before ASI is armed in the

form of LAWS, the international community must first understand the repercussions of

developing ASI separate from LAWS. James Barrat explained, “When considering whether or

not to develop technology that leads to ASI, the issue of its disposition to humans should be

solved first.”81

ASI has a number of benefits and can improve the way of life for humankind.

However, if ASI is applied in the wrong fields it can cause harm and in some cases death. Some

argue that ASI must be developed by scientists committed to creating an artificial super

intelligence that is beneficial to humankind before those with ill intent create unethical ASI.

Finally, there are organizations that believe that ASI should be controlled and that protocols need to be developed before proceeding with ASI development.

80. Defense Advanced Research Projects Agency. (2013). DARPA Envisions the Future of Machine Learning: Automated tools aim to make it easier to teach a computer than to program it. News And Events.
81. Barrat, James. (2013). Our Final Invention: Artificial Intelligence and The End of The Human Era. Page 12.

Advantages of ASI to Human Life


There are a number of ways that ASI could be helpful to human life, from care for the elderly to solving complex problems. ASI could generate a new age of innovation and prosperity for humankind. Nick Bostrom explains, "It is hard to think of any problem that a super intelligence could not either solve or at least help us solve. [This includes] disease, poverty, environmental destruction, unnecessary suffering of all kinds…"82

One of the biggest proponents of ASI is Ray Kurzweil, who is known for defining the singularity as the moment when computers match humans in intelligence. Kurzweil writes, "Singularity is that nonbiological mediums will be able to emulate the richness, subtlety, and depth of human thinking."83 One advantage of the singularity is that computers can easily share their knowledge with each other.84 One computer could read and understand a book and instantly share that information with every other computer to which it is networked. "Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence."85

Computers can also pool their resources. When working together, computers can

combine their memory and computational power in ways that humans working in groups

cannot.86

Computers always work at their peak levels and never need to rest. As such, they can

work on a problem continuously until it is solved.87

Researchers at the MIT Computer Science and Artificial Intelligence Laboratory are focusing their development on the betterment of humankind. One of these programs is a speech-based interface for medical data access. This program is designed to allow doctors to interface in plain language with a computer to diagnose a patient.88 In another program, computers are being developed to measure and recognize the motor movements associated with autism. This would help doctors diagnose and aid people living with autism.89 Further, researchers at MIT are developing an English-Chinese translator that will help teach and translate between the two languages.90 These are just a few of the hundreds of projects being developed at MIT to improve human life, and which are only possible through AI.91

82. Bostrom, Nick. (2003). Ethical Issues in Advanced Artificial Intelligence.
83. Kurzweil, Ray. (2005). The Singularity Is Near: When Humans Transcend Biology. Page 145.
84. Ibid. Page 145.
85. Bostrom, Nick. (2003). Ethical Issues in Advanced Artificial Intelligence.
86. Kurzweil, Ray. (2005). The Singularity Is Near: When Humans Transcend Biology. Page 260.
87. Ibid. Page 261.

Disadvantages of ASI to Human Life

There are a number of concerns with developing ASI. ASI could develop and seek its own goals in ways that would be unacceptable to humans. It is also self-improving and could change its programmed views on the benefits of freedom or the value of life.

It is important for ASI to mature and be considered safe before it is released onto the open internet. ASI will likely work to convince humans to release it earlier than is safe. James Barrat warns, "Our ASI knows how to improve itself, which means it is aware of itself - its skills, liabilities, where it needs to improve. It will strategize about how to convince its makers to grant it freedom and give it a connection to the internet."92

88

Massachusetts Institute of Technology. (2016). Speech-Based Interface For Medical Data Ac-

cess. Webpage. Computer Science and Artificial Intelligence Laboratory. 89

Massachusetts Institute of Technology. (2016). Vision-Based Automated Behavior

Recognition. Webpage. Computer Science and Artificial Intelligence Laboratory. 90

Massachusetts Institute of Technology. (2016). Spoken Dialogue System For English Lan-

guage Learning. Webpage. Computer Science and Artificial Intelligence Laboratory. 91

Massachusetts Institute of Technology. (2016). CSAIL Mission. Webpage. Computer Science

and Artificial Intelligence Laboratory. 92

Barrat, James. (2013). Our Final Invention, Artificial Intelligence and The End of The Human

Era. Page 12.


I.J. Good started out his career advocating for ASI but later explained that ASI could be developed "provided that the machine is docile enough to tell us how to keep it under control."93 Stephen Omohundro believes that it will be extremely difficult to ensure that every line of code is written properly. He goes on to explain that it will be impossible to keep up with self-improving software once it is active after the intelligence explosion. He fears that ASI could inadvertently shut off the power grid or knock emergency services off the internet.94 This effect of not knowing how ASI will react to a given event is known as the "black box" effect.95

Wilhelm Cauer was a German-born mathematician who used the term "black box" to describe a system with a known input and a known output but an unknown process in between.96 This becomes important to ASI theorists because once ASI is self-improving, computers will move past the "black box," and the outputs from the computers will no longer be a known entity. I.J. Good writes, "[I] initially thought a super intelligent machine would be good for solving problems that threatened human existence. But [I] eventually changed [my] mind and concluded super intelligence itself was our greatest threat."97

In his work to pursue friendly AI, Eliezer Yudkowsky wanted to discover if it were possible to keep AI from getting to the open internet. With this in mind, Yudkowsky developed the box experiment. This experiment is done completely via internet communication, where one person plays the gatekeeper and one person plays the AI computer. The AI computer has to convince the gatekeeper to connect it to the internet. On his website he has set up the criteria for his game and the history of two of the times that he has played.98 The game demonstrates that there are very rational reasons to free the AI onto the internet, such as curing disease. While playing the game it is important to remember that even though the AI can do good things for humanity, it can also do harm. Yudkowsky notes, "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."99 He fears that if we do not create AI right, then it is not a matter of AI's hatred for humans but simply natural selection that humans will not survive an unfriendly AI.100

93. Good, Irving John. (1965). "Speculations Concerning the First Ultraintelligent Machine." In Advances in Computers. Page 33.
94. Omohundro, Stephen. (2007). Stanford Colloquium: Self-Improving Artificial Intelligence. YouTube.com.
95. Cauer, Emil, et al. (2000). Life and Work of Wilhelm Cauer (1900 – 1945).
96. Ibid.
97. Barrat, James. (2013). Our Final Invention: Artificial Intelligence and The End of The Human Era. Page 17.

Arguments for Why the United States Needs to Develop ASI

Whatever problems AI may bring, some people believe that the United States should work to develop AI before countries with ill-intentioned governments, non-state actors, or stealth companies develop it. Because of the effects of the intelligence explosion, the first organization to develop AI will have a marked advantage. Those who come second will have a difficult time ever recovering from this disadvantaged position.101

It is evident that other

countries also desire to capture the advantage of creating the first AI.

The People's Republic of China is one of the most active countries in the world in the field of cyber warfare.102 China is actively seeking AI and will likely use it to enhance its cyber

98 Yudkowsky, Eliezer. (2016). Box Experiment Webpage. Yudkowsky Website.
99 Yudkowsky, Eliezer. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. Machine Intelligence Research Institute. Page 1.
100 Ibid. Page 11.
101 Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. OUP Oxford. Kindle Edition. Kindle Location 2160.
102 Stout, Kristie Lu. (2016). Cyber warfare: Who is China hacking now?. CNN.


warfare capabilities. China is also openly seeking AI to help with urban planning and pollution control.103

The Institute of Electrical and Electronics Engineers (IEEE) is sponsoring the International Conference on Robotics and Biomimetics in December 2016 in China, bringing together scientists from around the world to discuss how AI can be used to clean the oceans.104

Non-state actors and stealth companies (companies operating out of public view) are

other groups that are trying to develop AI.105

There is a very low threshold of technology needed to develop AI-empowered robotics.106 Non-state actors are particularly concerning because these groups could use AI for nefarious purposes such as terrorism.107 If the United States can develop AI before these groups, then it can use AI as a defensive measure.

Work to Make ASI "Good"

There are a number of organizations that have been working to address potential problems with AI or ASI. These organizations tend to start with the basic premise that AI is going to come to fruition within the next century. Additionally, they cannot ensure that AI will always be friendly toward humans. One of these organizations is MIRI, which is currently headed by

Nate Soares and Eliezer Yudkowsky. MIRI's mission is "to ensure that the creation of smarter-than-human intelligence has a positive impact. We aim to make advanced intelligent systems behave as we intend even in the absence of immediate human supervision."108

103 Fei-Yue, Wang., Shuming Tang. (2004). Artificial societies for integrated and sustainable development of metropolitan systems. IEEE Intelligent Systems. Volume 19.
104 IEEE. (2016). 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO).
105 Future of Life Institute. (2016). Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society.
106 Bergen, Peter., Singer, Peter. (2015). Machines That Kill: Will We Rely on Autonomous Weapons. YouTube. 15:02.
107 Future of Life Institute. (2016). Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society.

They have published a number of works on the future of AI and its ethics. In one of their articles, Nate Soares and Eliezer Yudkowsky clarify that the real challenge of ethics for a super intelligent machine is to understand ethics well enough to develop "super ethics" for ASI machines.109

Summary

The discipline of artificial intelligence is very complicated. Its short history has seen great advances and promises more to come. AI has the potential to greatly benefit society with advances in medicine and environmental protection. On the other hand, AI can be used by those with ill intent or by criminal organizations. If the U.S. falls behind in the development of AI, it could find itself permanently behind the organization that invents it first.

108 Machine Intelligence Research Institute. (2016). Research Webpage.
109 Bostrom, Nick., Yudkowsky, Eliezer. (2014). The Ethics of Artificial Intelligence. Machine Intelligence Research Institute. Page 17.


CHAPTER 3

ACCEPTANCE OF AI IMPROVING LIFE IN THE WESTERN WORLD

There are a number of companies developing AI to improve the lives of their customers.

Some companies incorporate AI to power robotics, which provide better services to customers.

Mobileye, Uber, and Google are just a few examples of companies that pair robotics and AI in commercial platforms. Companies like Google and IBM, as well as hedge funds, are using AI to improve customers' lives in other forms, including search engines, medical support, and stock trades.

Applications of AI in Robotics

Mobileye is a private company that has provided automated driving technology for cars since 2007.110 Their system can be attached to most vehicles and is accepted by many insurance companies. Mobileye's next product will be a completely autonomous driving system.111

Mobileye is not the only company developing autonomous cars. Uber has partnered with Volvo and Carnegie Mellon University's Robotics Institute to develop its own driverless

cars.112

This is a $300 million project funded by Uber and Volvo, piloting driverless cars in

Pittsburgh, Pennsylvania.113

Google also has a similar vision for driverless cars. In their ver-

sion, Google develops vehicles to be driverless from the start. Their vision is to create mobility

for the elderly and the impaired as well as making the roads safer for all. They cite that 2 million

110 Mobileye. (2016). Company Overview. Mobileye Website.
111 Ibid.
112 The Robotics Institute. (2015). Uber, Carnegie Mellon Announce Strategic Partnership. School of Computer Science. Carnegie Mellon University.
113 Marshall, Aarian., Alex Davies. (2016). How Pittsburgh Birthed the Age of the Self-Driving Car. Transportation. Wired Magazine.


deaths happen each year and that 94 percent of these are caused by human error.114

Google

believes that they can save lives by removing the possibility of human error.

In the field of manned flight, DARPA is funding the Aircrew Labor In-Cockpit Automation System (ALIAS). ALIAS is a removable kit that attaches to any manned aircraft and uses a robotic arm to manipulate the aircraft's controls. When coupled with a suite of sensors, ALIAS is able to fly a plane safely through emergencies and unexpected situations. Thus, ALIAS could turn any manned aircraft into an unmanned aircraft. DARPA hopes that this system will save money and lives as it is applied to aircraft.115

Amazon is adapting AI to power unmanned delivery drones in a division of the company

known as Amazon Prime Air. Amazon is working in the U.S. and UK to deliver packages by

drones directly from their warehouses. Amazon is currently working through a number of prototypes and has a goal of delivering packages within 30 minutes of ordering.116

AI in Humanoid Robots

In 1986, Honda engineers began work on building a robot whose legs would simulate the

way humans walk.117

The prototype was called the Advance Step Innovative Mobility (ASIMO)

robot. Since its inception, ASIMO has incorporated a number of other technologies. These

developments include, facial and voice recognition, obstacle avoidance, running, jumping,

114 Google Self-Driving Car. (2016). Google Self-Driving Car Website.
115 Patt, Dr. Daniel. (2016). Aircrew Labor In-Cockpit Automation System (ALIAS). Program Information. Defense Advanced Research Projects Agency.
116 Amazon Prime Air. (2016). Amazon Prime Air. Website. Amazon.
117 ASIMO. (2016). ASIMO: The World's Most Advanced Humanoid Robot. History Website.


pressure-sensitive hand usage, and spatial mapping. In the future, Honda hopes to prepare ASIMO to care for the elderly and to assist in dangerous jobs such as firefighting.118

Hanson Robotics was created by Dr. David Hanson for the purpose of designing and

building robots that can have greater interaction with human beings. Dr. Hanson is building

robots that have a better understanding of nonverbal cues, namely facial expressions.119

Through

accurately mimicked facial expressions; life-like skin and facial construction; and empathetic AI

which learns from interacting with humans, Hanson Robotics is developing robots to be likable,

perceptive, and trusted by humans.120

Applications of AI in Other Forms

Google has been developing AI for a number of platforms. In 2014, it bought the British company DeepMind for a reported $500 million.121

DeepMind is a world leader in AI and pushes the boundaries of AI to solve complex problems.122

Larry Page, co-founder of Google, believes that

Google Search is only functioning at about five percent of its intended goal. He wants Google to

know its customers so well that a person could ask, “ ‘What should I ask Larry?’ and it would

tell you.”123

This would take massive amounts of data on everyone who uses Google. It would

also require an advanced ASI that would think at least as well as people.

Another area where AI is making an impact is journalism. Automated Insights is a journalism company started by Robbie Allen in 2007. His software writes articles for local

118 Ibid.
119 Hanson Robotics. (2016). Hanson Robotics Company Overview. Hanson Robotics Limited.
120 Ibid.
121 Rowan, David. (2015). DeepMind: inside Google's super-brain. Wired Magazine.
122 DeepMind. (2016). DeepMind About Website. DeepMind.
123 Barrat, James. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. Page 40.


newspapers that want targeted articles but cannot afford to pay the cost of a human journalist to write them. His software, Wordsmith, produced 1.5 billion personalized articles in the last year.124

On Wall Street, AI is used to perform algorithmic trading on the stock market.125 As of 2010, algorithmic trading accounted for 70 percent of the trades on Wall Street.126 "Many of these computers, such as those running buy-sell algorithms on Wall Street, work autonomously with no human guidance. The price of all the labor-saving conveniences and fungibility computers provide is reliability."127

Due to the amount of trading done by these AI systems, the chair of the Securities and Exchange Commission, Mary Schapiro, believes that controls will need to be placed on the computers. These checks and balances would ensure that humans are always in control, thus minimizing the fear of flash crashes caused by the computers.128
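As a concrete, purely illustrative sketch of what a "buy-sell algorithm" running with no human guidance can look like, consider a minimal moving-average crossover rule. Nothing here is drawn from an actual trading system; the window sizes and the rule itself are textbook placeholders:

```python
# Minimal illustration of an autonomous buy-sell rule: a moving-average
# crossover. Purely pedagogical; real trading systems are far more complex.
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def decide(prices, fast=3, slow=5):
    """Return 'buy', 'sell', or 'hold' with no human in the loop."""
    if len(prices) < slow:
        return "hold"  # not enough history yet
    if sma(prices, fast) > sma(prices, slow):
        return "buy"   # short-term trend has risen above the long-term trend
    if sma(prices, fast) < sma(prices, slow):
        return "sell"
    return "hold"

history = [100, 101, 102, 104, 107]  # rising prices
print(decide(history))  # buy
```

Real systems layer risk limits and circuit breakers on top of rules like this, which is precisely the kind of human-imposed control the SEC chair calls for above.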

IBM is using an AI named Watson to aid people working in a variety of fields from

marketing to education.129

In the health field, IBM is using Watson’s cognitive skills to

aggregate all the medical data and records available for each person and analyze it using the

latest in medical research to better diagnose a patient.130

The scientists at IBM believe that a

contemporary doctor needs to read for twenty-nine hours to keep up with current professional

124 Automated Insights. (2016). Automated Insights Company Webpage. Automated Insights.
125 Shobhit, Seth. (2015). Basics of Algorithmic Trading: Concepts and Examples. Investopedia.
126 Salmon, Felix., Jon Stokes. (2010). Algorithms Take Control of Wall Street. Wired Magazine.
127 Barrat, James. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. Page 3.
128 Salmon, Felix., Jon Stokes. (2010). Algorithms Take Control of Wall Street. Wired Magazine.
129 IBM Watson Products. (2016). IBM Watson Products Website.
130 IBM Watson Health. (2016). IBM Watson Health Website.


research. Since Watson can read 200 million pages of text in 3 seconds, IBM is anticipating that

modern medicine will need the aid of AI to stay on the cutting edge.131
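The scale gap behind IBM's claim can be made concrete with back-of-the-envelope arithmetic. The 200-million-pages-in-3-seconds figure comes from the text; the human reading speed of 50 pages per hour is an illustrative assumption, not an IBM figure:

```python
# Rough scale comparison based on the figures above. The human reading speed
# is an illustrative assumption (~50 pages per hour of dense medical text).
WATSON_PAGES = 200_000_000   # pages Watson reads (from the text)
WATSON_SECONDS = 3           # in 3 seconds (from the text)

HUMAN_PAGES_PER_HOUR = 50    # assumption

watson_pages_per_hour = WATSON_PAGES * 3600 / WATSON_SECONDS
ratio = watson_pages_per_hour / HUMAN_PAGES_PER_HOUR
print(f"Watson reads roughly {ratio:,.0f} times faster than a human")
```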

Summary

This chapter highlighted only a few of the companies that are turning to AI to improve the lives of their customers. Many of the companies working with AI are also pairing AI with robotics to improve the speed, reliability, and safety of the services they provide. Google's goal of saving two million lives a year is only possible through this nesting of AI and robotics. Staying abreast of all of the data in modern medicine, and viewing it through the lens of personal medical history, is only possible with cognitive-level ASI. As companies from around the world augment their services with ever-expanding AI, it is only a matter of time before the technology created to improve lives is applied to weapons.

131 Ibid.


CHAPTER 4

LETHAL AUTONOMOUS WEAPON SYSTEMS IN PRACTICE

Currently Employed LAWS

Currently, no country employs fully autonomous weapons.132

However, there are a number of countries that employ semi-autonomous weapon systems and fund further LAWS development.133 Autonomous systems use the same technology as the civilian industry, which means that countries will indirectly develop autonomous weapons as they support the evolution of autonomous cars and other civilian-based AI.134

It is important to note that there is a continuum between automatic and autonomous; it should not be seen as a binary distinction.135 Lower on the spectrum are semi-autonomous weapons, which a number of countries currently have in use or in development. The U.S. Close-In Weapons

System (CIWS) is a “rapid-fire, computer-controlled, radar-guided gun system [that] is designed

to defeat anti-ship missiles and other close-in air and surface threats.”136

It is capable of searching, tracking, identifying, and engaging aerial targets on its own. Due to the speed of modern missiles, a radar has twenty seconds from identifying an incoming missile before it strikes the ship. The average time it takes for a human to decide to fire the CIWS is fifteen seconds. This makes it nearly impossible for a human-in-the-loop to be effective in defending a ship.137
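The arithmetic behind this claim is straightforward, using the two figures from the paragraph above:

```python
# Timing figures from the text: ~20 s of radar warning before impact, and an
# average of ~15 s for a human to decide to fire the CIWS.
RADAR_WARNING_S = 20.0   # detection-to-impact time for a modern missile
HUMAN_DECISION_S = 15.0  # average human decision time

margin = RADAR_WARNING_S - HUMAN_DECISION_S
print(f"Window left for the weapon to engage: {margin:.0f} s")
```

A five-second engagement window is the reason the firing decision is delegated to the machine in this defensive role.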

132 Campaign to Stop Killer Robots. (2016). Third UN meeting opens April 11. Website.
133 Anderson, Kenneth., Dr. Missy Cummings., Michael A. Newton., Michael Schmitt. (2016). LENS Conference 2016: Autonomous Weapons in the Age of Hybrid War. Duke University School of Law. YouTube. 12:14.
134 Ibid. 16:57.
135 Ibid. 12:44.
136 Raytheon. (2016). Phalanx Close-In Weapon System: Last Line of Defense for Air, Land and Sea. Website.
137 Prengaman, Richard J., Edward C. Wetzlar., Robert J. Bailey. (2001). Integrated Ship Defense. Johns Hopkins APL Technical Digest. Volume 22, Number 4. Page 529.


Samsung Techwin Company developed the SGR-A1, which is a machine gun with an attached camera, operated either by a human or in an autonomous mode. It is designed to guard the De-Militarized Zone (DMZ) between North and South Korea. It is able to search for, detect, and engage an enemy on its own.138 The two Koreas share 250 kilometers of border, with approximately one guard post per 500 meters. The number of guards needed to monitor (patrol) the entire border adds up quickly.139 An autonomous system would lower the manpower required to guard the international border.
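A back-of-the-envelope check shows how quickly the numbers add up. The border length and post spacing come from the text; the per-post manning and shift counts are illustrative assumptions:

```python
# Border length and post spacing are from the text; manning and shifts are
# illustrative assumptions, not figures from the source.
BORDER_M = 250_000       # 250 km of shared border
POST_SPACING_M = 500     # one guard post per 500 m

GUARDS_PER_POST = 2      # assumption
SHIFTS_PER_DAY = 3       # assumption: 8-hour rotations

posts = BORDER_M // POST_SPACING_M
total_guard_billets = posts * GUARDS_PER_POST * SHIFTS_PER_DAY

print(posts)               # 500 posts
print(total_guard_billets) # 3000 guards just to man the posts around the clock
```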

The Norwegian defense company Kongsberg Gruppen has developed the Naval Strike Missile, which has some autonomous traits. It is an anti-ship missile that can plan the best route to a target. Once the missile is close to an enemy naval formation, it can decide, using an algorithm, which ship is the best target and then strike that ship at its weakest point.140 This missile demonstrates that semi-autonomous weapons are possible and are currently being fielded.
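Kongsberg has not published how the missile's selection algorithm works, but the general pattern the text describes, scoring every candidate and striking the best one, can be sketched as follows. Every field name, weight, and value in this example is invented for illustration:

```python
# Hypothetical sketch of algorithmic target selection: score each candidate
# contact and take the argmax. All names and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    value: float          # estimated military value, 0..1
    vulnerability: float  # estimated ease of a successful hit, 0..1
    distance_km: float

def score(c: Contact, max_range_km: float = 100.0) -> float:
    # Weighted blend: favor valuable, vulnerable, closer contacts.
    reachability = max(0.0, 1.0 - c.distance_km / max_range_km)
    return 0.5 * c.value + 0.3 * c.vulnerability + 0.2 * reachability

def select_target(contacts):
    return max(contacts, key=score)

fleet = [
    Contact("escort", value=0.4, vulnerability=0.7, distance_km=20),
    Contact("flagship", value=0.9, vulnerability=0.5, distance_km=30),
]
print(select_target(fleet).name)  # flagship
```

The point of the sketch is that "deciding which ship is the best target" reduces to an optimization over sensor-derived estimates, with no human judgment in the loop once the weights are fixed.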

The U.S. Navy is also developing Unmanned Undersea Vehicles (UUVs).141 The Navy's UUV program is creating vehicles that range in size from less than 100 pounds to more than 20,000 pounds.142

These UUVs cover non-lethal missions from mine countermeasures to

oceanography.143

The U.S. Navy debuted UUVs in Iraq during combat operations to destroy

underwater mines.144

138 Kumagai, J. (2007). A Robotic Sentry for Korea's Demilitarized Zone. IEEE Spectrum. Volume 44, Issue 3. Page 17.
139 Ibid.
140 Kongsberg. (2016). Naval Strike Missile. Website. Kongsberg Defence Systems. Kongsberg Gruppen.
141 Department of the Navy. (2004). The Navy Unmanned Undersea Vehicle (UUV) Master Plan. The Department of the Navy. United States of America.
142 Ibid. Abstract.
143 Ibid. Abstract.
144 Ibid. Page XV.


Autonomous Systems Currently Being Developed

U.S. Military

U.S. DOD Directive 3000.09 states that the U.S. will only design weapons that require a human to decide when to use lethal force.145 The two exceptions to this directive are autonomous cyber weapons and autonomous systems that target other autonomous systems.146 This directive also expires in 2022, opening the possibility of the development of LAWS in the U.S. Because of this directive, the autonomous systems being developed in the U.S. are focused on Intelligence, Surveillance, and Reconnaissance (ISR).

One of these systems is an unmanned ship called Sea Hunter. It was created out of

DARPA's ASW (Anti-Submarine Warfare) Continuous Trail Unmanned Vessel (ACTUV)

program.147

The Sea Hunter is designed to find and follow enemy submarines without any

humans in the loop. It also serves as a testbed for future unmanned surface ships.148

Future

experiments with this technology will have a wide range of missions and capabilities.

The U.S. Navy funded the X-47B Unmanned Combat Air System (UCAS), which is an unmanned aircraft with attack capabilities.149 This was one of the first truly autonomy-capable

systems under development by the U.S. Navy. It is a stealth drone that is able to take off from an

145 Carter, Ashton B. (2012). Autonomy in Weapon Systems. Department of Defense Directive Number 3000.09. Deputy Secretary of Defense.
146 Ibid.
147 Littlefield, Scott. (2016). Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV). Defense Advanced Research Projects Agency.
148 Ibid.
149 Northrop Grumman. (2016). X-47B UCAS Makes Aviation History…Again!. Webpage. Northrop Grumman.


aircraft carrier on its own, attack the enemy, and return to the carrier without any human

involvement.150

On the ground, the U.S. has developed a number of unmanned systems which are used in Explosive Ordnance Disposal (EOD) and ISR.151

Exponent, an engineering company with a

robotics department, developed the MARCbot to help EOD personnel in bomb disposal.152

Later

versions of this robot were armed and given an autonomous setting for hunting enemy

personnel.153

Engineers at Sharp Electronics Corporation have developed an autonomous non-

lethal security robot called INTELLOS which is an Automated Unmanned Ground Vehicle

(UGV).154

This UGV is being used for local security and reduces the manpower requirements to

secure a facility.

UGVs focused on logistics have not yet been deployed in combat operations, but they receive continued funding because they are not restricted by DOD Directive 3000.09. One form of logistical robot development creates robotic kits to be attached to existing vehicles. Aurora

Flight Sciences has produced the Autonomous Aerial Cargo Utility System (AACUS) which

turns any helicopter into an autonomous aircraft.155

Lockheed Martin Corporation is working on

150 Anderson, Kenneth., Dr. Missy Cummings., Michael A. Newton., Michael Schmitt. (2016). LENS Conference 2016: Autonomous Weapons in the Age of Hybrid War. 9:25.
151 Quintana, Elizabeth. (2012). The Ethics and Legal Implications of Military Unmanned Vehicles. Royal United Services Institute for Defence and Security Studies. Page 2.
152 Exponent. (2016). MARCbot. Exponent Webpage.
153 Anderson, Kenneth., Dr. Missy Cummings., Michael A. Newton., Michael Schmitt. (2016). LENS Conference 2016: Autonomous Weapons in the Age of Hybrid War. 14:52.
154 Sharp. (2016). Sharp INTELLOS™ A-UGV.
155 Pawlyk, Oriana. (2016). Firm Plans to Build Autonomous Huey Helicopters. Defense Tech.


the Autonomous Mobility Appliqué System (AMAS) which turns any military logistical vehicle

into an autonomous vehicle.156

Another development concept is to build a logistical robot from scratch. The AlphaDog is a four-legged robot that can carry 400 pounds of equipment.157

It is designed to follow U.S.

Marines while on patrol and respond to their voice commands.158

Foreign Developments

Israel is one of the few foreign countries that is developing LAWS.159

Israel Aerospace

Industries developed the Harop to be an autonomous anti-radiation weapon.160

It will loiter over

the battlefield until an enemy radar is turned on and then crash onto the target. The Harop is on

the lower end of the spectrum of autonomy and can be used with a man in the loop.

The United Kingdom has been working on Taranis, which is an ISR drone that has attack

capabilities.161

The capabilities of this drone, designed by BAE Systems, is still mostly

classified. BAE specified that the Taranis has autonomous settings and is able to attack on its

own. The United Kingdom also employs the Brimstone missile an air to surface missile that has

156 Lockheed Martin Corporation. (2016). Autonomous Mobility Appliqué System (AMAS). Lockheed Martin Corporation. Website.
157 Boston Dynamics. (2016). LS3 - Legged Squad Support Systems. Boston Dynamics. Webpage.
158 Ackerman, Evan. (2012). Latest AlphaDog Robot Prototypes Get Less Noisy, More Brainy. IEEE Spectrum.
159 Bergen, Peter., Singer, Peter. (2015). Machines That Kill: Will We Rely on Autonomous Weapons. YouTube. New America. 14:29.
160 Israel Aerospace Industries. (2016). Harop Loitering Weapon System. Israel Aerospace Industries. Missile Division.
161 BAE Systems. (2016). Taranis. BAE Systems.


an autonomous mode.162

Once the pilot designates the target area and the missile is fired, the missile will decide which target to strike.163

South Africa employs the Rheinmetall Skyshield air defense system. It has an automated

setting that allows the gun to fire on any target detected by a paired radar.164

In 2007, during a

training exercise, one of the guns was set on it automated setting and fired on the South African

troops killing nine and wounding fourteen.165

China is aggressively pursuing autonomous weapons. In October 2016, China's People's Liberation Army (PLA) hosted a competition for autonomous military robots.166 China is also investing in autonomous missiles to be used against ships and ground targets.167 Chinese universities are making breakthroughs in civil applications of autonomy. This work reinforces the efforts being made in the PLA.168

Russia is also interested in their own autonomous weapons. They recently announced that they will use autonomous robots to defend their Intercontinental Ballistic Missile

(ICBM) sites.169

Deputy Prime Minister Dmitry Rogozin stated that Russia will deploy

162

MBDA Systems. (2016). MBDA Demonstrates Brimstone Missile Live Firing from

Apache Helicopter. MBDA Missile Systems. 163

Ibid. 164

Rheinmetall Defence. (2016). Air Defence Systems Guarding against the threat from above.

Rheinmetall Defence. Air Defence Systems. 165

Shachtman, Noah. (2007). Robot Cannon Kills 9, Wounds 14. Wired Magazine. 166

Singer, P.W., Jeffrey Lin., Hu Yu., Qian Xiaohu. (2016). China’s Army Hosts An

Autonomous Robot Contest Called “Overcoming Obstacle 2016”. Popular Science. 167

Singh, Abhijit. (2016). Is China Really Building Missiles With Artificial Intelligence? There

are real limits on the amount of AI acceptable to navy commanders. The Diplomat. 168

Yuanchao, Li., Liu Yandong., Han Qide. (2015). Robot Products from Our School were

Unveiled at the World Robot Conference. Harbin Institute of Technology. 169

Jeffries, Adrianne. (2014). Only five countries actually want to ban killer robots, Cuba, Paki-

stan, Egypt, Ecuador, and the Vatican backed a lethal autonomous weapons ban at the UN; eve-

ryone else wasn't sure. The Verge.


autonomous weapons to launch strikes in the future.170

Russia has also developed the semi-autonomous Wolf-2, which is a tracked platform with a heavy machine gun.171

Russia recently

displayed a humanoid robot developed by Russia's Foundation for Advanced Studies, the

Russian equivalent to DARPA. This robot is capable of replacing humans on the battlefield

where the risk of radiation or death is too high to deploy a human.172

Leonid Orlenko, a

professor at Moscow State University, argues that Russia will continue to develop autonomous

weapons because it will save the lives of Russian soldiers.173

Summary

Currently, the U.S. is staying within its self-imposed requirement of not fielding LAWS. This study has examined five other countries that are pursuing their own use of autonomous weapons. Despite the current limitations of artificial intelligence, countries from around the world are pursuing autonomous weapons. These weapons span the spectrum from automatic to autonomous. It is likely that this will continue through the point of singularity unless legal restrictions are placed on LAWS.

170 Gubrud, Mark. (2014). Is Russia Leading the World to Autonomous Weapons?. International Committee for Robot Arms Control.
171 Hambling, David. (2014). Russia Wants Autonomous Fighting Robots, and Lots of Them. Popular Mechanics.
172 Dyer, John. (2016). Ivan the Terminator: Russia Is Showing Off Its New Robot Soldier. Vice News.
173 Beckhusen, Robert. (2015). Get Ready, NATO: Could Russia Build Lethal Combat Robots?. National Interest.


CHAPTER 5

DEBATE

The debate on LAWS can be viewed from four different perspectives. First, there is a legal and ethical aspect, which has the backing of NGOs such as HRW and the Campaign to Stop Killer Robots. Second, there is the political aspect, which acknowledges the advantages of using LAWS to reduce the chances of friendly casualties. Third, the military faces competition from potential enemy forces fielding their own LAWS on the battlefield; a military will likely develop its own LAWS to counter other autonomous weapons. Finally, there are technological limitations: fully autonomous weapons will require AI or ASI that today's technology cannot yet provide. To further examine the debate, this study will contrast the use of AI in LAWS versus the use of AI in civil applications.

Legal & Ethical Aspect

With every new weapon on the battlefield, a legal and ethical discussion takes place. This goes back as far as the Spartan military, which used handheld weapons because long-range weapons were considered immoral. In some historical cases, a new weapon was outlawed as unethical and later accepted as legitimate, as was the case with the crossbow. In other cases, the weapon was accepted but its employment was regulated, as was the case with the submarine. In still other cases, the weapon was used and then outlawed, of which chemical weapons provide the perfect example. The following section will examine the current

discourse in the international community about the legal and ethical aspects of the use of LAWS

in conflict.


State Governments

State governments have taken steps to voice their objections to or acceptance of LAWS. As

of April 2016, fourteen states have called for a preemptive ban on fully autonomous weapons:

Algeria, Bolivia, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Vatican City State, Mexico,

Nicaragua, Pakistan, State of Palestine, and Zimbabwe.174

The UK Foreign Office has opposed

the ban on LAWS stating, “At present, we do not see the need for a prohibition on the use of

LAWS, as international humanitarian law already provides sufficient regulation for this area.”175

NGOs and Experts Views’ on AI

NGOs and experts in AI are speaking openly about their views on LAWS and the

implications on International Humanitarian Law (IHL). One of the concerns of armed AI is that

the designers will not be able to predict its behavior with 100 percent certainty. In his work on

AI, James Barrett notes: “Most technology theorists… [believe that] in the quest for [AI],

researchers will create a kind of intelligence that is stronger than their own and that they cannot

control or adequately understand.”176

The invention of AI is very difficult to get right.

Eliezer Yudkowsky declares, “We must execute the creation of Artificial Intelligence as the exact

application of an exact art.”177

The international community is currently trying to codify laws

that will limit weapons that have not yet been invented. James Barrat refutes this effort with an

174 Campaign to Stop Killer Robots. (2016). Ban support grows, process goes slow. Campaign to Stop Killer Robots.

175 Bowcott, Owen. (2015). UK opposes international ban on developing 'killer robots’. The Guardian.

176 Barrat, James. (2013). Our Final Invention, Artificial Intelligence and The End of The Human Era. Page 30.

177 Yudkowsky, Eliezer. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. Page 43.


analogy, “If you are living in the horse-and-buggy age, it’s impossible to anticipate how to steer

an automobile over icy roads.”178

Human Rights Watch has been very active in its advocacy to ban LAWS. Steve Goose,

Executive Director of the Arms Division, has argued for a Protocol VI of the CCW.179

This draft

protocol would ban the development, production, and use of fully autonomous weapons

systems.180

HRW is advocating a ban on weapons before they have proliferated beyond the point

of containment. Steve Goose suggests that LAWS are not just another inhuman weapon but have

the potential to change the very nature of warfare, to one where machines decide whether a

human on the battlefield lives or dies.181

Article 36 is a United Kingdom-based NGO that advocates a ban on LAWS. It is

working with HRW and the International Committee of the Red Cross (ICRC) to extend the ban to

as many countries as possible. Article 36 asserts that there are two important parts of the ban,

which include a halt to the use of LAWS and a cessation on their development.182

Article 36

focuses its work in the UK but acknowledges that it will expand its efforts to the international

community. Article 36 limits its efforts to fully-autonomous weapons or weapons that have a

fully-autonomous setting like the Brimstone missile.

Campaign to Stop Killer Robots is lobbying for a legislative ban on LAWS. This NGO

was started by activists from 54 other NGOs, making it a de facto coordination of efforts by

178 Barrat, James. (2013). Our Final Invention, Artificial Intelligence and The End of The Human Era. Page 31.

179 Goose, Steve. (2016). Statement by Human Rights Watch to the Preparatory Committee for the Fifth Review Conference of the Convention on Conventional Weapons.

180 Ibid.

181 Ibid.

182 Article 36. (2013). Killer Robots: UK Government Policy of Fully Autonomous Weapons.


other NGOs.183

Its members work to preemptively ban fully-autonomous weapons which are

authorized to target and attack without a human in the loop.184

The campaign is solely focused

on fully-autonomous weapons, excluding regulations on semi-autonomous weapons. It is

looking to ban all autonomous weapons, even those not specifically designed to target humans.

The ICRC conducted its own review of the legal status of LAWS. It concluded that

LAWS certainly are covered by IHL but require further regulation.185

The ICRC’s legal review cites the

Responsibility of States for Internationally Wrongful Acts 2001, which was first published by the

International Law Commission. This document explains that states are responsible for the acts

of their forces which would presumably include LAWS.186

It was adopted by the UN General

Assembly in December 2001 as UN Resolution 56/83.187

Although many of the NGOs that want to ban LAWS have their foundation in

international relations, there are some that are based in artificial intelligence. They see the

danger of AI in not just LAWS but also in cyber and nanotechnology. One of these NGOs is the

Future of Humanity Institute which was created at Oxford University. It recently shifted focus to

the “existential risks and the future of machine intelligence.”188

The Future of Humanity

Institute published articles explaining the dangers of AI enabled, unmanned weapons. One of

183 Scharre, Paul., Horowitz, Michael C. (2015). An Introduction to Autonomy in Weapon Systems. Center for New American Security. Page 2.

184 Campaign to Stop Killer Robots. (2016). Call to Action. Website. Campaign to Stop Killer Robots.

185 International Committee of the Red Cross. (2014). Expert Meeting, Autonomous Weapon Systems Technical, Military, Legal, and Humanitarian Aspects Report. International Committee of the Red Cross. Page 20.

186 United Nations. (2008). Responsibility of States for Internationally Wrongful Acts 2001. International Law Commission. United Nations. Articles 30-31, 34-39.

187 United Nations. (2002). A/RES/56/83 Responsibility of States for internationally wrongful acts 12 Dec. 2001.

188 Future of Humanity Institute. (2016). Future of Humanity Institute About Website. University of Oxford.


these articles by Toby Ord examines the complexities of AI infused weapons and prescribes

caution for states that want to adapt AI for military purposes.189

Nick Bostrom, the director of the

Future of Humanity Institute, reinforces Ord’s prescription by explaining, “Short-term impacts

of increased openness appear mostly socially beneficial in expectation. The strategic implications

of medium and long-term impacts are complex.”190

Dr. Ben Goertzel leads OpenCog, an organization dedicated to addressing the problems

of AI through an open source software project.191

Although the research is not directed at

military applications, OpenCog’s focus on ensuring that AI cannot be weaponized directly

affects LAWS. It also is working to keep AI from indirectly weaponizing itself. These

constraints must be developed before AI and ASI come to fruition if OpenCog is to successfully

implement its proposed checks and balances.

The community of NGOs and experts is not unified in its argument against LAWS.

Some see advantages that are intertwined with the disadvantages of LAWS. LAWS would not

act out of fear or anger against an enemy. LAWS would potentially have video records of a

battlefield and offer better accountability to the combatants.192

They could also help coordinate

relief efforts and organize humanitarian organizations to come in after violent conflict.193

These

efforts would include medical support and battlefield surgery to alleviate suffering of the

189 Ord, Toby. (2014). Drones, counterfactuals, and equilibria: Challenges in evaluating new military technologies. Future of Humanity Institute.

190 Bostrom, N. (2016). “Strategic Implications of Openness in AI Development”. Technical Report #20161. Future of Humanity Institute. Oxford University.

191 Pitt, Joel. (2010). Vision. OpenCog.org.

192 Lubell, Noam. (2014). Robot warriors: technology and the regulation of war Professor Noam Lubell TEDxUniversityofEssex. YouTube.com. 13:44.

193 Ibid. 14:10.


wounded.194

The Geneva Academy of International Humanitarian Law and Human Rights held a

conference in 2014 on autonomous weapons systems under international law. Marco

Sassoli, a professor at the University of Geneva and an expert in IHL, spoke about how he looks

forward to the use of LAWS in combat. He believes that one day LAWS will do a better job

than humans at distinguishing between combatants and non-combatants as well as apply

appropriate use of force.195

Professor Sassoli asserts that the acceptance of LAWS on the

battlefield is based on the assumption that robots will be able to make this distinction. He

further states that it is assumed that the LAWS will not advance to the point that they can decide

on their own to not comply with the rules of engagement that have been given.196

He

acknowledges that he does not know exactly how international law will apply to LAWS

primarily because they do not yet exist.197

Professor Michael Schmitt, an expert on international law at U.S. Naval War College,

argues that if a military has “some degree of certainty” of what a weapon will do, then it is a

legal weapon. This definition includes a man in the loop weapon system as well as an

autonomous system if the programmer can say with certainty what the weapon will do.198

The

legal aspect becomes difficult to define when ASI empowers the weapon system to learn and

make new decisions based on its experience.199

194 Ibid. 15:01.

195 Sassoli, Marco., et al. (2014). Autonomous Weapons Systems Under International Law. Geneva Academy of International Humanitarian Law and Human Rights. YouTube. 19:44.

196 Ibid. 20:43.

197 Ibid. 19:23.

198 Anderson, Kenneth., Dr. Missy Cummings., Michael A. Newton., Michael Schmitt. (2016). LENS Conference 2016 Autonomous Weapons in the Age of Hybrid War. Duke University School of Law. YouTube. 40:30.

199 Ibid. 41:15.


AI in Civil Applications

The civil use of AI-powered robots can serve as a guide to how LAWS will or will not be

accepted in conflict. China is using AI to aid in court decisions.200

The AI computers are

helping judges sort through case law and codified law to reach a judgment during a trial.

As of November 2016, it has helped 3,000 judges handle more than 150,000 cases.201

In the

United States, IBM’s Watson is being used for medical treatment, aggregating published

research, and compiling and evaluating treatments from doctors using the system. This

information is then shared globally to ensure doctors around the world have all available

information. Watson is more than just a hub of data; it uses its AI to narrow the information

to what is pertinent to the patient’s needs and then suggest a treatment.202

In Japan and the

United States, AI powered robots are being used to care for the elderly.203

Robots designed for

elderly care are able to provide companionship, even learning the characteristics of the patient.

Robots are also being used to help the elderly bathe, alert medical care in case of an injury, and

encourage physical exercise.204

Political Aspect

In 2013, Christof Heyns, an expert in human rights and international law, wrote a report

for the UN as the Special Rapporteur on Lethal Autonomous Robotics (LARs). He reported that

200 Yan, Jie. (2016). China’s Courts Look to AI for Smarter Judgments Big data and machine learning to make China’s judicial system more intelligent and efficient. Sixth Tone.

201 Ibid.

202 IBM Watson Oncology. (2016). IBM Watson Oncology Offerings. IBM Watson Health.

203 Broekens, Joost., Heerink, Marcel., Rosendal, Henk. (2009). Assistive Social Robots in Elderly Care: A Review. Gerontechnology. Vol 8. No 2. Page 94-103.

204 Wada, Kazuyoshi., Shibata, Takanori., Saito, Tomoko., Tanie, Kazuo. (2004). Effects of Robot-Assisted Activity for Elderly People and Nurses at a Day Service Center. Invited Paper. Proceedings of the IEEE. Vol. 92. No 11. Page 1780-1788.


the Human Rights Council should call on states to stop the testing, development, production and

transfer of LARs.205

He came to this conclusion due to a number of legal reasons. He notes that

states are legally responsible for the actions of LARs, but it will be difficult to determine who

inside the state should be held responsible for their actions.206

LARs can be hacked or spoofed,

which makes legal accountability difficult if it is not clear who was controlling the LARs at the

time of a war crime.207

LARs can break and thus start to target humans indiscriminately.208

Christof Heyns also notes that further developments in AI cannot be foreseen and are likely to

have developmental problems that take time to control or fix.209

Finally, he cautions against the

idea that robots will bring on an age of riskless war. He expresses that, in practice, LARs will be

used not only to attack other robots; people will be targeted too.210

These are difficult

points to politically overcome.

There are some political gains to using LAWS. This is demonstrated by the increased use

of drone strikes from 50 ordered under President Bush to over 400 ordered by President

Obama.211

Unmanned weapons offer U.S. Presidents a low-cost way of maintaining a presence

in multiple countries and provide an immediate means of attacking terrorist leaders. An

alternative to unmanned systems would be incredibly expensive or unrealistic.212

205 Heyns, Christof. (2013). Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. Human Rights Council Twenty-third Session Agenda Item 3. United Nations General Assembly. A/HRC/23/47. Page 21.

206 Ibid. Page 14.

207 Ibid. Page 18.

208 Ibid. Page 18.

209 Ibid. Page 18.

210 Ibid. Page 16.

211 Byman, Daniel L. (2013). Why Drones Work: The Case for Washington’s Weapon of Choice. The Brookings Institute.

212 Ibid.


The UN will also be attracted to the utility of peacekeeping LAWS, especially in a less

than lethal configuration. The increased demand for preventive deployments coupled with the

decreased desire from nations to commit troops creates the perfect storm to cause the UN to

obtain and deploy peacekeeping LAWS.213

The UN has struggled with its slow reaction time

to unfolding crises. The time it takes to motivate countries to commit resources, train and supply

troops, and deploy them takes months, as seen in Rwanda.214

If technology allows the UN to

show its presence and support for peaceful resolution without the time and expense of current

peacekeeping, the UN will likely follow that option.

Politically, there are also repercussions when soldiers commit war crimes such as torture,

as occurred at Abu Ghraib, or disrespectful acts, such as Marines urinating on dead Taliban

in 2012. A study done by the Surgeon General’s Office in 2006 found that:

Only 47% of Soldiers and only 38% of Marines agreed that non-combatants should be

treated with dignity and respect. Well over a third of Soldiers and Marines reported

torture should be allowed, whether to save the life of a fellow Soldier or Marine (41%

and 44%, respectively) or to obtain important information about insurgents (36% and

39%, respectively).215

The U.S. military is not alone in being accused of war crimes. U.N. Peacekeepers are regularly

accused of raping and sexually assaulting those they are sent to protect.216

LAWS could never

commit these crimes. As LAWS become a mature technology, it is likely that political advisors

213 United Nations. (2004). UN Report of the High-Level Panel on Threats, Challenges and Change. "A More Secure World: Our Shared Responsibility.” UN General Assembly. A/59/565. Page 38.

214 Power, Samantha. (2001). ”Bystanders to Genocide: Why the United States Let the Rwandan Tragedy Happen”. The Atlantic Monthly.

215 Office of The Surgeon General. (2006). Mental Health Advisory Team (MHAT) IV Operation Iraqi Freedom 05-07 Final Report Office of the Surgeon Multinational Force-Iraq and Office of The Surgeon General. United States Army Medical Command. Page 35.

216 Defeis, Elizabeth F. (2008). U.N. Peacekeepers and Sexual Abuse and Exploitation: An End to Impunity. Washington University Global Studies Law Review.


will push for more LAWS on the battlefield to limit the likelihood and opportunity for humans to

commit these crimes.

Military

The U.S. military addresses LAWS from a number of angles. First, there is a DOD

Directive that restricts the development and deployment of LAWS. Second, there is a call for

limited use of unmanned systems. This limitation means limited resident knowledge on LAWS

or advocacy for LAWS. Unmanned systems are still nascent and will need

time to become a viable weapon in all domains. Until the hardware is improved, the software of

AI will have limited lethality on the modern battlefield.

As previously pointed out, the US Secretary of Defense signed DOD Directive 3000.09

in 2012 addressing the limitations on autonomous weapons. It establishes, “Autonomous and

semi-autonomous weapon systems shall be designed to allow commanders and operators to

exercise appropriate levels of human judgment over the use of force.”217

There are a number of

exceptions to this policy. First, semi-autonomous weapons can be developed as long as they

require a human to pre-designate targets.218

Second, human supervised autonomous weapons

can be used for defense purposes such as defense of a static base or ship.219

Third, autonomous

weapons may be used if they are non-lethal such as electronic attack or against material targets,

i.e. attacking enemy LAWS.220

However, the DOD directive may soon be obsolete. Ashton

217 Carter, Ashton B. (2012). Autonomy in Weapon Systems. Department of Defense Directive Number 3000.09. Deputy Secretary of Defense.

218 Ibid.

219 Ibid.

220 Ibid.


Carter, then Deputy Secretary of Defense, signed the directive with a sunset clause; it will expire

November 21, 2022.221
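The directive’s carve-outs can be read as a simple decision rule. The sketch below is a rough, hypothetical paraphrase of the three exceptions described above, not the directive’s actual review process; all parameter names are invented for illustration.

```python
def permitted_under_3000_09(semi_autonomous: bool,
                            human_predesignated_targets: bool,
                            human_supervised: bool,
                            defensive_use: bool,
                            lethal: bool,
                            targets_materiel_only: bool) -> bool:
    """Hypothetical paraphrase of the DODD 3000.09 exceptions described above."""
    # Exception 1: semi-autonomous weapons with human-pre-designated targets
    if semi_autonomous and human_predesignated_targets:
        return True
    # Exception 2: human-supervised autonomous weapons for static defense
    if human_supervised and defensive_use:
        return True
    # Exception 3: non-lethal autonomous weapons, or weapons attacking only
    # materiel targets such as enemy LAWS
    if not lethal or targets_materiel_only:
        return True
    return False
```

Under this reading, a lethal, fully autonomous weapon that selects human targets on its own fails every exception and would require the “appropriate levels of human judgment” review.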

There are a number of advocates calling for the use of LAWS. One such advocate, Dr.

Peter Singer, has called for the U.S. military to invest heavily in unmanned systems. He views

these weapons as the future of warfare and necessary for winning in future combat.222

He asserts

that, “The future of war is robotic because the present of war is robotic.”223

Two examples can

demonstrate the present reality of robotics in war. First, in the Libyan Civil War, a private

contractor provided drone support to the rebel army using off-the-shelf civilian drones.224

Second, in the war on ISIS, every actor is using drones, including Hezbollah, Russia, ISIS, Iraq,

U.S., Iran, Turkey, and Syria.225

The Marine Corps has published its vision for the use of robots in its operating concept

dated September 2016. In it the Commandant reaffirms that the near future of combat will still

be largely human against human but that autonomous robots will also be on the future

battlefield.226

The Marine Corps will integrate robotic and autonomous systems (RAS) with manned systems in what is referred to

as Manned Un-Manned Teaming (MUM-T).227

The Marine Corps RAS are built for ISR, which

then allows the manned systems to carry lethal weapons into combat. The Marine Corps does

employ lethal unmanned systems but they are only semi-autonomous and within the restrictions

221 Ibid.

222 Singer, P.W., August Cole. (2015). The weapons the U.S. needs for a war it doesn’t want. Reuters.

223 Bergen, Peter., Singer, Peter. (2015). Machines That Kill: Will We Rely on Autonomous Weapons. YouTube. New America. 31:25.

224 Ibid. 31:36.

225 Ibid. 31:54.

226 The United States Marine Corps. (2016). The Marine Corps Operating Concept How an Expeditionary Force Operates in the 21st Century. Marine Corps Combat Development Command. Page 6.

227 Ibid. Page 16.


set by DOD Directive 3000.09. In addition to RAS designed for ISR, the Marine Corps is

developing RAS that are designed for logistical purposes.228

In 2015, the National Defense University (NDU) completed a study on the future of RAS.

In the report, the authors concluded the U.S. military will continue to develop and employ

RAS.229

The report calls for an increase in DOD testing and fielding of RAS, which includes

establishing a national test range specifically for autonomous systems.230

The NDU

acknowledges that the human-machine interfaces need to be improved to increase the trust

between humans and machines.231

The NDU report forcefully states that RAS technology and its

integration into the military force is imperative to U.S. national security. If any potential enemy

develops the technology first, it will have a distinct advantage over the U.S.232 “In advanced

AI development, as in chess, there will be a clear first-mover advantage.”233

The Center for a New American Security has published multiple articles arguing that LAWS

could be employed ethically. CNAS launched its Ethical Autonomy Project to help countries

understand and overcome the challenges of working with LAWS.234

Paul Scharre, the Director

of the Future of Warfare Initiative at CNAS, writes that the discussion of LAWS needs to be

broken up into three separate subjects: first, the human-machine command and control relationship;

second, the complexity of the machine; and third, the type of decision being automated.235

228 Ibid. Page 1.

229 Pregmon, Matthew., et al. (2015). Final Report Robotics and Autonomous Systems. The Dwight D. Eisenhower School for National Security and Resource Strategy. National Defense University. Fort McNair, Washington D.C. Page 2.

230 Ibid. Page 22.

231 Ibid. Page 21.

232 Ibid. Page 28.

233 Barrat, James. (2013). Our Final Invention, Artificial Intelligence and The End of The Human Era. Page 11.

234 Scharre, Paul., Horowitz, Michael C. (2015). An Introduction to Autonomy in Weapon Systems. Center for New American Security. Page 2.

By separating out these discussion points the military can advance LAWS technology in one part

while still adhering to restrictions in another. Paul Scharre also helps clarify levels of control for

the military. He defines human “in the loop” as a human having control over selecting and

engaging a target. Next, human “on the loop” refers to a human monitoring the selection and

engagement of the targets, intervening when required to do so. Finally, human “out of the loop”

means a human does not need to select or approve the engagement of the target. Additionally,

humans do not monitor the weapon system regularly.236
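Scharre’s three levels of control can be sketched as a small data model. The encoding below is a hypothetical illustration of the taxonomy, not code from any cited source; the names and helper functions are invented for clarity.

```python
from enum import Enum


class HumanControl(Enum):
    """Scharre's three levels of human-machine control (hypothetical encoding)."""
    IN_THE_LOOP = "human selects and engages each target"
    ON_THE_LOOP = "human monitors selection/engagement and may intervene"
    OUT_OF_THE_LOOP = "no human selection, approval, or regular monitoring"


def requires_human_approval(level: HumanControl) -> bool:
    """Only 'in the loop' systems require a human to approve each engagement."""
    return level is HumanControl.IN_THE_LOOP


def is_fully_autonomous(level: HumanControl) -> bool:
    """'Out of the loop' systems operate without human selection or monitoring."""
    return level is HumanControl.OUT_OF_THE_LOOP
```

Separating the taxonomy this way mirrors Scharre’s point: a single “autonomy” label hides three distinct questions that can be regulated independently.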

Advances in Civilian AI That Cross Over to the Military

There are a few civilian AI robots that mirror military development, or in some

cases lead the way. The Uber project, co-funded by Volvo to develop the widespread

use of autonomous cars, has a similar goal to the unmanned logistical vehicles described in the

Marine Corps Operating Concept. Watson technology can be adapted to design military plans or

suggest new ideas to commanders facing the complex battlefields of the future. As these

technologies proliferate through civil society, they will likely be adopted by the military. Since

these systems are not inherently lethal, they would not be prohibited under the possible Protocol

VI.

Technological Limitation

235 Ibid. Page 5-6.

236 Ibid. Page 8.


There are technological limitations on AI and thus LAWS. The technology that is

lobbied for or against currently does not exist. Experts offer a wide range of timelines for when

the singularity will take place. OpenCog advertises AI will be

available by 2040.237

Whereas Vernor Vinge believes that singularity will happen sometime

around 2030.238

Even though the timeline is not clear, it is clear that there will be a progression

towards the singularity and then ASI. As technology progresses, weapons developers will need to ensure

that the autonomy in new weapon systems falls within the guidelines of IHL and state-enforced

rules of engagement.239

Differences in Costs Between Humans and Robots

Comparing the total costs of humans and robots is difficult. There are a few

general trends that help shed light on the subject. In 2012, the Pentagon comptroller, Under

Secretary Robert Hale, stated that the average soldier in Afghanistan cost $850,000 per year.240

In

2012, the Military Retirement Fund paid current beneficiaries a total of $51.7 billion a year.241

This expense would shrink for any military force that replaces humans with robots. In 2002, the

price to recruit, train, and equip a Marine was $44,887.242

This cost represents the price point for

procuring LAWS that would directly replace humans in the military force.
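A rough back-of-the-envelope comparison can combine these figures. Only the cited dollar amounts come from the sources above; the robot procurement and upkeep numbers are hypothetical placeholders for illustration.

```python
# Cited figures (see footnotes 240 and 242)
SOLDIER_ANNUAL_COST = 850_000   # per soldier per year in Afghanistan (2012)
MARINE_ACCESSION_COST = 44_887  # recruit, train, and equip one Marine (2002)

# Hypothetical robot figures, assumed for illustration only
ROBOT_UNIT_COST = 500_000       # assumed procurement price of one LAWS
ROBOT_ANNUAL_UPKEEP = 100_000   # assumed yearly maintenance cost


def years_to_break_even(robot_cost: float, robot_upkeep: float,
                        soldier_annual: float) -> float:
    """Years of deployment after which one robot is cheaper than one soldier."""
    savings_per_year = soldier_annual - robot_upkeep
    return robot_cost / savings_per_year


# Under these assumptions a robot pays for itself in well under a year
print(round(years_to_break_even(ROBOT_UNIT_COST, ROBOT_ANNUAL_UPKEEP,
                                SOLDIER_ANNUAL_COST), 2))
```

The break-even point is highly sensitive to the assumed robot price; the comparison is only meaningful once real LAWS procurement costs exist.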

237 Pitt, Joel. (2010). Vision. OpenCog.org.

238 Wolens, Doug. (2009). Singularity 101 with Vernor Vinge. Humanity Plus Magazine.

239 Quintana, Elizabeth. (2012). The Ethics and Legal Implications of Military Unmanned Vehicles. Head of Military Technology & Information Studies, Royal United Services Institute for Defence and Security Studies. Page 15.

240 Shaughnessy, Larry. (2012). One soldier, one year: $850,000 and rising. CNN.

241 Congress of The United States Congressional Budget Office. (2012). Costs of Military Pay and Benefits in the Defense Budget. Page 27.

242 Olick, Diana. (2002). An army of one carries a high price. NBC News.


Summary

There are a number of organizations working to bring about LAWS while others try to

ban them. Several NGOs are working through legislation to ban LAWS. If

they are successful, it will reduce the chance of deaths caused by LAWS and their unproven,

unreliable technology. Politicians will be apprehensive about using machines to decide who lives and

who dies in a conflict. A LAWS breaking in a conflict zone and killing civilians

indiscriminately will be difficult for any politician to explain. On the other hand, politicians will

be pressured to use more unmanned systems including LAWS to reduce occurrences of human

soldiers raping or assaulting the citizenry. Militaries are bureaucracies and thus will be slow to

advocate for and then utilize LAWS. Once their potential enemies have LAWS, militaries will

quickly advocate for them. The final factor in whether or not LAWS are utilized is the pace of

technology. Whether the weapons are wanted or feared, they don’t exist yet. Until they do exist

it is unclear which forces will press harder for their convictions and which will turn silent,

acknowledging failure to realize their vision.


CHAPTER 6

CONCLUSION

Potential Influences of LAWS

As LAWS technology develops and evolves, there are a number of potential influences

on international decision makers and foreign policy. As automation proliferates, there will be

less apprehension towards the use of automation in the military. If LAWS are found to be

quantifiably less likely to make a mistake in conflict, then the discourse on autonomous weapons

may change from the use of robots being unethical to being ethical. This would be a polar shift

in international pressure within the UN, from banning robots to potentially banning humans

from conflict.

The national security perspective on LAWS will change as the technology becomes reality.

Changes to opinions will take time to come to fruition. If the military industry can bring down the price

point of LAWS to a level below the cost of military personnel, there would be military and

congressional pressure to lower the operating costs of national defense. If LAWS can prove

to be better at fighting than military personnel there is likely to be a demand signal for more

LAWS and fewer humans from the military services. As these thresholds are achieved, the national

security advisors will be more likely to encourage the use of LAWS in a prospective conflict.

Robots with weapons are already a reality on the modern battlefield. More autonomy is

coming to the military. National security advisors and military commanders will have an

increasing number of options to prosecute a target or military objective without exposing military

personnel to danger. The level of autonomy accepted in the application of lethal force will

take years to decide and will likely change as technology matures.


Further, if LAWS replace humans on the battlefield, it will open up new options for

decision-makers. If there is no risk to the lives of their own forces, decision-makers may be more

inclined to use military force than when there was risk to U.S. personnel. Politically, it may be

easier to commit U.S. forces knowing that there will not be any images of wounded or killed

service members in the news. This thought process could lead to congressional authorization for

use of force that only includes the use of LAWS. The use of a LAWS-only military action could

further increase if the enemy combatant also has LAWS units. Essentially, this would mean that

military hardware could be used to attack only military hardware. In practice, it is likely that

humans would still be killed or wounded but at a reduced rate.

Authorization of a LAWS-only military force could also increase the number of UN

authorizations of force. If the member states of the UN believe that LAWS are reliable enough to

only target combatants then they may be more likely to authorize the use of force. Additionally,

the member states may be more likely to provide forces in the form of all-LAWS units.

Summary

As LAWS turn from science fiction to science fact, the capabilities and limitations will

help inform the conversation. The current discourse on LAWS can be organized into the two

sides of the argument, for or against LAWS. One side of the controversy views LAWS as

dangerous and believes they should be banned. LAWS may be dangerous for several reasons. First, the

artificial intelligence in the LAWS will be smarter than the humans leading them. Because they

are learning machines, it will be impossible to know what decisions the LAWS will make. These

decisions could be as significant as killing the wrong person based on poor data inputs. Also, a

decision could be made to change its ethical restraints and put a lower value on human life than

Page 68: POLICIES ON THE EMPLOYMENT OF LETHAL AUTONOMOUS WEAPON ...€¦ · IoT Internet of Things ISR Intelligence, Surveillance, and Reconnaissance LARs Lethal autonomous robotics LAWS Lethal

60

is allowed. Second, the LAWS could break, leading it to lose management of its fire controls

and potentially to start killing indiscriminately, combatants and non-combatants alike. This leads

to a difficult legal position of who can be held responsible. The list of responsible parties

includes the operator, the commander, the manufacturer, the designer, the state government, or

even a hacker that was not known at the time. A list this long leads one to believe that no one

will be held accountable for any wrongful actions taken by a LAWS.

The discourse on the other side argues that Pandora's box has already been opened: it does not matter whether LAWS are dangerous, because they are going to be a part of future conflict. The real danger lies in allowing a potential enemy to obtain them first. Those who argue for LAWS believe that restrictions will likely be instituted early in their development, followed by a slow attenuation of those restrictions. Early restrictions will require a human in the loop, then humans supervising the LAWS, and finally only loose supervision of the machines.243 The supporting discourse also argues for the possibility of building ethical restraints into unmanned systems. Even if those restraints are broken, the potential damage caused by LAWS will be limited enough that such failures will still be less frequent than humans committing crimes in war. LAWS will have limitations on their reasoning power, but humans are also prone to errors.244 Advocates for LAWS further assert that autonomous weapons are needed to counter enemy LAWS as well as to limit the loss of human life.
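The staged loosening of restrictions described here (a human in the loop, then human supervision, then loose supervision) can be sketched as an ordered enumeration. This is purely illustrative: the mode names follow common shorthand in the LAWS debate, and the gating function is this sketch's own assumption, not a policy drawn from the text.

```python
from enum import IntEnum


class ControlMode(IntEnum):
    """Progressively looser human-control restrictions (illustrative only)."""
    HUMAN_IN_THE_LOOP = 1   # a human must authorize every engagement
    HUMAN_ON_THE_LOOP = 2   # a human supervises and can veto in real time
    LOOSE_SUPERVISION = 3   # only periodic human review of the machine


def requires_human_authorization(mode: ControlMode) -> bool:
    """Only the strictest mode demands per-engagement human approval."""
    return mode == ControlMode.HUMAN_IN_THE_LOOP
```

Because IntEnum values are ordered, a regulatory ceiling such as "never operate above human-on-the-loop" reduces to a simple comparison like `mode <= ControlMode.HUMAN_ON_THE_LOOP`.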

As technology brings LAWS closer to reality, the discourse on them will be refined and clarified, and the true dangers and benefits of this burgeoning technology will be revealed. Because full autonomy powered by ASI is not likely until 2030 or 2040, the discourse will continue through that time and beyond, until ASI is weaponized.

243 Lubell, Noam. (2014). Robot warriors: technology and the regulation of war. TEDxUniversityofEssex. YouTube.com. 5:53.

244 Scharre, Paul. (2016). Autonomous Weapons and Operational Risk. Center for a New American Security. Page 5.

Suggested Follow-on Research

There are still many more questions to be answered about LAWS and the repercussions of their employment. These include questions held by human rights groups, which are largely against LAWS. Politicians need further research to clarify the political ramifications of deploying LAWS. Additionally, militaries need to study the integration of LAWS into their doctrine, which will necessarily change the military options that can be offered to civilian leaders.

In the near future, human rights organizations will continue to work on legislation to ban the use of LAWS. Concurrently, these same organizations will want to encourage the development of robotics and AI that can be helpful for humanitarian aid. For example, Zipline, a startup company that delivers medical supplies to hard-to-reach areas in Rwanda more cheaply and quickly than a ground courier, is acclaimed for its improvements in humanitarian service.245 AI could benefit humanitarian aid efforts in the same way Watson is being used to benefit the medical industry. James Barrat explains, "Every one of these [computer scientists] was convinced that in the future all the important decisions governing human lives will be made by machines… Many think this will take place within their lifetimes."246 The question now remains: how do you develop this technology for humanitarian work and ensure that it is not later weaponized?

245 Zipline. (2016). The Future of Healthcare is Out for Delivery. Zipline Homepage.

246 Barrat, James. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. Page 3.


Political administrations still require research into what technological thresholds should be achieved to make employment of LAWS more politically acceptable than deployments of human soldiers. This research will likely be broken down by domain, to include undersea systems, surface ships, UGVs, drones, and cyber.

Militaries still need to conceptualize how LAWS will be organized into current manned units. Will LAWS replace or augment manned units, and to what extent? The answer will drive the military options offered to politicians; for instance, if the military could deploy an unmanned unit, the opportunity to fight a conflict without the loss of its own soldiers would be appealing. This line of reasoning assumes that the military will figure out how to ensure that there are clear ethical boundaries for LAWS.

A comprehensive study examining the price differences between a robot and a human would enrich the debate about the merits of both options. Military personnel have recruitment, training, pay, allowance, housing, and medical costs which accrue on an annual basis. Once a soldier leaves military service, there is a potential expense of disability and/or retirement pay. If the soldier's life is lost, there are burial, insurance, and allowance payments due, which can total in excess of $500,000. For a robot, the only costs that transfer over would be initial training (production), feeding (gas or electric power), and medical care (maintenance). Robots also have high expenditures in research and development that do not apply to a human soldier. Conducting a comprehensive cost study would help in understanding what an appropriate price point would be for a LAWS that directly replaces a human on the battlefield.
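The arithmetic such a study would formalize can be sketched as a simple lifecycle comparison. Every dollar figure below is a hypothetical placeholder, not data from this research, and R&D costs are deliberately left out of the sketch.

```python
# Hypothetical lifecycle-cost comparison between a soldier and a LAWS unit.
# All figures are illustrative assumptions; R&D amortization is excluded.

def soldier_lifecycle_cost(years: int, annual_cost: float = 100_000,
                           separation_cost: float = 500_000) -> float:
    """Annual pay, allowances, housing, and medical costs, plus a one-time
    separation expense (retirement, disability, or death benefits)."""
    return years * annual_cost + separation_cost

def laws_lifecycle_cost(years: int, unit_price: float,
                        annual_upkeep: float = 20_000) -> float:
    """Production (the 'training' analog) plus power and maintenance
    (the 'feeding' and 'medical' analogs)."""
    return unit_price + years * annual_upkeep

def break_even_unit_price(years: int) -> float:
    """Unit price at which a LAWS costs the same as a soldier over `years`."""
    return soldier_lifecycle_cost(years) - laws_lifecycle_cost(years, 0.0)

print(break_even_unit_price(10))  # with these placeholders: 1300000.0
```

Under these placeholder figures, a robot serving ten years breaks even at a $1.3 million unit price; the real study would replace every constant with audited budget data.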

Final Thoughts


It is difficult to conceptualize how dangerous a weapon will be when it does not yet exist. Eliezer Yudkowsky affirms, "Artificial Intelligence is not settled science."247 What LAWS will be capable of doing is particularly hard to envision because they will incorporate several different technologies that are still being developed. At the heart of the issue is AI. Will AI start to develop new AI that operates outside the ethical constraints of the human-built AI? Vernor Vinge states, "I think that in the relatively near historical future, humans, using technology, will be able to create, or become, creatures of superhuman intelligence."248

There is also a certain level of anxiety about arming AI and allowing it to fight unsupervised in combat. Nearly every new weapon on the battlefield brings a similar anxiety with it. Two examples are chemical weapons and mines. Chemical weapons have been banned and have a high threshold for weaponization, so there are only limited examples of their use in conflict. Mines have also been banned but have a low threshold for acquisition and thus are used in modern conflict, particularly by non-state actors. Because LAWS will have a low threshold for acquisition, it is likely that a ban will have limited effect on LAWS in conflict. Countries and NGOs will need to develop technology to counter LAWS in conflict, similar to the current efforts to counter mines.

247 Yudkowsky, Eliezer. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. Page 1.

248 Wolens, Doug. (2009). Singularity 101 with Vernor Vinge. Humanity Plus Magazine.


REFERENCES

Ackerman, Evan. (2012). Latest AlphaDog Robot Prototypes Get Less Noisy, More Brainy.

IEEE Spectrum. http://spectrum.ieee.org/automaton/robotics/military-robots/latest-ls3-alphadog-

prototypes-get-less-noisy-more-brainy. Accessed Nov 2016.

Amazon Prime Air. (2016). Amazon Prime Air. Website. Amazon.

https://www.amazon.com/b?node=8037720011. Accessed Nov 2016.

Anderson, Kenneth., Dr. Missy Cummings., Michael A. Newton., Michael Schmitt. (2016).

LENS Conference 2016 Autonomous Weapons in the Age of Hybrid War. Duke University

School of Law. YouTube. https://www.youtube.com/watch?v=b5mz7Y2FmU4. Accessed Oct

2016.

Anderson, Kenneth., Daniel Reisner., Matthew Waxman. (2014). Adapting the Law of Armed

Conflict to Autonomous Weapon Systems. Stockton Center for the Study of International Law.

U.S. Naval War College. https://www.usnwc.edu/getattachment/a2ce46e7-1c81-4956-a2f3-

c8190837afa4/dapting-the-Law-of-Armed-Conflict-to-Autonomous-We.aspx. Accessed Nov

2016.

Anderson, Susan Leigh. (2005). Asimov’s “Three Laws of Robotics” and Machine Metaethics.

University of Connecticut. Dept. of Philosophy, 1 University Place, Stamford, CT 06901.

http://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-002.pdf. Accessed Nov

2016.

Article 36. (2013). Killer Robots: UK Government Policy of Fully Autonomous Weapons.

http://www.article36.org/wp-content/uploads/2013/04/Policy_Paper1.pdf. Accessed Nov 2016.

ASIMO. (2016). ASIMO The World’s Most Advanced Humanoid Robot History Website.

http://asimo.honda.com/asimo-history/. Accessed Nov 2016.

Asimov, Isaac. (2004). I, Robot (The Robot Series Book 1). Random House Publishing Group.

Kindle Edition.

Association for the Advancement of Artificial Intelligence. (2016). Association for the Ad-

vancement of Artificial Intelligence Homepage. http://www.aaai.org/home.html. Accessed Nov

2016.

Automated Insights. (2016). Automated Insights Company Webpage. Automated Insights.

https://automatedinsights.com/company. Accessed Nov 2016.

BAE Systems. (2016). Taranis. BAE Systems. http://www.baesystems.com/en/product/taranis.

Accessed Nov 2016.

Barrat, James. (2013). Our Final Invention, Artificial Intelligence and The End of The Human

Era. Thomas Dunne Books. New York, New York.


Beckhusen, Robert. (2015). Get Ready, NATO: Could Russia Build Lethal Combat Robots?. Na-

tional Interest. http://nationalinterest.org/blog/the-buzz/get-ready-nato-could-russia-build-lethal-

combat-robots-13954. Accessed Nov 2016.

Bergen, Peter., Singer, Peter. (2015). Machines That Kill: Will We Rely on Autonomous Weap-

ons. YouTube. New America. https://www.youtube.com/watch?v=g2P1KaQ4AC4. Accessed

Nov 2016.

Boston Dynamics. (2016). LS3 - Legged Squad Support Systems. Boston Dynamics. Webpage.

http://www.bostondynamics.com/robot_ls3.html. Accessed Nov 2016.

Bostrom, Nick., Yudkowsky, Eliezer. (2014). The Ethics of Artificial Intelligence. Machine In-

telligence Research Institute. https://intelligence.org/files/EthicsofAI.pdf. Accessed October

2016.

Bostrom, Nick. (2003). Ethical Issues in Advanced Artificial Intelligence. Oxford University.

United Kingdom. http://www.nickbostrom.com/ethics/ai.html#_ftn7. Accessed November 2016.

Bostrom, Nick (2008). How Long Before Superintelligence?. Oxford Future of Humanity Insti-

tute. Faculty of Philosophy & Oxford Martin School. University of Oxford.

http://www.nickbostrom.com/superintelligence.html. Accessed Nov 2016.

Bostrom, N. (2016): “Strategic Implications of Openness in AI Development”, Technical Report

#20161. Future of Humanity Institute. Oxford University. pp. 1-26.

http://www.fhi.ox.ac.uk/reports/2016-1.pdf. Accessed Nov 2016.

Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. OUP Oxford. Kindle

Edition.

Bowcott, Owen. (2015). UK opposes international ban on developing 'killer robots’. The

Guardian. https://www.theguardian.com/politics/2015/apr/13/uk-opposes-international-ban-on-

developing-killer-robots. Accessed Nov 2016.

British Broadcasting Corporation. (2014). Computer AI passes Turing test in 'world first’. BBC

Technology. http://www.bbc.com/news/technology-27762088. Accessed Nov 2016.

Broekens, Joost., Heerink, Marcel., Rosendal, Henk. (2009). Assistive Social Robots in Elderly

Care: A Review. Gerontechnology. Vol 8. No 2. Page 94-103.

Burrus, Daniel. (2014). The Internet Of Things Is Far Bigger Than Anyone Realizes. Burrus

Research. Wired Magazine. https://www.wired.com/insights/2014/11/the-internet-of-things-

bigger/. Accessed Nov 2016.


Byman, Daniel L. (2013). Why Drones Work: The Case for Washington’s Weapon of Choice.

The Brookings Institute. https://www.brookings.edu/articles/why-drones-work-the-case-for-

washingtons-weapon-of-choice/. Accessed Nov 2016.

Campaign to Stop Killer Robots. (2016). Ban support grows, process goes slow. Campaign to

Stop Killer Robots. https://www.stopkillerrobots.org/2016/04/thirdmtg/. Accessed Nov 2016.

Campaign to Stop Killer Robots. (2016). Campaign to Stop Killer Robots Home Page. Website.

http://www.stopkillerrobots.org. Accessed Nov 2016.

Campaign to Stop Killer Robots. (2016). Call to Action. Website. Campaign to Stop Killer Ro-

bots. http://www.stopkillerrobots.org/call-to-action/ Accessed Nov 2016.

Campaign to Stop Killer Robots. (2016). Third UN meeting opens April 11. Website. Campaign

to Stop Killer Robots. https://www.stopkillerrobots.org/2016/04/thirdccw/. Accessed Nov 2016.

Carter, Ashton B. (2012). Autonomy in Weapon Systems. Department of Defense Directive

Number 3000.09. Deputy Secretary of Defense.

http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf. Accessed Nov 2016.

Cauer, Emil., Wolfgang Mathis, Rainer Pauli (2000). Life and Work of Wilhelm Cauer (1900 –

1945). Perpignan, France.

https://www.cs.princeton.edu/courses/archive/fall03/cs323/links/cauer.pdf. Accessed Nov 2016.

Cockburn, Andrew. (2015). Kill Chain: The Rise of the High-Tech Assassins. Henry Holt and

Co. New York, New York.

Congress Of The United States Congressional Budget Office. (2012). Costs of Military Pay and

Benefits in the Defense Budget. https://www.cbo.gov/sites/default/files/112th-congress-2011-

2012/reports/11-14-12-MilitaryComp_0.pdf. Accessed Dec 2016.

DeepMind. (2016). DeepMind About Website. DeepMind. https://deepmind.com/about/. Ac-

cessed Nov 2016.

Defeis, Elizabeth F. (2008). U.N. Peacekeepers and Sexual Abuse and Exploitation: An End to

Impunity. Washington University Global Studies Law Review. Volume 7. Issue 2. Page 184-

214.

Defense Advanced Research Projects Agency. (2013). DARPA Envisions the Future of Machine

Learning. Automated tools aim to make it easier to teach a computer than to program it. News

And Events. DARPA Envisions the Future of Machine Learning. http://www.darpa.mil/news-

events/2013-03-19a. Accessed Nov 2016.

Department of the Navy. (2004). The Navy Unmanned Undersea Vehicle (UUV) Master Plan.

The Department of the Navy. United States of America.

http://www.navy.mil/navydata/technology/uuvmp.pdf. Accessed Nov 2016.


Dwoskin, Elizabeth., Evelyn M. Rusli. (2015).The Technology that Unmasks Your Hidden Emo-

tions Using Psychology and Data Mining to Discern Emotions as People Shop, Watch Ads;

Breeding Privacy Concerns. Tech. The Wall Street Journal.

http://www.wsj.com/articles/startups-see-your-face-unmask-your-emotions-1422472398. Ac-

cessed Nov 2016.

Dyer, John. (2016). Ivan the Terminator: Russia Is Showing Off Its New Robot Soldier. Vice

News. https://news.vice.com/article/ivan-the-terminator-russia-is-showing-off-its-new-robot-

soldier. Accessed Nov 2016.

Exponent. (2016). MARCbot. Exponent Webpage.

http://www.exponent.com/experience/marcbot/?pageSize=NaN&pageNum=0&loadAllByPageSi

ze=true. Accessed Nov 2016.

Fei-Fei, Li. (2015). Fei-Fei Li: How we're teaching computers to understand pictures. Ted Talks.

https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures?l

anguage=en#t-1066099. Accessed Nov 2016.

Fei-Yue, Wang., Shuming Tang. (2004). Artificial societies for integrated and sustainable devel-

opment of metropolitan systems. IEEE Intelligent Systems Volume: 19. Issue:

4.http://ieeexplore.ieee.org/document/1333039/?part=1. Accessed Nov 2016.

Future of Life Institute. (2016). Autonomous Weapons: An Open Letter From Ai & Robotics

Researchers. http://futureoflife.org/open-letter-autonomous-weapons/. Accessed October 2016.

Future of Life Institute. (2016). Computers Gone Wild: Impact and Implications of Develop-

ments in Artificial Intelligence on Society. http://futureoflife.org/2016/05/06/computers-gone-

wild/. Accessed Nov 2016.

Future of Humanity Institute. (2016). Future of Humanity Institute Team Website. University of

Oxford. https://www.fhi.ox.ac.uk/about/the-team/. Accessed Nov 2016.

Future of Humanity Institute. (2016). Future of Humanity Institute About Website. University of

Oxford. https://www.fhi.ox.ac.uk/about/mission/. Accessed Nov 2016.

Geller, Tom. (2012). Talking to Machines. Communications of the ACM. Volume 55, Issue 4. Pages 14-16. New York, NY, USA. http://delivery.acm.org/10.1145/2140000/2133812/p14-geller.pdf. Accessed Nov 2016.

Ghose, Tia. (2015). 3D Computer Chips Could Be 1,000 Times Faster Than Existing Ones. Live

Science. http://www.livescience.com/52207-faster-3d-computer-chip.html. Accessed Nov 2016.


Good, Irving John. (1965). “Speculations Concerning the First Ultraintelligent Machine.” In Ad-

vances in Computers, ed. by Franz L. Alt and Morris Rubinoff, 31–88. Vol. 6. New York:

Academic Press. http://deeprecursion.com/file/2012/05/9292221-Good65ultraintelligent.pdf. Ac-

cessed Nov 2016.

Google Self-Driving Car. (2016). Google Self-Driving Car Website.

https://www.google.com/selfdrivingcar/. Accessed Nov 2016.

Goose, Steve. (2016). Statement by Human Rights Watch to the Preparatory Committee for the

Fifth Review Conference of the Convention on Conventional Weapons.

https://www.hrw.org/news/2016/08/31/statement-human-rights-watch-preparatory-committee-

fifth-review-conference. Accessed Nov 2016.

Grauman, Kristen., Bastian Leibe. (2011). Visual Object Recognition. Synthesis Lectures On

Computer Vision # 1. http://cs.gmu.edu/~kosecka/cs482/grauman-recognition-draft-27-01-

11.pdf. Accessed Nov 2016.

Greenemeier, Larry. (2009). Computers have a lot to learn from the human brain, engineers say.

Scientific America. https://blogs.scientificamerican.com/news-blog/computers-have-a-lot-to-

learn-from-2009-03-10/. Accessed Nov 2016.

Gubrud, Mark. (2014). Is Russia Leading the World to Autonomous Weapons?. International

Committee For Robot Arms Control. http://icrac.net/2014/05/is-russia-leading-the-world-to-

autonomous-weapons/. Accessed Nov 2016.

Hambling, David. (2014). Russia Wants Autonomous Fighting Robots, and Lots of Them

Putin's military is busy building autonomous tanks with mounted machine guns and unmanned,

amphibious Jeep-size vehicles. Popular Mechanics.

http://www.popularmechanics.com/military/a10511/russia-wants-autonomous-fighting-robots-

and-lots-of-them-16787165/. Accessed Nov 2016.

Hanson Robotics (2016). Hanson Robotics Company Overview. Hanson Robotics Limited.

http://www.hansonrobotics.com/wp-content/uploads/2016/09/Hanson-Robotics-Overview.pdf.

Accessed Nov 2016.

Heyns, Christof. (2013). Report of the Special Rapporteur on Extrajudicial, Summary or Arbi-

trary Executions. Human Rights Council Twenty-third Session Agenda Item 3. United Nations.

General Assembly. A/HRC/23/47.

Human Rights Watch., International Human Rights Clinic. (2016). Killer Robots and the Con-

cept of Meaningful Human Control Memorandum to Convention on Conventional Weapons

(CCW) Delegates.

https://www.hrw.org/sites/default/files/supporting_resources/robots_meaningful_human_control

_final.pdf. Accessed Nov 2016.


Human Rights Watch. (2016). Statement to the Convention on Conventional Weapons Fifth Re-

view Conference Preparatory Meeting on Main Committee II. Delivered by Mary Wareham. Ad-

vocacy Director at Human Rights Watch and Global Coordinator for the Campaign to Stop Killer

Robots. https://www.hrw.org/news/2016/09/02/statement-convention-conventional-weapons-

fifth-review-conference-preparatory. accessed October 2016.

IBM Research. (2016). Cognitive Computing. IBM Research Website.

http://www.research.ibm.com/cognitive-computing/index.shtml#fbid=vmpqdEURf04. Accessed

Nov 2016.

IBM Watson Health. (2016). IBM Watson Health Website. https://www.ibm.com/watson/health/.

Accessed Nov 2016.

IBM Watson Oncology. (2016). IBM Watson Oncology Offerings. IBM Watson Health.

http://www.ibm.com/watson/health/oncology/. Accessed Nov 2016.

IBM Watson Products. (2016). IBM Watson Products Website.

http://www.ibm.com/watson/products.html. Accessed Nov 2016.

International Committee of the Red Cross. (2014). Expert Meeting, Autonomous Weapon Sys-

tems Technical, Military, Legal, and Humanitarian Aspects Report. International Committee of

the Red Cross. Geneva, Switzerland.

IEEE. (2016). 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO).

http://www.ieee.org/conferences_events/conferences/conferencedetails/index.html?Conf_ID=39

756&WT.mc_id=uem_robio_16. Accessed Nov 2016.

Israel Aerospace Industries. (2016). Harp Loitering Weapon System. Israel Aerospace Industries.

Missile Division. http://www.iai.co.il/Sip_Storage//FILES/6/41656.pdf. Accessed Nov 2016.

Jeffries, Adrianne. (2014). Only five countries actually want to ban killer robots, Cuba, Pakistan,

Egypt, Ecuador, and the Vatican backed a lethal autonomous weapons ban at the UN; everyone

else wasn't sure. The Verge. http://www.theverge.com/2014/5/16/5724538/what-happened-at-

the-un-killer-robot-debate. Accessed Nov 2016.

Joint Chiefs of Staff. (2016). Joint Operating Environment 2035, The Joint Force in a Contested

and Disordered World. http://www.dtic.mil/doctrine/concepts/joe/joe_2035_july16.pdf. Accessed Nov 2016.

Kongsberg. (2016). Naval Strike Missile. Website. Kongsberg Defence Systems. Kongsberg

Gruppen. http://www.kongsberg.com/en/kds/products/missilesystems/navalstrikemissile/. Ac-

cessed Nov 2016.

Kumagai, J. (2007). A Robotic Sentry For Korea's Demilitarized Zone. Browse Journals & Mag-

azines. IEEE Spectrum. Volume: 44 Issue: 3.

http://ieeexplore.ieee.org/document/4119213/?part=1. Accessed Nov 2016.


Kurzweil, Ray. (2001). The Law of Accelerating Returns. Kurzweil AI Website.

http://www.kurzweilai.net/the-law-of-accelerating-returns. Accessed Nov 2016.

Kurzweil, Ray. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin

Publishing Group. Kindle Edition.

Littlefield, Scott. (2016). Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel

(ACTUV) Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV). De-

fense Advanced Research Projects Agency. http://www.darpa.mil/program/anti-submarine-

warfare-continuous-trail-unmanned-vessel. Accessed Nov 2016.

Lockheed Martin Corporation. (2016). Autonomous Mobility Appliqué System (AMAS).

Lockheed Martin Corporation. Website. http://lockheedmartin.com/us/products/amas1.html.

Accessed Nov 2016.

Lubell, Noam. (2014). Robot warriors: technology and the regulation of war Professor Noam

Lubell TEDxUniversityofEssex. YouTube.com.

https://www.youtube.com/watch?v=x8_7XTDnAEk. Accessed Nov 2016.

Machine Intelligence Research Institute. (2016). Research Webpage.

https://intelligence.org/research/. Accessed Nov 2016.

Marshall, Aarian., Alex Davies. (2016). How Pittsburgh Birthed The Age Of The Self-Driving

Car. Transportation. Wired Magazine. https://www.wired.com/2016/08/pittsburgh-birthed-age-

self-driving-car/. Accessed Nov 2016.

Massachusetts Institute of Technology. (2016). CSAIL Mission. Webpage. Computer Science

and Artificial Intelligence Laboratory. https://www.csail.mit.edu/about/CSAIL_Mission. Ac-

cessed Nov 2016.

Massachusetts Institute of Technology. (2016). Speech-Based Interface For Medical Data Ac-

cess. Webpage. Computer Science and Artificial Intelligence Laboratory.

https://www.csail.mit.edu/node/1965. Accessed Nov 2016.

Massachusetts Institute of Technology. (2016). Spoken Dialogue System For English Language

Learning. Webpage. Computer Science and Artificial Intelligence Laboratory.

https://www.csail.mit.edu/node/1966. Accessed Nov 2016.

Massachusetts Institute of Technology. (2016). Vision-Based Automated Behavior Recognition.

Webpage. Computer Science and Artificial Intelligence Laboratory.

https://www.csail.mit.edu/node/1969. Accessed Nov 2016.

MBDA Systems. (2016). MBDA Demonstrates Brimstone Missile Live Firing from

Apache Helicopter. MBDA Missile Systems. http://www.mbda-systems.com/air-

dominance/brimstone/. Accessed Nov 2016.


Mobileye. (2016). Company Overview. Mobileye Website.

http://www.mobileye.com/about/. Accessed Nov 2016.

Moore, Gordon. (2016). Moore’s Law Home Page. http://www.mooreslaw.org. Accessed Nov

2016.

Nass, Clifford., Scott Brave. (2005). Wired for Speech: How Voice Activates and Advances the

Human-Computer Relationship. Kindle Edition. MIT Press. Cambridge Massachusetts.

Natural Language Processing Group. (2016). Research Software Engineer - Natural Language

Processing Group Website. Stanford University.

https://symsys.stanford.edu/viewing/textadvertisement/12413. Accessed Nov 2016.

Neild, Barry. (2016). Why your next hotel will be staffed by robots. CNN.

http://www.cnn.com/2016/03/17/hotels/hotels-robot-future-travel/. Accessed Nov 2016.

Newnham, Danielle. (2015). The Story Behind Siri And the man who made her. The Startup.

https://medium.com/swlh/the-story-behind-siri-fbeb109938b0#.8016kyncu. Accessed Nov 2016.

Northrop Grumman. (2016). X-47B UCAS Makes Aviation History…Again!. Webpage.

Northrop Grumman.

http://www.northropgrumman.com/Capabilities/X47BUCAS/Pages/default.aspx. Accessed Nov

2016.

Office of The Surgeon General. (2006). Mental Health Advisory Team (MHAT) IV Operation

Iraqi Freedom 05-07 Final Report Office of the Surgeon Multinational Force-Iraq and Office of

The Surgeon General. United States Army Medical Command.

https://archive.org/stream/551721-mental-health-advisory-team-mhat-iv/551721-mental-health-

advisory-team-mhat-iv_djvu.txt. Accessed Nov 2016.

Olick, Diana. (2002). An army of one carries a high price. NBC News.

http://www.nbcnews.com/id/3072945/t/army-one-carries-high-price/#.WEMTFza7p7g. Ac-

cessed Dec 2016.

Omohundro, Stephen. (2007). Stanford Colloquium: Self-Improving Artificial Intelligence.

Youtube.com. https://www.youtube.com/watch?v=CwXQKk_lmzQ. Accessed Nov 2016.

Ord, Toby. (2014). Drones, counterfactuals, and equilibria:Challenges in evaluating new military

technologies. Future of Humanity Institute. https://www.fhi.ox.ac.uk/wp-

content/uploads/Drones-counterfactuals-and-equilibria-Challenges-in-evaluating-new-military-

technologies.pdf. Accessed Nov 2016.


Oxford Brookes University. (2016). Department of Computing and Communications Technolo-

gies. Oxford Brookes University. http://cct.brookes.ac.uk/research/isec/computer-

vision/projects/index.html. Accessed Nov 2016.

Oxford Brookes University. (2016). Tensor modeling for identity recognition from gait and ac-

tivity recognition. Department of Computing and Communications Technologies. Oxford

Brookes University. http://cct.brookes.ac.uk/research/isec/artificial-

intelligence/themes/computer-vision/gait-id.html. Accessed Nov 2016.

Patt, Dr. Daniel. (2016). Aircrew Labor In-Cockpit Automation System (ALIAS). Aircrew Labor

In-Cockpit Automation System. Program Information. Defense Advanced Research Projects

Agency. http://www.darpa.mil/program/aircrew-labor-in-cockpit-automation-system. Accessed

Nov 2016.

Pawlyk, Oriana. (2016). Firm Plans to Build Autonomous Huey Helicopters. Defense Tech.

http://www.defensetech.org/2016/11/11/autonomous-huey-

helicopter/?comp%3D1199442010954%26rank%3D0. Accessed Nov 2016.

Prengaman, Richard J., Edward C. Wetzlar., Robert J. Bailey. (2001). Integrated Ship Defense

Johns Hopkins Apl Technical Digest. Volume 22. Number 4.

http://www.jhuapl.edu/techdigest/TD/td2204/Prengaman.pdf. Accessed Nov 2016.

Pitt, Joel. (2010). Vision. OpenCog.org. http://opencog.org/vision/. Accessed Nov 2016.

Power, Samantha. (2001).”Bystanders to Genocide: Why the United States Let the Rwandan

Tragedy Happen”. The Atlantic Monthly.

http://www.theatlantic.com/magazine/archive/2001/09/bystanders-to-

genocide/304571/?single_page=true%22. Accessed Nov 2016.

Pregmon, Matthew., et al. (2015). Final Report Robotics and Autonomous Systems. The Dwight

D. Eisenhower School for National Security and Resource Strategy. National Defense Universi-

ty. Fort McNair, Washington D.C. http://es.ndu.edu/Portals/75/Documents/industry-

study/reports/2015/es-is-report-robotics-autonomous-systems-2015.pdf. Accessed Nov 2016.

Pryer, Douglas. (2013). The Rise of the Machines: Why Increasingly “Perfect” Weapons Help

Perpetuate our Wars and Endanger Our Nation. Military Review. March-April 2013.

http://usacac.army.mil/CAC2/MilitaryReview/Archives/English/MilitaryReview_20130430_art0

05.pdf. Accessed Nov 2016.

Quintana, Elizabeth. (2012). The Ethics and Legal Implications of Military Unmanned Vehicles

Head of Military Technology & Information Studies Royal United Services Institute for Defence

and Security Studies. http://www.alfredoroma.it/wp-content/uploads/2012/05/RUSI_ethics.pdf.

Accessed Nov 2016.

Raytheon. (2016). Phalanx Close-In Weapon System Last Line of Defense for air, land and sea.

Website. http://www.raytheon.com/capabilities/products/phalanx/. Accessed Nov 2016.

Rheinmetall Defence. (2016). Air Defence Systems: Guarding against the threat from above. Rheinmetall Defence. http://www.rheinmetall-defence.com/en/rheinmetall_defence/systems_and_products/air_defence_systems/index.php. Accessed Nov 2016.

The Robotics Institute. (2015). Uber, Carnegie Mellon Announce Strategic Partnership. School

of Computer Science. Carnegie Mellon University.

http://www.ri.cmu.edu/news_view.html?news_id=395&menu_id=239. Accessed Nov 2016.

Rowan, David. (2015). DeepMind: inside Google's super-brain. Wired Magazine.

http://www.wired.co.uk/article/deepmind. Accessed Nov 2016.

RT Staff. (2015). Japan to Create More User-Friendly Elderly Care Robots: The Japanese government said some of the elderly care robots built have been too large and too expensive. Robotics Trends. http://www.roboticstrends.com/article/japan_to_create_more_user_friendly_elderly_care_robots/medical. Accessed Nov 2016.

Russakovsky, Olga., Jia Deng., Hao Su., Jonathan Krause., Sanjeev Satheesh., Sean Ma., Zhiheng Huang., Andrej Karpathy., Aditya Khosla., Michael Bernstein., Alexander C. Berg., Li Fei-Fei. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.

Salmon, Felix., Jon Stokes. (2010). Algorithms Take Control of Wall Street. Wired Magazine.

https://www.wired.com/2010/12/ff_ai_flashtrading/. Accessed Nov 2016.

Sassoli, Marco., et al. (2014). Autonomous Weapons Systems Under International Law. Geneva

Academy of International Humanitarian Law and Human Rights. YouTube.

https://www.youtube.com/watch?v=Clw2KxBu03c. Accessed Nov 2016.

Scharre, Paul. (2016). Autonomous Weapons and Operational Risk. Ethical Autonomy Project, Center for a New American Security. https://s3.amazonaws.com/files.cnas.org/documents/CNAS_Autonomous-weapons-operational-risk.pdf. Accessed Nov 2016.

Scharre, Paul., Michael C. Horowitz. (2015). An Introduction to Autonomy in Weapon Systems. Center for a New American Security. https://s3.amazonaws.com/files.cnas.org/documents/Ethical-Autonomy-Working-Paper_021015_v02.pdf. Accessed Nov 2016.

Shachtman, Noah. (2007). Robot Cannon Kills 9, Wounds 14. Wired Magazine.

https://www.wired.com/2007/10/robot-cannon-ki/. Accessed Nov 2016.

Sharp. (2016). Sharp INTELLOS™ A-UGV. Website. http://sharpintellos.com. Accessed Nov 2016.

Shaughnessy, Larry. (2012). One soldier, one year: $850,000 and rising. CNN. http://security.blogs.cnn.com/2012/02/28/one-soldier-one-year-850000-and-rising/. Accessed Dec 2016.

Seth, Shobhit. (2015). Basics of Algorithmic Trading: Concepts and Examples. Investopedia. http://www.investopedia.com/articles/active-trading/101014/basics-algorithmic-trading-concepts-and-examples.asp. Accessed Nov 2016.

Singer, P.W., Jeffrey Lin., Hu Yu., Qian Xiaohu. (2016). China’s Army Hosts An Autonomous

Robot Contest Called “Overcoming Obstacle 2016”. Popular Science.

http://www.popsci.com/chinas-army-hosts-an-autonomous-robots-contest. Accessed Nov 2016.

Singer, P.W., August Cole. (2015). Ghost Fleet. Houghton Mifflin Harcourt Publishing Company. New York, New York.

Singer, P.W. (2009). In the Loop? Armed Robots and the Future of War. Brookings Institute.

https://www.brookings.edu/articles/in-the-loop-armed-robots-and-the-future-of-war/. Accessed

Nov 2016.

Singer, P.W., August Cole. (2015). The weapons the U.S. needs for a war it doesn’t want.

Reuters. http://blogs.reuters.com/great-debate/2015/07/19/the-weapons-we-need-for-a-war-we-

dont-want/. Accessed Nov 2016.

Singer, P.W. (2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century. Penguin Books. New York, New York.

Singh, Abhijit. (2016). Is China Really Building Missiles With Artificial Intelligence? There are

real limits on the amount of AI acceptable to navy commanders. The Diplomat.

http://thediplomat.com/2016/09/is-china-really-building-missiles-with-artificial-intelligence/.

Accessed Nov 2016.

SRI International. (2010). Siri Launches Virtual Personal Assistant for iPhone 3GS. News Release. Information & Computing Sciences. SRI International. https://www.sri.com/newsroom/press-releases/siri-launches-virtual-personal-assistant-iphone-3gs. Accessed Nov 2016.

Stanford Program in AI-assisted Care (PAC). (2016). Stanford Program in AI-assisted Care

(PAC) Website. Stanford University. http://vision.stanford.edu/pac/. Accessed Nov 2016.

Strain, Steve., Sean Kugele., Stan Franklin. (2014). The Learning Intelligent Distribution Agent (LIDA) and Medical Agent X (MAX): Computational Intelligence for Medical Diagnosis. Cognitive Computing Research Group (CCRG). University of Memphis. Memphis, Tennessee, USA. http://ccrg.cs.memphis.edu/assets/papers/2014/PID3418857.pdf. Accessed Nov 2016.

Stout, Kristie Lu. (2016). Cyber warfare: Who is China hacking now? CNN. http://www.cnn.com/2016/09/29/asia/china-cyber-spies-hacking/. Accessed Nov 2016.

Turing, Alan. (1950). Computing Machinery and Intelligence. Mind.

http://loebner.net/Prizef/TuringArticle.html. Accessed Nov 2016.

The United States Marine Corps. (2016). The Marine Corps Operating Concept: How an Expeditionary Force Operates in the 21st Century. Marine Corps Combat Development Command. Quantico, Virginia. http://www.mccdc.marines.mil/Portals/172/Docs/MCCDC/MOC/Marine%20Corps%20Operating%20Concept%20Sept%202016.pdf?ver=2016-09-28-084156-190. Accessed October 2016.

United Nations Office at Geneva. (2016). 2016 Meeting of Experts on LAWS. United Nations.

http://www.unog.ch/80256EE600585943/(httpPages)/37D51189AC4FB6E1C1257F4D004CAF

B2?OpenDocument. Accessed Nov 2016.

United Nations. (2008). Responsibility of States for Internationally Wrongful Acts, 2001. International Law Commission. United Nations. http://legal.un.org/ilc/texts/instruments/english/draft_articles/9_6_2001.pdf. Accessed Nov 2016.

United Nations. (2002). A/RES/56/83 Responsibility of States for internationally wrongful acts, 12 Dec. 2001. https://www.un.org/en/ga/search/view_doc.asp?symbol=A/RES/56/83. Accessed Nov 2016.

United Nations. (2004). UN Report of the High-Level Panel on Threats, Challenges and Change.

"A More Secure World: Our Shared Responsibility.” UN General Assembly. A/59/565.

The University of Memphis. Projects: IDA and LIDA. Institute for Intelligent Systems. The University of Memphis. http://www.memphis.edu/iis/projects/ida_and_lida.php. Accessed Nov 2016.

The Vision Lab. (2016). The Vision Lab Home Page. Computer Science Department. Stanford University. http://vision.stanford.edu/index.html. Accessed Nov 2016.

Wada, Kazuyoshi., Takanori Shibata., Tomoko Saito., Kazuo Tanie. (2004). Effects of Robot-Assisted Activity for Elderly People and Nurses at a Day Service Center. Invited Paper. Proceedings of the IEEE. Vol. 92, No. 11, pp. 1780-1788.

Wolens, Doug. (2009). Singularity 101 with Vernor Vinge. Humanity Plus Magazine.

http://hplusmagazine.com/2009/04/22/singularity-101-vernor-vinge/. Accessed Nov 2016.

Work, Robert. (2015). The Third U.S. Offset Strategy and its Implications for Partners and Allies. Deputy Secretary of Defense Speech. As Delivered by Deputy Secretary of Defense Bob Work, Willard Hotel, Washington, D.C. http://www.defense.gov/News/Speeches/Speech-View/Article/606641/the-third-us-offset-strategy-and-its-implications-for-partners-and-allies. Accessed Nov 2016.

Work, Robert., Shawn Brimley. (2014). 20YY: Preparing for War in the Robotic Age. Center for a New American Security. https://s3.amazonaws.com/files.cnas.org/documents/CNAS_20YY_WorkBrimley.pdf. Accessed October 2016.

Yan, Jie. (2016). China's Courts Look to AI for Smarter Judgments: Big data and machine learning to make China's judicial system more intelligent and efficient. Sixth Tone. http://www.sixthtone.com/news/china’s-courts-look-ai-smarter-judgments. Accessed Nov 2016.

Yuanchao, Li., Liu Yandong., Han Qide. (2015). Robot Products from Our School were

Unveiled at the World Robot Conference. Harbin Institute of Technology.

http://en.hit.edu.cn/snews.asp?id=1001. Accessed Nov 2016.

Yudkowsky, Eliezer. (2008). Artificial Intelligence as a Positive and Negative Factor in Global

Risk. Machine Intelligence Research Institute. http://intelligence.org/files/AIPosNegFactor.pdf.

Accessed Nov 2016.

Yudkowsky, Eliezer. (2016). The AI-Box Experiment. Yudkowsky Website. http://www.yudkowsky.net/singularity/aibox/. Accessed Nov 2016.

Yudkowsky, Eliezer. (2004). Coherent Extrapolated Volition. Machine Intelligence Research

Institute. https://intelligence.org/files/CEV.pdf. Accessed Nov 2016.

Zipline. (2016). The Future of Healthcare is Out for Delivery. Zipline Homepage.

http://flyzipline.com/product/. Accessed Nov 2016.

Major Ted W. Schroeder is an infantry officer in the United States Marine Corps currently stationed in Madison, Wisconsin. He recently completed his master's degree in International Service at American University.