
Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure

By Steve Ranger

Judgement day may have just taken a step closer, for killer robots at least. Amidst concern about the deployment of intelligent robots on the battlefield, governments have agreed to look more closely at the issues these weapons raise, the first step towards an outright ban before they've even been built.

In November, the governments that are party to the Convention on Certain Conventional Weapons (CCW) agreed to meet in Geneva next year to discuss the issues related to so-called "lethal autonomous weapons systems," or what campaigners have dubbed "killer robots."

For the military, war robots have many advantages: they don't need food or pay, they don't get tired or need to sleep, they follow orders automatically, and they don't feel fear, anger, or pain. And few back home would mourn if robot soldiers were destroyed on the battlefield.

There are already plenty of examples of how technology has changed warfare, from David's sling to the invention of the tank. The most recent and controversial is the rise of drone warfare. But even these aircraft have pilots who fly them by remote control, and it is humans who make the decisions about which targets to pick and when to fire a missile.

But what concerns many experts is the potential next generation of robotic weapons: ones that make their own decisions about who to target and who to kill.


Banning killer robots

"The decision to begin international discussions next year is a major leap forward for efforts to ban killer robots pre-emptively," said Steve Goose, arms director

at Human Rights Watch. "Governments have recognised that fully autonomous weapons raise serious legal and ethical concerns, and that urgent action is

needed."

While fully autonomous robot weapons might not be deployed for two or three decades, the International Committee for Robot Arms Control (ICRAC), an international group of academics and experts concerned about the implications of a robot arms race, argues a prohibition on the development and deployment of autonomous weapons systems is the correct approach. "Machines should not be allowed to make the decision to kill people," it states.

While no autonomous weapons have been built yet, it's not a purely theoretical concern, either. Late last year, the U.S. Department of Defense (DoD) released its policy on how autonomous weapons should be used (http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf) if they were to be deployed on the battlefield. The policy limits how they should operate, but definitely doesn't ban them.

For example, the DoD guidelines state that "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," and require that systems "are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties."

The guidelines do however seem to exclude weapons powered by artificial intelligence (AI) from explicitly targeting humans: "Human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks."

In contrast, the UK says it has no plans to develop fully autonomous weapons (http://www.publications.parliament.uk/pa/cm201314/cmhansrd/cm130617/debtext/130617-0004.htm). Foreign Office minister Alistair Burt told Parliament earlier this year that the UK armed forces are clear that "the operation of our weapons will always be under human control as an absolute guarantee of human oversight and authority and of accountability for weapons usage," but then qualified that slightly: "The UK has unilaterally decided to put in place a restrictive policy whereby we have no plans at present to develop lethal autonomous robotics, but we do not intend to formalise that in a national moratorium."

Noel Sharkey is chairman of ICRAC and professor of AI and robotics at the University of Sheffield in the UK. When he started reading about military plans around autonomous weapons, he was shocked: "There seemed to be a complete overestimation of the technology. It was more like a sci-fi interpretation of the technology."

"Governments have

recognised that fully

autonomous

weapons raise

serious legal and

ethical concerns,

and that urgent

action is needed."

— Steve Goose, arms

director at Human

Rights Watch


Of ICRAC's intentions, he says the campaign is not against autonomous robots. "My vacuum cleaner is an autonomous robot, and I've worked for 30 years developing autonomous robots." What it wants is a ban on what it calls the "kill function." An autonomous weapon is one that, once launched, can select its own targets and engage them, Sharkey says. "Engage them means kill them. So it's the idea of the machine selecting its own targets that's the problem for us."

For Sharkey, robot soldiers can't comply with the basic rules of war. They can't distinguish between a combatant and a civilian, or between a wounded soldier and a legitimate target. "There are no AI robotic systems capable of doing that at all," he argues, pointing to one UK-built system that can tell the difference between a human and a car "but has problems with a dancing bear or a dog on its hind legs."

A robot weapons system won't be able to judge proportionality either, he argues; that is, judge whether civilian losses are acceptable and in proportion to the military advantage gained by an attack. "How's a robot going to know that? PhDs are written on military advantage. It's very contextual. You require a very experienced commander in the field on the ground who makes that judgment," he said.

But one of the biggest issues is accountability, Sharkey said. A robot can't be blamed if a military operation goes wrong, and that's what really worries the military commanders he speaks to: they are the ones who would be held accountable for launching the attack.

"But it wouldn't be fair, because these things can crash at any time, they can be spoofed, they can be hacked, they can get tackled in the industrial supply chain, they can take a bullet through the computer, human error in coding, you can have sensor problems, and who is responsible? Is it the manufacturers, the software engineers, the engineers, or is it the commander? In war, you need to know, if there's a mishap, who's responsible."

Professor Noel Sharkey. Image: stopkillerrobots.org (http://www.stopkillerrobots.org/2013/10/timefortalks/)


Sharkey's concern is that the weapons will be rolled out gradually despite the limitations of the technology. "The technology itself is just not fit for purpose and it's not going to be fit for purpose by the time these things are deployed."

As the battlefield adapts to increasingly high-tech weapons, the use of autonomous robots becomes more likely. If an enemy can render drones useless by blocking their communications (a likely consequence of their increased usage), then an autonomous drone that can simply continue with its mission without calling home is a useful addition. Similarly, because it takes roughly one-and-a-half seconds for a movement on a remote pilot's joystick to have an effect on a drone, remotely piloted aircraft would be slower to respond than autonomous ones if attacked, which is another good reason to make them self-governing.
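To put that delay in context, a quick back-of-the-envelope calculation helps. The 1.5-second figure is the one cited above; the airspeeds in this sketch are illustrative assumptions, not values from the article.

```python
# Rough sense of what a 1.5 s control-loop delay means in flight.
# The 1.5 s round-trip figure is cited in the article; the speeds
# below are illustrative assumptions.

CONTROL_DELAY_S = 1.5  # joystick input -> visible effect on the drone

aircraft = [
    ("slow surveillance drone (~40 m/s, assumed)", 40),
    ("fast strike aircraft (~250 m/s, assumed)", 250),
]

for label, speed_m_s in aircraft:
    distance_m = CONTROL_DELAY_S * speed_m_s
    print(f"{label}: travels {distance_m:.0f} m before a command takes effect")
```

Even at modest speeds, an aircraft covers tens to hundreds of metres before a remote pilot's correction arrives, which is the tactical argument for autonomy in a dogfight or under attack.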

The ICRAC campaign hopes to use the decision by the Convention on Certain Conventional Weapons to look at autonomous weapons as a first step towards a ban, using the same strategy that led to the pre-emptive ban on blinding laser weapons.

One reason for the unreasonable level of expectation around autonomous weapons is the belief that AI is far more capable than it really is, or what Sharkey describes as the "cultural myth of artificial intelligence that has come out of science fiction." Researchers working in the field assert that AI is working on projects far more mundane (if useful) than building thinking humanoid robots.

"Every decade, within 20 years we are going to have sentient robots, and there is always somebody saying it. But if you look at the people on the ground working [on AI], they don't say this. They get on with the work. AI is mostly a practical subject, developing things that you don't even know are AI -- in your phone, in your car; that's the way we work."

And even if, at some point in the far future, AI matures to the point at which a computer system can abide by the rules of war, the fundamental moral questions will still apply. Sharkey said: "You've still got the problems of accountability, and people will have to decide: is this morally what we want to have, a machine making that decision to kill a human?"

The android rules

Discussing whether robots should be allowed to kill – especially when killer robots don't exist – might seem a slightly arcane and obscure debate to be having. But robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't.

What we have been doing so far is building rules for specific situations, such as the DoD policy on autonomous weapons systems. Another, less dramatic example is the recent move by some US states to pass legislation allowing autonomous cars to drive on the road. We're gradually building a set of rules for autonomous robots in specific situations, but rarely looking at the big picture.

However, there have been attempts to create a set of rules, a moral framework, to govern AI and robots.

"People will have to

decide is this

morally what we

want to have, a

machine making

that decision to kill a

human."

— Noel Sharkey,

chairman of ICRAC

Page 5: Robots of death, robots of love: The reality of android

2/16/2014 Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure - Feature - TechRepublic

http://www.techrepublic.com/article/robots-of-death-robots-of-love-the-reality-of-android-soldiers-and-why-laws-for-robots-are-doomed-to-failure/ 5/8

Certainly the most famous attempt to create a set of laws for robots is Isaac Asimov's three laws of robotics, which, since they were first defined in 1942, have offered – at least in fiction – a moral framework for how robots should behave.

Asimov's three laws state:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Robotics and AI haven't come anywhere close to being able to build robots that could comprehend or abide by these or any other sophisticated rules. A robot vacuum cleaner doesn't need this level of moral complexity.

"People think about Asimov's laws, but they were

set up to point out how a simple ethical system

doesn't work. If you read the short stories, every

single one is about a failure, and they are totally

impractical," said Dr. Joanna Bryson of the

University of Bath.
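Bryson's point can be made concrete by trying to write the laws down as an executable decision rule. The toy sketch below is purely illustrative and not from the article: it treats the three laws as a strict priority ordering over candidate actions, with hypothetical boolean judgments (would this action harm a human?) standing in for assessments no real AI system can actually make. The dilemma at the end, where inaction allows harm but every intervention also causes harm, is exactly the kind of deadlock Asimov's stories turn on.

```python
# Illustrative toy only: Asimov's three laws as a lexicographic rule filter.
# The World fields are hypothetical stand-ins for judgments (does this
# action harm a human?) that no current AI system can make.

from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class World:
    harmful_actions: Set[str] = field(default_factory=set)  # actions that injure a human
    ordered_action: Optional[str] = None                    # what a human has ordered
    self_destructive: Set[str] = field(default_factory=set) # actions that destroy the robot
    harm_if_idle: bool = False                              # does doing nothing allow harm?

def violates_first_law(action: str, w: World) -> bool:
    # Law 1: no injury to a human, by action or by inaction.
    return action in w.harmful_actions or (action == "wait" and w.harm_if_idle)

def violates_second_law(action: str, w: World) -> bool:
    # Law 2: obey human orders (subordinate to Law 1).
    return w.ordered_action is not None and action != w.ordered_action

def violates_third_law(action: str, w: World) -> bool:
    # Law 3: protect your own existence (subordinate to Laws 1 and 2).
    return action in w.self_destructive

def choose(actions: List[str], w: World) -> Optional[str]:
    # Law 1 is absolute: discard anything that violates it.
    candidates = [a for a in actions if not violates_first_law(a, w)]
    if not candidates:
        return None  # deadlock: every available option harms a human
    # Laws 2 and 3 only break ties among Law 1-compliant options.
    for rule in (violates_second_law, violates_third_law):
        kept = [a for a in candidates if not rule(a, w)]
        if kept:
            candidates = kept
    return candidates[0]

# The classic failure mode: doing nothing allows harm, but every
# intervention also causes harm, so the rule system has no answer.
dilemma = World(harmful_actions={"push", "restrain"}, harm_if_idle=True)
print(choose(["wait", "push", "restrain"], dilemma))  # -> None
```

Even in this toy form, the hard part is obvious: all the real difficulty hides inside the predicates, which is precisely Bryson's and Sharkey's point.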

Bryson emphasises that robots and AI need to be considered as the latest set of tools – extremely sophisticated tools, but no more than that. She argues that AI should be seen as a tool that extends human intelligence in the same way that writing did, by allowing humans to take memory out of their heads and put it into a book. "We've been changing our world with things like artificial intelligence for thousands of years," she says. "What's happening now is we're doing it faster."

But for Bryson, regardless of how autonomous or intelligent an android is, because it is a tool, it's not the robots that need the rules – it's us. "They have to be inside our moral framework. They won't have their own moral framework. We have to make the choice so that robots are positioned within our moral framework so that they don't damage the rest of the life on the planet."

The UK's Engineering and Physical Sciences Research Council (EPSRC) is one of the few organisations that has tried to create a set of practical rules for robots, and it quickly realised that laws for robots aren't what is needed right now.

Its Principles of Robotics (http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx) notes: "Asimov's laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people, when in real life, it is the humans who design and use the robots who must be the actual subjects of any law. As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies."

As such, the principles the EPSRC experts - including Dr. Bryson - outlined are for the designers, builders, and users of robots, not for the robots themselves.

For example, the five principles include: "Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy."

Dr. Kathleen Richardson of University College London (UCL) also argues that we don't need new rules for robots beyond the ones we already have in place to protect us from other types of machines, even if they are used on the battlefield.

"Naturally, a remote killing machine will raise a new set of issues in relation to the human relationship with violence. In such a case, one might need to know that that machine would kill the 'right' target...but once again this has got nothing to do with something called 'robot ethics' but human ethics," she said.

The robots we are currently building are not like the thinking machines we find in fiction, she argues, and so the important issues are more about standard health and safety – that we don't build machines that accidentally fall on you – rather than about helping them distinguish between right and wrong.

"Robots made by scientists are like automaton," she said. "It is important to think about entities that we create and to ensure

humans can interact with them safely. But there are no 'special' guidelines that need to be created for robots, the mechanical

robots that are imagined to require ethics in these discussions do not exist and are not likely to exist," she said.

So while we might need rules to make sure a bipedal robot can operate safely in a home, these are practical considerations alone,

the ones you'd require from any consumer electronics in the home.


"Ethics on the other hand implies something well beyond this," she says. "It implies a different set of categorical notions need to be

implemented in relation to robotic machines as special kinds of entities."

Exploitive, loving robots

Indeed, while few of us (hopefully) are likely to encounter a killer robot, with aging populations the use of human-like robots for care may become more important, and this could be a bigger long-term issue. Rather than feeling too much fear of robots, we may become emotionally dependent on them, and feel too much love.

Another of the EPSRC guidelines (again, one of the few sets of guidelines in this area that exist) states: "Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent." It warns that unscrupulous manufacturers might use the illusion of emotions in a robot pet or companion to find a way to charge more money.

Perhaps one of the biggest risks we face is that, by giving robots the illusion of emotions and investing them with the apparent need for a moral framework to guide them, we risk raising them to the level of humans – and making it easier to ignore our fellow humans as a result.

UCL's Richardson argues that robotic scientists are right to think about the implications, but that the debate risks missing a bigger issue: why are we using these devices in the first place, particularly in social care?

The real responsibility

Killer robots and power-mad AIs are the staples of cheap science fiction, but fixating on these types of threats allows us to avoid the complexities of our own mundane realities. It is a reflection - or indictment - of our society that the roles we are finding for robots – fighting our wars and looking after the elderly – are the roles that we are reluctant to fill ourselves.

Putting robots into these roles may fix part of the problem, but it doesn't address the underlying issues, and, perhaps worse, it allows us as a society to ignore them. Robots fighting our battles make war easier, and robots looking after the elderly make it easier to ignore our obligations and the societal strain that comes with an aging population. As such, worrying about the moral framework for androids is often a distraction from our own ethical failings.

About Steve Ranger

Steve Ranger is the UK editor of TechRepublic, and has been writing about the impact of technology on people, business and culture for more than a decade. Before joining TechRepublic he was the editor of silicon.com.