Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA

By Patrick Lin
Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event
hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it
might mean for the intelligence service to deploy different kinds of robots.
Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture
suspects? This would avoid a serious ethical conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring the vital signs and health of interrogation subjects. A
robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its very existence creates
new dilemmas of its own.
The ethics of military robots is quickly marching ahead, judging by news coverage and academic
research. Yet there's little discussion about robots in the service of national intelligence and espionage,
which are omnipresent activities in the background. This is surprising, because most military robots
are used for surveillance and reconnaissance, and their most controversial uses are traced back to the
Central Intelligence Agency (CIA) in targeted strikes against suspected terrorists. Just this month, a
CIA drone--an RQ-170 Sentinel--crash-landed intact into the hands of the Iranians, exposing the secret
US spy program in the volatile region.
The US intelligence community, to be sure, is very much interested in robot ethics. At the least, they
don't want to be ambushed by public criticism or worse, since that could derail programs, waste
resources, and erode international support. Many in government and policy also have a genuine
concern about "doing the right thing" and the impact of war technologies on society. To those ends, In-
Q-Tel--the CIA's technology venture-capital arm (the "Q" is a nod to the technology-gadget genius in
the James Bond spy movies)--invited me to give a briefing to the intelligence community on ethical
surprises in their line of work, beyond familiar concerns over possible privacy violations and illegal
assassinations. This article is based on that briefing, and while I refer mainly to the US intelligence
community, this discussion could apply just as well to intelligence programs abroad.
BACKGROUND
Robotics is a game-changer in national security. We now find military robots in just about every
environment: land, sea, air, and even outer space. They have a full range of form factors, from tiny robots that look like insects to aerial drones with wingspans greater than that of a Boeing 737 airliner. Some
are fixed onto battleships, while others patrol borders in Israel and South Korea; the latter have fully-auto modes and can make their own targeting and attack decisions. There's interesting work going on now
with micro robots, swarm robots, humanoids, chemical bots, and biological-machine integrations. As
you'd expect, military robots have fierce names like TALON SWORDS, Crusher, BEAR, Big Dog,
Predator, Reaper, Harpy, Raven, Global Hawk, Vulture, Switchblade, and so on. But not all are
weapons--for instance, BEAR is designed to retrieve wounded soldiers on an active battlefield.
The usual reason why we'd want robots in the service of national security and intelligence is that they
can do jobs known as the three "D"s: dull jobs, such as extended reconnaissance or patrol beyond limits of
human endurance, and standing guard over perimeters; dirty jobs, such as work with hazardous
materials and after nuclear or biochemical attacks, and in environments unsuitable for humans, such
as underwater and outer space; and dangerous jobs, such as tunneling in terrorist caves, or controlling
hostile crowds, or clearing improvised explosive devices (IEDs).
But there's a new, fourth "D" that's worth considering, and that's the ability to act with dispassion.
(This is motivated by Prof. Ronald Arkin's work at Georgia Tech, though others remain skeptical, such
as Prof. Noel Sharkey at the University of Sheffield in the UK.) Robots wouldn't act with malice or hatred
or other emotions that may lead to war crimes and other abuses, such as rape. They're unaffected by
emotion and adrenaline and hunger. They're immune to sleep deprivation, low morale, fatigue, etc.
that would cloud our judgment. They can see through the "fog of war", to reduce unlawful and
accidental killings. And they can be objective, unblinking observers to ensure ethical conduct in
wartime. So robots can do many of our jobs better than we can, and maybe even act more ethically, at
least in the high-stress environment of war.
SCENARIOS
With that background, let's look at some current and future scenarios. These go beyond obvious
intelligence, surveillance, and reconnaissance (ISR), strike, and sentry applications, which is how most robots are used today. I'll limit these scenarios to a time horizon of about 10-15 years from now.
Military surveillance applications are well known, but there are also important civilian applications,
such as robots that patrol playgrounds for pedophiles (for instance, in South Korea) and major
sporting events for suspicious activity (such as the 2006 World Cup in Germany and the 2008 Beijing
Olympics). Current and future biometric capabilities may enable robots to detect faces, drugs, and
weapons at a distance and underneath clothing. In the future, robot swarms and "smart dust"
(sometimes called nanosensors) may be used in this role.
Robots can be used for alerting purposes: for instance, a humanoid police robot in China gives out information, and a Russian police robot recites laws and issues warnings. So there's potential for educational or communication roles and on-the-spot community reporting, as they relate to intelligence gathering.
In delivery applications, SWAT police teams already use robots to interact with hostage-takers and in
other dangerous situations. So robots could be used to deliver other items or plant surveillance devices
in inaccessible places. Likewise, they can be used for extractions. As mentioned earlier, the BEAR
robot can retrieve wounded soldiers from the battlefield, as well as handle hazardous or heavy
materials. In the future, an autonomous car or helicopter might be deployed to extract or transport
suspects and assets, to limit US personnel inside hostile or foreign borders.
In detention applications, robots could be used to guard not just buildings but also people. Some advantages here would be the elimination of prison abuses like those we saw at Guantanamo Bay Naval Base in Cuba and Abu Ghraib prison in Iraq. This speaks to the dispassionate way robots can operate.
Relatedly--and I'm not advocating any of these scenarios, just speculating on possible uses--robots can
solve the dilemma of using physicians in interrogations and torture. These activities conflict with their
duty to care and the Hippocratic oath to do no harm. Robots can monitor vital signs of interrogated
suspects as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from malice and prejudices that might take things too far (or much further than they already go).
And robots could act as Trojan horses, or gifts with a hidden surprise. I'll talk more about these
scenarios and others as we discuss possible ethical surprises next.
ETHICAL AND POLICY SURPRISES
1. Limitations
While robots can be seen as replacements for humans, in most situations, humans will still be in the
loop, or at least on the loop--either in significant control of the robot, or able to veto a robot's course of
action. And robots will likely be interacting with humans. This points to a possible weak link in
applications: the human factor.
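To pin down that in-the-loop versus on-the-loop distinction before moving on, it's essentially a question of who holds default authority. Here's a minimal sketch in Python--purely illustrative, with invented names and no relation to any real control system--where in-the-loop means nothing happens without explicit approval, and on-the-loop means the action proceeds unless a human vetoes it within a time window.

```python
import time
from enum import Enum

class ControlMode(Enum):
    IN_THE_LOOP = "in the loop"   # robot acts only on explicit human approval
    ON_THE_LOOP = "on the loop"   # robot acts by default unless a human vetoes in time

def authorize(action, mode, human_approves, human_vetoes, veto_window_s=5.0):
    """Return True if `action` may proceed under the given control mode.
    `human_approves` and `human_vetoes` are callables standing in for a
    real operator interface (hypothetical, for illustration only)."""
    if mode is ControlMode.IN_THE_LOOP:
        # Significant human control: no positive decision, no action.
        return bool(human_approves(action))
    # On the loop: proceed by default, but give the human a chance to veto.
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        if human_vetoes(action):
            return False
        time.sleep(0.1)
    return True

# An operator who approves repositioning but vetoes strikes:
print(authorize("reposition", ControlMode.IN_THE_LOOP,
                lambda a: True, lambda a: False))          # True
print(authorize("strike", ControlMode.ON_THE_LOOP,
                lambda a: True, lambda a: a == "strike",
                veto_window_s=0.3))                        # False
```

Note how the on-the-loop mode quietly shifts the burden: the human's inaction becomes consent, which is part of why the human factor is a weak link.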
For instance, unmanned aerial vehicles (UAVs), such as Predator and Global Hawk, may be able to fly
the skies for longer than a normal human can endure, but there are still human operators who must
stay awake to monitor activities. Some military UAV operators may be overworked and fatigued, which
may lead to errors in judgment. Even without fatigue, humans may still make bad decisions, so errors
and even mischief are always a possibility and may include friendly-fire deaths and crashes.
Some critics have worried that UAV operators--controlling drones from half a world away--could
become detached and less caring about killing, given the distance, and this may lead to more
unjustified strikes and collateral damage. But other reports seem to indicate an opposite effect: These
controllers have an intimate view of their targets by video streaming, following them for hours and
days, and they can also see the aftermath of a strike, which may include strewn body parts of nearby
children. So there's a real risk of post-traumatic stress disorder (PTSD) with these operators.
Another source of liability is how we frame our use of robots to the public and international
communities. In a recent broadcast interview, one US military officer was responding to a concern that
drones are making war easier to wage, given that we can safely strike from longer distances with these
drones. He compared our use of drones with the biblical David's use of a sling against Goliath: both are
about using missile or long-range weapons and presumably have righteousness on their side. Now,
whether or not you're Christian, it's clear that our adversaries might not be. So rhetoric like this might
inflame or exacerbate tensions, and this reflects badly on our use of technology.
One more human weak link is that robots may well have better situational awareness, if they're outfitted with sensors that let them see in the dark and through walls, network with other computers, and so on. This raises the following problem: Could a robot ever refuse a human order, if it
knows better? For instance, if a human orders a robot to shoot a target or destroy a safehouse, but it
turns out that the robot identifies the target as a child or a safehouse full of noncombatants, could it
refuse that order? Does having the technical ability to collect better intelligence before we conduct a
strike obligate us to do everything we can to collect that data? That is, would we be liable for not
knowing things that we might have known by deploying intelligence-gathering robots? Similarly, given
that UAVs can enable more precise strikes, are we obligated to use them to minimize collateral
damage?
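Purely as a thought experiment, here's what such a refusal might look like in code--a hedged sketch with invented names and thresholds, not anyone's actual targeting logic. The robot defers back to its human operator whenever its own perception contradicts the premise of the order:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """What the robot's sensors report about a target (illustrative only)."""
    combatant_confidence: float   # 0.0-1.0, from a hypothetical classifier
    noncombatants_nearby: int

def respond_to_strike_order(p: Perception, refusal_threshold: float = 0.5) -> str:
    """Toy rule: refuse and request human review when the robot's own
    assessment suggests the target may be a noncombatant or that
    noncombatants are present. Whether a robot should ever be allowed
    to do this is exactly the open question."""
    if p.combatant_confidence < refusal_threshold:
        return "refusing: target may be a noncombatant; requesting review"
    if p.noncombatants_nearby > 0:
        return "refusing: noncombatants near target; requesting review"
    return "complying"

# The same order draws different responses as perception changes:
print(respond_to_strike_order(Perception(0.95, 0)))  # complying
print(respond_to_strike_order(Perception(0.20, 3)))  # refusing
```

Even this toy version surfaces the liability question above: once the machine can check, choosing not to check starts to look like negligence.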
On the other hand, robots themselves could be the weak link. While they can replace us in physical
tasks like heavy lifting or working with dangerous materials, it doesn't seem likely that they can take
over psychological jobs such as gaining the confidence of an agent, which involves humor, mirroring,
and other social tricks. So human intelligence, or HUMINT, will still be necessary in the foreseeable
future.
Relatedly, we already hear criticisms that the use of technology in war or peacekeeping missions isn't helping to win the hearts and minds of local foreign populations. For instance, sending robot patrols into Baghdad to keep the peace would send the wrong message about our willingness to connect with the residents; we will still need human diplomacy for that. In war, this could backfire against us, as our enemies mark us as dishonorable and cowardly for being unwilling to engage them man to man. That makes them more resolute in fighting us; it fuels their propaganda and recruitment efforts; and it leads to a new crop of determined terrorists.
Also, robots might not be taken seriously by humans interacting with them. We tend to disrespect
machines more than humans, abusing them more often, for instance, beating up printers and
computers that annoy us. So we could be impatient with robots, as well as distrustful--and this reduces
their effectiveness.
Without defenses, robots could be easy targets for capture, yet they may contain critical technologies
and classified data that we don't want to fall into the wrong hands. Robotic self-destruct measures
could go off at the wrong time and place, injuring people and creating an international crisis. So do we
give them defensive capabilities, such as evasive maneuvers or maybe nonlethal weapons like repellent
spray or Taser guns or rubber bullets? Well, any of these "nonlethal" measures could turn deadly too.
In running away, a robot could mow down a small child or enemy combatant, which would escalate a
crisis. And we see news reports all too often about unintended deaths caused by Tasers and other
supposedly nonlethal weapons.
2. International humanitarian law (IHL)
What if we designed robots with lethal defenses or offensive capabilities? We already do that with some
robots, like the Predator, Reaper, CIWS, and others. And there, we run into familiar concerns that
robots might not comply with international humanitarian law, that is, the laws of war. For instance,
critics have noted that we shouldn't allow robots to make their own attack decisions (as some do now),
because they don't have the technical ability to distinguish combatants from noncombatants, that is, to
satisfy the principle of distinction, which is found in various places such as the Geneva Conventions
and the underlying just-war tradition. This principle requires that we never target noncombatants. But
a robot already has a hard time distinguishing a terrorist pointing a gun at it from, say, a girl pointing
an ice cream cone at it. These days, even humans have a hard time with this principle, since a terrorist
might look exactly like an Afghan shepherd with an AK-47 who's just protecting his flock of goats.
Another worry is that the use of lethal robots represents a disproportionate use of force, relative to the
military objective. This speaks to the collateral damage, or unintended death of nearby innocent
civilians, caused by, say, a Hellfire missile launched by a Reaper UAV. What's an acceptable rate of
innocents killed for every bad guy killed: 2:1, 10:1, 50:1? That number hasn't been nailed down and
continues to be a source of criticism. It's conceivable that there might be a target of such high value
that even a 1,000:1 collateral-damage rate, or greater, would be acceptable to us.
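To see why that unsettled number matters, consider a toy calculation--a sketch only, with the threshold left as a parameter precisely because no authority has fixed it:

```python
def within_proportionality(expected_civilian_deaths: int,
                           expected_combatant_deaths: int,
                           max_ratio: float) -> bool:
    """Toy proportionality check: is the expected collateral-damage ratio
    at or below a stipulated threshold? The threshold is the contested,
    unfixed quantity, so it must be supplied by the caller."""
    if expected_combatant_deaths <= 0:
        # No military objective: any civilian harm is disproportionate.
        return expected_civilian_deaths == 0
    return expected_civilian_deaths / expected_combatant_deaths <= max_ratio

# The identical strike passes or fails depending on the stipulated ratio:
print(within_proportionality(10, 1, max_ratio=2.0))    # False under a 2:1 rule
print(within_proportionality(10, 1, max_ratio=50.0))   # True under a 50:1 rule
```

The code is trivial; the ethics lives entirely in the parameter, which is the point.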
Even if we could solve these problems, there may be another one we'd then have to worry about. Let's
say we were able to create a robot that targets only combatants and that leaves no collateral damage--
an armed robot with a perfectly accurate targeting system. Well, oddly enough, this may violate a rule
by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than
25% field mortality and 5% hospital mortality. ICRC is the only institution named as a controlling
authority in IHL, so we comply with its rules. A robot that kills nearly everything it aims at could have
a mortality rate approaching 100%, well over ICRC's 25% threshold. And this may be possible given the
superhuman accuracy of machines, again assuming we can eventually solve the distinction problem.
Such a robot would be so fearsome, inhumane, and devastating that it would threaten an implicit value of a
fair fight, even in war. For instance, poison is also banned for being inhumane and too effective. This
notion of a fair fight comes from just-war theory, which is the basis for IHL. Further, this kind of robot
would force questions about the ethics of creating machines that kill people on their own.
Other conventions in IHL may be relevant to robotics too. As we develop human enhancements for
soldiers, whether pharmaceutical or robotic integrations, it's unclear whether we've just created a
biological weapon. The Biological Weapons Convention (BWC) doesn't specify that bioweapons need to
be microbial or a pathogen. So, in theory and without explicit clarification, a cyborg with super-
strength or super-endurance could count as a biological weapon. Of course, the intent of the BWC was
to prohibit indiscriminate weapons of mass destruction (again, related to the issue of humane
weapons). But the vague language of the BWC could open the door for this criticism.
Speaking of cyborgs, there are many issues related to these enhanced warfighters, for instance: If a
soldier could resist pain through robotics or genetic engineering or drugs, are we still prohibited from
torturing that person? Would taking a hammer to a robotic limb count as torture? Soldiers don't sign
away all their rights at the recruitment door: what kind of consent, if any, is needed to perform
biomedical experiments on soldiers, such as cybernetics research? (This echoes past controversies
related to mandatory anthrax vaccinations and, even now, required amphetamine use by some military
pilots.) Do enhancements justify treating soldiers differently, either in terms of duties, promotion, or
length of service? How does it affect unit cohesion if enhanced soldiers, who may take more risks, work
alongside normal soldiers? Back more squarely to robotics: How does it affect unit cohesion if humans
work alongside robots that might be equipped with cameras to record their every action?
And back more squarely to the intelligence community, the line between war and espionage is getting
fuzzier all the time. Historically, espionage isn't considered to be casus belli or a good cause for going
to war. War is traditionally defined as armed, physical conflict between political communities. But
because so many of our assets are digital or information-based, we can attack--and be attacked--by
nonkinetic means now, namely by cyberweapons that take down computer systems or steal
information. Indeed, earlier this year, the US declared as part of its cyberpolicy that we may retaliate
kinetically to a nonkinetic attack. Or as one US Department of Defense official said, "If you shut down
our power grid, maybe we'll put a missile down one of your smokestacks."
As it applies to our focus here: if the line between espionage and war is becoming more blurry, and a
robot is used for espionage, under what conditions could that count as an act of war? What if the spy
robot, while trying to evade capture, accidentally harmed a foreign national: could that be a flashpoint
for armed conflict? (What if the CIA drone that recently went down in Iran had crashed into a school or military base, killing children or soldiers?)
3. Law & responsibility
Accidents are entirely plausible and have happened elsewhere: In September 2011, an RQ-7 Shadow UAV crashed into a military cargo plane in Afghanistan, forcing an emergency landing. Last summer, test-flight operators of an MQ-8B Fire Scout helicopter UAV lost control of the drone for about half an hour, during which it traveled over 20 miles toward restricted airspace over Washington, DC. A few years
ago in South Africa, a robotic cannon went haywire and killed 9 friendly soldiers and wounded 14
more.
Errors and accidents happen all the time with our technologies, so it would be naïve to think that
anything as complex as a robot would be immune to these problems. Further, a robot with a certain
degree of autonomy may raise questions of who (or what) is responsible for harm caused by the robot,
either accidental or intentional: could it be the robot itself, or its operator, or the programmer? Will
manufacturers insist on a release of liability, like the end-user license agreements (EULAs) we agree to when we use software--or should we insist that those products be thoroughly tested and
proven safe? (Imagine if buying a car required signing a EULA that covers a car's mechanical or digital
malfunctions.)
We're seeing more robotics in society, from Roombas at home to robots on factory floors. In Japan, about 1 in 25 workers is a robot, given the country's labor shortage. So it's plausible that robots in the service of
national intelligence may interact with society at large, such as autonomous cars or domestic
surveillance robots or rescue robots. If so, they need to comply with society's laws too, such as rules of
the road or sharing airspace and waterways.
But, to the extent that robots can replace humans, what about complying with something like a legal obligation to assist others in need, as required by Good Samaritan laws or basic international laws that require ships to assist other naval vessels in distress? Would an unmanned surface vehicle, or robotic boat, be obligated to stop and save the crew of a sinking ship? This was a highly contested issue in World War II--the Laconia incident--when submarine commanders refused to save stranded sailors at
sea, as required by the governing laws of war at the time. It's not unreasonable to say that this
obligation shouldn't apply to a submarine, since surfacing to rescue would give away its position, and
stealth is its primary advantage. Could we therefore release unmanned underwater vehicles (UUVs)
and unmanned surface vehicles (USVs) from this obligation for similar reasons?
We also need to keep in mind environmental, health, and safety issues. Microbots and disposable
robots could be deployed in swarms, but we need to think about the end of that product lifecycle. How
do we clean up after them? If we don't, and they're tiny--for instance, nanosensors--then they could
be ingested or inhaled by animals or people. (Think about all the natural allergens that affect our
health, never mind engineered stuff.) They may contain hazardous materials, like mercury or other
chemicals in their batteries, that can leak into the environment. And not just on land: we also need to think about underwater and even space environments, at least with respect to space litter.
For the sake of completeness, I'll also mention privacy concerns, though these are familiar in current
discussions. The worry is not just with microbots--which may look like harmless insects and birds, and can peek into your window or crawl into your house--but also with the increasing biometric capabilities that robots could be outfitted with. The ability to detect faces from a distance, as well as drugs or weapons under clothing or inside a house from the outside, blurs the distinction between surveillance and a search; the difference matters, because a search requires a judicial warrant. As technology
allows intelligence-gathering to be more intrusive, we'll certainly hear more from these critics.
Finally, we need to be aware of the temptation to use technology in ways we otherwise wouldn't do,
especially activities that are legally questionable--we'll always get called out for that. For instance, this
charge has already been made against our use of UAVs to hunt down terrorists. Some call it "targeted
killing", while others maintain that it's an "assassination." This is still very much an open question,
because "assassination" has not been clearly defined in international law or domestic law, e.g.,
Executive Order 12333. And the problem is exacerbated in asymmetrical warfare, where enemy
combatants don't wear uniforms: Singling them out by name may be permitted when it otherwise
wouldn't be; but others argue that it amounts to declaring targets as outlaws without due process,
especially if it's not clearly a military action (and the CIA is not formally a military agency).
Beyond this familiar charge, the risk of committing other legally-controversial acts still exists. For
instance, we could be tempted to use robots in extraditions, torture, actual assassinations, transport of
guns and drugs, and so on, in some of the scenarios described earlier. Even if not illegal, there are some
things that seem very unwise to do, such as a recent fake-vaccination operation in Pakistan to get DNA
samples that might help to find Osama bin Laden. In this case, perhaps robotic mosquitoes could have
been deployed, avoiding the suspicion and backlash that humanitarian workers consequently suffered.
4. Deception
Had the fake-vaccination program been run in the context of an actual military conflict, it could have been illegal under the Geneva and Hague Conventions, which prohibit perfidy, or treacherous deceit. Posing as a humanitarian or Red Cross worker to gain access behind enemy lines is an example of perfidy: it breaches what little mutual trust we have with our adversaries, and this is counterproductive to arriving at a lasting peace. But even when not acting illegally, we can still act in bad faith and need to be mindful of that risk.
The same concern about perfidy could arise with robot insects and animals, for instance. Animals and
insects are typically not considered to be combatants or anything of concern to our enemies, like Red
Cross workers. Yet we would be trading on that faith to gain deep access to our enemy. By the way,
such a program could also get the attention of animal-rights activists, if it involves experimentation on
animals.
More broadly, the public could be worried about whether we should be creating machines that
intentionally deceive, manipulate, or coerce people. That's just disconcerting to a lot of folks, and the
ethics of that would be challenged. One example might be this: Consider that we've been paying off
Afghan warlords with Viagra, which is a less obvious bribe than money. Sex is one of the most basic incentives for human beings, so some informants might well want a sex robot--and such robots exist today. Without getting into the ethics of sex robots here, let's point out that these robots could also
have secret surveillance and strike capabilities--a femme fatale of sorts.
The same deception could work with other robots, not just the pleasure models, as it were. We could
think of these as Trojan horses. Imagine that we captured an enemy robot, hacked into it or implanted
a surveillance device, and sent it back home: How is this different from masquerading as the enemy in
their own uniform, which is another perfidious ruse? Other questionable scenarios include
commandeering robotic cars or planes owned by others, and creating robots with back-door chips that allow us to hijack the machine while it's in someone else's possession.
5. Broader effects
This point about deception and bad faith is related to a criticism we're already hearing about military
robots, which I mentioned earlier: that the US is afraid to send people to fight its battles; we're afraid
to meet the enemy face to face, and that makes us cowards and dishonorable. Terrorists can use that resentment to recruit more supporters and fighters.
But what about on our side: do we need to think about how the use of robotics might impact recruitment in our own intelligence community? If we increasingly rely on robots in national intelligence--as the US Air Force is relying on UAVs--that could hurt or disrupt efforts to bring in good people. After all, a
robotic spy doesn't have the same allure as a James Bond.
And if we are relying on robots more in the intelligence community, there's a concern about technology
dependency and a resulting loss of human skill. For instance, even inventions we love have this effect:
we don't remember as well because of the printing press, which immortalizes our stories on paper; we
can't do math as well because of calculators; we can't recognize spelling errors as well because of word-
processing programs with spell-check; and we don't remember phone numbers because they're stored
in our mobile phones. In medical robots, some are worried that human surgeons will lose their skill in
performing difficult procedures, if we outsource the job to machines. What happens when we don't have access to those robots, whether in a remote location or during a power outage? So it's conceivable that robots
in the service of our intelligence community, whatever those scenarios may be, could also have similar
effects.
Even if the scenarios we've been considering end up being unworkable, the mere plausibility of their
existence may put our enemies on point and drive their conversations deeper underground. It's not
crazy for people living in caves and huts to think that we're so technologically advanced that we already
have robotic spy-bugs deployed in the field. (Maybe we do, but I'm not privy to that information.)
Anyway, this all could drive an intelligence arms race--an evolution of hunter and prey--much as spy satellites forced our adversaries to build underground bunkers, even for nuclear testing. And what
about us? How do we process and analyze all the extra information we're collecting from our drones
and digital networks? If we can't handle the data flood, and something there could have prevented a
disaster, then the intelligence community may be blamed, rightly or wrongly.
Related to this is the all-too-real worry about proliferation, that our adversaries will develop or acquire
the same technologies and use them against us. This has already been borne out with every military
technology we have, from tanks to nuclear bombs to stealth technologies. Already, over 50 nations have or are developing military robots like ours--including China and Iran--as have non-state actors such as the Libyan rebels.
CONCLUSION
The issues above--from inherent limitations, to specific laws or ethical principles, to big-picture effects--give us much to consider, as we must. These are critical not only for self-interest, such as avoiding
international controversies, but also as a matter of sound and just policy. For either reason, it's
encouraging that the intelligence and defense communities are engaging ethical issues in robotics and
other emerging technologies. Integrating ethics may be more cautious and less agile than a "do first,
think later" (or worse "do first, apologize later") approach, but it helps us win the moral high ground--
perhaps the most strategic of battlefields.