AT: Arms Race

Ban can’t solve the arms race—AI in LAWs encompasses military uses that extend far beyond “lethal” purposes, and the technological development necessary for nuclear deterrence proves the technology will be there no matter what

Geist 16 [Geist, Edward Moore. MacArthur Nuclear Security Fellow at Stanford University's Center for International Security and Cooperation, “It’s already too late to stop the AI arms race—We must manage it instead,” 08/15/16, Bulletin of the Atomic Scientists, https://www.tandfonline.com/doi/full/10.1080/00963402.2016.1216672?scroll=top&needAccess=true] /Triumph Debate

Any successful strategy to manage AI weaponization needs to acknowledge, and grapple with, this historical baggage. Russia and China have good reason to interpret present-day US weapons programs as merely the latest iteration of a decades-old pattern of Washington seeking to maximize and exploit its technological edge in artificial intelligence for strategic advantage (Long and Green 2015). Artificial intelligence failed to live up to man’s ambitions to abuse it during the Cold War, but we are unlikely to be so lucky in the future. Increased computing power and advances in machine learning have made possible weapons with previously unfeasible autonomous capabilities. The difficulty of defining exactly what constitutes an “autonomous weapon” is likely to forestall a meaningful international agreement to ban them altogether. Because the United States has long deployed systems, such as the Phalanx gun, that choose and engage their own targets, the official Department of Defense policy on autonomous weapons employs a nuanced definition excluding those that Moscow and Beijing would certainly contest in any arms-control negotiation (Defense Department 2012). Attempts to include autonomous weapons in the United Nations Convention on Certain Conventional Weapons have faltered so far for this reason, and there is little reason to believe this will change anytime soon. Furthermore, autonomous weapons are only part of the problem of AI weaponization, and a relatively small one at that. It is telling that “war-fighting applications” were merely one of the five military uses of AI considered by the Defense Science Board 2015 Summer Study on Autonomy. While the Pentagon’s interest in employing AI for “decision aids, planning systems, logistics, [and] surveillance” may seem innocuous compared to autonomous weapons, some of the most problematic military uses of AI involve blind obedience to human instructions absent the use of lethal force (Kendall 2014). Despite the fact that DARPA’s submarine-locating drones are not “autonomous weapons” because they do not engage targets themselves, this technology could potentially jeopardize the global strategic balance and make war more likely. Most nuclear powers base the security of their deterrent on the assumption that missile-carrying submarines will remain difficult for enemies to locate, but relatively inexpensive AI-controlled undersea drones may make the seas “transparent” in the not-too-distant future. The geostrategic consequences of such a development are unpredictable and could be catastrophic. Simply banning “killer robots” won’t stop the AI arms race, or even do much to slow it down. All military applications of artificial intelligence need to be evaluated on the basis of their systemic effects, rather than whether they fall into a particular broad category. If particular autonomous weapons enhance stability and mutual security, they should be welcomed.

Commercialization of AI technology makes spillover—the arms race—inevitable

Horowitz 19 [Horowitz, Michael C. Professor of political science at the University of Pennsylvania. “When speed kills: Lethal autonomous systems, deterrence and stability,” Journal of Strategic Studies, August 2019, https://www.tandfonline.com/doi/abs/10.1080/01402390.2019.1621174] /Triumph Debate

This article assesses the growing integration of AI in military systems with an eye towards the impact on crisis stability, specifically how countries think about developing and deploying weapons, as well as when they are likely to go to war, and the potential for arms control.68 Contrary to some public concern and media hype, unless AI capabilities reach truly science fiction levels, their impact on national and subnational military behaviour, especially interstate war, is likely to be relatively modest. Fundamentally, countries go to war for political reasons, and accidental wars have traditionally been more myth than reality.69 The effects for subnational use of AI could be more significant, especially if military applications of AI make it easier for autocrats to use military force to repress their population with a reduced number of loyalists. The commercial spread of machine learning in the private sector means some form of spillovers to military applications will be inevitable. The desire for faster decision-making, concern about the hacking of remotely piloted systems, and fear of what others may be developing could all incentivise the development of some types of LAWS. However, awareness of the potential risk of accidents regarding these systems, as well as the desire for militaries to maintain control over their weapons to maximise their effectiveness, will likely lead to caution in the development and deployment of systems where machine learning is used to select and engage targets with lethal force. One of the greatest risks regarding applications of AI to military systems likely comes from opacity concerning those applications, especially as it interacts with the potential to fight at machine speed. Unlike missiles or bombers, it will be difficult for countries to verify what, if any, AI capabilities potential adversaries have. Even an international agreement restricting or prohibiting the development of LAWS would be unlikely to resolve this concern. Fear would still exist. Given that uncertainty makes disputes harder to resolve, this could have an impact. These factors make international regulation potentially attractive, in theory, but challenging in application, because the very thing about LAWS that might make international regulations on LAWS attractive – their ability to enable faster and more devastating attacks, as well as the risk of accidents – may also make those regulations harder to implement and increase the risks if cheating occurs. But discussions at the CCW are ongoing, and may yet yield progress, or at the least agreement on considering safety and reliability issues when evaluating the development and use of autonomous systems. Humanity’s worst fears about an intelligent machine turning against it aside, the integration of machine learning and military power will likely be a critical area of inquiry for strategic studies in the years ahead.

AT: AI Bias

Moral responsibility for actions lies on the programmer—biases (and bad moral actions in general) are not inherent to the machine but to the one who programs the machine

Robillard 17 [Michael Robillard, “No Such Thing as Killer Robots.” Journal of Applied Philosophy, 06/19/2017, https://onlinelibrary.wiley.com/doi/epdf/10.1111/japp.12274] /Triumph Debate

I do not disagree with any of the above claims about the nature of both moral reasons and moral reasoning. Indeed, I agree with Purves et al. that morality is not codifiable. I also agree with them insofar as I believe that fully just actions must follow from the right moral reasons. Where I take issue with them is in their thinking that the actions of the AWS should be the proper object for which these moral concerns actually apply. Put another way, while I do not think morality is subject to formal codification, I do not think that the apparent ‘decisions’ of the AWS stand as something metaphysically distinct from the set of prior decisions made by its human designers, programmers and implementers, decisions that ostensibly do satisfy the conditions for counting as genuine moral decisions. Even if the codified decision-procedures of the AWS amount to only a truncated or simplified version of the programmers’ moral decision-making for anticipated future contexts the AWS might some day find itself in, the act of codifying those decision-procedures into the machine’s software will itself still be a genuine moral decision. The same goes for the condition of being motivated by the right kinds of reasons.21 Accordingly, concerns about the morality of AWS would all amount to contingent worries at best (worries about apportioning moral responsibility among a collective of agents, determining epistemic responsibility, weighing risks in conditions of epistemic uncertainty, etc.). There would, however, be nothing wrong in principle about using AWS. That being said, were the moral stakes high enough, use of AWS over the use of soldiers, under some set of conditions, could be conceivably permissible as well as conceivably obligatory. What obfuscates the situation immensely is the highly collective nature of the machine’s programming, coupled with the extreme lag-time between the morally informed decisions of the programmers and implementers and the eventual real-world actions of the AWS. Indeed, the decision to deploy an AWS in a particular context, in principle, is not any more or any less insensitive to moral reasons than the decision to place a land mine in a particular context. These actions, at base, are both human decisions, responsive to moral reasons through and through. The only difference that separates these actions from more familiar actions on the battlefield is that there is a pronounced lag-time between the latent human decisions built into the causal architecture of the weapons system itself and the anticipated combat effect of that weapon system that later eventuates. This pronounced lag-time combined with the AWS’s collective nature and complex interface has led philosophers to mistake the set of human decisions instantiated in the form of the machine’s programming and implementation as summing up into an additional set of decisions (the AWS’s decisions) that is metaphysically distinct. However, after we have summed the total set of human decisions instantiated in the machine’s software and implementation, I am hard pressed to see what genuine decisions would be left over. 
Indeed, the AWS might and likely will behave in ways that we cannot predict, but these actions will not fail to be logical entailments of the initial set of programming decisions encoded in its software combined with the contingencies of its unique, unanticipated environment.22 As a final point worth noting, despite the arguments here given, one might still think that we can coherently conceive of the relationship between programmers and AWS as analogous to a parent’s relationship to a small child, insofar as both the AWS and the child can be seen as partial agents. This analogy, however, does not hold. For instance, imagine a radical terrorist parent who trains a small child to carry a bomb into a marketplace, to locate a person whose appearance looks least ethnically similar to their own, and to then press the detonator button. While the child’s individual decisions regarding where and when to detonate the bomb would still count as genuine decisions, we would not say that the parent was somehow absolved of moral responsibility for the resulting harm in virtue of the child’s authentic, though partial, agency. In other words no responsibility ‘gap’ would be present. Accordingly, this analogy breaks down if we think the AWS’s ‘parents’ (i.e. its programmers) should somehow be regarded any differently morally speaking.

Turn - AI can be used to solve back for biases

Fjeld 2020 [Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI." Berkman Klein Center for Internet & Society, 2020. https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf?sequence=1&isAllowed=y] /Triumph Debate

Algorithmic bias – the systemic under- or overprediction of probabilities for a specific population – creeps into AI systems in a myriad of ways. A system might be trained on unrepresentative, flawed, or biased data. Alternatively, the predicted outcome may be an imperfect proxy for the true outcome of interest or the outcome of interest may be influenced by earlier decisions that are themselves biased. As AI systems increasingly inform or dictate decisions, particularly in sensitive contexts where bias long predates their introduction such as lending, healthcare, and criminal justice, ensuring fairness and nondiscrimination is imperative. Consequently, the Fairness and Non-discrimination theme is the most highly represented theme in our dataset, with every document referencing at least one of its six principles: “non-discrimination and the prevention of bias,” “representative and high-quality data,” “fairness,” “equality,” “inclusiveness in impact,” and “inclusiveness in design.” Within this theme, many documents point to biased data – and the biased algorithms it generates – as the source of discrimination and unfairness in AI, but a few also recognize the role of human systems and institutions in perpetuating or preventing discriminatory or otherwise harmful impacts. Examples of language that focuses on the technical side of bias include the Ground Rules for AI conference paper (“[c]ompanies be wary of AI systems making ethically biased decisions”). While this concern is warranted, it points toward a narrow solution, the use of unbiased datasets, which relies on the assumption that such datasets exist. Moreover, it reflects a potentially technochauvinistic orientation – the idea that technological solutions are appropriate and adequate fixes to the deeply human problem of bias and discrimination. The Toronto Declaration takes a wider view on the many places bias permeates the design and deployment of AI systems: All actors, public and private, must prevent and mitigate against discrimination risks in the design, development and application of machine learning technologies. They must also ensure that there are mechanisms allowing for access to effective remedy in place before deployment and throughout a system’s lifecycle. Within the Fairness and Non-discrimination theme, we see significant connections to the Promotion of Human Values theme, with principles such as “fairness” and “equality” sometimes appearing alongside other values in lists coded under the “Human Values and Human Flourishing” principle. There are also connections to the Human Control of Technology, and Accountability themes, principles under which can act as implementation mechanisms for some of the higher-level goals set by Fairness and Nondiscrimination principles. The “non-discrimination and the prevention of bias” principle articulates that bias in AI – in the training data, technical design choices, or the technology’s deployment – should be mitigated to prevent discriminatory impacts. This principle was one of the most commonly included ones in our dataset and, along with others like “fairness” and “equality” frequently operates as a high-level objective for which other principles under this theme (such as “representative and high-quality data” and “inclusiveness in design”) function as implementation mechanisms.
Deeper engagement with the principle of “nondiscrimination and the prevention of bias” included warnings that AI is not only replicating existing patterns of bias, but also has the potential to significantly scale discrimination and to discriminate in unforeseen ways. Other documents recognized that AI’s great capacity for classification and differentiation could and should be proactively used to identify and address discriminatory practices in current systems. The German Government commits to assessing how its current legal protections against discrimination cover – or fail to cover – AI bias, and to adapt accordingly.
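To make the definition at the top of this card concrete, the sketch below is a minimal Python illustration of one way to measure the “systemic under- or overprediction of probabilities for a specific population” that the evidence describes. It is a toy example with hypothetical data and hypothetical function names, not anything drawn from the Fjeld et al. report.

```python
# Minimal sketch (illustrative only, not from Fjeld et al.): compare each group's
# mean predicted probability to its observed outcome rate. A positive gap means the
# model systematically overpredicts for that population; a negative gap, underpredicts.

def group_prediction_bias(records):
    """records: iterable of (group, predicted_probability, outcome), outcome in {0, 1}."""
    totals = {}  # group -> [sum of predictions, sum of outcomes, count]
    for group, prob, outcome in records:
        bucket = totals.setdefault(group, [0.0, 0.0, 0])
        bucket[0] += prob
        bucket[1] += outcome
        bucket[2] += 1
    return {group: (p / n) - (o / n) for group, (p, o, n) in totals.items()}

# Hypothetical data: the model's scores for group "B" run well above its actual outcomes.
sample = [
    ("A", 0.80, 1), ("A", 0.70, 1), ("A", 0.60, 0),
    ("B", 0.75, 0), ("B", 0.65, 0), ("B", 0.55, 1),
]
print(group_prediction_bias(sample))  # roughly {'A': 0.03, 'B': 0.32}
```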

AT: Non-State Actors

Ban doesn’t solve - will only incentivize asymmetric research done by non-state actors

Newton 15 [Michael A. Newton, Back to the Future: Reflections on the Advent of Autonomous Weapons Systems, 47 Case W. Res. J. Int'l L. (2015) https://scholarlycommons.law.case.edu/jil/vol47/iss1/5] /Triumph Debate

Machines cannot be prosecuted, and the line of actual responsibility to programmers or policy makers becomes too attenuated to support liability. Of course, such considerations overlook the distinct possibility that technological advances may well make adherence to established jus in bello duties far more regularized. Information could flow across programmed weapons systems in an instantaneous and comprehensive manner, and thereby facilitate compliance with the normative goals of the laws of war. A preemptive ban on autonomous weapons would prevent future policy makers that seek to maximize the values embedded in jus in bello from ever being in position to make informed choices on whether to use humans or autonomous systems for a particular operational task. Of course, any sentient observer knows that we do indeed live in a flawed and dangerous world. There is little precedent to indicate that a complete ban would garner complete adherence. Banning autonomous systems might do little more than incentivize asymmetric research by states or non-state armed groups that prioritize their own military advantage above compliance with the normative framework of the law. There has been far too little analysis of the precise ways that advancing technology might well serve the interests of law-abiding states as they work towards regularized compliance with the laws and customs of war. Proponents of a complete ban on autonomous weapons simply assume technological innovations away, and certainly undervalue the benefits of providing some affirmative vision of a desired end state to researchers and scientists.

LAWs necessary to combat changing battlefield with non-state actors—current cyber operations prove

Kiggins 18 [Ryan David Kiggins, Currently on faculty in the department of political science at the University of Central Oklahoma, Edmond, OK, USA. He has published on US Internet governance policy, US cyber security policy, and global security and rare earth. “Big Data, Artificial Intelligence, and Autonomous Policy Decision-Making: A Crisis in International Relations Theory?” International Political Economy Series (2018). https://link.springer.com/book/10.1007/978-3-319-51466-6] /Triumph Debate

The combination of big data and semi-autonomous and autonomous militarized machines presages a future for national and global security in which humans have an increasingly decreased role in decision-making, war fighting, and relevance as agents in international relations theory. The use of unmanned aerial vehicles (UAVs or more commonly drones) by the USA, other countries, and non-state actors, such as Hezbollah and ISIS, all portend the further automation of the battlefield where automated, semi-autonomous, or autonomous machines are put in harm’s way with humans being far from actual kinetic fire and, thus, potential death and/or injury. Singer (2009) demonstrates that the push to automate US national security decision-making has, in the twenty-first century, taken on a more urgent tone as technological advances in computing power, computer networked communications, and robotics are leading to increased distance of humans from the battlefield. The political risk of committing US military forces to advance US diplomatic and national security objectives is, accordingly, plummeting. If fewer US military personnel are at risk of injury or death due to combat operations, US political leaders may be more apt to employ US military force to advance US interests leading to a more unstable global security environment. Of course, the form of that military force matters: kinetic or digital? Consider US efforts to slow the advance in nuclear weapons technology acquisition and development by Iran. Options included a full-scale air campaign that would have presented considerable political risk should US pilots be killed, captured, and paraded for global media consumption. In the end, the US government opted for a strategy that presented less political risk: the use of a computer virus, the first digital warhead. The Stuxnet computer virus was allegedly designed by the NSA and provided to its Israeli counterpart which then modified the virus before successfully planting the computer virus on the computer network used at Iranian nuclear facilities (Zetter 2015). Significantly, and unlike kinetic munitions, computer viruses and other forms of malware are reusable by friend and foe. Making the use of such weapons perhaps more risky, for in utilizing digital weapons, one has disclosed and effectively given a capability to an adversary that can, in turn, be used against you. Furthermore, computer viruses and other malware become autonomous agents; as such, digital weapons perform the functions for which it was programed. Digital weapons simply make decisions based on preprogrammed instructions. In the case of the Stuxnet virus, it was designed to collect data, transmit that data home, and conduct disruptive operations to the production of highly enriched uranium for use in Iranian nuclear weapons. All tasks that it reportedly did well (Zetter 2015).

Non-State Actors using precision weapons is inevitable---they’ve already built up their arsenals, and current I-Law frameworks don’t solve

Lifshitz 16, [Itamar Lifshitz, Contributor and writer, "The Paradox of Precision: Nonstate Actors and Precision-Guided Weapons," 2/16/2016, War on the Rocks, https://warontherocks.com/2020/11/the-paradox-of-precision-nonstate-actors-and-precision-guided-weapons/] /Triumph Debate

The Characteristics of the Threat Nonstate actors are increasingly using precision weapons systems. The destructive effects of this trend are not theoretical. This is evident almost anywhere one looks in the Middle East. In Lebanon, Hizballah is investing heavily in making its vast ballistic arsenal more sophisticated and precise, including efforts to acquire the necessary manufacturing capability and know-how. In Yemen, Houthi rebels have used armed drones to target Saudi oil infrastructure. The abundance of precise systems is not exclusively an outcome of Iranian-backed proliferation — the Islamic State has also used weaponized drones on multiple occasions in Iraq and Syria. This phenomenon is by no means confined to the Middle East. Surface-to-air missiles launched from pro-Russian separatist-controlled territory in the Donbass shot down Malaysia Airlines Flight 17. There have been reports of Boko Haram using attack drones in Nigeria. Simultaneously, across the globe, criminal organizations are beginning to harness aerial capabilities. The adoption of drones for violent uses has apparently already begun in the war between Mexican drug cartels. Using precision technology and standoff capabilities, nonstate actors can cause more damage now than in the past. These technologies are readily available, cost less, do more, and require less expertise. Standoff capabilities challenge the ability to retaliate and to eliminate threats in real time. Some of these systems are being transferred by increasingly persistent “rogue” proliferators, but others are a part of the so-called “support” given by global and regional powers to local actors. As the threshold for the acquisition and use of technologies such as unmanned aerial systems, drones, and quadcopters has been significantly lowered, nonstate actors are also innovatively weaponizing commercially available technologies. In the coming years, as technology for precision-guided systems continues to advance, the challenges are likely to increase. First, these systems are sure to become more lethal — nonstate actors will have more precise weapon systems, with bigger payloads and more of them. Secondly, the ability to deploy these systems will improve significantly and they may be increasingly autonomous. Operating these systems from longer ranges will become easier, and the use of predetermined GPS targets (or other Global Navigation Satellite Systems) or AI algorithms could reduce the role of humans in the decision-making process. The use of space-related commercial intelligence gathering platforms, such as Google Earth, helps different entities to easily acquire precise targets. These new technologies might also simplify the use of nonconventional weapons, as, for example, a drone or quadcopter could easily disperse chemical agents. An Inadequate Existing Legal and Multilateral Framework Existing multilateral arms control frameworks don’t curtail the proliferation of precision standoff capabilities to nonstate actors and fail to address it as a unique problem. Current efforts regarding nonstate actors are focused on weapons of mass destruction and voluntary commitments regarding arms trades. Precise systems are generally perceived as preferred weapons since they help to minimize collateral damage and to distinguish between militants and civilians. 
The use of standoff precision weapons can assist commanders in meeting the requirements of the principle of distinction between civilians and combatants in international humanitarian law, and the general imperative to avoid the use of indiscriminate weapons. The growing importance of nonstate actors in international conflicts in the last 20 years has led scholars to grapple with their legal obligations. Many claim that while the rules of armed conflict and international humanitarian law still apply to nonstate actors, their enforcement faces significant challenges and is seldom sufficient. Some nonstate actors, of course, ignore legal norms or abuse them cynically. While states are accountable for their actions, violent nonstate actors can evade responsibility. These groups usually thrive in war zones and states with low governance, which lack effective institutions that might hold them accountable for their deeds. Furthermore, nonstate actors often don’t share the basic values that underpin international law and norms, demonstrated by their intentional targeting of civilians. In the hands of nonstate actors, precision-guided weapons cease to be a tool that decreases collateral damage. Instead, they become weapons of strategic, and potentially mass, destruction. The lack of accountability for nonstate actors also drives some states to seek impunity for their actions by conducting them through proxy groups. Therefore, it seems blatantly clear that the international community should do everything in its power to prevent nonstate actors from acquiring these weapons. To date, there are no adequate international norms on the proliferation of standoff precision capabilities to nonstate actors. The existing “binding” arms control framework regarding nonstate actors focuses on weapons of mass destruction. The 2004 U.N. Security Council Resolution 1540 determined that all states must refrain from providing support to nonstate actors to attain nuclear, chemical, or biological weapons and their means of delivery. It also requires states to adopt laws and regulations to prevent proliferation of these systems.

AT: ILaw

LAWs will follow I-Law to the same capacity that humans do. No action will allow LAWs to operate completely independently of humans, given that LAWs are being used for a human end.

Kevin Neslage 19, associate in the Miami office of Lewis Brisbois, “Does "Meaningful Human Control" Have Potential for the Regulation of Autonomous Weapon Systems?”, April 2019, University of Miami School of Law Institutional Repository, https://repository.law.miami.edu/cgi/viewcontent.cgi?article=1092&context=umnsac

As a separate way of framing the human involvement, Mark Roorda asks those debating the legality of AWS to consider the approach taken by NATO in its targeting and engagement process. Roorda proposes that it is the human process of planning, targeting, and the legal analysis before the launch of an AWS that allows compliance with IHL. The "launch" point would be the time at which the AWS is operating on its own and will engage targets with no further human involvement. 69 The AWS is not operating completely independent of humans because it must still rely on the programing and planning that was done before the launch point. This way of looking at human involvement is not so much that a human will be "out of the loop," but rather that a human will have significant enough involvement before the launch to be in compliance with IHL. It is possible that this is an approach states could take with weapons systems, but it also raises the question of whether or not this is still autonomy. What is described fits under the DoD definition, and would depend on whether the weapon could "select and engage" on its own following the launch point of human involvement.70 The examples given by the U.S. Air Force and NATO represent the varied forms of what it may look like to have human control. It is not a simple "yes or no" question of whether human control exists. And even if Sharkey's levels of control can help to answer some of the questions about where "on" or "in the loop" a human must be placed to create the "human control" of MHC, it still does not answer all the questions about what may actually be meaningful control by a human. This would suggest that MHC would be just as difficult to interpret if it were adopted as language into a treaty.

AT: U.S. Doesn’t Cause Change

A U.S. policy change creates momentum for a broader deal—empirics prove it pressures Russia and other major powers to get on board

Lisa A Bergstrom 19, Technology and security specialist in Berkeley, California, "The United States should drop its opposition to a killer robot treaty," Bulletin of the Atomic Scientists, 11-7-2019, https://thebulletin.org/2019/11/the-united-states-should-drop-its-opposition-to-a-killer-robot-treaty/

Landmines, cluster munitions, incendiary weapons, blinding lasers, exploding bullets, and much more: The list of weapons banned or regulated by international humanitarian law has grown steadily over the past 150 years. If an international campaign of civil society organizations—supported by about two dozen countries and growing—is successful, there could soon be another to add: autonomous weapons.

Given the unprecedented risks autonomous weapons pose, and the strength of the movement against them, a new treaty regulating such weapons is both desirable and viable. Whether that treaty is effective, however, will depend primarily on whether the United States decides to engage in negotiating it and convinces other militarily important countries to do the same.

Not yet deployed. Autonomous weapons, or “killer robots,” as their opponents and the media often call them, are weapons that select and attack targets without direct human control. Think of a drone scanning the battlefield and using artificial intelligence to identify and fire upon a suspected enemy combatant, without waiting for a human operator to approve the strike.

The exact definition of a lethal autonomous weapon is hotly contested. While critics also express concern about non-lethal, anti-materiel, or semi-autonomous weapons, for now international talks have focused only on fully autonomous, lethal anti-personnel weapons. Under this broad definition, no military has deployed such weapons yet, but the technology to do so already exists and is developing rapidly.

To address the humanitarian risks of autonomous weapons, about 100 countries have been discussing the possibility of negotiating a new treaty within the Convention on Certain Conventional Weapons (CCW), a little-appreciated, United Nations-affiliated forum for regulating inhumane weapons. Since 2014, the slow-moving CCW has agreed to renew talks on the issue without being able to reach the consensus the convention requires to actually start negotiating a treaty.

Too soon to regulate? One of the driving forces behind these discussions is an international movement of groups and activists opposed to the unrestricted use of autonomous weapons. Chief among these are the ubiquitous International Committee of the Red Cross and the more militant Campaign to Stop Killer Robots, a coalition of nongovernmental organizations, including Human Rights Watch, that have been active in earlier campaigns to ban landmines and cluster munitions. So far, the campaign has managed to convince about two dozen countries—including Austria, Brazil, and Mexico—to support a preemptive ban on the development and deployment of lethal autonomous weapons. Several more countries, like Germany and France, support a political declaration, but not a legally binding treaty.

The Campaign to Stop Killer Robots and other critics charge that autonomous weapons are immoral and dangerous because they lack the human traits (like mercy) needed for moral decision making, as well as the ability to distinguish between civilians and combatants and to judge the proportionate use of force, two key principles of international humanitarian law. The critics argue convincingly that if the development of autonomous weapons is left unregulated it could lead to a destabilizing arms race. This threat would be made worse by the difficulty in determining who is responsible for the actions of an autonomous weapon, meaning a small incident could spark an international crisis. As with drones, autonomous weapons could make it easier for countries to start unnecessary wars by keeping soldiers off the battlefield, offering the illusion of “risk-free” military intervention but providing no protections for civilians.

The United States, Russia, Israel, and a few other countries oppose either a new treaty or a political declaration. These countries are investing heavily in robots and artificial intelligence. They argue it is too soon to know how autonomous weapons might be used in the future and therefore too soon to know how best to regulate them, if at all. The United States has stated that autonomous weapons could even improve compliance with international law by being better than humans at identifying civilians and judging how to use force proportionately.

Prospects for a standalone treaty. Unhappy with the lack of progress in the CCW, the Campaign to Stop Killer Robots is increasingly urging countries to consider bypassing the convention  entirely to negotiate a separate treaty, stating, “If the CCW cannot produce a credible outcome [at its annual meeting on November 15], alternative pathways must be pursued to avoid a future of autonomous warfare and violence.” Unfortunately, such a decision, while understandable and feasible, would be unlikely to produce a truly effective treaty.

One might ask what chance nongovernmental organizations like Human Rights Watch have for achieving a standalone treaty against the opposition of some of the world’s most powerful militaries. Plenty, actually.

By the 1990s, the widespread and indiscriminate use of landmines had become a humanitarian disaster, and the members of the CCW tried to solve the crisis by strengthening an existing CCW treaty regulating this weapon. Frustrated by the perceived weakness of the CCW agreement, the International Campaign to Ban Landmines pushed for a new treaty, under the auspices of the Canadian government, that would ban all landmines without requiring the burdensome consensus decision-making of the CCW. The resulting Mine Ban Treaty mostly ended the large-scale use of landmines outside of a few conflict zones and earned the campaign a Nobel Peace Prize.

In 2008, a similar coalition of nongovernmental organizations repeated this feat, successfully pushing for a Convention on Cluster Munitions outlawing this once-ubiquitous weapon, after years of talks in the CCW had produced only modest results. Even though the United States, Russia, and other major military powers have not joined either treaty, the treaties have created a powerful stigma against landmines and cluster munitions.

Given this history of success, it is tempting to conclude that a strong, standalone treaty is the best way to deal with the threat posed by autonomous weapons, despite the fact that countries like the United States and Russia would almost certainly refuse to join. Autonomous weapons, however, are not landmines or cluster munitions. Landmines and cluster munitions were used around the world for decades in conflicts large and small, in many cases causing great civilian harm. Treaties banning these weapons have value even when the United States, Russia, China, and other major military powers do not participate. In contrast, autonomous weapons are a developing technology likely to be used by only the most advanced militaries for some time. A treaty that excludes almost all the countries with the interest and ability to deploy autonomous weapons would have comparatively little value either as arms control or as a humanitarian norm builder.

At a time when even the taboos against chemical and nuclear weapons appear to be waning, it is hard to imagine that Russia, for example, would consider its autonomous weapons program constrained by the perceived stigma created by a treaty it had no hand in making. A more modest treaty, negotiated in the CCW with the agreement of the world’s major military powers, offers the best chance of providing meaningful restrictions on autonomous weapons in the foreseeable future.

A US policy solution. What could the United States do to achieve such a treaty? The CCW treaty on blinding laser weapons may offer a guide.  While blinding lasers and autonomous weapons differ in terms of their military utility and humanitarian threat, both weapons became the subject of campaigns to ban them before they were ever deployed. Opponents of autonomous weapons point to this analogy as proof that a weapon can be banned preemptively, but it also shows how the United States can use a national policy to help reach a difficult international compromise. The United States had long resisted any attempts to regulate the use of lasers to cause blindness, worried that any such regulation could interfere with unrelated military uses of lasers. Then in 1995, as CCW negotiations were underway, the Defense Department adopted a limited national ban on blinding laser weapons. By using this new policy as a basis for negotiations, the United States was able to broker an agreement in the CCW that satisfied countries that wanted a broader ban, countries that opposed any ban, and the requirements of the US military. In doing so, the United States was able to make sure the treaty did not restrict other, less controversial uses of lasers—a concern that is highly relevant to autonomous weapons as well.

In fact, the United States already has a national policy that could serve as the basis for a new CCW treaty. In 2012, the Department of Defense issued a directive requiring “appropriate levels of human judgment over the use of force,” thereby becoming the first country to publicly adopt a national policy on autonomous weapons. The Pentagon even tasked a committee of ethicists, scientists, and other experts with creating an ethical framework for artificial intelligence—their just-released report endorses strong principles of responsibility, traceability, and more.

Clearly, the US government shares some of the activists’ concerns over the ethics of autonomous weapons and is comfortable with some limitations on their use. If the United States can strengthen its existing national restriction on autonomous weapons, it would be well placed to negotiate a new treaty in the CCW. While there is no guarantee that Russia and other countries would agree to start negotiations, US support would increase the pressure on them considerably.

“Killer robots” will soon no longer be confined to the realm of science fiction. To address the new risks autonomous weapons will bring, the world needs a new and effective treaty regulating them. The best chance to achieve such a treaty is for the United States to drop its opposition and take an active role in negotiating a new agreement in the existing forum for regulating inhumane weapons.

Drone and cyber norms aren’t followed

Dr. Ian Hurd 19, Director of the International Studies Program at Northwestern University, "“If I Had a Rocket Launcher”: Self-Defense and Forever War in International Law." Houston Law Review, https://houstonlawreview.org/article/7952-if-i-had-a-rocket-launcher-self-defense-and-forever-war-in-international-law

International law has been successful at putting war within a legal frame. The 20th century witnessed an expanding roster of laws around military action—encompassing first the treatment of wounded, then of civilians, then certain classes of weapons, and eventually the motives for war and individual responsibility for war crimes. This history can be summarized by the treaties and institutions that mark each step in this growth—among them the Geneva Conventions of 1906 and 1929,[77] the 1925 Geneva Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare,[78] the Kellogg-Briand Pact of 1928,[79] the U.N. Charter in 1945,[80] and the Rome Statute of the International Criminal Court in 1998.[81] Its history can also be told through the changing uses of law in the political practice of justification. The legal formulations that were once thought to enclose war fully within self-evident and constraining legal categories have turned inside out and now operate to disperse military action throughout the world. As national interests and military technologies have changed, the rules have adapted, both in ratione temporis and ratione materiale. The instrumental utility of expansive self-defense claims for powerful governments is great, and the power of state practice to redefine international law is well-accepted—together these two facts ensure that the operative understanding of international rules will not deviate far from the desires of strong states. As the rule has moved, so has its political effects. Today it serves to legitimize and legalize the turn to “endless war” that has characterized American foreign policy since 2001. With self-defense now anchored on national security interests, it has released its former connections to time and to armed attack. From this new foundation, it became useful to ambitious governments who are eager to attack their enemies abroad. In self-defense defined as national security, these states found a legal justification that matched neatly with their new technologies of drones and cyber. Together, these tools encouraged those with the capabilities to engage in undeclared and perhaps never-ending military operations against those whom they see as enemies of the state. The history of self-defense helps to show the gap between the mythology of international law and its practical life. The myth says that international law provides a stable framework of rules that enable states to act toward their objectives while limiting their capacity to engage in acts that are damaging to the entire community. The reality is that rules become tools which powerful actors aim to use to their advantage. As Rebecca Sanders asserts, “There is nothing inherently progressive about legal culture[]” or international law.[82] The political effects of law depend on who is using it against what and against whom.

AT: Accountability

Just hold the entire state accountable for any action that breaks I-Law.

Hammond 15 [Hammond, Daniel N. (2015) "Autonomous Weapons and the Problem of State Accountability," Chicago Journal of International Law: Vol. 15: No. 2, Article https://chicagounbound.uchicago.edu/cjil/vol15/iss2/8]

At least in theory, state accountability has the potential to correct this problem. As an initial matter, the concept of state responsibility is well established in international law.92 The International Law Commission93 (ILC) articulated the rule behind this concept in its Articles on the Responsibility of States for Internationally Wrongful Acts (Responsibility Articles),94 which provide that "[e]very internationally wrongful act of a State entails the international responsibility of that State." A state engages in an "internationally wrongful action" when an act or omission "(a) is attributable to the State under international law; and (b) constitutes a breach of an international obligation of the State." An action is attributable to a state when it is conducted by an "organ" of the state, which almost certainly includes the military as well as intelligence agencies. Therefore, assuming that AWSs operate under the authority of these institutions, their actions would be attributable to the state. Furthermore, since conduct "constitutes a breach of an international obligation of [a] State" when it violates "a clearly-defined treaty obligation or an unequivocally recognized norm of customary law," a state could be culpable under the Responsibility Articles if its AWS violates established norms of International Humanitarian Law (IHL) or International Human Rights Law (IHRL).100 From a legal standpoint, then, state responsibility is a viable option, at least in the abstract. Normatively, moreover, it is preferable to commander, designer, or manufacturer liability. First, the primary purpose of the Responsibility Articles-to "increase[ ] compliance with international obligations"-applies in the context of AWS crimes.101 If states realize that they will be held accountable for the war crimes of their AWSs, they have an incentive, first, to weigh the potential liability costs against the benefits of using AWSs at all and, second, to make sure the weapons they do use are consistently unlikely to violate international law. The prospect of liability would draw attention to the fact that the widespread, frequent use of AWSs would almost certainly result in at least some violations of international law.102 Should states choose to use AWSs in spite of these risks, liability would give them a reason at the purchase and deployment stages to ensure that their AWSs will comply with international law, because the states themselves would internalize all the costs of crimes committed by their weapons rather than having the cost spread among all buyers via producer strict liability. Specifically, the state could incentivize the manufacturers and designers to produce safe AWSs by setting standards for an acceptable purchase. At the same time, it could limit commanders' discretion in the deployment of these weapons through policy measures. Thus, because the state could both require better design and manufacture and limit commander discretion, it is in the best position to guard against international AWS crimes throughout the entire process. As such, it makes the most sense for the state to bear the liability risk.104 Morally speaking, moreover, the state is arguably the most culpable actor in the use of AWSs that unforeseeably violate international law (that is, absent wrongful intent or negligence on the part of an individual). After all, it would be the state that makes the overarching decision to utilize AWSs in the first place.
Given the risks inherent in employing such weapons, this choice renders the state more culpable in a moral sense than the producers, who merely respond to a demand created by the state, and the commanders, who merely carry out the policy decision the state made. By exerting control over both the purchase and deployment phases of AWS use, the state becomes the actor best suited to internalize the costs of its decision-making and, therefore, the most blameworthy. For related reasons, a strict liability regime would be preferable in practice to a negligence regime in assigning responsibility for AWS crimes to states. Commentators have argued that strict liability is often superior to negligence where a particular activity creates nonreciprocal risks and benefits. Nonreciprocal risks exist if an injurer's action "imposes a risk unilaterally on the victim in situations where the victim's activity does not impose a similar risk on the injurer." Similarly, nonreciprocal benefits exist where the injurer receives a benefit from his activity that the victim does not share in equal proportion. A state's use of AWSs creates both nonreciprocal risks and benefits. In all likelihood, individuals in areas where the state deploys an AWS will bear virtually all of the risk that the weapon might commit a war crime, as they will be the ones who suffer if it does so, even though they do not impose a similar risk on the state. The state, meanwhile, is the primary beneficiary of the weapon's usage, enjoying the many tactical and resource advantages that AWSs generate – advantages that victims will not experience. Shifting the losses created by a state's use of AWSs from the victims to the states "would improve the distribution of burden and benefit" because it would force the state to compensate victims of AWS crimes. Strict liability is therefore suitable in this context.

Comprehensive legal attention to LAWs can provide accountability

Charters 20 [Drew Charters, University of Southern California, 04/19/2020, “Killing on Instinct: A Defense of Autonomous Weapon Systems for Offensive Combat”, Viterbi Conversations in Ethics, https://vce.usc.edu/volume-4-issue-1/killing-on-instinct-a-defense-of-autonomous-weapon-systems-for-offensive-combat/] /Triumph Debate

AWS are superior to human beings in maximizing military effectiveness, minimizing collateral damage, and following international law. They should thus be adopted under a rule utilitarian calculus. However, this argument is not without critics. A common criticism is that weapons systems should be semi-autonomous; they can operate independently until a lethal decision must be made, and then require human approval before the lethal action is taken. The semi-autonomous approach falls apart, though, because human operators are too slow to respond to certain scenarios. One non-lethal example of such a situation is the Phalanx close-in weapons system (CIWS). The Phalanx CIWS is mounted on United States Navy ships and can shoot down incoming enemy missiles. This process is completely automated: requiring human approval prior to shooting down an incoming missile could be disastrous [3]. One could imagine a similar example with a lethal AWS, perhaps a ground-based vehicle that is designed to protect troops from enemy forces. If a human operator is required to approve lethal force, the AWS might not defend against the enemy combatants in time, which undermines the function it was meant to perform. If an AWS does act without human approval and somehow makes a mistake leading to property damage or casualties, it becomes difficult to determine who is at fault for the accident, which is the second main criticism of AWS. Some may blame the mistake on the commanding officer of the operation, and others claim that no one is to blame since the machine is autonomous. Some people even believe that the machine itself could be held liable! There is no legal precedent for holding autonomous machines accountable, and this dilemma is referred to as the “responsibility gap” [14]. This criticism is valid; currently, no legal framework exists to deal with these issues. However, the response is not to ban AWS. Rather, new legal frameworks need to be made prior to the widespread adoption of these weapons. International conventions governing new technologies and weapons are commonplace, and as long as organizations like the United Nations deliberate these issues now, there will not be any legal confusion once AWS are adopted on a large scale. Technologies that have the power to take a human life should not be adopted without careful thought. However, if developed carefully, AWS have the power to make war safer and more effective. It is undeniable that automation of military technology is inevitable. Several adversarial countries to the United States are heavily investing in AWS, including, China, Russia, and Iran [15]. These weapons already exist in a primitive form, and it is only a matter of time before they are consistently used in combat. Ultimately, human beings are the ones who will program AWS, and human moral reasoning cannot be discredited. However, the evidence clearly indicates that AWS will make war a safer and more ethical endeavor, which is why the United States should increase its research into and development of this technology.

AT: Status Quo Solves

A ban is unlikely---ubiquitous tech, military demand, and no clear brightline

Paul Scharre 17, Senior fellow and Director of the Technology and National Security Program at the Center for a New American Security, "We’re Losing Our Chance to Regulate Killer Robots," 11-14-2017, Defense One, https://www.defenseone.com/ideas/2017/11/were-losing-our-chance-regulate-killer-robots/142517/

Scores of countries are gathering at the United Nations this week to discuss lethal autonomous weapon systems – essentially, robots that would pick their own targets. This marks their fourth year of debate with little to show for it; the group does not even have a shared working definition of “autonomous weapon.” Meanwhile, the technology of autonomy and artificial intelligence is racing forward.

When the countries last met, in April 2016, DeepMind’s AlphaGo had just beaten world champion Go player Lee Sedol in a head-to-head match — an unprecedented feat for a computer. But just a few weeks ago, DeepMind published a paper on its new AlphaGo Zero, which taught itself the game without human-supplied training data and, after a mere three days of self-play, defeated the older program in 100 straight games. Between those two events, the world’s countries held no substantive meetings on autonomous weapons — excepting only last year’s decision to bump the discussions up one rank in the diplomatic hierarchy.

A consortium of over 60 non-governmental organizations has called for an urgent ban on the development, production, and use of fully autonomous weapons, seeking to halt such work before it begins in earnest. Yet at this stage a legally binding treaty is almost inconceivable. The UN forum that the nations are using, the awkwardly named Convention on Certain Conventional Weapons, operates by consensus — meaning that although 19 nations have said they would back a ban, any one of the other 105 can veto it. Even advocates of a ban agree that the diplomatic process is “faltering financially, losing focus [and] lacks a goal.”

Four years ago, the first diplomatic discussions on autonomous weapons seemed more promising, with a sense that countries were ahead of the curve. Today, even as the public grows increasingly aware of the issues, and as self-driving cars pop up frequently in the daily news, energy for a ban seems to be waning. Notably, one recent open letter by AI and robotics company founders did not call for a ban. Rather, it simply asked the UN to “protect us from all these dangers.” Even as more people become aware of the problem, what to do about it seems less and less clear.

There are many reasons why a ban seems unlikely. The technology that would enable autonomous weapons is already ubiquitous. A reasonably competent programmer could build a DIY killer robot in their garage. Militaries are likely to see autonomy as highly useful, as it will give them the ability to operate machines with faster-than-human reaction times and in environments that lack communications, such as undersea. The risk to innocent civilians is unclear – it is certainly possible to envision circumstances in which self-targeting weapons would be better at some tasks than people. And the most difficult problem of all is that autonomy advances incrementally, meaning there may be no clear bright line between the weapon systems of today and the fully autonomous weapons of the future.

So if not a ban, then what?

There has been some halting diplomatic progress over the past few years in exploring the role of humans in using lethal force in war. This idea is expressed in different ways, with some calling for “meaningful human control” and others suggesting “appropriate human judgment” is necessary. Nevertheless, there is growing interest in better understanding whether there is some irreducible place for humans in lethal force decisions. The 2016 diplomatic meetings concluded with states agreeing to continue to discuss “appropriate human involvement with regard to lethal force,” a compromise term. None of these terms are defined, but they express a general sentiment toward keeping humans involved in lethal force decisions on the battlefield. There are good reasons to do so.

AT: Decisionmaking Speed Good
High-speed decisionmaking is unpredictable and sacrifices human control

Dillon Patterson 15, National Security Fellow at Harvard Kennedy School Belfer Center, "Ethical Imperatives for Lethal Autonomous Weapons," 5-5-2015, Belfer Center for Science and International Affairs, https://www.belfercenter.org/publication/ethical-imperatives-lethal-autonomous-weapons//Triumph Debate

Rationalizing positions for the ethics of LAWS requires a shared understanding of the technical concept in question between technologists, military leaders, policymakers, and ethicists. It is essential to understand that autonomy and AI are separate technical matters. Many autonomous systems in development incorporate some form of AI within their architecture; therefore, AI will be treated as an integral component for this discussion.

In a 2019 document addressing autonomy in future combat systems, Dr. Greg Zacharias, US Air Force Chief Scientist, borrows from the Merriam-Webster dictionary in defining autonomy as “the quality or state of self-governing; the state of existing or acting separately from others.”3 An autonomous system requires internal decision-making capability in place of a human mind, enabling the machine to utilize its network of sensors, information processors, and action nodes to detect, decide, act, and update itself as it operates within the mission environment. The introduction of a mission goal by a human initiates the sequence of autonomous operation. When the goal is simple, the decision mechanism can be simple. When the goal is complex, or the environment is dynamic, the decision mechanism must be complex. Thus, many autonomous systems have artificial intelligence at their core.

Unfortunately, no absolute definition of artificial intelligence exists. Massachusetts Institute of Technology professor Max Tegmark simplifies the matter by first defining intelligence as “the ability to accomplish complex goals.”4 Applying Tegmark’s notion to non-human machines then yields a simple definition of AI as the ability of machines to accomplish complex goals.

The AI employed within modern LAWS accomplishes complex goals through architecture that typically fits into one of two categories: logical processing (LP), and machine learning (ML).5 LP systems require experts in a particular field to develop a mathematical model that defines an environment, such as weather patterns, traffic flows, or financial transactions. Computer scientists and engineers then utilize this model to program a set of exact instructions for the machine to follow when acting within the modeled environment. Machine learning takes a different approach. Instead of following instructions to act within a model, ML techniques begin with large quantities of data from the mission environment.

The machine uses data to discover trends or patterns that a human expert may never identify. Engineers shape the machine learning process by sending the data through a training algorithm that enables the machine to discover mathematical functions that approximately define the environment. Although the approximated functions may not be exact, they are typically more accurate than an expert-derived model because the machine can sort through far more data than a human mind.6 Upon completion of the learning process, a new algorithm is programmed into the machine, which activates the function learned from the data.7 The activation algorithm tells the machine how to utilize what it has learned as it operates in the mission environment to solve the goal presented by the human.

The most advanced contemporary AI systems are formed by combining multiple ML units into stacked layers, commonly referred to as deep learning networks. Each layer in the network is trained to learn a specific aspect of the target environment, providing a critical piece of the overall complex estimation of the environment.8 Deep learning networks are so internally complex that engineers who shape the learning and activation algorithms can never know precisely how their machines come to the output actions, thus earning the nickname “black boxes.”9

The high-powered analytical ability of AI-based autonomous systems will increasingly enable combat machines to accurately and expeditiously detect, decide, and act in battle. However, the manner in which AI is engineered restricts these systems to either the strict set of instructions given in a logical processing architecture or to the activation boundaries of an approximated model created within a machine learning system. Ultimately, autonomous systems are limited to action within the domain built into their decision mechanism. The cost of autonomous high-speed precision and accuracy is domain inflexibility; domain flexibility, by contrast, remains a strength of the human mind.
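The distinction Patterson draws between logical processing and machine learning can be made concrete with a minimal, purely illustrative sketch in Python. Nothing below comes from the card: the threshold rule, the toy training loop, and every number and function name are hypothetical stand-ins for an expert-coded rule versus a rule learned from data and then wrapped in an "activation" step for use in operation.

import random

# Logical processing: an expert hand-codes the decision rule directly.
def lp_classify(signal_strength):
    """Expert-derived rule (hypothetical): readings above 0.7 count as a target."""
    return signal_strength > 0.7

# Machine learning: derive an approximate rule from labeled data.
def train_threshold(samples):
    """Training step: pick the threshold that minimizes errors on the labeled data."""
    best_t, best_err = 0.0, float("inf")
    for t in (i / 100 for i in range(101)):
        err = sum((reading > t) != label for reading, label in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def make_ml_classifier(samples):
    """Activation step: wrap the learned threshold for use during operation."""
    t = train_threshold(samples)
    return lambda signal_strength: signal_strength > t

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical training data: (sensor reading, is_target) pairs with noise.
    data = [(r, r + random.gauss(0, 0.05) > 0.65)
            for r in (random.random() for _ in range(200))]
    ml_classify = make_ml_classifier(data)
    for reading in (0.3, 0.66, 0.9):
        print(reading, "LP:", lp_classify(reading), "ML:", ml_classify(reading))

The point of the sketch is only the division of labor: the logical-processing rule is fixed by its authors, while the machine-learned rule is whatever the training data imply, which is why the card notes that learned systems can outperform expert models yet remain confined to the domain their data describe.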

Heightened precision and speed of decisionmaking trade off with meaningful human control

Mary Wareham 20, Advocacy director of the Arms Division @ Human Rights Watch, "Robots Aren't Better Soldiers than Humans," 10-26-2020, Human Rights Watch, https://www.hrw.org/news/2020/10/26/robots-arent-better-soldiers-humans//Triumph Debate

When the international debate over fully autonomous weapons began in 2013, a common question was whether robots or machines would perform better than humans. Could providing more autonomy in weapons systems result in greater accuracy and precision? Could such weapons increase compliance with international humanitarian laws because they would not rape or commit other war crimes? Would they “perform more ethically than human soldiers,” as one roboticist claimed?

For years, roboticist Noel Sharkey, a professor at Sheffield University in England, warned that computers may be better than humans at some tasks, but killing is not one of them. Sharkey and his colleagues became increasingly alarmed that technological advances in computer programming and sensors would make it possible to develop systems capable of selecting targets and firing on them without human control.

They warned that autonomous weapons systems would be able to process data and operate at greater speed than those controlled by humans. Complex and unpredictable in their functioning, such systems would have the potential to make armed conflicts spiral rapidly out of control, leading to regional and global instability. Autonomous weapons systems would be more likely to carry out unlawful orders if programmed to do so, due to their lack of emotion and the fact that morality cannot be outsourced to machines.

With military investments in artificial intelligence and emerging technologies increasing unabated, Sharkey and his colleagues demanded arms control. Yet China, Israel, Russia, South Korea, Britain, the United States, and other military powers have continued their development of air, land, and sea-based autonomous weapons systems.

My organization, Human Rights Watch, took a close look at these investments and the warnings from the scientific community. It didn’t take long to see how allowing weapons systems that lack meaningful human control would undermine basic principles of international humanitarian law and human rights law, including the rights to life and to a remedy and the protection of human dignity. Their use would also create a substantial accountability gap: with human control removed from the use of force, programmers, manufacturers, and military personnel could all escape liability for unlawful deaths and injuries caused by fully autonomous weapons.

As we talked to other groups, the list of fundamental ethical, moral, and operational concerns grew longer. It became clear that delegating life-and-death decisions to machines on the battlefield or in policing, border control, and other circumstances is a step too far. If left unchecked, the move could result in the further dehumanization of warfare.

In 2013, Human Rights Watch and other human rights groups established the Campaign To Stop Killer Robots, to provide a coordinated voice on these concerns and to work to ban fully autonomous weapons and retain meaningful human control over the use of force.

Within months, France convinced more than 100 countries to open diplomatic talks on how to respond to questions raised by lethal autonomous weapons systems. Before then, no government had considered such questions or met with other states to discuss them. As happens so often, there was no response until scientists and civil society raised the alarm.

None of the nine United Nations meetings held since 2014 on killer robots have focused at any length on how better programming could be the solution. There remains a lack of interest in discussing whether there are potential benefits or advantages to removing meaningful human control from the use of force. This shows how technical fixes proposed years ago are, on their own, not an adequate or appropriate regulatory response.

Instead, the legal debate continues over the adequacy of existing law to prevent civilian harm from fully autonomous weapons. There’s growing acknowledgment that the laws of war were written for humans and cannot be programmed into machines.

Indeed, by 2020 the issue of removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action. Political leaders are waking up to this challenge and are working for regulation, in the form of an international treaty.

A new international treaty to prohibit and restrict killer robots has been endorsed by dozens of countries, UN Secretary General António Guterres, thousands of artificial intelligence experts and technology sector workers, more than 20 Nobel Peace laureates, and faith and business leaders.

In addition, the International Committee of the Red Cross sees an urgent need for internationally agreed-upon limits on autonomy in weapon systems to satisfy ethical concerns (the dictates of the public conscience and principles of humanity) and ensure compliance with international humanitarian law.

In his address to the United Nations last month, Pope Francis commented on killer robots, warning that lethal autonomous weapons systems would “irreversibly alter the nature of warfare, detaching it further from human agency.” He urged states to “break with the present climate of distrust” that is leading to “an erosion of multilateralism, which is all the more serious in light of the development of new forms of military technology.”

Yet US political leaders, from the Trump administration to Congress, have been largely silent on calls for regulation. US officials claim a 2012 Pentagon directive “neither encourages nor prohibits the development” of lethal autonomous weapons systems. The directive was updated in 2017 with minimal change and still explicitly permits development of such weapons systems.

A new international treaty to prevent killer robots will happen with or without the United States. As a new report by Human Rights Watch shows, there is ample precedent for such a treaty. Existing international law and principles of artificial intelligence show how it is legally, politically, and practically possible to develop one.

The next US administration should review its position on killer robots in the context of the leadership role it wants to take in the world. It should accept that an international ban treaty is the only logical outcome for the diplomatic talks. Technological fixes in this case are not the answer.

Military AI causes more conflict---AWs fall into the wrong hands, suppress civilians, and make it easier to engage in conflict because of fast decisionmaking and secrecy

Andreas Kirsch 18, Fellow at Newspeak House, "Autonomous weapons will be tireless, efficient, killing machines—and there is no way to stop them,", 7/23/2018, Quartz, https://qz.com/1332214/autonomous-weapons-will-be-tireless-efficient-killing-machines-and-there-is-no-way-to-stop-them/

The world’s next major military conflict could be over quickly.

Our human soldiers will simply not stand a chance. Drones and robots will overrun our defenses and take the territory we are standing on. Even if we take out some of these machines, more of them will soon arrive to take their place, newly trained off our reactions to their last offense. Our own remote-controlled drones will be outmaneuvered and destroyed, as no human operator can react quickly enough to silicon-plotted attacks.

This isn’t a far-off dystopian fantasy, but a soon-to-be-realized reality. In May, Google employees resigned in protest over the company helping the US military develop AI capabilities for drones. (The company ultimately decided to shelve the project.) More recently, 2,400 researchers vowed not to develop autonomous weapons. Many AI researchers and engineers are reluctant to work on autonomous weapons because they fear their development might kick off an AI arms race: Such weapons could eventually fall into the wrong hands, or they could be used to suppress the civilian population.

How could we stop this from happening?

The first option is developing a non-proliferation treaty to ban autonomous weapons, similar to the non-proliferation treaty for nuclear weapons. Without such a treaty, the parties voluntarily abstaining from developing autonomous weapons for moral reasons will have a decisive disadvantage.

That’s because autonomous weapons have many advantages over human soldiers. For one, they do not tire. They can be more precise, and they can react faster and operate outside of parameters in which a human would survive, such as long stints in desert terrains. They do not take years of training and rearing, and they can be produced at scale. At worst they get destroyed or damaged, not killed or injured, and nobody mourns them or asks for their bodies to be returned from war.

It is also easier to justify military engagements to the public when autonomous weapons are used. As human losses to the attacker’s side are minimal, armies can keep a low profile. Recent engagements by the US and EU in Libya, Syria, and Yemen have focused on using drones, bombing campaigns, and cruise missiles. Parties without such weapons will have a distinct handicap when their soldiers have to fight robots.

But even if all countries signed an international treaty to ban the development of autonomous weapons, as they once did for nuclear non-proliferation, it would be unlikely to prevent their creation. This is because there are stark differences between the two modes of war.

There are two properties that make 1968’s nuclear non-proliferation treaty work quite well: the first is a lengthy ramp-up time to deploying nuclear weapons, which allows other signatories to react to violations and enact sanctions; the second is effective inspections.

To build nuclear weapons, you need enrichment facilities and weapons-grade plutonium. You cannot feasibly hide either and, even when hidden, traces of plutonium are detected easily during inspections. It takes years, considerable know-how, and specialized tools to create all the special-purpose parts. Moreover, all of the know-how has to be developed from scratch because it is secret and import-export controlled. And even then, you still need to develop missiles and means of deploying them.

But it’s the opposite with autonomous weapons.

To start, they have a very short ramp-up time: Different technologies that could be used to create autonomous weapons already exist and are being developed independently in the open. For example, tanks and fighter planes have lots of sensors and cameras to record everything that is happening, and pilots already interface with their plane through a computer that reinterprets their steering commands. They just need to be combined with AI, and suddenly they have become autonomous weapons.

AT: Civilian Casualties

Compassion is a necessary check on acts whose outcomes involve civilian casualties—LAWs don’t have it.

Human Rights Watch 12 [Human Rights Watch, “Losing Humanity: The Case against Killer Robots,” November 19th, 2012, https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots]

By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians. First, robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians. Emotionless robots could, therefore, serve as tools of repressive dictators seeking to crack down on their own people without fear their troops would turn on them. While proponents argue robots would be less apt to harm civilians as a result of fear or anger, emotions do not always lead to irrational killing. In fact, a person who identifies and empathizes with another human being, something a robot cannot do, will be more reluctant to harm that individual. Second, although relying on machines to fight war would reduce military casualties—a laudable goal—it would also make it easier for political leaders to resort to force since their own troops would not face death or injury. The likelihood of armed conflict could thus increase, while the burden of war would shift from combatants to civilians caught in the crossfire. Finally, the use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection. Given that such a robot could identify a target and launch an attack on its own power, it is unclear who should be held responsible for any unlawful actions it commits. Options include the military commander that deployed it, the programmer, the manufacturer, and the robot itself, but all are unsatisfactory. It would be difficult and arguably unfair to hold the first three actors liable, and the actor that actually committed the crime—the robot—would not be punishable. As a result, these options for accountability would fail to deter violations of international humanitarian law and to provide victims meaningful retributive justice.

Human error causes civilian casualties; keeping humans in the loop can’t solve for human error, and LAWs may perform better.

Noone and Noone 15 [Gregory Noone, Fairmont State University, and Nancy Noone, Fairmont State University, “The Debate Over Autonomous Weapons Systems,” Case Western Reserve Journal of International Law, 2015, https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?article=1005&context=jil]

More common ground may be found in that all parties also agree that human error exists and that we collectively strive to eliminate the pain and suffering caused by such error. We have investigated civilian train, ferry, and airline crashes such as the 1985 Japan Airlines crash that killed 520 people, caused by improper maintenance techniques. 29 We try to compensate for poor witness identification in criminal cases that may lead to the death penalty for an accused. Every civilian law enforcement shooting is thoroughly reviewed. Human error in the medical field results in 100-200 deaths every day in the United States that may lead to litigation and extensive discovery.30 Likewise, in the military, human error has claimed more than its share of lives. A deadly steam fire onboard the USS IWO JIMA killed ten sailors because the civilian maintenance crew used brass nuts instead of steel ones on a steam valve.31 In 1987, during the Iran-Iraq war, in which the U.S. was supporting Iraq, the USS STARK did not adequately identify a threat from an Iraqi fighter jet that (supposedly) misidentified the STARK as an Iranian ship, and as a result 37 sailors died when two 1,500-pound Exocet missiles impacted the ship. 32 As a result of the STARK's underreaction error, the next year the USS VINCENNES overreacted through human error and shot down an Iranian civilian Airbus A300 in the Persian Gulf, killing all the civilian passengers and crew. The VINCENNES believed the airplane was descending into an attack profile and was identified as a military aircraft by its "squawk" transmission, when in reality it was ascending after takeoff en route to Dubai and was recorded with a civilian squawk. 33 Nearly all friendly fire incidents are the result of human error. The friendly-fire shootdown of a pair of U.S. Army Blackhawks by U.S. Air Force F-15s in northern Iraq's "No Fly Zone" in 1994 was the result of human error by the AWACS crew as well as the F-15 pilots who made visual contact prior to shooting. 34 U.S. Army Ranger, and former NFL player, Pat Tillman was killed in Afghanistan as a result of human error by his fellow unit members when he was misidentified as the enemy in a firefight in 2004.31 "Such tragedies demonstrate that a man in the loop is not a panacea during situations in which it may be difficult to distinguish civilians and civilian objects from combatants and military objectives. Those who believe otherwise have not experienced the fog of war." 36 In short, human error causes untold deaths; perhaps AWS can perform better. C. Machines Instead Of Humans Even more common ground in this debate is the fact that both sides agree there should not be a "robot army" fighting "robot wars." The U.S. Department of Defense has made it clear AWS will not replace humans in combat but will instead reduce their exposure to life-threatening tasks (such as at checkpoints dealing with suicide bombers) and reduce the potential cognitive overload of operators and supervisors.17 Another area of agreement can be found in that both sides of this debate understand the inherent weaknesses in AWS. Any system is subject to breakdowns, malfunctions, glitches, interference (i.e. hacking by the enemy or others), and beyond those mechanical issues in a conflict setting, information / intelligence will always be the Achilles' heel of any tasking and deployment of any weapon system.
One rather interesting argument against AWS replacing human combatants is that humans are "capable of morally praiseworthy and supererogatory behavior, exemplified by (for example) heroism in battle, something that machines may not be capable of... [and] replacing humans with such machines may also eliminate the occurrence of soldiers 'going beyond the call of duty'... [and] unduly threatens the ability of human soldiers to exhibit morally exceptional behavior, and undermines important aspects of the military profession." 38 This may be true, and a few combatants may seek combat glory, but 99.99% of combatants simply want to get the mission done efficiently with as few casualties as possible. If you are in a situation that requires individuals to "go beyond" what is asked of them, your situation is probably less than ideal and the overall environment you're operating in could be dire. Another point to be made here is that many medals for heroism are for defensive actions (e.g., throwing oneself on a grenade to save a foxhole buddy's life) and AWS would be ideally suited for a unit's overall defensive posture. Another argument put forth against AWS is that it is "disrespectful" to be killed by a machine. First and foremost, it is easy to assume that seeing the man's eyes as he stabs you doesn't make your death any more palatable than the proverbial "you never hear the round that kills you." Secondly, we are in an age of over-the-horizon weapons, indirect fire, and buried IEDs; therefore, the notion that being killed by one type of weapon versus another is somehow more "respectful" is misplaced.

Precision-Guided Weapons that aren’t LAWs are more responsible for saving lives

Michael C. Horowitz and Paul Scharre 14, Professor of political science at the University of Pennsylvania, Senior fellow and Director of the Technology and National Security Program at the Center for a New American Security "Do Killer Robots Save Lives?," POLITICO Magazine, 11-19-2014, https://www.politico.com/magazine/story/2014/11/killer-robots-save-lives-113010

Smarter Bombs, Saving Civilians

One of the most significant developments in the twentieth century toward making warfare more humane and reducing civilian casualties came not in the form of a weapon that was banned, but a new weapon that was created: the precision-guided munition. In World War II, in order to have a 90 percent probability of hitting an average-sized target, the United States had to drop over 9,000 bombs, using over 3,000 bombers to conduct the attack. This level of saturation was needed because the bombs themselves were wildly inaccurate, with only a 50/50 chance of landing inside a circle 1.25 miles in diameter. The result was the widespread devastation of cities as nations blanketed each other with bombs, killing tens of thousands of civilians in the process. Aerial warfare was deemed so inhumane, and so inherently indiscriminate, that there were attempts early in the twentieth century to ban bombardment from the air, efforts which obviously failed.

By Vietnam, most US bombs had a 50/50 chance of landing inside an 800-foot diameter circle, a big improvement over 1.25 miles. Even still, over 150 bombs launched from over 40 aircraft were required to hit a standard-sized target. It is not surprising that civilian casualties from air bombing still occurred frequently and in large numbers.
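A quick back-of-the-envelope calculation, not from the article, shows why those bomb counts follow from accuracy alone. Assuming each bomb independently hits the target with probability p, the number of bombs needed for a 90 percent chance of at least one hit is the smallest n with 1 - (1 - p)^n >= 0.9. The per-bomb probabilities in the sketch below are illustrative assumptions chosen only to be roughly consistent with the figures Horowitz and Scharre cite (about 9,000 bombs in World War II, about 150 in Vietnam).

import math

def bombs_needed(p_hit, confidence=0.9):
    """Smallest n such that 1 - (1 - p_hit)**n >= confidence, assuming independent drops."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_hit))

# Illustrative per-bomb hit probabilities (assumptions, not sourced figures).
for era, p in [("WWII-era unguided bomb", 0.00026),
               ("Vietnam-era unguided bomb", 0.0153),
               ("Modern precision-guided munition", 0.9)]:
    print(f"{era}: p = {p} -> ~{bombs_needed(p)} bombs for 90% confidence")

On these assumed values the formula returns roughly 8,900, 150, and 1 bombs respectively, which is the arithmetic behind the card's claim that improved precision, rather than restraint alone, drove the reduction in civilian harm.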

The Gulf War was the first conflict where the use of precision-guided weapons entered the public consciousness. Video footage from “smart bombs” replayed on American televisions provided a dramatic demonstration of how far military power had advanced in a half century.

Today, the weapons that the United States and many advanced militaries around the world use are even more precise. Some are even accurate to within 5 feet, meaning targets are destroyed with fewer bombs and, importantly, fewer civilian casualties. Militaries prefer them because they are more effective in destroying the enemy, and human rights groups prefer them because they save civilian lives. In fact, Human Rights Watch recently asserted that the use of unguided munitions in populated areas violates international law.

How Smart is Too Smart?

Lethal autonomous weapon systems (LAWS) stand in stark contrast to homing munitions and “smart” bombs, which use automation to track onto targets selected by humans. Instead, LAWS would choose their own targets. While simple forms of autonomous weapons are possible today, LAWS generally do not currently exist—and, as far as we know, no country is actively developing them.

Yet fearing that the pace of technological advancement means that the sci-fi future may not be far off, in 2013, NGOs launched a Campaign to Stop Killer Robots. Led by Jody Williams and some of the same activists that led the Ottawa and Oslo treaties banning land mines and cluster munitions, respectively, the Campaign has called for an international ban on autonomous weapons to preempt their development.

The NGO campaign against “killer robots” has generally focused, up to this point, on the autonomous weapons of the future, not the smart bombs of today. Campaign spokespersons have claimed that they are not opposed to automation in general, but only to autonomous weapons that would select and engage targets without human approval.

Recent moves by activists suggest their sights may be shifting, however. Activists have now raised concerns about a number of next-generation precision-guided weapons, including the UK Brimstone missile, the U.S. long-range anti-ship missile (LRASM), and Norway’s Joint Strike Missile. While defense contractors love to pepper the descriptions of their weapons with the word “autonomous,” emphasizing their advanced features, actual technical descriptions of these weapons indicate that a person selects the targets they are engaging. They’re more like the precision-guided weapons that have saved countless civilian lives over the last generation, not the self-targeting “killer robots” of our nightmares.

Nevertheless, some activists seem to think that these further enhancements to weapons’ accuracy go too far towards creating “killer robots.” Mark Gubrud, of the International Committee for Robotic Arms Control, described LRASM in a recent New York Times article as “pretty sophisticated stuff that I would call artificial intelligence outside human control.” Similarly, the Norwegian Peace League, a member of the Campaign to Stop Killer Robots, has spoken out against development of the Joint Strike Missile.

AT: No I-Law Framework
International norms-setting creates cooperation over LAWs that creates infrastructure for future cooperation over technology

Garcia 19 [Eugenio Garcia 19, United Nations - Office of the President of the General Assembly, “The militarization of artificial intelligence: a wake-up call for the Global South,” https://www.researchgate.net/publication/335787908_The_militarization_of_artificial_intelligence_a_wake-up_call_for_the_Global_South] /Triumph Debate

Norms and other approaches to mitigate risks are one of the possible responses to the negative side of AI technology. A recent study identified several of these unsettling aspects of AI: increased risk of war or a first strike; disruption in deterrence and strategic parity; flawed data and computer vision; data manipulation; ineffective crisis management; unexpected results; failure in human-machine coordination; backlash in public perception; inaccuracy in decisionmaking; and public sector-private sector tensions.29 The current deficit in explainability on how neural networks reach a given outcome is likewise raising uneasiness: AI’s black box opacity could increase the sense of insecurity rather than provide strategic reassurance. Old-established military doctrines may be discredited by software innovations, hacking, malware, or cyber-attacks, in a manner that strategic superiority is never fully achieved or sustained. This uncertainty could prove extremely difficult to cope with and give no guarantee of security against adversarial or malicious attempts to interfere with defense algorithms. Searching for predictability by means of norm-setting, therefore, is not just a question of inducing appropriate behavior among states or protecting the weak from the dominance of the powerful. Rather, it is a matter of building commonly accepted rules for all and minimum standards to avert strategic uncertainty, undesirable escalations, and unforeseen crises spinning out of control. As illustrated by nuclear weapons, retaliation by the enemy usually has a curbing effect against the actual use of certain arms in war. This self-restraint mechanism is all the more conspicuous when there is almost certainty that reprisals will be devastating. It might happen that, in order to manage rivalries, defuse and ‘pre-de-escalate’ tensions, some military powers may embrace limitations in the use of autonomous weapons to outlaw certain practices, protect civilians and other sensitive targets from attacks, or simply to avoid future vulnerability in the event of a situation where two-way deterrence renders void the strategic advantage supposed to accrue from a first strike. Certainly, there is no doomsday clock for autonomous weapons at the moment, inasmuch as ‘mutual assured destruction’ does not apply to them (yet). This should not serve as a relief though. On the contrary, if AI-enhanced weapons may be seen as tactically effective for specific missions, the threshol