SOURCE:
http://www.eurasiareview.com/05012016-lethal-autonomous-systems-and-the-future-of-warfare-analysis/
Lethal Autonomous Systems and the Future of Warfare – Analysis
Canadian Military Journal
Courtesy: http://www.journal.dnd.ca/vol16/no1/page44-eng.asp
MILITARY TECHNOLOGY
Lethal Autonomous Systems and the Future of Warfare
by Daniel Sukman
Major Daniel Sukman, U.S. Army, is an Army Strategist. He holds a B.A. from Norwich University and an M.A. from Webster University. During his career, Major Sukman served with the 101st Airborne Division (Air Assault) and United States European Command. His combat experience includes three tours in Iraq.
“I’m sorry, Dave. I’m afraid I can’t do that.” – HAL 9000, 2001: A Space Odyssey
Introduction
War is fundamentally a human endeavour. It is a clash of wills involving political leaders, soldiers, and civilian populations of opposing states and non-state actors. Within this human endeavour, the U.S. has always sought a technological advantage, since technological advantages have helped the U.S. overcome the numerical advantages held by adversaries. Maintaining that technological advantage is paramount as the U.S. moves into a period of fiscal restraint and significantly reduced force size.
Today, the world is approaching a robotics revolution in military affairs that may be on par with the introduction of gunpowder, the levée en masse, and the advent of nuclear weapons.1 Unmanned and autonomous systems have the potential to fundamentally change the American way of war, altering how policy makers posture and apply land forces to achieve strategic ends. Unmanned and autonomous systems may even change the roles and missions of the Army itself. To capitalize on this potential, there is an overwhelming need to build more detail on top of existing guidance, so that the Services can develop new capabilities with both understanding and confidence.
It is clear that robotics and autonomous systems will have a place in society, and will play an increased role on the battlefield of the future. The question remains, what role will lethal autonomous systems play in the future? This article will examine the history, domestic and international policy trends, and the ethics of lethal autonomous systems on the battlefield of the future.
History of Autonomous Weapons Systems
Department of Defense Directive 3000.09 defines an autonomous weapon system as:
“…a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”
The U.S. military has employed, and continues to employ, various semi-autonomous lethal systems in conflict. Land and sea mines are one example of lethal autonomous systems. The international community has made attempts to limit and even eliminate the use of such weapons, as evidenced by the Mine Ban Treaty of 1997 and the Convention on Cluster Munitions of 2008.
The U.S. employs semi-autonomous systems as components of air and missile defence systems. It is generally accepted that the speed of jet bombers and ballistic missiles limits the decision space of the humans who must decide whether to employ a Patriot missile in defence. The U.S. has had much success in the employment of these systems; however, that success has not been without deadly mistakes, such as the 1988 shooting down of an Iranian commercial airliner by the Aegis system aboard USS Vincennes, or the shooting down of a British Tornado aircraft by a Patriot battery in the opening stage of Operation Iraqi Freedom in 2003.
Examples of other systems that employ varying degrees of autonomy include the U.S. Phalanx system for Navy surface ships, the U.S. Counter Rocket, Artillery, and Mortar (C-RAM) system, and the Israeli Harpy, which detects, attacks, and destroys adversary radar emitters.
With the success of unmanned aerial systems, and the demand for unmanned ground vehicles in the recent conflicts in Iraq and Libya and the current conflict in Afghanistan, the U.S. Department of Defense will most likely continue research and development funding to improve these systems, as well as to build new ones. The counterpoint to this success, however, is that much of the ‘success’ of unmanned ground systems (UGS) came in one of the most benign air defence artillery (ADA)/counter-air operational environments the U.S. has seen since the Civil War. In a world where conflict with a ‘near peer’ competitor may be likely, will UGS be as successful and useful as in Operation Enduring Freedom and Operation Iraqi Freedom? If not, are the costs of near- and mid-term science and technology worth the payback in such a world of technological parity?
Current U.S. Policy and Trends
Currently, neither the United States nor any other nation employs fully autonomous lethal robots. However, strategic science and technology trends do seem to indicate that in 2025 and beyond, with the rapid advancement of technology, lethal autonomous robots and other systems will be available for use by the U.S. military.
U.S. policy trends on lethal autonomous systems date back to 1976, when President Gerald Ford signed Executive Order 11905, which outlawed assassinations. President Jimmy Carter reinforced this order with Executive Order 12036 in 1978. These two orders banned political assassinations, both direct and indirect. On 4 December 1981, President Ronald Reagan signed Executive Order 12333, which stated, “No person employed by or acting on behalf of the U.S. Government shall engage in, or conspire to engage in, assassination.”2 On 3 November 2002, with the advent of the Global War on Terrorism, a Predator unmanned aircraft launched a Hellfire missile that killed Abu Ali al-Harithi in Yemen, turning the aforementioned executive orders ‘on their heads.’3
With the publication of Department of Defense Directive 3000.09 in November 2012, the U.S. became the first nation to adopt an official public policy on autonomous systems. This policy placed a ten-year moratorium on the development of lethal autonomous systems, allowing only for the development of non-lethal autonomous systems.4 The importance of this directive is that it recognizes the dangers to civilians on the battlefield, and requires a ‘human in the loop’ for the use of lethal force. This directive, however, is for a limited period, and it can be waived by senior DoD officials.
DoDD 3000.09 also addresses lethal autonomous systems used against non-human targets, such as aircraft and incoming ballistic missiles. Specifically, it states:
“Human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks, for static defense of manned installations and onboard defense of manned platforms.”
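Read as a decision rule, the carve-out reduces to a handful of conditions: the target may not be a human, the threat must be time-critical or part of a saturation attack, and the engagement must be local defence of a manned installation or platform. A minimal sketch of that reading follows (hypothetical names; a deliberate simplification for illustration, not an implementation of the directive):

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_human: bool          # humans may not be selected as targets
    is_time_critical: bool  # e.g., an incoming rocket or a saturation attack

@dataclass
class Context:
    local_defense: bool                    # defending the immediate position
    manned_installation_or_platform: bool  # static installation or onboard platform

def supervised_engagement_permitted(target: Target, ctx: Context) -> bool:
    """Simplified reading of the DoDD 3000.09 carve-out for human-supervised
    autonomous weapon systems."""
    return (not target.is_human
            and target.is_time_critical
            and ctx.local_defense
            and ctx.manned_installation_or_platform)

# An incoming rocket engaged by a C-RAM-like system passes; a human target never does.
print(supervised_engagement_permitted(
    Target(is_human=False, is_time_critical=True),
    Context(local_defense=True, manned_installation_or_platform=True)))  # True
```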
Although current U.S. policy does not allow for the development of lethal autonomous systems without a ‘human in the loop,’ there are a number of ways autonomous robotics can be employed on the battlefield. Unmanned autonomous systems can maintain line-of-sight communications in contested electromagnetic environments. Autonomous ISR, jamming, decoys, communications relay, and sustainment resupply are just a few of the many missions that autonomous systems can perform, freeing human operators to concentrate upon tasks that require lethal decision-making. These are but a few of the advantages applicable to such systems.5
International Policy Trends
International policy trends indicate that nations may prohibit the use of autonomous lethal weapons systems and robotics on the battlefield in the future. Currently, international protocols impose an obligation not to use weapons that have indiscriminate effects.
Although U.S. unmanned systems and potential autonomous weapons systems are precise, they do not possess the ability to determine the second- and third-order effects of killing another human being.
In 1997, with the adoption of the Ottawa Treaty, the international community banned the use of anti-personnel land mines, one of the original forms of unmanned lethal systems. The United States has not signed the Mine Ban Treaty of 1997, due to the need, in conjunction with South Korea, to use mines to defend the inter-Korean border. However, the United States will fall outside international norms if it chooses not to sign such treaties, especially if the signatories to the laws of land warfare decide to outlaw armed autonomous systems.
In 2010, the Convention on Cluster Munitions entered into force. This treaty, similar to the Ottawa Treaty, prohibits the use and stockpiling of cluster munitions.6
When the international community bans a class of weapons systems, be they mines or chemical or biological weapons, the ban or restriction comes in six distinct parts: first, the acquisition, retention, or stockpiling of the weapons; second, basic or applied research and development; third, testing; fourth, deployment; fifth, transfer; and sixth, use.7
The Laws of Land Warfare, written over a century ago, still apply to the use of autonomous lethal systems now and, in all likelihood, will in the future. The Hague Convention (IV) requires any combatant “to be commanded by a person.” The Martens Clause, a binding rule of International Humanitarian Law, specifically demands the application of “the principle of humanity” in armed conflict. Without humans, there is no humanity.8
Recent examples of international policy trends can be seen in the launch of the “Campaign to Stop Killer Robots,”9 formed by a coalition of non-governmental organizations, including Human Rights Watch. The Secretary-General of the United Nations has called for international action to address the concerns over fully autonomous weapons, or ‘killer robots.’ The remarks were the latest in a string of statements that United Nations (UN) officials made on the topic during 2013 at the Human Rights Council, the Convention on Conventional Weapons, and elsewhere.10
In March 2013, during a debate in the British House of Lords, Lord Astor of Hever (Parliamentary Under-Secretary of State for Defence; Conservative) stated: “Fully autonomous systems rely on a certain level of artificial intelligence for making high-level decisions from a very complex environmental input, the result of which might not be fully predictable at a very detailed level. However, let us be absolutely clear that the operation of weapons systems will always be under human control.”11 Article 36, a United Kingdom-based NGO, has praised this pledge, but still calls for further international agreements and treaties to strengthen the commitment not to develop fully autonomous weapons and systems that could undertake attacks without meaningful human control.12
The group Article 36 derives its name from Article 36 of the 1977 Additional Protocol to the Geneva Conventions, which provides the framework for the legal review of new weapons. Specifically, it states: “In the study, development, acquisition or adoption of a new weapon, means or method of warfare,” a party is “…under an obligation to determine whether its employment would, in some or all circumstances, be prohibited,” either by Protocol I, or by “any other rule of international law applicable” to such a party.13
Although some international organizations have called for a moratorium on the development of lethal autonomous systems, other nations have continued with development. In 2006, the government of South Korea began installing Samsung Techwin SGR-A1 sentry robots along the DMZ with North Korea. These systems are capable of fully autonomous tracking and targeting, although human approval is still required before firing.14
International policy trends are not the only risk or area of ‘blowback’ the United States may face in the development of autonomous lethal systems. Our own doctrine and leader development are at risk when we take the human out of the loop.
Autonomous Weapons Systems Change the Operational Environment
Unmanned technologies will continue to improve, and the number of allies, partners and adversaries who possess these systems will continue to rise. Our competitors continue to catch up to the U.S. in unmanned technology. Enemy unmanned systems will complicate air, ground, and maritime operations by adding new low-altitude, ground, and amphibious threats to the force that the United States must be able to counter.15
Adversaries of the United States and our allies and partners will continue to acquire and develop sophisticated weapons systems, including precision guided munitions, ballistic missiles, stealth, and unmanned aerial systems. Advanced competitors, such as China, Russia, and Iran, as well as non-state actors such as Hezbollah, could possess sophisticated guided weapons, battle network technologies, and land-based reconnaissance strike capabilities.16
Although the United States maintains clear advantages over our adversaries today, one cannot accept these advantages as permanent. Other state and non-state adversaries of the U.S. and our allies are likely to obtain lethal autonomous systems in the future, and some of them are less likely to follow international protocols on their use.
Unmanned and autonomous weapons systems add physical distance between U.S. soldiers and the battlefield. They remove humans from the kinetic action that occurs in warfare, and they allow the U.S. military to strike enemies from a greater distance. However, the use of unmanned and autonomous robots can remove the element of mutual respect between combatants that has persisted over time. Without that mutual respect, it becomes difficult, if not impossible, to conduct a dialogue with our adversaries. Without dialogue, there is no method to achieve our end state, which, in turn, can lead to persistent conflict.
The largest, and arguably the most dangerous, aspect of a changing operational environment will be the impact upon the homeland. Today, service members operate unmanned systems, such as Predator drones, out of secure facilities safely inside the borders of the United States. Individuals making the decision to use lethal force are not physically present when that force is applied. Operating lethal systems from the homeland on a daily basis opens the possibility of lethal conflict within the borders of the U.S., the direct opposite of the desired effect of the last thirteen years of conflict.
America’s adversaries have learned that the most effective way of attacking the U.S. strategic centre of gravity, the support of the American people, has been through attrition warfare. The more soldiers, airmen, sailors, and marines who appear on television or ‘come home in a body bag,’ the lower support for action overseas becomes. Drone warfare and the introduction of unmanned autonomous systems on the battlefield, be they supply trucks or tanks, will remove the danger to American service members on the battlefield. Adversaries will look for asymmetric ways to attack American service members, and probably the most effective place to do so will be in the United States.
America’s adversaries, although they will continue to seek devastating terrorist-type attacks such as those on 9/11, or even at the Boston Marathon, will look for ‘legitimate targets’ outside the air bases in Nevada from which drones are operated. They will also probably seek to target the headquarters of relevant contracting companies. The attacks will not occur on the bases, but rather when targets of opportunity present themselves. A drone operator stopping at the local 7-11 after a shift is but one example. Arguably, one can view lethal UAS operators as legitimate targets, whether they are in a combat zone or stopping at the local convenience store for milk on their way home from work.17
The targeting of individuals away from the battlefield is not new to warfare. In fact, it has been demonstrated in the past few years with the assassinations of nuclear scientists in Iran. There is no reason to think that our enemies would not adopt these types of tactics to target individuals in the homeland. This will differ from what we have seen from Al Qaida, in that states with which the U.S. engages in hostilities will also look to conduct such asymmetric attacks; they will not be limited to non-state actors.
In most cases, while the UAS pilot and weapons systems operator are not in the area of the target, there is usually someone on the ground confirming the target and giving the command to ‘shoot.’ This is especially true for ‘high value’ human targets. However, attacking the operators of the UAS, no matter where they are located, via either cyber or kinetic means, is a ‘game changer’ in the operational environment, and must be appreciated by the DoD enterprise. It may be possible to cripple the UAS fleet by taking one or two control centres ‘off-line’ through cyber or kinetic attack in the homeland.
Ethical Considerations
Soldiers, marines, and fighter pilots on the battlefield must often make instantaneous decisions with respect to the use of lethal force. They consider not just whether the person seen through the scope is an enemy, but also what taking that person’s life will mean for the local populace and the tribal leaders in the area, the effects upon the individual’s family, and whether taking that individual’s life will create more enemies in the future. It is difficult, if not impossible, to think that robots will consider all these factors, or at least have the capability to sort the relevant factors from the irrelevant.
Making the decision to go to war and kill other human beings easier is a significant risk in the decision to pursue lethal autonomous systems. When U.S. service members are immune to the dangers of combat, and there is no friendly human cost to war, the implication is that there is less debate over the decision. Decisions to use drones to kill Taliban fighters in Pakistan and militants in nations within the Horn of Africa, all sovereign states, were made without much debate, owing to the lack of physical risk that U.S. service members face when these operations are conducted. Historical evidence backs this up, as seen with the decision in 1998 to launch Tomahawk missiles towards Sudan and Afghanistan. This may change, however, if and when drones and other autonomous systems proliferate to other nations and non-state actors that may respond in kind to U.S. attacks.
In addition to easing the decision to go to war, lethal autonomous systems lack the human qualities of empathy and common sense. Soldiers working at an entry control point, or standing guard in a tower, can look at the face of an approaching human, or see a family of women and children in the back seat of a car, and decide whether to use lethal force based upon that common sense and empathy. A robotic system does not have this capability.18 Robots and other autonomous systems can be compared somewhat to ‘benign psychopaths,’ lacking a frame of reference with which to make moral or ethical decisions based upon consequences.
The concept of ethical and legal responsibility continually rears its head in the discussion of lethal autonomous systems. In addition to developing procedures for immediate responsibility when autonomous systems injure, maim, or kill the wrong person, or destroy the wrong facility, the Department of Defense will have to develop mechanisms to preclude humanitarian organizations from suing industry. Failure to consider these second- and third-order effects could produce a situation similar to the public backlash against Dow Chemical over napalm during the Vietnam era.
To ensure that the United States lives up to ethical concerns in the development of lethal autonomous systems, it should develop and publish emerging laws and ethics in parallel with each system. This means integrating legal and ethical frameworks into science and technology organizations.
‘Error-free war’ will always be a myth. Keeping a ‘human in the loop’ is not a panacea for all the ethical risks of lethal autonomous systems. In fact, human judgment can prove less reliable than technical indicators in the heat of battle. For instance, during the 1994 friendly-fire shoot-down of two U.S. Army Black Hawks in the no-fly zone over northern Iraq, the U.S. Air Force F-15s involved made a close visual pass of the targets before engaging them.19 Pilot error (and human error aboard the AWACS monitoring the situation) contributed to their misidentification as Iraqi military helicopters. Similarly, in 1988, the USS Vincennes engaged an Iranian airliner that it mistakenly believed was conducting an attack on the ship. The warship’s computers accurately indicated that the aircraft was ascending; human error led the crew to believe it was descending in an attack profile, and, in order to defend the ship, they shot down the aircraft.20 Finally, although the bombing of the Chinese Embassy in Belgrade during Operation Allied Force involved precision weaponry, satellite photos, and an efficient planning process, the human error of misidentifying the embassy as another facility led to the loss of innocent civilian lives.
Given these ethical considerations, it is important to examine the risks involved in the development of lethal autonomous systems, beginning with the questions below.
Ethical Questions
- Can the decision of life or death be left to a machine?
- Can autonomous systems evaluate proportionality?
- Can autonomous systems anticipate second or third order effects?
- Who is accountable or responsible when autonomous systems make the wrong decision?
- Can a legal system of accountability be devised to cover the use of autonomous systems?
- Does the use of autonomous systems increase the likelihood of a decision to use military force to resolve international disputes?
- Will the unavailability of a human military target increase the likelihood of attacks on civilians?
- Would the use of autonomous systems encourage retaliation, reprisals, and terrorist attacks on the U.S. homeland?
Risk to Leader Development
The use of autonomous and semi-autonomous systems presents a challenge to the development of leaders throughout our nation’s military. ADRP 6-0: Mission Command defines mission command as “…the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent to empower agile and adaptive leaders in the conduct of unified land operations.”21 Moreover, according to Marine Corps doctrine, overcoming or reducing the impacts of fog, friction, and chance on the battlefield requires trust in subordinates.22
Lethal autonomous systems risk the loss of experienced judgment in unified land operations, contradicting the concept of mission command. Over the past thirteen years of conflict, commanders have given junior officers and NCOs a level of trust and autonomy never before experienced in the U.S. military. What these junior leaders have learned over the past decade will influence them as they grow into senior leaders. In a world of autonomous and semi-autonomous systems, the need for lower-level operators who make life-and-death decisions shrinks. Removing a level of operators who live in the world of tactics may remove a cohort that needs that experience when its members become operational and strategic leaders.23 We cannot create twenty-seven-year-old captains with the “experience, maturity and wisdom” of forty-five-year-old colonels if those captains have never made the decisions or experienced the combat that forms such maturity and wisdom.
The DoD Unmanned Systems Roadmap is very direct about the challenge posed by the continued automation of warfare and its impact upon leader development:
“The automating of actual operation/fighting of platforms will decrease the need for people to crew them, while the personnel needed to simply maintain the vehicles are likely to increase. This has the potential to radically alter the ‘tooth to tail’ ratio in combat forces to heavily favor support personnel vice combat personnel. At the same time, the need for middle-to-senior combatant leaders and decision makers will not change, since they will know the tactics and strategy necessary to operate and direct the autonomous systems. The challenge will be developing middle to senior combatant leaders needed in an environment allowing fewer junior leaders.”24
If the United States is to adopt a wide range of lethal autonomous systems, the Army and the Joint Force will have to make deep, long-lasting changes across doctrine, organization, training, materiel, leadership and education, personnel, and facilities (DOTMLPF), and incorporate these systems throughout military culture. From the time new soldiers enter initial training, they must be prepared to accept that autonomous and unmanned systems will be a part of their arsenal. Autonomous systems may be medical devices designed to assist in lifesaving on the battlefield, or lethal systems designed to kill an adversary that the human cannot sense on the battlefield. Whether lethal or lifesaving, autonomous systems will define the battlefield of the future. Despite the many risks we face in continuing the pursuit of lethal autonomous systems, there are a number of distinct advantages to the use of such systems.
Advantages of Lethal Autonomous Systems
There are clear advantages to the continued pursuit of lethal autonomous and unmanned systems. The ability to project power and lethality increases with their use. From a personnel management standpoint, unmanned and autonomous systems do not carry the same psychological baggage as humans: robots do not return from deployments suffering from PTSD, and severely damaged robots can be scrapped rather than requiring care from the Veterans Administration. Unmanned and autonomous systems also increase both the range at which U.S. forces can operate and the endurance of the systems in use; unmanned ISR systems can remain on station while pilots change out after eight hours of operational time. Lethal autonomous robots and other systems have clear, distinct advantages in the future of warfare. However, the United States must take ethical considerations into account as policies develop and acquisition strategy moves forward.
In complex war, the objective is not just to win, but to do so while minimizing our losses and collateral damage, among other goals. Lethal autonomous systems allow us to achieve a reduction, or in the extreme, an elimination of losses, but at what cost? Autonomous systems may prove more adept at distinguishing between combatants and non-combatants on the battlefield, thereby enabling better and more precise targeting. Human decision making may err due to fear, anger, or fatigue, which narrows the gap between human and autonomous performance. However, as adept as they may become, robots and autonomous systems will still lack the capability to discern, decide, and understand beyond the engagement at hand.
The use of lethal autonomous systems in battlefield environments in which there are few, if any, civilians further reduces the risk and highlights the advantages of such systems. Autonomous systems to counter adversary submarines, or anti-missile systems, are examples in which the risk to civilians is slight to none. As stated earlier in this article, war is fundamentally a human endeavour. Commanders often have to make hard decisions involving life and death in war to achieve objectives. Lethal autonomous unmanned systems provide them with another series of options in situations where they must select the lesser of evils.
Conclusion
Countries around the world are continuing to invest in robotics and autonomous systems for a variety of reasons. In Japan, the use of autonomous robotics in the medical field has assisted with caring for an increasingly elderly population. In fact, 30 percent of all commercial robots in the world are in Japan.25 These robots are strictly non-lethal in nature and designed to assist the population as a whole, not the Japanese military. By contrast, the United States has invested in both non-lethal and lethal autonomous systems.
If America’s use of drones over the past thirteen years of conflict is any indication, this may become an irreversible trend. The international community, through the use of legal challenges, human rights conventions, and international treaties, will continue to restrict the development and use of lethal autonomous systems on the battlefield.
It will be prudent to continue investment in non-lethal autonomous systems, such as ISR, mine clearing, and CBRN reconnaissance. In addition to saving lives and lowering the risk to service members on the battlefield, making these types of systems unmanned will free personnel for positions where a human interface is paramount, such as civil affairs.
To presume that no one will eventually arm autonomous robots simply because the United States chooses not to do so is naïve, as is the belief that any international convention to prohibit their creation and use will have a binding effect upon those nations that routinely ignore or subvert such treaties. Unfortunately, even without autonomous lethal U.S. systems, non-U.S.-made and -controlled lethal autonomous robots will likely be a condition of the battlefield under which U.S. troops will operate. The U.S. military should anticipate that other cultures and threats will have no problem crossing this threshold, and we should plan how to counter them now, not later.
This article represents the author’s views and not necessarily the views of the U.S. Army or Department of Defense.
Notes
- Peter Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (London: Penguin Group, 2009), pp. 179, 203.
- “Executive Orders,” at Archives.gov.
- Shane M. Riza, Killing Without Heart: Limits on Robotic Warfare in an Age of Persistent Conflict (Washington: Potomac Books, 2013), p. 37.
- U.S. Department of Defense Directive 3000.09, “Autonomy in Weapon Systems,” 21 November 2012.
- Ibid.
- “Disarmament | Signatories and Ratifying States,” at Unog.ch.
- Bonnie Docherty, “The Time is Now: A Historical Argument for a Cluster Munitions Convention,” in Harvard Human Rights Journal 20, 2007, p. 53.
- Christof Heyns, “Report of the Special Rapporteur on extrajudicial, summary, or arbitrary executions,” Human Rights Council, 9 April 2013.
- For more information on the Campaign to Stop Killer Robots, see stopkillerrobots.org.
- Report of the Secretary-General on the protection of civilians in armed conflict. 22 November 2013.
- http://www.publications.parliament.uk/pa/ld201213/ldhansrd/text/130326-0001.htm#st_14.
- “Killer Robots: UK Government Policy on Fully Autonomous Weapons,” Article 36, April 2013.
- Kenneth Anderson and Matthew Waxman, “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can,” A National Security and Law Essay, Stanford University, 2013.
- Ty McCormick, “Lethal Autonomy,” in Foreign Policy, January/February 2014, p. 18.
- DoD Unmanned Systems Integrated Roadmap 2013-2038.
- Robert O. Work and Shawn Brimley, 20YY: Preparing for War in the Robotic Age (Center for a New American Security, January 2014).
- If today a member of the Taliban were to ambush a drone operator on a Nevada highway, could he make a case in court that he is a legitimate actor on the battlefield and should be considered a POW with all the rights and protections that come with that status?
- Consider that the average two-year-old can tell the difference between a red apple and a red ball instantaneously; a computer or robotic system cannot do so now, nor will it be able to in the foreseeable future.
- Aircraft Accident Investigation Board Report, U.S. Army UH-60 Black Hawk Helicopters 87-26000 and 88-26060, Vol. 1 (Executive Summary), p. 3, 27 May 1994, available at http://www.dod.mil/pubs/foi/Reading_Room/Other/973-1.pdf.
- Formal Investigation into the Circumstances Surrounding the Downing of Iran Air Flight 655 on 3 July 1988, 19 August 1988, pp. 37, 42–45, available at http://www.dtic.mil/dtic/tr/fulltext/u2/a203577.pdf. The report concluded that “[s]tress, task fixation, and unconscious distortion of data may have played a major role in this incident.” Ibid., p. 45. It also noted “scenario fulfillment,” that is, the distortion of “dataflow in an unconscious attempt to make available evidence fit a preconceived scenario.”
- ADRP 6-0, Mission Command, May 2012, p. 1-1.
- MCWP 6-11, Leading Marines, 27 November 2002.
- Shane M. Riza, Killing Without Heart: Limits on Robotic Warfare in an Age of Persistent Conflict (Washington: Potomac Books, 2013), p. 104.
- DOD Unmanned Systems Roadmap, p. 39.
- Michio Kaku, The Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (New York: Random House, 2011).