AUTONOMOUS WEAPON SYSTEM: A MORAL DILEMMA
1. All of us have grown up with Arnold Schwarzenegger's famous line from The Terminator, "I'll be back": a humanoid machine that had developed the human tendency of hatred. Then there was the Will Smith film I, Robot, in which a robot develops murderous tendencies. Yesterday's Hollywood science fiction is set to become tomorrow's reality, and with it come the challenges of managing this latest precipice of disruption in human history. Artificial Intelligence (AI) will impact the military domain just as earlier disruption points did: the agricultural revolution, the industrial revolution, the nuclear age and the information age each triggered a revolution in military affairs (RMA). AI is the new Rubicon to be crossed, and the trepidation, which has many dimensions and stakeholders, is whether AI machines should be allowed to execute military missions, especially when human life is at stake, together with the concomitant moral dilemma of whether machines can play the role of judge, jury and executioner with no human interface. The US drone strike in Afghanistan, and the subsequent apology from a nation state that is anything but apologetic, is a harbinger of a future in which mistakes by machines may lead to apologies by the humans who designed them.
2. Given the complexity of the matter, a working definition of AI is needed. A general definition of AI is the capability of a computer system to perform tasks that normally require human intelligence, such as visual perception, speech recognition and decision-making. This definition is, however, inherently oversimplified: by it, even an environmental control unit is intelligent, because it can perceive and adjust the temperature. That is substantially different from AI whereby a weapon system completes the OODA (observe, orient, decide, act) cycle without meaningful human control, which is the common assumption for autonomous weapons. This paper presents a framework explaining the strengths and weaknesses of AI, what the future likely holds for the introduction of AI into the military sector, and the complications this raises for policy on autonomous weapon systems (AWS).
Thinking Machines
3. To understand the nuances of AI, it is sine qua non to understand the difference between an automated and an autonomous system. An automated system is one in which a computer reasons by a clear, binary, rule-based algorithmic structure, and does so deterministically, ensuring that for a specific input the output generated will always be the same (except if something fails). An autonomous system is one that reasons probabilistically about a set of inputs, hypothesizing the best possible course of action given the sensor data it receives.
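The distinction can be made concrete in a few lines of code. The following Python fragment is purely illustrative: the rules, thresholds, probabilities and sensor values are invented for this example and drawn from no fielded system, but it shows how a deterministic, rule-based controller differs from one that reasons probabilistically over noisy inputs.

```python
# Illustrative sketch only: the rules, thresholds and sensor values are
# invented for this example, not drawn from any fielded system.

def automated_controller(temperature: float) -> str:
    """Deterministic, rule-based: the same input always yields the same output."""
    if temperature > 25.0:
        return "cooling_on"
    return "cooling_off"

def autonomous_controller(sensor_readings: list[float]) -> str:
    """Probabilistic: hypothesizes the best course of action from noisy sensor data."""
    # Estimate the probability that the room is 'hot' from imperfect sensors.
    votes = sum(1 for r in sensor_readings if r > 25.0)
    p_hot = votes / len(sensor_readings)
    # Act on the weight of evidence rather than a single fixed rule.
    return "cooling_on" if p_hot > 0.5 else "cooling_off"

print(automated_controller(26.0))                  # always "cooling_on"
print(autonomous_controller([24.8, 26.1, 25.9]))   # depends on the evidence
```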
4. Human cognitive intelligence follows a sequence known as the perception-cognition-action loop: individuals perceive something in the world around them, think about what to do, and then, once they have weighed up the options, make a decision to act. AI is programmed to respond similarly, in that a computer senses the world around it, processes the incoming information through optimization and verification algorithms, and makes a choice of action in a fashion similar to that of humans. Every autonomous system interacting with a dynamic environment must construct a world model and continually update it from tactile, visual and electronic sensors before it can make decisions. The fidelity of the world model and the timeliness of its updates are the keys to an effective autonomous system.
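A minimal sketch of such a perception-cognition-action loop, with a world model updated from sensor input, might look as follows. All names here (WorldModel, read_sensors, choose_action, execute) are invented placeholders, not references to any real architecture.

```python
# Illustrative perception-cognition-action loop; every name is a placeholder.
import time

class WorldModel:
    """Holds the system's current picture of the world."""
    def __init__(self):
        self.state: dict = {}

    def update(self, observations: dict) -> None:
        # Fuse new tactile/visual/electronic observations into the model;
        # fidelity and timeliness of these updates drive effectiveness.
        self.state.update(observations)

def read_sensors() -> dict:
    """Perception: gather (placeholder) observations from the environment."""
    return {"last_seen": time.time()}

def choose_action(state: dict) -> str:
    """Cognition: weigh options against the current world model."""
    return "proceed" if state else "hold"

def execute(action: str) -> None:
    """Action: carry out the chosen option."""
    print(f"executing: {action}")

model = WorldModel()
for _ in range(3):                       # runs continuously in a real system
    model.update(read_sensors())         # perceive and update the world model
    execute(choose_action(model.state))  # decide, then act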
5. This understanding of autonomous systems brings us to the most pertinent conundrum: to what extent should humans be involved? Skill-based human behaviour is a psycho-motor response that becomes automatic after training. Such responses are inextricably coupled with the perception-cognition-action loop, meaning that actions must typically come within seconds of a stimulus. The landing of US Airways Flight 1549 on the Hudson River in 2009 is an example of knowledge-based behaviour during an emergency; no autopilot system had the capability to respond in such a manner. Conversely, the crash of Lion Air Flight 610, whose automated flight-control system took all 189 hapless people on board to a watery grave, is the antithesis of the logic of autonomous systems.
6. A key challenge for an autonomous weapon system would be its ability to resolve ambiguity in order to achieve acceptable outcomes. It is conceivable that an autonomous weapon system could be given a mission to hit a static target on a military installation with a high probability of success; however, the same can be done by missiles. The dilemma arises when targeting an individual: discerning the target from real-time imagery with positive identification, and with assurance that the release of a weapon will not lead to collateral damage. The power of human induction, i.e. the ability to form general rules from specific pieces of information, is critical in a situation that requires both visual and moral judgment and reasoning. For humans, the induction that drives such judgments is necessary to combat uncertainty. Computer algorithms, especially data-driven ones of the kind that typically fall under the category of AI, are inherently brittle: they cannot generalize, and can only consider the quantifiable variables identified early in the design stages when the algorithms are originally coded. Replicating the intangible concepts of intuition, knowledge-based reasoning and true expertise is a challenge for computers. One critical limitation of machine learning is that, as a data-driven approach, it fundamentally relies on the quality of the underlying data and thus can be very brittle. There is no guarantee that a computer leveraging machine-learning algorithms can detect a pattern or event never previously encountered, or even a scenario that is only slightly different from its training data. As uncertainty grows, these tools become less useful.
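One common engineering response to this brittleness is to make the system abstain whenever its confidence falls below a threshold and refer the decision to a human. The sketch below is illustrative only; the labels, scores and the 0.9 threshold are invented for this example.

```python
# Illustrative only: a data-driven model is trusted solely within the
# patterns it was trained on; ambiguity is referred back to a human.

def classify_with_abstention(confidence_scores: dict[str, float],
                             threshold: float = 0.9) -> str:
    """Return a label only when confidence is high; otherwise defer to a human."""
    label, confidence = max(confidence_scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # A scene only slightly outside the training data can drop
        # confidence sharply; the decision goes to an operator.
        return "ABSTAIN: refer to human operator"
    return label

print(classify_with_abstention({"vehicle": 0.97, "building": 0.03}))  # "vehicle"
print(classify_with_abstention({"vehicle": 0.55, "building": 0.45}))  # abstains
```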
The Big Picture
7. The future of AI in military systems is directly tied to the ability of engineers to design autonomous systems that demonstrate independent capacity for knowledge- and expert-based reasoning. Systems presently operate at an automated rather than an autonomous level, and do not reason on the fly as true autonomous systems would. There are significant global efforts in the research and development (R&D) of autonomous systems. Incremental progress in military system development is occurring in many countries across air, ground, surface and underwater vehicles, with varying degrees of success. Several types of autonomous helicopter that can be directed with a smartphone by a soldier in the field are in development in the US, in Europe and in China. Autonomous ground vehicles such as tanks and transport vehicles are in development worldwide, as are autonomous underwater vehicles. In almost all cases, however, the agencies developing these technologies are struggling to make the leap from development to operational implementation. The Uran-9, an autonomous armed unmanned ground vehicle, was tested by Russia in Syria but did not produce the desired results.
8. There are many reasons for the lack of success in bringing these technologies to maturity, including cost and unforeseen technical issues, but organizational and cultural barriers are equally problematic. The US has, for instance, struggled to bring autonomous UAVs to operational status, primarily as a result of organizational in-fighting and prioritization in favour of manned aircraft. For example, despite the fact that the F-22 aircraft has experienced significant technical problems and has flown little in combat, the US Air Force is considering restarting the F-22 production line, in itself an extremely costly option, as opposed to investing in more drone acquisitions. Beyond the production line, moreover, the hourly operational cost of the F-22 is $68,362, as compared with the Predator's $3,679; the latter can perform most of the same core functions of an F-22 save for air-to-air combat missions, which the F-22 itself could not previously perform owing to technical problems.
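Taken at face value, the figures quoted above imply that one F-22 flight hour costs roughly as much as eighteen Predator flight hours:

\[ \frac{\$68{,}362}{\$3{,}679} \approx 18.6 \]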
9. For many in the military, autonomous weapon systems are acceptable only in a support role; they threaten the status quo if allowed to take the most prestigious, 'tip-of-the-spear' assignments. There are, however, other organizational issues limiting the operational implementation of autonomous weapon systems. The military's inability to advance its autonomy programmes is evidently linked to the growth in autonomous systems in the commercial market. Figure 1 depicts R&D spending in the three-year period 2014-16 in three key sectors: aerospace and defence; automotive; and information and communication. These sectors are core to the development of autonomous systems, and so tracking spending there gives insight into the speed and extent of innovation. The aerospace and defence sector is responsible for the bulk of the development of autonomous weapon systems. However, as shown in Figure 1, its R&D spending is far below that of the other two sectors (only around 15 per cent of that of the global information and communication sector).
[Figure 1: R&D spending, 2014-16, in the aerospace and defence, automotive, and information and communication sectors (US$ billion)]
10. The imbalance in technology development will introduce unforeseen and disruptive dynamics for military operations. For example, if defence companies and governments continue down a path of relative AI illiteracy, could this enable a power shift in which critical AI services are leased from the likes of Google, Amazon or Facebook? Google has long distanced itself from military contracts, while also acquiring highly advanced robotics companies and letting those companies' pre-existing military contracts expire. If militaries are relegated to buying robots and AI services such as image analysis from commercial off-the-shelf suppliers, this would undoubtedly affect military readiness in both the short and the long term. Although it is not in doubt that AI is going to be part of the future of militaries around the world, the landscape is changing quickly and in potentially disruptive ways. AI is advancing, but given the current struggle to imbue computers with true knowledge- and expert-based behaviours, as well as limitations in perception sensors, it will be many years before AI can approximate human intelligence in high-uncertainty settings, as epitomized by the fog of war. Given the present inability of AI to reason in such high-stakes settings, it is understandable that many people want to ban autonomous weapons, but the complexity of the field means that any prohibition must be carefully scoped. Fundamentally, for instance, does the term 'autonomous weapon' describe the actual weapon (i.e. a missile on a vector) or the vector itself? Autonomous guidance systems for missiles will likely be strikingly similar to those that deliver packages, so banning one could affect the other.
Laws and Ethics
11. In international humanitarian law, notions of humanity and public conscience are drawn from the Martens Clause, a provision that first appeared in The Hague Conventions of 1899 and 1907, was later incorporated in the 1977 Additional Protocols to the Geneva Conventions, and is considered customary law. It provides that, in cases not covered by existing treaties, civilians and combatants remain under the protection and authority of the principles of humanity and the dictates of the public conscience. The Martens Clause prevents the assumption that anything that is not explicitly prohibited by relevant treaties is therefore permitted — it is a safety net for humanity.
12. The provision is recognized as being relevant to assessing the use of autonomous weapon systems. Ethical questions about autonomous weapon systems have sometimes been viewed as secondary concerns. Many States have tended to be more comfortable discussing whether new weapons can be developed and used in compliance with international law, particularly international humanitarian law, on the assumption that the primary factors limiting the development and use of autonomous weapon systems are legal and technical. However, for many experts and observers, and for some States, ethics, the "moral principles that govern a person's behaviour or the conducting of an activity", are at the heart of what autonomous weapon systems mean for the human conduct of warfare, and for the use of force more broadly. It is precisely anxiety about the loss of human control over this conduct that goes beyond questions of the compatibility of autonomous weapon systems with our laws to encompass fundamental questions of acceptability to our values. Ethical concerns over delegating life-and-death decisions, and reflections on the importance of the Martens Clause, have been raised in different quarters: by a UN Special Rapporteur at the Human Rights Council, the ICRC, the United Nations Institute for Disarmament Research (UNIDIR), academics and think-tanks, and, increasingly, the scientific and technical communities.
13. Discussions on autonomous weapon systems have generally acknowledged the necessity for some degree of human control over weapons and the use of force, whether for legal, ethical or military operational reasons. It is clear, however, that the points at which human control is located in the development and deployment, and exercised in the use, of a weapon with autonomy in the critical functions of selecting and attacking targets may be central to determining whether this control is "meaningful", "effective" or "appropriate" from an ethical perspective (and a legal one). A prominent aspect of the ethical debate has been a focus on "lethal autonomy" or "killer robots", implying weapon systems designed to kill or injure humans, rather than autonomous weapon systems that destroy or damage objects, which are already employed to a limited extent. This is despite the fact that some anti-materiel weapons can also result in the death of humans, either directly (humans inside objects such as buildings, vehicles, ships and aircraft) or indirectly (humans in proximity to objects), and that even the use of non-kinetic weapons, such as cyber weapons, can result in kinetic effects and in human casualties. Of course, autonomy in the critical functions of selecting and attacking targets is a feature that could, in theory, be applied to any weapon system.
14. Ethical discussions have also transcended the context-dependent legal bounds of international humanitarian law and international human rights law. Ethical concerns, relevant in all circumstances, have been at the centre of warnings by UN Special Rapporteur Christof Heyns that "allowing LARs [Lethal Autonomous Robots] to kill people may denigrate the value of life itself." Autonomous weapon systems raise several universal ethical concerns:-
(a) Removing human agency from decisions to kill, injure and destroy, leading to a responsibility gap in which humans cannot uphold their moral responsibility.
(b) Undermining the human dignity of those combatants who are targeted, and of civilians who are put at risk of death and injury as a consequence of attacks on legitimate military targets.
(c) Further increasing human distancing, physical and psychological, from the battlefield, enhancing existing asymmetries and making the use of violence easier or less controlled.
15. Responsibility and accountability for decisions to use force cannot be transferred to a machine or a computer program. These are human responsibilities, legal and ethical, which require human agency in the decision-making process. A closely related ethical concern raised by autonomous weapon systems is therefore the risk of erosion of responsibility and accountability for these decisions. One way to address this concern is to assign responsibility to the operator or commander who authorizes the activation of the autonomous weapon system (or to programmers and manufacturers, in the case of malfunction). This addresses the issue of legal responsibility to some extent, simply by applying a process for holding an individual accountable for the consequences of their actions, and it is how militaries typically address responsibility for operations using existing weapon systems, including, presumably, those with autonomy in their critical functions.
16. For many considering the implications of autonomous weapon systems, the key change in recent years, and a fundamental challenge for predictability, is the further development of artificial intelligence (AI), especially AI algorithms that incorporate machine learning. In general, machine-learning systems can only be understood at a particular moment in time. The "behaviour" of the learning algorithm is determined not only by its initial programming (carried out by a human) but also by the process through which the algorithm itself "learns" and develops by "experience". This can be offline learning by training (before deployment) and/or online learning by experience (after deployment) while carrying out a task, as the sketch below illustrates. Deep learning, where an algorithm develops by learning data patterns rather than learning a specific task, further complicates the ability to understand and predict how the algorithm will function once deployed. It can also add to the problem of biases introduced into an algorithm through limitations in the data sets used to "train" it. Or a learning system may simply have learned in a way that was not intended by the developer.
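A toy example makes the point about online learning concrete. In the sketch below (illustrative only; the single-parameter model, learning rate and data are invented), the system's output for the same input changes after field "experience", so the deployed system is no longer the system that was tested before deployment.

```python
# Illustrative sketch: an online-learning estimator whose behaviour drifts
# with experience after deployment. Learning rate and data are invented.

class OnlineThreatEstimator:
    def __init__(self, weight: float = 0.0, lr: float = 0.1):
        self.weight = weight   # single parameter, updated in the field
        self.lr = lr

    def predict(self, feature: float) -> float:
        return self.weight * feature

    def learn(self, feature: float, observed: float) -> None:
        # Online gradient step: each field observation changes the model,
        # so tomorrow's behaviour differs from today's.
        error = self.predict(feature) - observed
        self.weight -= self.lr * error * feature

model = OnlineThreatEstimator()
print(model.predict(1.0))        # behaviour at deployment: 0.0
for feature, observed in [(1.0, 0.8), (1.0, 0.9), (1.0, 0.7)]:
    model.learn(feature, observed)
print(model.predict(1.0))        # behaviour after field experience differs
```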
17. The task for which an autonomous weapon system is used, and the environment in which it is used, can also be significant for ethical assessments. In situations where there are fewer risks to civilians or civilian objects, some have argued, there may also be fewer ethical concerns raised by autonomy in terms of reduced human agency. For example, it has been suggested that autonomous deep-sea anti-submarine warfare and autonomous close-in air defence at sea may be more ethically acceptable, owing to the relatively uncluttered and simple nature of the operating environments and the reduced numbers of civilians and civilian objects compared with populated areas on the coast or inland; the consequences there are potentially more predictable, and the risks lower. A distinction has also been drawn between a "defensive" weapon system, such as a missile or counter-rocket, artillery and mortar defence weapon, or a "sentry" weapon guarding a border, and an "offensive" system, which actively searches for targets. However, others caution that the distinction between "offensive" and "defensive" is not clear operationally (and, legally, the same rules apply to the use of force or conduct of hostilities), and that a weapon system introduced for a "defensive" task may later be used in an "offensive" role.
18. The discourse so far has focused on conventional (physical/robotic) systems that interact in three-dimensional reality with other machines or humans. However, there are additional ways to weaponize AI. Software with autonomous capacities can act and interact entirely in cyberspace. These so-called autonomous intelligent agents are of tremendous military interest for 'conventional' military operations: autonomous intelligent agents acting in cyberspace can support the decision-making process, identify an adversary's vulnerabilities and enable an ever-greater speed of response. Hence, the use of autonomy for intangible cyber operations (defensive or offensive) could be decisive, and much more economical, in current and future warfare. Theoretically, it may be possible to create autonomous systems that control processes with the core aim of harming humans, e.g. the malicious use of biotechnology, 5G radiation or products of molecular nanotechnology. Current examples of such linkages do not exist. However, it is crucial to raise this concern early enough to trigger both research in this field and a comprehensive debate on the peace and security implications of AI and other emerging technologies.
Conclusion
19. Ethics, humanity and the dictates of the public conscience are at the heart of the debate about the acceptability of autonomous weapon systems. From the ICRC's perspective, ethics provides another avenue, alongside legal assessments and technical considerations, to help determine the necessary type and degree of human control that must be retained over weapon systems and the use of force, and to elucidate where States must establish limits on autonomy in weapon systems. Considerations of humanity and the public conscience provide ethical guidance for these discussions, and the Martens Clause, a safety net for humanity, connects them to legal assessments. These ethical considerations go beyond whether autonomous weapon systems are compatible with our laws to include fundamental questions of whether they are acceptable to our values. Such debates necessarily require the engagement of all constituents of society.