Recognizing the Dangers Posed by Lethal Autonomous Weapons

“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Weapons have always been instruments of destruction and disaster. Humans invented and used them to enforce law and order through violence and to fight wars more effectively. Lethal autonomous weapons (LAWs), by contrast, are designed by humans to operate without human supervision or direction. The history of lethal autonomous weapons dates back to Leonardo da Vinci’s sketches of a mechanical soldier resembling a human fighter. Then, in 1898, Nikola Tesla demonstrated remotely controlled devices. Fast-forward to the twenty-first century, and the United States Air Force announced the integration of gunships with completely autonomous capabilities. The fear of giving machines the decision to take human life has been a concern ever since (Klincewicz, 2015). Today, lethal autonomous weapons include human-shaped robots, droids, ships, and drones built from cameras, sensors, processors, and algorithms. The sensors observe the environment and pass their data to the processors, which initiate an action at the direction of the algorithms rather than of a human operator (Etzioni, 2018).
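
To make the sensor-processor-algorithm pipeline just described concrete, the following is a minimal illustrative sketch in Python of such a sense-decide-act loop. Every name, threshold, and rule here is hypothetical, invented for illustration; it is not modeled on any actual weapons system.

```python
from dataclasses import dataclass

# Hypothetical reading produced by an onboard sensor: a detected object
# with a range and a classifier's confidence that it is a valid target.
@dataclass
class SensorReading:
    object_id: str
    range_m: float
    target_confidence: float  # 0.0 (certainly not a target) .. 1.0 (certain target)

def select_action(reading: SensorReading, threshold: float = 0.9) -> str:
    """Choose an action from a sensor reading alone.

    The decision is made entirely by this fixed rule; no human reviews
    or authorizes the action before it is carried out.
    """
    return "engage" if reading.target_confidence >= threshold else "hold"

# The sense-decide-act loop: data flows from the sensors to the processor,
# and the algorithm's output drives the actuators directly.
for reading in (SensorReading("A", 120.0, 0.95), SensorReading("B", 80.0, 0.40)):
    print(reading.object_id, select_action(reading))
```

The point of the sketch is structural: once this loop is closed, the human appears nowhere between observation and action, which is precisely what distinguishes a LAW from a remotely operated weapon.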

Due to such risks to human life, since 2013 several countries have sought to ban lethal autonomous weapons. Notably, since 2013 no nation has openly supported the creation or distribution of these weapons. Thousands of protesters and campaigners have joined campaigns organized by Human Rights Watch to prohibit lethal autonomous weapons (Sauer, 2016). This paper explains the dangers associated with the use of lethal autonomous weapons in three parts: the lethal autonomous weapons timeline, the concerns they raise, and their finality.

Timeline

While many assume lethal autonomous weapons are new and modern inventions, these instruments have existed in theory and practice for hundreds of years. Earlier weapons of this kind were not advanced, and their forms and types were limited; they resembled human soldiers, planes, and ships as prototypes, not working machines (McCormick, 2014). Today, when the subject of lethal autonomous weapons is raised, people imagine the Terminator. That is not the case: so far, these machines are far more modest. Autonomous drones, for instance, are the most advanced form of LAWs we can build thus far; we still do not have a fully unsupervised autonomous weapons system. In the future, however, with autonomous weapons called the third revolution in warfare, it is feared that these weapons will come to dominate humanity. If they are not regulated, “in the future, humans would be reduced to making only initial policy decisions about war, and they would have mere symbolic authority over automated systems.” This part of the paper describes the history of lethal autonomous weapons and present-day LAW models, along with their technical capabilities (Etzioni, 2018).

History

The history of autonomous weapons dates back centuries. The early models described below are the predecessors of LAWs. The earliest is an automated knight found in Leonardo da Vinci’s designs from 1495. Though the design was imperfect and its power source ambiguous, the machine closely resembled a human soldier. After da Vinci’s designs, the most remarkable predecessor of LAWs came in 1898, when Nikola Tesla built a remotely controlled vehicle, a radio-controlled boat. He tried to sell radio-controlled weapons to the US military but was not taken seriously. During the world wars, however, remotely operated weapons were refined and put to use. Both the German and American militaries deployed remotely controlled explosive vehicles, and the Germans also used remote-controlled boats, which helped them double their patrol area.

After the world wars, their era of common use, lethal autonomous weapons became more intelligent and more prominent. They took many forms, such as computer-guided rockets and missiles, the laser-guided projectiles used in the Vietnam War, and the Navstar satellite navigation system. Then, in 1994, the United States contracted with a company called General Atomics to develop the Predator drone. These drones were designed for surveillance missions and were later armed with missiles. By 2002, drones had become the United States’ weapon of choice in counter-terrorism, and the era of lethal drones began.

Figure: Leonardo da Vinci’s “Mechanical Knight” design.

Presently

Lethal drones fly in war zones constantly, but they are not completely automated: human operators still monitor them and retain a degree of control. Systems that no longer need human supervision do exist, however, and are already deployed in some areas. To illustrate, Samsung Techwin’s SGR-A1 sentry gun has been installed along the border between North Korea and South Korea since 2006. These weapons are fully autonomous and are able to track and fire on targets (McCormick, 2014).

Moreover, in 2009 the United States announced its intention to equip its aircraft with “fully autonomous capabilities,” which would have decisively changed the role of humans in air warfare. Concerns about lethal autonomous weapons grew accordingly. Researchers at Cambridge University warned that granting aircraft full autonomy would endanger our lives. Since 2012, several protests have taken place under one slogan: “Stop Killer Robots.” Human Rights Watch has warned that “A number of countries, most notably the United States, are coming close to producing the technology to make complete autonomy for robots a reality” (McCormick, 2014).

Lethal Autonomous Weapons General Concerns

Artificial intelligence researchers have an abundance of reasons why lethal autonomous weapons should be banned. Placing the decision to take a life in a robot’s hands raises a host of concerns (Horowitz, 2016). This part of the paper categorizes those concerns into three main groups: ethical concerns, legal concerns, and security concerns.

Ethical Concerns

When it comes to ethics, one thought arises first: LAWs’ inability to differentiate combatants from civilians, and their lack of understanding of the rules of war. Granting them the authority to decide over someone’s life despite these shortcomings would disrupt the whole theory of war. These weapons run on algorithms programmed for events that were foreseen, but the world is not so predictable: when they meet situations outside their programming, they become uncontrollable. Their lack of empathy will surely put millions of lives at risk. A human soldier, for instance, will let a combatant live if he is surrendering; a LAW will not. Nor will LAWs show mercy to enemy prisoners, which likewise violates the rules of war (Horowitz, 2016).
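
To see how this brittleness arises, consider a deliberately simplified sketch, with all categories and rules hypothetical and invented for illustration. A rule table covers only the cases its designers foresaw; a surrendering combatant, never having been enumerated, falls through to whatever default the code happens to encode.

```python
# Purely illustrative: a rule table covering only anticipated cases.
# Any situation absent from the table falls through to a default the
# designers fixed in advance, however inappropriate in context.
RULES = {
    "armed_combatant": "engage",
    "civilian": "hold",
}

def decide(observed_category: str) -> str:
    # "surrendering_combatant" was never anticipated, so it is missing
    # from the table; the lookup silently falls back to the default.
    return RULES.get(observed_category, "engage")

print(decide("armed_combatant"))         # engage (anticipated case)
print(decide("civilian"))                # hold   (anticipated case)
print(decide("surrendering_combatant"))  # engage (unanticipated, and wrong)
```

A human soldier improvises in such a case; the program can only execute the rules it was given, which is the core of the concern above.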

Legal Concerns

Lethal autonomous weapons are not supervised by any human operator. When an unexpected and unacceptable incident occurs, or when they commit war crimes, who will be held accountable? Under the general rules of war, when a soldier commits a crime in the war zone, the commander is held accountable. Applying the same standard to LAWs would make the commander responsible for the crimes of a robotic soldier. These weapons may malfunction, however, and holding a commander accountable for a robot’s defects is neither ethical nor legal (Roff, 2014). As long as the question of accountability remains open, the military will not use these weapons except in emergencies. For example, a legal issue arose in 2003: “a U.S. Patriot missile battery shot down allied aircraft, no one was personally held accountable for the system malfunction.” Given these legal issues and questions of moral accountability, humans might no longer feel accountable for the deaths of innocent civilians caused by LAWs if they are not banned. Unless these concerns are resolved, therefore, lethal autonomous weapons pose a legal threat (Horowitz, 2016).

Security Concerns

LAWs pose a formidable threat when it comes to security: they could be hacked and used by criminals such as terrorists. Although governments insist that examination, validation, and verification of these weapons are mandatory, the threat still stands, for they are the third revolution in warfare. Their potential to become the most powerful weapon of all, combined with a high probability of being hacked, could start a global autonomous weapons arms race. It would then not be long before these weapons entered the black market and were used by despots, terrorists, and bandits. Moreover, their capacity for assassination and their immense power could be turned against particular races and ethnicities in biased wars; they could become the instruments of the next genocides (Klincewicz, 2015).

Lethal Autonomous Weapons Finality

The use of lethal autonomous weapons might be advantageous in a few emergencies, but their threats and the ethical concerns they raise outweigh their advantages. First, allowing an algorithm to take a life is unethical and morally unacceptable (Horowitz, 2016); our dignity is degraded if a machine chooses to kill an innocent person and we are unable to stop it. Second, because they cannot properly distinguish between an enemy and a civilian, it is legally and morally corrupt not to forbid them. Finally, LAWs themselves cannot be held responsible for the crimes they commit. They may malfunction and murder innocent individuals, and holding the human commander accountable for that is unfair when we can ban the use of LAWs altogether before such an incident occurs (Roff, 2014).

Additionally, the security concerns lethal autonomous weapons raise are even greater than the ethical and legal ones. First, this technology would open an era of artificial intelligence arms races, jeopardizing world peace and global security. Second, once militaries adopt LAWs, wars will break out more readily: possessing no emotions or fear, these weapons respond to violence instantly. Finally, their self-learning, or machine learning, ability is a further threat, for they can learn and act far more rapidly than human beings. These weapons should be banned, for they are a threat to humanity (Klincewicz, 2015).

Thus, in 2013, the head of the United Nations, along with 117 countries, took up the issue of autonomous weapons in earnest. The head of the United Nations stated that lethal autonomous weapons with the capability to take lives without human involvement are politically intolerable and should be prohibited by international law. In total, 29 countries explicitly oppose these weapons, including China, Iraq, Pakistan, Argentina, and Venezuela, and some 118 organizations across 59 countries have been working to ban them. To date, however, no treaty under international law bans these weapons. Nevertheless, the greater the public awareness of this issue, the more pressure governments and the United Nations will face to negotiate a treaty banning lethal autonomous weapons. Given all the threats and uncertainties LAWs pose to humanity, experts believe these weapons will eventually be banned.

Conclusion

Lethal autonomous weapons, unlike any other weapons, have the ability to teach themselves, and they pose a threat to humanity, perhaps even to its survival. To ban these weapons, we must first understand the threats they pose. Their hazards, as weighed and described in this paper, exceed their benefits. This paper has traced the history of LAWs, their contemporary abilities, the concerns and dangers they raise, and why they should be banned. Understanding lethal autonomous weapons can help avert future genocides and the outbreak of a third world war.

References

Etzioni, A. (2018). Pros and Cons of Autonomous Weapons Systems (with Oren Etzioni). In Happiness Is the Wrong Metric (Library of Public Policy and Public Administration, pp. 253–263). Springer. https://doi.org/10.1007/978-3-319-69623-2_16

Horowitz, M. C. (2016). The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons. Daedalus, 145(4), 25–36. https://doi.org/10.1162/daed_a_00409

Klincewicz, M. (2015). Autonomous Weapons Systems, the Frame Problem and Computer Security. Journal of Military Ethics, 14(2), 162–176. https://doi.org/10.1080/15027570.2015.1069013

McCormick, T. (2014, January 24). Lethal Autonomy: A Short History. Foreign Policy. https://foreignpolicy.com/2014/01/24/lethal-autonomy-a-short-history/

Roff, H. M. (2014). The Strategic Robot Problem: Lethal Autonomous Weapons in War. Journal of Military Ethics, 13(3), 211–227. https://doi.org/10.1080/15027570.2014.975010

Sauer, F. (2016, October). Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems. Arms Control Today. https://www.armscontrol.org/act/2016-09/features/stopping-%E2%80%98killer-robots%E2%80%99-why-now-time-ban-autonomous-weapons-systems