Future of War: Artificial Intelligence Weapons

At YIP, nuanced policy briefs emerge from the collaboration of six diverse, nonpartisan students.


Abstract

With AI taking over many sectors, from digital art to driving, weapons of warfare may be next. This highly controversial technology has been debated and dissected since its inception. This brief examines the recent discourse surrounding autonomous weapons and the potential of artificial intelligence, looking at the history, the arguments, and the future legislative implications.

Executive summary

As leading countries pour research and funding into autonomous weapon systems, many have come to question the ethics of AI-led warfare. The systems' defining feature, the absence of human intervention, has been both praised and condemned at different political levels. In this brief, the origins of and arguments over autonomous weaponry are examined to provide a comprehensive understanding of the potential future of war.

Overview

Significant concerns about the legal and ethical compatibility of lethal autonomous weapon systems (LAWS) have emerged over the years, prompting calls for comprehensive frameworks and solutions to prevent unintended harm to civilians and infrastructure if these systems are ever used. Additional concerns include the possible dehumanization of warfare, unpredictable consequences, and the risk of fatal mistakes caused by AI-generated errors.

According to Mariarosaria Taddeo, a professor of digital ethics and defense technologies at Oxford University, autonomous weapons are self-operating machines designed for military purposes that can make decisions and take lethal actions without direct human control. 

More specifically, autonomous weapon systems are weapon systems that select targets and apply force to them without human interaction. When an autonomous weapon is activated, its eventual target is unknown, as are the particular location, time, and identity of the strike. This is because an autonomous weapon is triggered by software and sensors that compare what the sensors pick up from the surroundings to a "target profile," which may be a person's motion or the shape of a military vehicle. The strike is initiated not by the user but by the vehicle or the victim. Military strategists, roboticists, and ethicists are debating whether to create and use weapons capable of performing more complex tasks, such as targeting and force application, with little to no human oversight.
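
To make that definitional point concrete, here is a minimal sketch in Python. The `TargetProfile` and `SensorReading` names and fields are our illustrative assumptions, not drawn from any real system; the point is structural: once the matching loop runs, outcomes depend entirely on what the sensors encounter, not on operator choice.

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    """Illustrative description of what the sensors should match."""
    shape: str            # e.g., "military_vehicle" (hypothetical label)
    min_speed_kmh: float  # motion threshold in the profile

@dataclass
class SensorReading:
    """One observation picked up from the surroundings."""
    shape: str
    speed_kmh: float

def matches(profile: TargetProfile, reading: SensorReading) -> bool:
    # The system compares sensor input to the profile; no human is consulted.
    return (reading.shape == profile.shape
            and reading.speed_kmh >= profile.min_speed_kmh)

def select_engagements(profile: TargetProfile,
                       readings: list[SensorReading]) -> list[int]:
    """After activation, which readings trigger an engagement depends
    only on what the environment presents: the user who activated the
    system never sees the specific target, time, or place."""
    return [i for i, r in enumerate(readings) if matches(profile, r)]
```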

According to Allied Market Research, a consulting and advisory firm with over seven thousand clients, the "global autonomous weapons market was valued at $11,565.2 million in 2020 and is projected to reach $30,168.1 million in 2030."
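
Taken at face value, those two figures imply a compound annual growth rate of roughly 10% over the decade (our back-of-the-envelope arithmetic, not a rate stated by Allied Market Research):

\[ \text{CAGR} = \left(\frac{30{,}168.1}{11{,}565.2}\right)^{1/10} - 1 \approx 2.61^{0.1} - 1 \approx 0.101 \approx 10.1\%. \]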

Autonomous weapons have been garnering significant praise for their ability to operate at faster speeds than humans, potentially replace human fighters in dangerous conflict zones, and overcome mental and physical constraints faced by human pilots. 

However, their deployment has also faced considerable backlash due to concerns that the weapons could be intentionally programmed to commit inhumane acts against civilians, along with the potential loss of human understanding and control of current conflict conditions. 

As a multitude of countries around the world, including the United States, Russia, and China, continue to invest substantial funds in research and development of LAWS, it has become increasingly vital to evaluate the development and deployment of these immensely powerful weapons through an ethical and political lens.

History

Autonomous AI-powered weapons are intensely debated in society, owing both to their murky moral status and to the potentially fatal lack of control that humans will have over them.

Warfare weapons have always been controversial. When German scientist Fritz Haber, the so-called "father of chemical warfare," invented chemical weapons that could catch victims by surprise and leave them with lifelong disabilities, weapons crossed a moral boundary for the first time, and such unfair practices of warfare began to take hold in the public's imagination.

The earliest driverless vehicle, created in 1925, was remotely controlled from the car behind it: the first step toward what we know today as autonomous systems.

Since then, advancements in radar, lidar, high-resolution 360-degree cameras, and, of course, artificial intelligence have taken us far beyond the remote-controlled cars of 1925.

By 1950, British mathematician Alan Turing had proposed the idea of "machine thinking." In 1953, the USS Mississippi test-fired one of the earliest computer-guided missiles, officially marking one of the first instances of military use of autonomous technology.

More recently, in October 2016, the United States Strategic Capabilities Office "launched 103 Perdix drones, which communicated using a 'distributed brain' to assemble into a complex formation, travel across a battlefield, or regroup into a new formation."
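
A "distributed brain" of this kind typically means that formation behavior emerges from each drone applying simple local rules to its neighbors' states, with no central controller. The sketch below is a minimal illustration in the spirit of classic flocking ("boids") algorithms; it is our hypothetical example, not Perdix's actual software, which is not public.

```python
import numpy as np

def flock_step(pos: np.ndarray, vel: np.ndarray,
               dt: float = 0.1, neighbor_radius: float = 5.0):
    """One update of a toy decentralized swarm. pos and vel are (n, 2)
    arrays; every agent steers using only nearby agents' states, so
    formations emerge without any central controller."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        # Each agent only "sees" neighbors within its communication range.
        dists = np.linalg.norm(pos - pos[i], axis=1)
        mask = (dists < neighbor_radius) & (dists > 0)
        if not mask.any():
            continue
        cohesion = pos[mask].mean(axis=0) - pos[i]    # steer toward neighbors' center
        alignment = vel[mask].mean(axis=0) - vel[i]   # match neighbors' heading
        separation = (pos[i] - pos[mask]).sum(axis=0) # avoid crowding
        new_vel[i] += dt * (0.5 * cohesion + 0.3 * alignment + 0.2 * separation)
    return pos + dt * new_vel, new_vel
```

Because every drone runs the same local rule, the swarm can lose members or regroup without a single point of failure, which is part of what makes such systems both resilient and hard to stop.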

Originally developed by MIT students, these drone swarms could theoretically rival nuclear devices in their destructive potential. Worse, the swarms are unable to accurately discriminate between civilians and targets, potentially leading to more civilian casualties. Harm on this massive scale, coupled with the inability of humans to intervene, causes many experts to doubt the effectiveness of such weapons.

Whether these autonomous weapons are politically acceptable remains contentiously debated.

Policy Problem

A core debate on the ethics of AWS focuses on interpreting international humanitarian law (IHL), since the law lacks clarity on the morality of using machines in decisions to apply force.

In 2012, the United States Department of Defense (DoD) issued Directive 3000.09, titled "Autonomy in Weapon Systems," which established guidelines on using autonomous and semi-autonomous weapons in the military. The directive explicitly states that AWS must be used in accordance with the laws of war, ensuring that unintended harm does not come to civilians and infrastructure.

Moreover, it specified that a legal review of new weapons is required to guarantee conformity with legal obligations. Since 2018, the United Nations has held that lethal AWS are reprehensible and has consistently called for their prohibition under international law.

More recently, in 2023, Secretary-General António Guterres restated that multilateral regulations should be concluded to prohibit lethal AWS that function without human control, as their use raises humanitarian and ethical concerns that threaten basic human rights.

AI systems may make unprecedented decisions that lead to unaccountable operations, potentially harming civilians and infrastructure in a violent conflict. These unpredictable consequences may violate humanitarian values and cause unintended harm, which, in turn, may dehumanize warfare.

The emotional intelligence of a human making wartime decisions differs profoundly from that of an algorithm, which cannot empathize and may fail to differentiate civilians from adversaries. This lack of empathy can give rise to grave war crimes and pitiless killing in the context of conflict.

In an offensive situation, an AI machine may override civilian leadership and attack without direction, fueling a potential arms race and exacerbating global instability.

Additionally, AI may struggle to differentiate between military targets and civilians, thus violating IHL. While AI can be advantageous in a defensive strategy, where predetermined pathways can be used safely to eliminate incoming threats, a rogue offensive AWS can have lethal repercussions.

The possibility of AI making fatal mistakes is real; many instances of rogue AI have made headlines in recent decades. In 2018, a self-driving vehicle struck and killed Elaine Herzberg, a pedestrian. The accident led many who had expected impressive outcomes from the technology to seriously reconsider the use of AI in cars. Similarly, in 2003, a US computer-powered interceptor system misidentified a British fighter jet as an Iraqi anti-radiation missile and recommended engagement; only after the aircraft was shot down, killing its two crew members, was it recognized as a British jet. The tragic results of AI-generated errors call into question the ethical and practical place of AI in warfare.

These uncertainties raise the question: should we trust artificial intelligence to make vital decisions, essentially dehumanizing warfare by transferring responsibility for fatalities to computer systems?

Policy Options

As technological advancements continue to propel us into the future, it becomes clear that AI applications in the military present a complex challenge between ethics and strategy. 

The seemingly foreign concept of machines making life-and-death decisions without human intervention will always raise profound questions regarding accountability and morality. 

These concerns must be addressed in a manner that does not hinder the technological landscape of the future while maintaining the ethical considerations that should always play a critical role when human lives are at stake.

Policymakers must explore a variety of options aimed at mitigating associated risks while fostering a more sustainable future. 

Option 1: International Regulatory Framework 

The most promising route, once the capabilities of a fully autonomous weapon are understood, is an international ban. In a world growing ever more focused on the development of AI, such a ban does not appear feasible on an international scale. Still, at a bare minimum, a regulatory framework is crucial to maintaining vital standards of morality. This comprehensive framework would define clear guidelines and standards for developing, deploying, and using autonomous systems in armed conflict. By fostering multinational cooperation on a global scale, such a framework could help prevent a potent arms race like the one we have seen with nuclear weapons.

Promoting transparency and accountability is imperative while this revolutionary battlefield technology is still in its early stages. The framework must include a ban on fully autonomous missions that cannot be stopped once launched, the most essential limitation on implementing autonomous weaponry. When we allow machines to be the sole decision-makers responsible for lives lost, we open the possibility of extreme global conflict, which in turn can lower the threshold for war.

A simple and common AI malfunction near a civilian center would be impossible to stop if the system were fully autonomous. Ensuring humans are "in the loop" at any intersection between AI and military use answers the question of accountability and reduces the loss of innocent lives. Additionally, robust monitoring mechanisms and verification protocols would be essential to ensure compliance and address violations effectively. While this option depends on collective action, a regulatory framework is the first vital step toward placing limitations on evolving technological threats.
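
Structurally, a human-in-the-loop requirement can be pictured as an authorization gate between a system's recommendation and any irreversible action. The following is a generic, hypothetical sketch (the `Recommendation` and `HumanAuthorizer` names are ours, not drawn from any real system or directive); it also shows how logging each decision preserves the chain of accountability such a framework calls for.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An action proposed by the autonomous system, pending human review."""
    action: str
    rationale: str
    confidence: float  # model confidence in [0, 1]

class HumanAuthorizer:
    """Gate that blocks any irreversible action until a named human approves.
    Recording who approved what preserves a chain of accountability."""
    def __init__(self) -> None:
        self.audit_log: list[tuple[str, Recommendation, bool]] = []

    def review(self, operator: str, rec: Recommendation, approved: bool) -> bool:
        self.audit_log.append((operator, rec, approved))
        return approved

def execute_if_authorized(rec: Recommendation, gate: HumanAuthorizer,
                          operator: str, approved: bool) -> str:
    # Nothing happens unless a human explicitly approves; abstention aborts.
    if gate.review(operator, rec, approved):
        return f"executing: {rec.action}"
    return "aborted: no human authorization"
```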

Option 2: Ethical Design Principles 

Another strategy for addressing the ethical challenge of autonomous weaponry lies in making ethical design principles a focal point of the development process. By prioritizing values such as human dignity, respect for international law, and the sheer value of human life, designers and engineers can help shape autonomous systems that align with collective societal norms.

The main concerns in the discourse around autonomous weaponry are digital dehumanization and algorithmic bias. Embedding ethical design principles at the foundation of these models would help correct how the weaponry perceives human life.

If we can train these models away from perceiving humans as mere data points in a large pool, we can strive to eliminate the risk of stereotypes and bias interfering with what could be a helpful tool. Overcoming these obstacles is difficult, especially when they are so apparent in the very essence of AI.

Still, it remains a crucial adjustment if the ethical concerns are to be dealt with. Interdisciplinary collaboration among technologists, ethicists, and policymakers will be essential to ensure that ethical standards remain robust and adaptive in the face of emerging threats.

By promoting ethical considerations as the top priority when constructing this weaponry, we can harness the transformative potential of autonomous technology while minimizing the risks of misuse and unintended harm.

When the inevitable intersection of AI and combat arrives, the options above will provide the necessary groundwork for steering the global military scene toward ethical ends. As this technology advances and the global desire for efficiency grows with it, autonomous weaponry will only become more prominent in the warfare conversation.

That is why, whether through international cooperation or simply domestic priorities, it is imperative we strive to secure a morally grounded world in the age of autonomous warfare. 

Conclusions

There is still much debate over autonomous warfare: some experts claim that using autonomous weapon systems is morally superior to using humans, given the significant strategic and tactical benefits they offer on the battlefield, while detractors contend, on a number of moral and legal grounds, that restrictions remain necessary.

Advocates of expanding the research and application of autonomous weapon systems typically cite several military benefits. Autonomous weapon systems have been claimed to serve as a force multiplier and to analyze incoming sensory data without human emotions such as fear or frenzy. Citing statistics on reduced casualties and other technological advantages, governments have already directed substantial funding toward further advancements. Opponents, on the other hand, attribute the expansion of the battlefield to autonomous weapon systems, which enable warfare to penetrate previously unreachable locations. They also invoke the morals and values that must be upheld in combat. Because AI-equipped machines make judgments on their own, it can be challenging to distinguish between software errors and the autonomous decision-making of so-called "smart" AI-equipped robots.

Arguments about the irreplaceability of human conscience and moral judgment remain at the forefront for those who oppose this new age of combat. Where this technology will go, and its potential for good or ill, remains to be seen.

Acknowledgment

The Institute for Youth in Policy wishes to acknowledge Michelle Liou, Joy Park, Nolan Ezzet, and other contributors for developing and maintaining the Policy Department within the Institute.

References

  1. “Autonomous Weapon Systems: What the Law Says – and Does Not Say – about the Human Role in the Use of Force.” Humanitarian Law & Policy Blog, November 11, 2021. https://blogs.icrc.org/law-and-policy/2021/11/11/autonomous-weapon-systems-law-human-role/.
  2. Etzioni, Amitai, and Oren Etzioni. “Pros and Cons of Autonomous Weapons Systems.” Military Review, Army University Press, May 2017. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.
  3. “What You Need to Know about Autonomous Weapons.” International Committee of the Red Cross, July 26, 2022. https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons.
  4. Taddeo, Mariarosaria, and Alexander Blanchard. “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.” Science and Engineering Ethics 28 (5), 2022. https://doi.org/10.1007/s11948-022-00392-3.
  5. “Background on LAWS in the CCW.” United Nations Office for Disarmament Affairs. https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.
  6. Dawes, J. “Autonomous Robots Could Be More Destabilizing than Nukes.” Salon, October 13, 2021. https://www.salon.com/2021/10/13/autonomous-robots-could-be-more-destabilizing-than-nukes_partner/.
  7. “Problems with Autonomous Weapons.” Stop Killer Robots. https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons.
  8. Schumann, A. “Fact Sheet: Autonomous Weapons.” Center for Arms Control and Non-Proliferation, November 15, 2023. https://armscontrolcenter.org/fact-sheet-autonomous-weapons/.
  9. Griggs, Troy, and Daisuke Wakabayashi. “How a Self-Driving Uber Killed a Pedestrian in Arizona.” The New York Times, March 20, 2018. https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html.
  10. Atherton, Kelsey. “Understanding the Errors Introduced by Military AI Applications.” Brookings, May 6, 2022. https://www.brookings.edu/articles/understanding-the-errors-introduced-by-military-ai-applications/.
  11. “Autonomous Weapons Market Share, Growth, Analysis by 2030.” Allied Market Research. Accessed February 12, 2024. https://www.alliedmarketresearch.com/autonomous-weapons-market.
  12. Usmanov, Iurii, and Marta Chernychka. “Maritime Autonomous Weapon Systems from the Standpoint of International Humanitarian Law.” Lex Portus 8 (2), 2022. https://doi.org/10.26886/2524-101x.8.2.2022.2.
  13. Riesen, Erich. “The Moral Case for the Development and Use of Autonomous Weapon Systems.” Journal of Military Ethics 21 (2): 132–50, 2022. https://doi.org/10.1080/15027570.2022.2124022.
  14. Wood, Nathan Gabriel. “Autonomous Weapons Systems and Force Short of War.” Journal of Ethics and Emerging Technologies 32 (2): 1–16, 2022. https://doi.org/10.55613/jeet.v32i2.115.
  15. Altmann, Jürgen. “Autonomous Weapon Systems – Dangers and Need for an International Prohibition.” In Lecture Notes in Computer Science, 1–17, 2019. https://doi.org/10.1007/978-3-030-30179-8_1.
  16. Altmann, Jürgen, and Frank Sauer. “Autonomous Weapon Systems and Strategic Stability.” Survival 59 (5): 117–42, 2017. https://doi.org/10.1080/00396338.2017.1375263.
  17. Asaro, P. “Autonomous Weapons and the Ethics of Artificial Intelligence.” Oxford University Press eBooks, September 17, 2020. https://doi.org/10.1093/oso/9780190905033.003.0008.
  18. Maiti, S., S. Winter, and L. Kulik. “A Conceptualization of Vehicle Platoons and Platoon Operations.” Transportation Research Part C: Emerging Technologies, July 1, 2017. https://doi.org/10.1016/j.trc.2017.04.005.
  19. McCormick, T. “Lethal Autonomy: A Short History.” Foreign Policy, January 24, 2014. https://foreignpolicy.com/2014/01/24/lethal-autonomy-a-short-history/.
  20. Field, M. “Meet the Future Weapon of Mass Destruction, the Drone Swarm.” Bulletin of the Atomic Scientists, April 22, 2021. https://thebulletin.org/2021/04/meet-the-future-weapon-of-mass-destruction-the-drone-swarm/.

Policy Brief Authors

Christine Li

Policy Analyst

Christine is a social policy writer for YIP. Raised in Brooklyn, New York, she loves going on walks and watching late-night television shows.


Spencer Samet

Policy Analyst

Spencer Samet is a student at Windward School in Los Angeles, California. He is passionate about current events and plans to pursue political science. Spencer works as a technology policy Co-Lead for YIP and is an active member of his high school's debate team.


Natalie Gelman

Policy Analyst


Tanya Mahesh

Fall 2023 Fellow

Tanya Mahesh is a high school student from Pearland, Texas, with a keen interest in the intersection of business, technology, and policy.


Vaishnavi Moturi

Energy Policy Lead

Vaishnavi Moturi is a student at Centennial High School and the Energy Policy Team Lead at the Institute for Youth in Policy. She is the founder and director of Hello CitizenZ, through which she seeks to help create a generation of global citizens, and the creator of ODConnect, an overdose detection, mapping, and connection system for local police departments and nonprofits.


Suchir Paruchuri

Policy Analyst

