This Article addresses the use of autonomous weapons systems (“AWS”) for offensive purposes. That is distinct from defensive weapons systems, including Israel’s Iron Dome and U.S. missile defense systems. Similarly, this Article does not address the use of AWS to neutralize Improvised Explosive Devices (“IEDs”) or to evacuate wounded soldiers.
The use of AWS potentially minimizes risks to soldiers, at least in the short term. It suggests sleek technology. The dead are a hazy visual on a screen. It is antiseptic: neither the smell of burning flesh nor the sounds of agony reach those programming the AWS or those sitting behind a screen observing the effects of a “hit.” Autonomous warfare has also been portrayed positively in Hollywood movies; technological sophistication possesses an undeniable “cool” factor that is engaging, engrossing, and compelling. But the positive lens through which Hollywood views autonomous warfare offers only a limited glimpse of its role.
Weapons created for the purpose of autonomously determining when the nation-state can kill a human being raise profoundly important questions regarding humanity, ethics, and defense. While the nation-state’s use of force is regulated, whether by international law or rules of engagement, the introduction of AWS challenges the notion of whether, and at what point, decision making should be removed from human control and judgment.
Guiora, Amos N., “Accountability and Decision Making in Autonomous Warfare: Who is Responsible?,” Utah Law Review: Vol. 2017, Article 4. Available at: https://dc.law.utah.edu/ulr/vol2017/iss2/4