Austin Tarullo
In 2023, the United States Department of Defense announced plans to deploy autonomous weapons systems by 2025. These weapons, which can select and fire upon targets without human intervention, are no longer the stuff of science fiction. Although fears that such weapons will lower the barriers to entry for war have spurred global calls to ban them, the Department of Defense’s announcement confirmed that the use of autonomous weapons is inevitable. AI applications in other sectors, including consumer products, medical diagnosis, and law enforcement, have exposed shortcomings inherent in intelligent algorithms: bias, opacity, an inability to comprehend causation, and a failure to understand ethics. When these algorithms are paired with a trigger, those shortcomings will intensify, resulting in more civilian casualties and less military accountability. To preserve military oversight and accountability, the Department of Defense must delay the deployment of autonomous weapons until it develops and implements a methodology that can reliably provide insight into the decision-making processes of artificial intelligence algorithms. Such accountability methods will realign military processes with the principles of international humanitarian law and uphold civilian protections.