Earlier this summer, Yoshua Bengio, the Montreal-based artificial intelligence pioneer and one of the world's leading authorities on neural networks and machine learning, added his name to a growing global list of experts and laboratories in the field pledging to have no role in the creation of what are called "lethal autonomous weapons" (LAWs). The signatories' premise is as benign as it is unsound: that machines must never be given the authority to decide when to take a life.
While Bengio and other leaders in the AI community are nobly intentioned, their pledge is at best an attempt to slam the barn door after the horse has bolted, and at worst an invitation to cede ethical decision-making on the future of warfare to rogue and oppressive regimes in the international system.
The contention by Bengio and others that machines have no part to play in life-or-death choices is a curious one, given how many of these same experts are currently plying their trade in the race to build the first truly roadworthy autonomous, or self-driving, vehicle.
Self-driving cars will indeed be many times safer than human drivers – but it is a certainty that they will still kill people. Self-driving cars will be forced to make ethical decisions surrounding life and death that inherently prioritize some lives over others. In preparation for this reality, these same members of the AI community are rightly engaged in robust debates about the ethical dilemmas their self-driving cars will face. Yet when it comes to warfare, a field where the proliferation of AI is similarly inevitable, they choose to abdicate a necessary dialogue around ethics in the naïve hope that the question itself can be forestalled indefinitely.
In theatres of war around the globe, however, rapidly advancing autonomous technologies such as computer vision are already in use in weapons systems to varying degrees. Most advanced militaries use semi-autonomous systems in air defences, for instance, to detect and disrupt enemy missile fire. South Korea has long deployed a sentry gun on its side of the DMZ that can fire autonomously. These are but a few examples of weapons systems that are already pushing boundaries – in some cases, quite literally – while changing the facts on the ground when it comes to machines participating in life-or-death choices.
Once AI research is published, it is impossible to control how it is used, which makes its adoption in weapons systems by rogue international regimes inevitable. Countries such as Iran and North Korea, which surmounted massive obstacles – sourcing technical expertise, building security apparatuses and securing funding – to develop nuclear weapons, will face no such barriers in getting their hands on LAWs. In the near future, it will be far too easy to upgrade conventional weapons with a simple white-labelled AI purchased online.
Rather than advocating a ban or pledging never to participate in the development of something that is already a reality, Bengio and other leading minds must recognize that a robust ethical framework for LAWs is the only viable path forward. Shaping the dialogue on ethical questions and international standards requires that leading thinkers such as Bengio engage early in the development of the underlying technologies these weapons systems will use. Without his voice and others like it, any framework for oversight of AI in warfare will be weaker, and all of humankind will be worse off.
Matthew Lombardi is a Senior Fellow at the Canadian International Council. He specializes in the role of emerging technologies in foreign affairs.