Lethal Autonomous Weapon Systems: Ban or No Ban?

Referencing Iron Man 3 (2013), the illustration portrays me as the pilot of a ‘Lethal Autonomous Weapon Systems’ suit.

———————————————————————

From as early as the 1950s, scientists have pursued fundamental research on the nature of intelligence, with the aim of developing Artificial Intelligence (AI) that can benefit humanity. Alongside its potential for civilian applications, the use of AI for military applications was also apparent.

The progression of AI technology in the military domain has led to lethal autonomous weapon systems (LAWS) – robots that can select, attack and destroy targets without human intervention. Such are the possibilities surrounding their impact that LAWS have been called the third revolution in warfare, after gunpowder and nuclear arms.

The subject of LAWS is a controversial one, sparking debate over its ethical, legal, political and military implications. Recent discussion has revolved around arguments for and against a ban on LAWS.

Calls for a Ban

On 28 July 2015, over 1,000 prominent AI experts and researchers signed an open letter warning that a “military AI arms race would not be beneficial for humanity” and calling for a “ban on offensive autonomous weapons”. The letter was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, and signed by technology luminaries such as Tesla’s Elon Musk, Apple co-founder Steve Wozniak and theoretical physicist Stephen Hawking.

The letter stated that “AI technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high…” The authors argue that while AI can make the battlefield safer for human soldiers by reducing casualties, autonomous weapons would also lower the threshold for going to war, invariably leading to more conflicts and greater instability.

Crucially, should any major military power push ahead with AI weapon development, a global arms race would inevitably follow. The endpoint of this trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. In stark contrast to nuclear weapons, LAWS require no costly or hard-to-obtain raw materials, so they will become cheap and ubiquitous enough for all significant military powers to mass-produce. To make matters worse, it would only be a matter of time before LAWS appeared on the black market and fell into the hands of terrorists, warlords and dictators.

Opposing the Ban

On the other hand, some in the AI community disagree with calls for a ban on LAWS, arguing that a ban would be misguided and even reckless.

Those who do not favour a ban call for critical thinking on the issue. One argument, made by a robotics researcher, is that no robot can really kill without human intervention. Robots may well be capable of killing people using sophisticated mechanisms that resemble those used by humans, but that does not mean no human is in the decision-making loop. In fact, humans are very much involved in the process: the programmers, cognitive scientists, engineers and others who build autonomous systems, as well as the military commanders and government officials who decide to use LAWS.

Another argument is that the autonomous weapons of the kind for which a ban is sought are already in existence. As a case in point, the Australian Navy has successfully deployed highly automated weapons in the form of close-in weapons systems for many years. These systems are guns that can fire thousands of rounds of ammunition per minute, either autonomously via a computer-controlled system or under manual control. They are designed to provide surface vessels with a last defence against anti-ship missiles.

But there have been no calls objecting to these systems, because they are deployed far out at sea and only in cases where a hostile object is approaching a ship at high speed. In other words, they operate only in environments where the risk of killing an innocent civilian is virtually zero, far lower than in regular combat.

Instead of calling for an outright ban, the argument goes, the focus should be on existing laws, which stipulate that LAWS be used only in the most particular and narrow circumstances.

———————————————————————

In the illustration, my character pauses before continuing to save the day against the rogue ‘Lethal Autonomous Weapon Systems’ suits and the nuclear missiles that have just been fired.

———————————————————————

A Case for Ethical LAWS?

There is a third view on this matter, and one worth thinking about. It comes from Professor Ronald C Arkin, a leading US roboticist and roboethicist at the Georgia Institute of Technology.

The professor frames the issue of LAWS in relation to the “utterly and wholly unacceptable” toll of innocent civilian casualties in war. To be clear, he hopes that LAWS will never need to be used, as he is against killing in all its forms.

However, if humanity persists in fighting wars, innocent non-combatants must be protected far better than they currently are. “Technology can, must, and should be used toward that end,” he avers. Judiciously designed and used, LAWS could save non-combatant lives. The use of LAWS should go beyond simply winning wars.

Professor Arkin believes that while LAWS may never be perfectly ethical on the battlefield, they can ultimately perform more ethically than human soldiers. This requires considering not just the decision of when to fire, but also of when not to fire.

LAWS could take on far more risk on behalf of non-combatants than human soldiers can. They could operate on the principle of “first do no harm” rather than the “shoot first and ask questions later” approach a human soldier might take.

“It may be possible to save non-combatant life using LAWS and these efforts should not be prematurely terminated by a pre-emptive ban,” says Professor Arkin. That said, he supports a moratorium on the development and deployment of LAWS until it can be shown that they can indeed save civilian lives.

The question of whether to further develop and deploy LAWS is as complex as the technology itself, encompassing ethical, legal, political and military dimensions. The international community has a long way to go before deciding on a ban or no ban. And yet, time is running short.
