Time to take LAWS into our own hands

New Zealand should add to its proud legacy of moral leadership and support a ban on lethal autonomous weapons systems, write artificial intelligence experts Professor Michael Winikoff, Professor Stephen Cranefield and Associate Professor Alistair Knott.


It is time to ban LAWS. Before lawyers reach for their pitchforks and torches, let us quickly say LAWS stands for “lethal autonomous weapons systems”, an application of artificial intelligence to warfare.

At this point, you’re probably picturing a highly sophisticated and intelligent robot, with an Austrian accent, promising to return. But that’s not what LAWS is about. We’re talking about much simpler technology: any weapon where the decisions to select a target and attack it are made without meaningful human control.

There is a simple ethical argument for banning LAWS. Only a human can appreciate the value of human life and the significance of taking a life. Therefore, the decision to take a life should be made only by a human. It is morally wrong for that decision to be made by a machine.

However, some invoke a counter-argument. Human soldiers are fallible. They make mistakes. What if replacing human soldiers with robots could actually make war more ethical by reducing, or even eliminating, civilian casualties? What if replacing human soldiers with robots also saved soldiers’ lives? Perhaps war could become a wholly bloodless affair? Would LAWS not then be a good thing?

This argument is wrong because it rests on flawed assumptions. It assumes developing LAWS would not affect how wars are fought. It assumes the only thing to change would be the soldiers, and that war would still be fought between nation states on far-off battlefields.

This completely ignores the fact that new weapons technology changes who can wage war, and how.

To begin, imagine not some large, expensive robot roaming a war-torn desert. Instead, imagine a small drone carrying a weapon, perhaps some explosive material. But don’t imagine just one drone. Zoom out in your imagination and picture a large swarm of them.

Being autonomous, these drones could be operated by a small team or even a single person. And they would be cheap enough to be widely available. This is not a weapon that would be limited to nation states. It would be available to small (radical and violent) groups and individuals.

Now imagine someone sending out a fleet of drones able to select targets based on any predefined characteristic. Imagine drones targeting people based on their skin colour. Or on their membership of a particular Facebook group or a particular political party or civil society organisation.

If you are having difficulty imagining all this, there is an excellent, and downright terrifying, video at autonomousweapons.org that is well worth watching.

And the bad news is this is not science fiction. It is not far in the future. It relies only on AI techniques, such as autonomous drone navigation and facial recognition, that are already well developed.

The good news is we still have a window in which to put a ban in place. And there is mounting pressure for one, with individuals, organisations and a number of countries pushing for a ban. The issue has also been debated at the United Nations.

Such bans can work. Consider chemical weapons: no company will manufacture and sell them, and any country that uses them faces immediate international condemnation and backlash. A ban cannot, of course, completely prevent the use of chemical weapons, but it has made their use rare and has prevented a chemical weapons arms race.

Some existing autonomous weapons systems are purely defensive (for example, shooting down incoming missiles), and the ban is not intended to cover them. On the other hand, the ban should not stop at lethal weapons: any system capable of autonomously deciding to inflict long-lasting injury on a human ought to be banned, even if the injury is not fatal.

So where does our government sit? Disappointingly, the New Zealand government has avoided taking a position, instead proposing it might be enough to have a human with an ‘abort’ button. But we know humans cannot always monitor systems effectively. And installing an abort button does not change the fact that the system is able to operate autonomously. It is all too easy to disable the abort button, coerce the human overseer or replace them with a system that simply does nothing.

Therefore, we and nearly 60 colleagues, including leading New Zealand AI researchers and other experts, have written to the government calling on it to support a ban on LAWS.

New Zealand has a long and proud history of moral leadership in this area, as seen in its strong position against nuclear weapons and its role in the Convention on Cluster Munitions. We hope the current government can add to this proud legacy.

Professor Michael Winikoff is in the School of Information Management at Victoria University of Wellington, Professor Stephen Cranefield is in the Department of Information Science at the University of Otago and Associate Professor Alistair Knott is in Otago's Department of Computer Science.

Read the original article on Newsroom.