Observers say UK and US are seeking to water down agreement so that any autonomous weapons deployed before talks conclude will be beyond reach of ban
The United Nations has been warned that its protracted negotiations over the future of lethal autonomous weapons – or “killer robots” – are moving too slowly to stop robot wars becoming a reality.
Lobbying for a pre-emptive ban on the weapons is intensifying at the UN general assembly in New York, but a deal may not emerge quickly enough to prevent devices from being deployed, experts say.
“There is indeed a danger now that [the process] may get stuck,” said Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions.
“A lot of money is going into development and people will want a return on their investment,” he said. “If there is not a pre-emptive ban on the high-level autonomous weapons then once the genie is out of the bottle it will be extremely difficult to get it back in.”
Observers say the UK and US are seeking to water down an agreement so that it only includes emerging technology, meaning that any weapons put into practice while discussions continue are beyond the reach of a ban.
“China wanted to discuss ‘existing and emerging technologies’ but the wording insisted on by the US and the UK is that it is only about emerging technologies,” said Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield and co-founder of the International Committee for Robot Arms Control, a coalition of robotics experts who are campaigning against the military use of robots.
“The UK and US are both insisting that the wording for any mandate about autonomous weapons should discuss only emerging technologies. Ostensibly this is because there is concern that … we will want to ban some of their current defensive weapons like the Phalanx or the Iron Dome.
“However, if the discussions go on for several years as they seem to be doing, many of the weapons that we are concerned about will already have been developed and potentially used.”
A rocket is launched from Israel’s Iron Dome system. Photograph: Dan Balilty/AP
Sharkey said current developments made that a likely scenario. “Governments are continuing to test autonomous weapons systems, for example with the X-47B, which is a fighter jet that can fly on its own, and there are contracts already out for swarms of autonomous gunships. So if we are tied up [discussing a ban] for a long time then the word ‘emerging’ is worrying.”
“People say that the convention on conventional weapons is a graveyard for good ideas because it’s notoriously slow moving. If we see the brakes being applied now, that would take discussions into a fourth year.”
No fully autonomous weapons are yet in use, but many semi-autonomous lethal precursors are in development. One such weapon is the South Korean sentry robot SGR-1, which patrols the country’s border with North Korea and detects intruders as far as two miles away using heat and light sensors.
The robots are armed with machine guns and although currently controlled by humans from a distance, they are reportedly capable of making a decision to kill without human intervention.
Israel is deploying machine-gun turrets along its border with the Gaza Strip to target Palestinian infiltrators automatically. And the UK’s Taranis fighter jet flies autonomously and can identify and locate enemies. Although it does not yet act completely autonomously, it was described by a defence procurement minister as having “almost no need for operator input”.
Campaigners from across the fields of robotics and technology have made several high-profile pleas for a pre-emptive ban on offensive autonomous weapons in the past two years, including in a letter in July that was signed by more than 1,000 artificial intelligence researchers. The letter said offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.
Sharkey has repeatedly argued at the UN that allowing robots to make the decision to kill is wrong and fraught with risk. “We shouldn’t delegate the decision to kill to a machine full stop,” he said. “Having met with the UN, the Red Cross, roboticists and many groups across civil society, we have a general agreement that these weapons could not comply with the laws of war. There is a problem with them being able to discriminate between civilian and military targets; there is no software that can do that.
“The concern that exercises me most is that people like the US government keep talking about gaining a military edge. So the talk is of using large numbers – swarms – of robots.”
The UN convention on conventional weapons (CCW) examines whether certain weapons might be “excessively injurious or have indiscriminate effects”. This same convention was where the discussions that led to a ban on landmines and blinding laser weapons began.
Only five countries have backed a ban so far, with countries such as the US, the UK and France arguing that a human will always have “meaningful control” over a robot’s decision to kill – a concept that is much debated.
There is high-level support within the UN for serious restrictions on lethal autonomous weapons. One possible route to a ban if the CCW cannot reach agreement on the next stage would be for states who want a ban to reach an agreement outside the UN, as happened with cluster bombs. But this would mean going ahead without the agreement of some of the major producers of the weapons.
Heyns believes the UN is a vital step in any global decision on banning certain weapons. “Some people say the CCW is where ideas go to die but that is not necessarily true,” he said. “They get aired there and we shouldn’t underestimate what has already been done. It’s not insignificant. At both meetings the states sent very high-level representatives. It’s not the end of the road, it’s a step and you can’t skip it.”