Killer AI drones 'hunted down humans without being told to', warns bombshell UN report
The explosive-carrying quadcopters, which were deployed during an engagement between rival factions in the Libyan civil war, are thought to have deliberately crashed into targets without being ordered to by a human controller.
An autonomous weaponised drone “hunted down” a human target last year and is thought to have attacked them without being specifically ordered to, according to a report prepared for the United Nations.
The news raises the spectre of Terminator-style AI weapons killing on the battlefield without any human control.
The drone, a Kargu-2 quadcopter produced by Turkish military tech company STM, was deployed in March 2020 during a conflict between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army.
The Kargu-2 is fitted with an explosive charge and can be directed at a target in a kamikaze attack, detonating on impact.
The drone uses on-board cameras and artificial intelligence to identify targets.
The report, from the UN Security Council’s Panel of Experts on Libya, was published in March 2021 and obtained by New Scientist magazine.
In one passage the report details how Haftar’s forces were “hunted down” as they retreated by Kargu-2 drones operating in a “highly effective” autonomous mode that required no human controller.
“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the report says.
Full details of the incident haven’t been released and there is no record of how many casualties, if any, the AI war machines inflicted. The report’s wording nonetheless suggests that the drones were attacking human beings on their own initiative.
Zak Kallenborn, of the National Consortium for the Study of Terrorism and Responses to Terrorism in Maryland, says this could be the first time that drones have autonomously attacked humans.
He says this development is cause for serious concern, given that AI systems cannot always interpret visual data correctly.
“How brittle is the object recognition system?” Kallenborn asks. “… how often does it misidentify targets?”
Jack Watling, of UK defence think tank the Royal United Services Institute, told New Scientist that the drones occupy something of a grey area when it comes to regulation of AI weapons, because only the drones’ controllers would know whether the machines were being remotely controlled at the time of the attack.
“This does not show that autonomous weapons would be impossible to regulate,” he says. “But it does show that the discussion continues to be urgent and important. The technology isn’t going to wait for us.”