A planned reform of the UN weapons convention to internationally outlaw autonomous combat robots has failed due to resistance from some countries.
Can robots that make their own decisions be deployed on the battlefield of the future? There will be no international ban: the reform of the UN weapons convention has failed. The issue was not missiles or drones that are controlled, or at least fired, by humans, but systems that decide for themselves whether to attack.
The Stop Killer Robots campaign failed due to resistance from the United States, India, Israel, and Russia, the Reuters news agency reports. The other participating states were in favor of amending the UN weapons convention. In the final declaration, the topic was merely referred to a new commission of experts.
Critics of autonomous weapon systems see high risks for the civilian population, problems with accountability, and a likelihood that such systems will escalate conflicts.
The talks had been going on for eight years. Autonomous weapon systems are already in use, according to a March 2021 report by a UN panel of experts (PDF). The first autonomous drone attack reportedly took place in Libya as early as 2020.
According to the UN report, Turkish-made Kargu-2 autonomous drones carried out so-called swarm attacks against the militias of the warlord Haftar in March 2020. This is said to have been the first time that drones equipped with AI successfully carried out an attack.
The remains of a Kargu-2 were later recovered. The drone's manufacturer, STM (Defense Technologies Engineering and Trade), told Turkish media in 2020 that its drones are equipped with facial-recognition technology that makes it possible to identify and neutralize individual targets without deploying ground troops. According to the company, Kargu-2 drones can also operate in swarms.