
Did autonomous drones attack humans for the first time?

A recent UN report suggests that military drones may have autonomously attacked an individual in Libya for the first time in history. Is it real? Has AI gone that far? What should we expect in the future?


The event took place during an offensive near Tripoli (Libya) in March 2020. A report from the United Nations Security Council, covering the Second Libyan Civil War from October 2019 to January 2021, mentions the incident (see page 17 of 548, paragraph 63, and Annex 30): "the lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability". According to The Verge, it is unclear whether this attack by autonomous robots was supervised by humans. The UN report only states that personnel and vehicles were attacked by a mix of drones and quadcopters that had been programmed to operate offline.


Nonetheless, the incident in Libya is interesting as it reveals some major breakthroughs in the use of AI on the battlefield. It also opens our eyes to what military systems could look like in the future. As you can see in the video below from STM (the quadcopter's Turkish manufacturer), the drone uses machine learning algorithms and real-time image processing to engage targets such as vehicles or people.
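
To make that concrete, here is a minimal, hypothetical sketch of what a "machine learning plus real-time image processing" pipeline can look like in principle, using an off-the-shelf pretrained detector from torchvision. It is purely illustrative and has no connection to STM's actual software; the image file name and confidence threshold are arbitrary assumptions.

# Illustrative only: a generic object-detection loop built on a standard
# pretrained model. This is NOT STM's software; it merely shows the kind
# of real-time image processing the article refers to.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO class indices for the two target types mentioned in the article.
COCO_PERSON, COCO_CAR = 1, 3

# Load a standard pretrained detector (Faster R-CNN trained on COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame: Image.Image, threshold: float = 0.8):
    """Return (label, score, box) tuples for persons and vehicles in a frame."""
    with torch.no_grad():
        out = model([to_tensor(frame)])[0]
    hits = []
    for label, score, box in zip(out["labels"], out["scores"], out["boxes"]):
        if score >= threshold and label.item() in (COCO_PERSON, COCO_CAR):
            hits.append((label.item(), score.item(), box.tolist()))
    return hits

# Example: run on a single image (a live video feed would loop over frames).
print(detect(Image.open("frame.jpg").convert("RGB")))

A real system would run a loop like this on every frame of the onboard camera, which is exactly what makes fully offline operation possible: no decision has to travel back to an operator.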



This type of system, called a loitering munition (also known as a suicide or kamikaze drone), was created for counter-terrorist and asymmetric warfare operations. It has two operating modes: autonomous and manual. As the UN report puts it, this means in practice that the drone can autonomously "fire, forget and find" once the target coordinates have been entered.



(KARGU Rotary Wing Attack Drone Loitering Munition System. Source: STM)
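
The distinction between the two modes is the crux of the matter. Below is a deliberately simplified, hypothetical sketch of that distinction; every name in it is invented for illustration, and none of it reflects STM's implementation.

# A hypothetical, simplified model of the two operating modes described
# above. All names are invented for illustration only.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()      # operator steers and confirms every engagement
    AUTONOMOUS = auto()  # "fire, forget and find": no data link required

@dataclass
class Mission:
    mode: Mode
    target_lat: float  # coordinates entered before launch
    target_lon: float

def requires_data_link(mission: Mission) -> bool:
    """In autonomous mode the munition navigates, searches, and engages on
    its own after launch, which is precisely what makes oversight so hard."""
    return mission.mode is Mode.MANUAL

# Example: an autonomous mission near Tripoli (illustrative coordinates).
mission = Mission(Mode.AUTONOMOUS, target_lat=32.8872, target_lon=13.1913)
print(requires_data_link(mission))  # False: operator connectivity not needed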


Even though killer drones have been around for decades, and have been used by the U.S. in Afghanistan for instance, this event in Libya is special: it shows that future advances in artificial intelligence and machine learning will raise more and more ethical issues. What happens if an autonomous drone misses its target and attacks innocent civilians? The "laws of war", already difficult to enforce, are under new pressure from AI.


The incident also shows that AI-driven weapons systems are no longer limited to military superpowers (the USA, Russia, China): they are now within reach of medium-sized or regional powers (in this case Turkey) and probably even smaller nations. How long before they end up in the hands of terrorist or mafia groups?


Fortunately, we are not at the stage of Terminator and its Skynet artificial intelligence yet. However, many nations and international organizations strongly criticize the fact that such systems can autonomously identify, select, and attack targets. The states party to the CCW (Convention on Certain Conventional Weapons) agreed in 2013 on a mandate to examine lethal autonomous weapons systems (LAWS), with a view to restricting the use of some of them. As the technologies are constantly evolving, new recommendations on LAWS come out almost every year.


As Elke Schwarz told The Verge, "now is a great time to mobilize the international community toward awareness and action".



If you are curious about the Turkish-made Kargu drone, here is its manufacturer's page: https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav


You can also read this Futuria article on robots and the future of armies: https://www.futuria.io/post/an-army-of-robots-in-2030-good-or-bad-news-for-humans


