Are We Seeing the Rise of Killer Robots?
Are we entering an era of AI-controlled weapons dominating military conflict? We speak to policy researcher Krystyna Marcinek and Nobel Peace Prize Laureate Jody Williams to find out what lethal autonomous weapons systems are, how they work, and what is being done to regulate their rise.
© IE Insights.
Transcription
Killer robots have long been imagined in the world of science fiction. But as AI becomes ever more ubiquitous in our lives, the threat of machine-controlled weapons is very real and already upon us. So how advanced are these weapons? How do they work? And what is being done to regulate them? In defense circles, these weapons are known as lethal autonomous weapons systems.
And we asked expert and researcher Krystyna Marcinek exactly what they are.
Krystyna Marcinek: So this is a really good question, because it immediately shows the challenge with those systems: there is actually no widely accepted definition of them. The U.S. Department of Defense defines them as weapon systems that, once activated, can select and engage targets without further intervention by a human operator. And so there are two elements to that.
It’s select and engage. When we think about lethal autonomous systems, then of course, by that token, that would be engaging with lethal force. However, the key word here is in fact select, because that’s really what could distinguish autonomous weapon systems from the weapon systems we have right now. And that is an even bigger problem at the international level because, as much as one nation’s definition is not necessarily clear, now imagine multiple nations, each with their own definition.
So the definition of terms around these weapons can be ambiguous and complicated. And it’s perhaps easier to understand if we take a concrete example. Let’s look at drones.
KM: So we have manned aircraft and then we have remotely piloted aircraft, or drones, UAVs, whatever you want to call them. First, in the manned aircraft, the person who pulls the trigger is co-located with the missile. You have a pilot in the cockpit. He pulls a trigger and the missile goes. When we think about remotely operated systems like drones, then there is essentially a geographical distance between the cockpit, where the operator pulls the trigger, and where the missile is.
And then when we think about semi-autonomous weapons systems, we would have an operator that specifies a target. And then the drone could itself, you know, fly over the area and select the timing of engaging the target. So there could be a distance not only in space, but also in time between selecting the target and pulling the trigger. And then finally, autonomous weapon systems.
If we think about full autonomy, there would also be a distance in specificity. That means the operator would really be defining a goal more so than a specific target, and then the weapon system would be selecting the optimal course of action.
The problem here is that what counts as optimal would be defined by the AI system, and humans may not be able to work out why it chose a particular course of action. The question then arises: who is writing these algorithms? These weapons and their algorithms would be created and developed by the defense industry. But would they actually make us any safer?
Jody Williams is an activist who won the Nobel Peace Prize in 1997 for her work campaigning against landmines. We spoke to her about the threat she perceives from lethal autonomous weapons systems or killer robots, as she prefers to call them.
Jody Williams: Killer robots, that’s what we call them. I was doing an article, and in the course of the research I did for the article, I came upon the concept of marrying artificial intelligence with weapons so that the weapon itself, once programmed, would be able to pick its target and attack it. And I found that to be terrifying.
It blew my mind that human beings were capable of creating weapons that can just go out there and shoot. And I think it’s ignorance on the part of the weapons developers. Not that humans are perfect by any stretch, but I mean, how does a combatant surrender to a machine that is programmed only to kill that person?
These people who are so excited about autonomous weapons don’t think about the bias of the programmer. No matter how neutral we human beings try to be, we’re not neutral.
So how close are we to seeing these weapons in action? Given the discrepancies in how different entities define them and the secrecy involved in any form of military development, it is impossible to know for sure. A UN report on a 2020 skirmish in Libya stated that a lethal autonomous weapons system had been used there, although it is unclear whether anyone was killed by the weapon.
Such reports, however, are few and far between. What we do know for sure is that the technology used in these weapons is already part of our daily lives. So it isn’t a huge step for the defense ministry of any country to combine that technology to create weapons.
KM: I mean, we can think about completely civilian drones that right now are flying autonomously, or are on the verge of flying autonomously, monitoring production facilities or port facilities or whatever else. They are used for surveillance, essentially. And every company that develops a facial recognition system or an image recognition system, you know, those are the building blocks of autonomous weapons systems.
I’m not aware of any entity, you know, state-sponsored or otherwise, that would advertise that they are developing lethal autonomous weapons systems. But that’s also because there is a pushback against lethal autonomous weapons systems. But that doesn’t mean that the pieces cannot be put together to actually produce it.
Given the ease with which lethal autonomous weapons systems could be developed using current technology, and the danger these weapons would pose in the wrong hands, a movement to regulate, restrict, and ban them has been launched. Jody Williams herself is part of that effort. So how does she see effective regulations and bans being put in place?
JW: Same as we did with landmines. You have to build a coalition of NGOs that will work in different areas. When we started the landmine campaign, there were two NGOs, one in the US, one in Germany, and a staff of one: me. It grew to 1,300 NGOs in 90 countries, I think, by the time we negotiated the treaty. My husband’s name is Steve Goose and he is the Director of the Arms Division of Human Rights Watch.
We met while banning landmines, and it was actually in our kitchen that I was fighting with him, trying to convince him that we needed to create a campaign to ban killer robots. So it was launched in London in April of 2013, I think.
In July 2015, over 1,000 experts in artificial intelligence, including Stephen Hawking, Elon Musk, and Steve Wozniak, signed a letter warning of the threat of an artificial intelligence arms race and calling for a ban on autonomous weapons. Despite these campaigns, however, concrete international regulation has not been created. The United States has reportedly tried to begin diplomatic discussions with its main military rival, China, on regulating lethal autonomous weapons systems, but Beijing is said to have refused to allow the issue onto the agenda of any meetings between the two countries.
Meanwhile, AI and machine learning capabilities continue to grow, wars continue to rage, and international regulation is lagging behind.