Heather Roff Perkins, a visiting professor at the Josef Korbel School of International Studies, will travel to Geneva, Switzerland, in April to attend the United Nations meeting of member states under the Convention on Certain Conventional Weapons. There, she will speak as an invited expert on lethal autonomous weapons systems (LAWS).
Q: What are lethal autonomous weapons systems?
A: Lethal autonomous weapons systems are weapons that can select targets and fire without the control of a human operator.
Q: Are they in use now by militaries?
A: Most state militaries claim that they are not currently in use. However, much depends on how one defines “control” by a human operator and when target selection must occur. For example, the U.S. uses the Aegis system on several of its naval ships, and that system can track, cue and fire automatically without human intervention.
Israel uses the Iron Dome, which also has this capacity, and the United Kingdom uses the Brimstone missile, which can select individual targets from a preselected class on its own. Lockheed Martin in the U.S. also has the Long Range Anti-Ship Missile (LRASM), which has similar functions. Others, like South Korea, have stationary border systems that can detect a person through heat sensing and fire automatically.
Q: What is the argument for using them in combat situations?
A: There are many arguments. Depending upon the domain — air, land, sea or cyber — autonomous weapons may permit forces to navigate and act in denied environments, that is, in environments where the U.S. is unable to communicate or where there is little freedom of overt action. They may also act as force multipliers, providing intelligence, surveillance and reconnaissance capacities and a forward presence where a large military footprint is unacceptable. In ground situations, some argue that the machines will be better at discriminating between combatants and civilians, and thus better uphold the laws of war.
Q: What are the objections to using them?
A: Again, there are many. Chief among them is that the machines are unable to uphold the laws of war, particularly the principles of discrimination and proportionality, and that their use would violate the Martens Clause, which prohibits weapons or methods of war that violate the public conscience.
Some claim that using weapons that involve little cost to the possessor will lower the barriers to conflict, making war more likely. Others warn that the development and deployment of autonomous weapons will start an arms race between major powers, and that older, less accurate technology will proliferate to middle and small states.
Q: These systems have been called “killer robots” by some critics. Is that a fair description?
A: These systems were termed “killer robots” in a 1983 Newsweek article called “The Birth of Killer Robots.” In 2013, 30 years after that first mention, Human Rights Watch launched the Campaign to Stop Killer Robots, a coalition of more than 50 NGOs working to preemptively ban the development and use of autonomous weapons systems.
The critics of autonomous weapons systems are referring to these systems’ lethal nature and to the fact that they are robots: they have sensors, actuators and processors and are, for all intents and purposes, robots. As a matter of definition, then, yes, it is a fair description.
Proponents of autonomous weapons systems tend to dismiss the “killer robots” label on the grounds that the critics’ worries are mere science fiction. However, the worries are not science fiction. Autonomous weapons systems have been on the U.S. Department of Defense’s docket since the early 1980s, and research and development on unmanned systems has been going on since the 1950s.