Should a Robot Decide When to Kill?

By Adrianne Jeffries

The Verge

As the US military pours billions of dollars into increasingly sophisticated robots, people inside and outside the Pentagon have raised concerns that machine decision-making will replace human judgment in war.

Around a year ago, the Department of Defense released Directive 3000.09, “Autonomy in Weapon Systems.” The 15-page document defines an autonomous weapon — what physicist Mark Gubrud would call a killer robot — as a weapon that “once activated, can select and engage targets without further intervention by a human operator.”

The directive, which expires in 2022, establishes guidelines for how the military will pursue such weapons. A robot must always act in line with a human operator’s intent, for example, and must be designed to guard against failures that could cause the operator to lose control. Such systems may only be used after passing a series of internal reviews.

The guidelines are sketchy, however, relying on phrases like “appropriate levels of human judgment over the use of force.” That leaves room for systems that can be given an initial command by a human, then dispatched to select and strike their targets. DARPA is working on a $157 million long-range anti-ship missile system, for example, that is about as autonomous as an attack dog that’s been given a scent: it gets its target from a human, then seeks out and engages the enemy on its own.

Some experts say it could take anywhere from five to thirty years to develop autonomous weapons systems, but others would argue that these weapons already exist. They don’t necessarily look like androids with guns, though. The recently tested X-47B is one of the most advanced unmanned drones in the US military. It takes off, flies, and lands on a carrier with minimal input from its remote pilot. The Harpy drone, built by Israel and sold to other nations, autonomously flies to a patrol area, circles until it detects an enemy radar signal, and then fires at the source. Meanwhile, defense systems like the US Phalanx and the Israeli Iron Dome automatically shoot down incoming missiles, which leaves no time for human intervention.

“A human has veto power, but it’s a veto power that you have about a half-second to exercise,” says Peter Singer, a fellow at the Brookings Institution and author of Wired for War: The Robotics Revolution and Conflict in the 21st Century. “You’re mid-curse word.”

Continue to full article…

Picture: Obsidian Soul (Own work) [CC0], via Wikimedia Commons

