Thursday, May 15, 2014

Mathematics Of Murder, Autonomous Cars and Robotic Soldiers: Should A Robot Sacrifice Your Life To Save Two?

It happens quickly—more quickly than you, being human, can fully process.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.

That's the beginning of a PopSci article discussing a recent opinion piece at Wired on one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?
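The "math" the article describes can be made concrete with a toy sketch. This is purely illustrative, assuming a naive utilitarian rule of the kind the article imagines; every name and number below is invented, and no manufacturer's actual collision-response algorithm is represented here.

```python
# Hypothetical sketch of a "minimize expected deaths" collision rule.
# All classes, names, and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class CrashOption:
    description: str
    occupants_at_risk: int   # people likely to die on this path
    includes_owner: bool     # does this path sacrifice the car's own passenger?

def choose_path(options):
    """Pick the path with the fewest expected deaths, ignoring
    whose deaths they are -- the purely utilitarian rule."""
    return min(options, key=lambda o: o.occupants_at_risk)

options = [
    CrashOption("veer left into oncoming compact (2 aboard)", 2, False),
    CrashOption("steer right, over the cliff (owner alone)", 1, True),
]

best = choose_path(options)
print(best.description)  # one death beats two, so the car picks the cliff
```

The unsettling point of the scenarios above is exactly this: a rule this simple treats the owner's life as just another count in the tally.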

Here's a second robotic scenario, in combat:

A group of soldiers has wandered into the kill box. That’s the GPS-designated area within which an autonomous military ground robot has been given clearance to engage any and all targets. The machine’s sensors measure wind speed, humidity, and barometric pressure. Then it goes to work.

The shots land cleanly, for the most part. All of the targets are down.

But only one of them is in immediate mortal danger—instead of suffering a leg wound, like the rest, he took a round to the abdomen. Even a robot’s aim isn’t perfect.

As with the autonomous car crash scenario, everything hinges on that level of technological certainty. A human soldier or police officer isn’t legally or ethically expected to aim for a target’s leg. Accuracy, at any range or skill level, is never a sure thing for mere mortals, much less ones full of adrenaline. 

But if it’s possible to build that level of precision into a machine, expectations would invariably change. A manufacturer may be able to program systems to cripple targets instead of executing them. But if that’s the clear choice—that robots should actively reduce human deaths, even among the enemy—wouldn’t you have to accept that your car has killed you, instead of two strangers?
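One way to see how expectations might change is to gate the level of force on the machine's certainty. The sketch below is an assumption-laden illustration of that idea, not any real system's policy; the function name, the threshold, and the probabilities are all invented.

```python
# Hypothetical sketch: if machine precision shifts expectations,
# a policy might permit lethal force only when a disabling shot
# cannot be guaranteed. Threshold and values are invented.

def select_force(p_disable_success: float, threshold: float = 0.99) -> str:
    """Return 'disable' when the machine is near-certain a
    non-lethal shot will incapacitate; otherwise fall back to
    the rules that govern human shooters."""
    if p_disable_success >= threshold:
        return "disable"
    return "lethal-force rules apply"

print(select_force(0.995))  # prints "disable"
print(select_force(0.60))   # prints "lethal-force rules apply"
```

Note that commenters below dispute this framing: if the machine is certain enough to guarantee a disabling shot, arguably deadly force was never justified in the first place.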

Per a WhoWhatWhy article on autonomous weapon systems:
The Department of Defense issued a Directive in 2013 titled “Autonomy in Weapons Systems”—another sign of how seriously the military is taking this. Among other things, it “Establishes guidelines designed to minimize [emphasis added] the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” But the directive also requires that the robots will “Function as anticipated in realistic operational environments against adaptive adversaries.”
Coping with “adaptive adversaries” implies at least a degree of autonomy—and that is a discomforting notion for some.
Plus this:
DARPA—the Defense Advanced Research Projects Agency—has been actively funding robot research for years and this past summer showcased “one of the most advanced humanoid robots ever built”: a stocky 6’ 2” behemoth of a bot named “Atlas.” Its creators at Boston Dynamics (a company recently acquired by none other than Google, Inc.) say it is designed for disaster response, such as nuclear and chemical incidents. Atlas has no weapons, but it’s not hard to imagine a potential military future.

Here's the original Wired article, the one from WhoWhatWhy, and the one from PopSci.


  1. Asimov wrote extensively on this subject. The practical implications are in many cases unpleasant.

  2. The example of the police officer shooting to wound or kill is more complex than represented here. Shooting someone is considered "deadly force" regardless of particulars such as the accuracy of the shooter (imagine a sniper). Consider that a walking person might suddenly expose their femoral artery, making a "disabling" shot deadly regardless of the shooter's skill. Not to mention that a "disabling" gunshot might still mean a lifelong disability, not just a "flesh wound". In fact, the flesh wounds that reliably incapacitate bad guys when the plot requires it are virtually useless in a real violent encounter; some sort of bone-structure or neurological hit is required.
    For this reason, anytime you send a bullet at somebody, it's considered deadly force. And in law enforcement, as in all legal uses of force, the application of that deadly force is strictly limited. In short, you can't use deadly force unless you have no other option to preserve life. Regardless of how accurate the shooter is, if you have sufficient time to be totally certain you can just "disable" the subject, rather than kill or maim, then by definition you were not out of options.
    To put it another way, imagine that I had a gun and your mentally ill child was advancing toward me with a machete while you yelled at him to stop. I imagine that you would draw the "kill him" line just as close to me as you possibly could. Where would you draw the "maybe kill, maybe maim for life" line? And if I shot him at some point farther out, wouldn't you object that you might still have been able to avert it? And if it were a super-accurate robot, wouldn't you object that it could have just waited a little longer, until there was truly no choice?
    In the context of robots, the decision is identical. Guns are deadly force by definition, no matter what the movies have taught us. This has as much to do with the shootee as it does with the shooter.
    The search for disabling weapons continues, from the simple (rubber bullets) to the elaborate (sticky foam), but none of those weapons is a gun. Robocop, armed with a gun, is in exactly the same situation as a cop. As limiting as human physiology and psychology are, this is a problem defined by logic.