2015 seems to be the year that we've discovered how dumb artificial intelligence can be. Robots, algorithms, and AI systems keep making headlines over mistakes and wrongdoing. They've gotten in trouble with police for buying drugs and making death threats. They've racially slurred people by tagging them in photos as "gorillas." And they've taken the blame when humans have done stupid things, as when a driver slammed his "pedestrian-detecting car" into a group of observers.
This week, in a tragic turn, Asimov's First Law of Robotics was broken. On Monday, a robot killed a worker at a Volkswagen plant in Germany. And the question yet again is: who's to blame when a robot screws up? Via the Associated Press:
[A VW spokesperson] said initial conclusions indicate that human error was to blame, rather than a problem with the robot, which can be programmed to perform various tasks in the assembly process…. German news agency DPA reported that prosecutors were considering whether to bring charges, and if so, against whom.
"Earlier this week a contractor was injured while installing some machinery in the Kassel factory," said Volkswagen in a statement to Fusion. "He died later in hospital from his injuries and our thoughts are with his family. We are of course carrying out a full investigation into the incident and cannot comment further at this time."
It's not the first time deaths and robots have been linked. Two years ago, the FDA raised concerns about deaths and injuries resulting from surgical robots used in hospitals.
Volkswagen has declined to say what kind of robot this was, though Local.de reported that it grabbed the man and "thrust him against a metal slab," crushing him. Volkswagen told a German reporter that the robot "was still owned by the company" that made it, because it wasn't fully installed yet, according to a Google-translated version of the article.
That's one possible answer to the question of who's to blame when a robot screws up: the person or company that owns it. Volkswagen may already be preparing to argue to prosecutors that it wasn't responsible for the worker's death, because it hadn't yet taken control of the robot.
Volkswagen could argue that the robot made the mistake because it was poorly programmed by the company that built it; the robot-maker could retort that Volkswagen created the dangerous situation by allowing a worker to be so close to a functioning robot. The laws around robot-related incidents are complicated and murky, and we're still figuring out how to handle situations like these.
"The law has to come up with a thing to do," robot law expert Ryan Calo told Fusion after a Twitter bot made a death threat, a far less serious offense. "[The law] would probably look at the person who put the technology into play. If someone builds a general-purpose tool, you can’t go after them. In criminal law, you can’t go after person breeding a dangerous dog, but the person who lets it loose.”
Fatalities in industrial settings are, sadly, not rare. Hands and clothing can get caught in heavy machinery. A person can be walking somewhere they shouldn't be. This story is striking in that it involves a "robot," but the novelty of what happened may be more about semantics than technology. (How this particular robot differs from another piece of semi-automated machinery on an assembly line is unclear at this point.) When we hear that a "robot" killed a person, it is much more surprising than hearing that a "machine" killed a person, because we expect robots to know better.
(The danger, then, is that we become less careful around robots, as when people let a "smart" car run into them because they thought it would detect them and stop.)
The deadly incident in the Volkswagen plant makes painfully clear the challenges ahead for companies that make robots and develop artificial intelligence. If they can't make their machines smart enough to avoid accidents, they may be putting themselves in the crosshairs.