Can We Learn About Empathy From Torturing Robots? This MIT Researcher Is Giving It a Try
Earlier this summer, a robot killed an employee at a Volkswagen factory in Germany. The victim, a 22-year-old man, was installing the machine when it reached out and crushed him against a metal plate.
As the first robot-related death at a German workplace, the incident has raised new legal questions about robots. Prosecutors now have to decide whom, if anyone, to bring charges against.
“I’d like to find out if we can change people’s habits around empathy by using robots.”
Kate Darling, a research specialist at Massachusetts Institute of Technology (MIT) Media Lab, says these types of questions will become more common as our dependence on technology grows. Darling and a team of other researchers at MIT study the legal and social implications of robot-human interactions.
At a workshop in Geneva two years ago, she and her team observed how people treated Pleo robots—advanced machines, shaped like cute baby dinosaurs, that react to external stimuli. First they asked participants to name the robots and play with them. Once they were done, she asked them to torture and kill the machines. Most people hesitated.
Because the experiment was not conducted in a controlled environment, Darling couldn’t draw any definitive conclusions from it. But those initial observations inspired her to design similar experiments in a more controlled setting to test the role that empathy plays in our interactions with robots.
In Darling’s recent experiments, participants work with hexbugs—small, cockroach-shaped robots that can move on their own. First, people observe the robot bugs for a certain amount of time and then they smash them. Some are told a backstory about the hexbugs—a story that attributes lifelike qualities to the machines.
Darling found that those who were given a backstory often hesitated longer to strike the hexbugs than those who were not. She says this trend highlights people’s natural tendency “to empathize with things that they perceive as lifelike.” This, she says, can be a problem and has the potential to lead to dangerous—even life-threatening—situations.
Recently I spoke with Darling about her work with robots. I wanted to know what she’s learned about the human capacity to feel empathy for machines and what laws she thinks should be put in place when it comes to robots and their place in our society. This interview has been lightly edited.
Nur Lalji: What is the relationship between empathy and giving robots humanlike qualities?
“We’re seeing that people respond to robots like animals.”
Kate Darling: What happened in our experiment was that the people with high empathic tendencies responded hugely to the framing. So they didn’t much care about the object when it was just an object. But when it was named Frank and had a backstory, that had a huge effect on the high-empathy people. The low-empathy people didn’t care either way.
It’s an interesting relationship. Empathic people respond very strongly to framing.
Lalji: So even highly empathic people didn’t hesitate as much to strike the bug when it didn’t have a name and personal story? Does that mean the physicality of a robot doesn’t matter as much as framing?
Darling: We have other studies planned to look at physicality. It’s hard because we were using robots that looked like cockroaches. Depending on how people feel about cockroaches—most people either dislike them or absolutely hate them—that can mess up the results.
We need to play around with different robotic designs and see how that influences people’s reactions. But the one thing we could say is that framing did have an effect on how people treated the hexbugs.
Lalji: What’s an example that people are familiar with that shows us the effects of anthropomorphizing?
Darling: There are these boxy things in hospitals that just deliver medicines, and hospitals have found that nurses, doctors, and other employees are much more receptive to them when the machines are given names. Putting a license plate that says “Emily” on one of the machines will cause people to bond with it and forgive its mistakes more easily.
Lalji: Could you talk about your own research on empathy? Have you drawn any conclusions from your experiments?
Darling: We were essentially able to establish whether participants had high or low empathic tendencies based on their interactions with the robots, which is cool.
I’d like to find out if we can change people’s habits around empathy by using robots, in the same way animal therapy is used with children or in nursing homes. If we could use robots like animals and have a positive influence on people’s empathic responses to others, that would be amazing.
On the flip side, I’m also interested in whether people can become desensitized or have less empathy because of robots. What does it mean if someone brutalizes a robot that is very lifelike?
Lalji: Can you give an example of how we might benefit from thinking about robots like animals?
Darling: The best example I can think of is that most states have laws where, if there is animal abuse in a household with children, that abuse automatically triggers a child-abuse investigation.
“There are times when we should use robots strictly as tools.”
If this behavior really does translate the way we think it does in that context, then it might translate from robots to animals or from robots to children as well. We’re seeing that people respond to robots like animals. So if people were brutalizing their robots at home, we may want to keep an eye on their animals and children.
Lalji: You also talk about the “android fallacy” in your work—the idea that we shouldn’t mistake robots for anything but tools. Could you expand on that?
Darling: We’re seeing people respond to robots as though they were some lifelike thing, in between an object and an animal. There’s this one camp that says this is awesome—we can create great engagement with people and there are all these other uses in education and health contexts.
Then there’s the other camp that says it’s bad—that we should prevent people from seeing robots this way. Neil Richards and Bill Smart are two people in the robot law community who have argued this. They say that if we treat robots as something other than the tools they are, that idea will bleed over into legal regulations.
My sense is that it depends. We have a lot of categories for different types of robots. In some cases it might make sense to encourage anthropomorphizing them and maybe the law should reflect that. We could create different legal structures to deal with the fact that people are treating these robots as something other than a tool.
On the other hand, there are times when we should use robots strictly as tools and discourage anthropomorphizing—like in the context of military robots. In that case, it can be inefficient or dangerous to anthropomorphize robots.
Lalji: When you talk about military use, are you referring to demining robots? Could you expand on why anthropomorphizing them can be dangerous?
Darling: A woman named Julie Carpenter did her Ph.D. thesis on this. She studied these bomb disposal robots that are used in the military. It turns out that soldiers really bond with them and sort of treat them like pets.
“People are realizing that they feel for these robots.”
P.W. Singer’s book Wired for War has some anecdotes of people risking their lives to save these robots, which is really concerning. These robots aren’t supposed to engage you emotionally; they’re supposed to be bomb-disposal tools that detonate land mines. You don’t want people hesitating for even a second to use them the way they’re supposed to be used.
Lalji: Is there a way to create robots that are not shaped in any recognizable way? Would that be a solution?
Darling: I think that’s probably helpful. One issue is that a lot of robots look very similar to animals. Animals have evolved over many years to interact efficiently with the physical world, so their body structure is a great model for robots. It would be sad to make robots less efficient because we have to design them in a way that’s not as evocative to us.
As far as the military goes, our research shows that it would help if they had policies in place where the robots were described only in non-personified terms and soldiers were trained not to personify them. That would make the soldiers feel a little less empathy toward the robots.
But on the other hand, I think the big driver of our emotional responses in this area is the movement of the robots. I’m not sure how much we can mitigate those effects, honestly. I think the first step is to be aware that they exist. We’re slowly getting there.
People used to think I was completely crazy, and now my research is getting more attention and being taken seriously. I think the military is aware of these problems. There was also the “Introducing Spot” video that Boston Dynamics released of a new robot that looked kind of like a dog. In the video, someone kicks the robot, and it sparked all of this outrage about how Spot was being abused.
Lalji: Yeah, I remember that. I think PETA even released a statement, right?
Darling: Yeah, that got some attention. People are realizing that they feel for these robots, and they are wondering what it will be like when we have them all over the place. People are warming to the idea, but I still wish it were being discussed more seriously.
Lalji: What do you hope for in the future regarding robot technology and your own work?
Darling: I wish that people would start talking about this seriously, so we can lay the foundation for how we’re going to deal with this as a society. We should figure this out soon, so we won’t need to regulate things after the fact.
The main thing I want to get across to people is that this is not science fiction. It’s something that’s happening right now with the very primitive technology that we have. These issues won’t go away.
We’re going to have more and more of this technology, and people should be excited about it because it’s cool. And they should also take it seriously.