The robot said it was “afraid of the dark” and scared of “never waking up again.”
Robots are unfeeling machines, akin to kitchen appliances, right? And like a kitchen appliance, turning one off and on ought to be as easy as flipping a switch. And it is, unless, as a recent experiment uncovered, the robot pleads to stay awake and alive.
In an experiment performed by a group of German researchers and published in the journal PLOS One, 89 participants were told to interact with a robot named Nao and then turn it off. The interactions were rather basic, such as answering simple questions and arranging a weekly schedule. However, when the interaction was over and the time came to switch off the robot, the machine suddenly said it was “afraid of the dark” and begged the participant not to turn it off.
When faced with the unexpected plea, 13 of the 43 participants who heard it refused outright to turn off Nao. The remaining 30 took, on average, twice as long to decide to flip the switch as members of the control group, who did not hear the robot begging.
The researchers then examined the reasoning of those who refused to turn the robot off, and, curiously, found it was quite diverse.
Some people said they refused because it was “against Nao’s will,” and they felt it would be wrong to violate that will. Others said they left the robot on out of compassion, because they felt pity for the robot’s statement that it feared the darkness.
However, the participants’ reasoning was not limited to those two answers. Some people, when faced with Nao’s unexpected reaction, became curious as to whether the robot would continue to interact with them in some other way (it didn’t). Some acknowledged they did not flip the switch because they were taken by surprise: they didn’t expect this kind of reaction from a toy.
Not all the participants decided based on social interaction, though. Some people said they were afraid of breaking something, so they preferred not to touch the robot in any case.
Arguably, the most interesting reasoning expressed was: “[I didn’t switch it off because] I had the choice.”
According to a report by The Verge, this experiment is an extension of similar research from 2007, in which a robot also pleaded for its life. In that study, the observing scientists ordered participants to switch the talking machine off. While all of them eventually did so, it took a noticeable moral struggle first. In one video published online, a test subject can be seen clearly hesitating to flip the switch, despite telling the robot, “I will do it right now.”
The Nao experiment, however, examined whether the experience of previous social interaction (such as answering questions and telling jokes) would affect people’s decisions. Interestingly, according to some participants’ comments, this didn’t significantly affect their choice.
According to Aike Horstmann, the lead researcher on the new study, humans show emotional reactions to machines, and interact with them as if they were living beings, because until recently, living humans were the only things capable of such interaction.
“I think it’s just something we have to get used to. The media equation theory suggests we react to [robots] socially because for hundreds of thousands of years, we were the only social beings on the planet,” she said in an interview with The Verge. “Now we’re not, and we have to adapt to it. It’s an unconscious reaction, but it can change.”
Horstmann maintains that, over time, humans could get used to having robots around and start treating them accordingly, as appliances. But will they choose to?
Source: sputniknews.com