In some situations, such as with socially supportive robots, social bonding with robots may be advantageous. For instance, in the care of older adults, social bonding with robots could lead to greater compliance with taking medication as prescribed.
According to a new study published by the American Psychological Association, people may perceive robots as capable of “thinking,” or acting on their own beliefs and desires rather than their programming, when the robots interact with people and appear to display human-like emotions.
The study comprised three experiments with a total of 119 participants. The researchers examined how individuals perceived a human-like robot, the iCub, after socializing with it and watching videos together. Participants completed a questionnaire before and after the interaction.
The questionnaire showed them pictures of the robot in different situations and asked them to choose whether the robot’s motivation in each situation was mechanical or intentional. For example, participants viewed three photos depicting the robot selecting a tool and then chose whether the robot “grasped the closest object” or “was fascinated by tool use.”
In the first two experiments, the scientists remotely directed iCub’s behavior so that it would act sociably: introducing itself, greeting individuals, and asking for their names. Cameras in the robot’s eyes detected participants’ faces and maintained eye contact.
The participants then watched three brief documentaries with the robot, which was programmed to respond with sad, amazed, or happy sounds and the corresponding facial expressions.
In the third experiment, the iCub was programmed to act more like a machine while it watched the videos with the participants. The cameras in its eyes were deactivated to prevent eye contact, and it spoke only recorded sentences about the calibration process it was undergoing. The researchers also replaced all of its emotional reactions to the videos with beeps and repetitive movements of its torso, head, and neck.
The results revealed that participants who watched the videos with the human-like robot were more likely to rate its actions as intentional rather than programmed, while those who interacted with the machine-like version were not.
This suggests that mere exposure to a human-like robot is not enough to make people believe it is capable of thoughts and emotions; human-like behavior may be crucial for a robot to be perceived as an intentional agent.
Study author Agnieszka Wykowska, Ph.D., a principal investigator at the Italian Institute of Technology, said, “The relationship between anthropomorphic shape, human-like behavior, and the tendency to attribute independent thought and intentional behavior to robots is yet to be understood. As artificial intelligence increasingly becomes a part of our lives, it is important to understand how interacting with a robot that displays human-like behaviors might induce a higher likelihood of attributing intentional agency to the robot.”