Humans show sympathy towards bullied AI bots

People protected AI bots that were excluded from playtime.


In a fascinating study conducted at Imperial College London, researchers observed humans showing empathy towards, and stepping in to protect, AI bots that were excluded from playtime. The study, which used a virtual ball game, sheds light on people’s natural inclination to treat AI agents as social entities, and the findings underline the importance of taking this human tendency into account when designing AI bots.

“This is a unique insight into how humans interact with AI, with exciting implications for their design and our psychology,” said lead author Jianan Zhou from Imperial’s Dyson School of Design Engineering.

People are increasingly reliant on AI virtual agents to access services and even for social interaction. However, the new findings suggest that developers should steer clear of creating agents that are overly human-like.

“A small but increasing body of research shows conflicting findings regarding whether humans treat AI virtual agents as social beings. This raises important questions about how people perceive and interact with these agents,” said senior author Dr Nejra van Zalk, also from Imperial’s Dyson School of Design Engineering.

“Our results show that participants tended to treat AI virtual agents as social beings because they tried to include them in the ball-tossing game if they felt the AI was being excluded. This is common in human-to-human interactions, and our participants showed the same tendency even though they knew they were tossing a ball to a virtual agent. Interestingly, this effect was stronger in the older participants.”

Humans appear hardwired to feel empathy and take action against unfairness. Previous studies have shown that people tend to support ostracized individuals by including them more while also developing negative feelings towards those who engage in exclusionary behavior.

To further explore this phenomenon, researchers observed 244 participants, aged 18 to 62, as they witnessed an AI virtual agent being excluded from a game called ‘Cyberball’ by another human player. In ‘Cyberball,’ players pass a virtual ball to each other on-screen.

In various game scenarios, the other human player either included the bot generously, throwing the ball to it multiple times, or unfairly excluded it by throwing only to the human participant.

As well as being observed, participants were surveyed to gauge their reactions to the unfair treatment and to understand why they chose to support the bot by throwing the ball to it after such unfairness.
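To make the two game setups concrete, below is a minimal simulation sketch (in Python) of how the scripted co-player’s tosses might be distributed under inclusion versus exclusion. The condition names, player labels, and toss probabilities are illustrative assumptions; the article does not describe the researchers’ actual Cyberball implementation.

    import random

    # Illustrative sketch of the two Cyberball conditions described above.
    # Roles, condition names, and toss probabilities are assumptions for
    # illustration; they are not taken from the study itself.

    def simulate_coplayer_tosses(condition, n_tosses=30, seed=0):
        """Count how often the scripted co-player throws to each other player."""
        rng = random.Random(seed)
        if condition == "inclusion":
            # The co-player shares the ball roughly evenly between participant and bot.
            weights = {"participant": 0.5, "bot": 0.5}
        elif condition == "exclusion":
            # The co-player throws only to the human participant, leaving the bot out.
            weights = {"participant": 1.0, "bot": 0.0}
        else:
            raise ValueError(f"unknown condition: {condition}")
        targets = rng.choices(list(weights), weights=list(weights.values()), k=n_tosses)
        return {name: targets.count(name) for name in weights}

    if __name__ == "__main__":
        for condition in ("inclusion", "exclusion"):
            print(condition, simulate_coplayer_tosses(condition))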

Screenshot from the Cyberball game used in the study. Credit: Imperial College London

The results revealed a compelling trend: most participants sought to right the wrongs by favoring the bot when distributing the ball. Interestingly, it was observed that older participants were more sensitive to instances of unfair treatment.

The researchers argue that as AI virtual agents gain traction in collaborative tasks, heightened interaction with humans could lead to increased familiarity and automatic processing. This could result in users instinctively treating virtual agents as genuine team members and engaging with them socially.

They emphasize that while this could be beneficial for work collaboration, it could also raise concerns when virtual agents are used as substitutes for human relationships or as advisors on physical or mental health.

Jianan said, “By avoiding designing overly human-like agents, developers could help people distinguish between virtual and real interaction. They could also tailor their design for specific age ranges, for example, by accounting for how our varying human characteristics affect our perception.”

The researchers have highlighted a crucial point about the limitations of Cyberball in representing real-life human interactions. They suggest that the typical human interactions through written or spoken language with chatbots or voice assistants differ significantly from the Cyberball scenario. This mismatch might have led to conflicting user expectations and a sense of unfamiliarity among participants, potentially influencing their responses during the experiment.

In response to this, the researchers are working on designing new experiments that involve face-to-face conversations with agents in diverse settings, including controlled laboratory environments and more informal settings. By doing so, they aim to explore the generalizability of their findings and gain a deeper understanding of human-agent interactions.

Journal reference:

  1. Jianan Zhou, Talya Porat, Nejra van Zalk. Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment. Human Behavior and Emerging Technologies, 2024; DOI: 10.1155/2024/8864909