Humanoid robot with ChatGPT puts human social reactions to the test

Real interactions with Pepper reveal mixed emotions and challenges in human-robot communication.

In an experiment conducted by the University of Canberra, the Pepper robot, equipped with the ChatGPT language model (GPT-3.5 Turbo), was presented to the public during an innovation festival in Australia. The objective: to observe how people react to their first contact with an artificial intelligence embodied in a robotic body.
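The article does not describe how the researchers wired the language model into the robot, but the basic pattern is simple: each visitor utterance, together with the conversation so far, is sent to the model and the reply is spoken back. Below is a minimal, hypothetical Python sketch of one such turn using the OpenAI Chat Completions API with gpt-3.5-turbo; the system prompt, the history handling, and the idea of handing the reply to a text-to-speech service are illustrative assumptions, not the study's actual code.

```python
# Hypothetical sketch of one conversational turn between a visitor and
# a GPT-3.5 Turbo-backed robot. Only the OpenAI Chat Completions call
# is a real API; everything around it stands in for the robot's
# (unspecified) speech and gesture pipeline.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Running conversation context, seeded with an assumed persona prompt.
history = [
    {"role": "system",
     "content": "You are Pepper, a friendly humanoid robot chatting "
                "with visitors at an innovation festival."}
]

def reply_to(utterance: str) -> str:
    """Send the visitor's words plus prior context to the model."""
    history.append({"role": "user", "content": utterance})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
        max_tokens=120,  # keep spoken answers short
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# On the robot, the returned text would be passed to a text-to-speech
# service and paired with gestures; here we simply print it.
if __name__ == "__main__":
    print(reply_to("Hi Pepper, what do you think of the festival?"))
```

In a real deployment, the latency of this round trip and the coordination between the spoken reply and the robot's gestures are exactly the kinds of details participants noticed, as the findings below show.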

Pepper robot interaction with humans

Over ten days, 88 people interacted with Pepper inside a transparent capsule designed to minimize the noise of the event without isolating participants from the social environment. The conversations were semi-structured and, at the end of each one, participants shared their impressions through open-ended responses.

Most expressed complex and often contradictory emotions, ranging from enthusiasm and surprise to frustration with the technical limitations of the system. The most frequent criticisms were linked to the lack of synchrony between the robot’s verbal and body language, as well as failures in understanding different accents.


Video presentation of Pepper. Source: Aldebaran

Unlike a conventional chatbot, Pepper has a physical presence: it makes eye contact, moves, and responds aloud. This aspect provoked mixed responses. Some participants felt a greater connection thanks to the robotic body, while others found it unnecessary or even disappointing when its gestures did not match what it was saying.

Several users also attributed personality and emotions to the robot, referring to it with female pronouns and commenting that “maybe it needed a vacation.” This anthropomorphization opens the door to sociotechnical questions about how we perceive AI systems when they take humanoid form.

Conversational turn-taking and human expectations

Another sore point was the robot’s failure to follow basic rules of social interaction. Several participants said they could not interrupt it, that it talked too much, or that it did not respond when they tried to steer the conversation. These breakdowns in turn-taking were seen as barriers to smooth communication.

Participants also pointed to the system’s limited ability to perceive human emotions or respond with empathy, suggesting that users expect an interaction closer to what they would have with another person.

The study also revealed significant accessibility issues. Participants from diverse backgrounds, including Spanish speakers and members of Indigenous Australian communities, reported that Pepper either did not understand them correctly or responded with generic phrases that failed to recognize cultural nuances.

These observations reinforce the need to integrate diversity and inclusion criteria in the development of artificial intelligence systems operating in real and multicultural environments.

This experience represents an advance in human-robot interaction studies conducted in uncontrolled settings. In contrast to laboratory trials, the work highlights how the general public relates to the technology when it becomes part of everyday life.

Details of the research were published in the journal Nature.


Source and photo: Nature