
AI mimicked human communication in this fascinating study

Published May 15th, 2025 6:47PM EDT
Two AI models on a date, as imagined by ChatGPT.
Image: Chris Smith, BGR via ChatGPT


Since ChatGPT went viral in late 2022, we have seen plenty of research into how AI models behave. Researchers wanted to know how the models operate, whether they cheat at assigned tasks, and whether they lie to ensure their own survival.

Such studies are as important as the research into creating better, smarter models. We can’t responsibly move to more advanced versions of artificial intelligence before we understand the AIs we already have and can ensure they remain aligned with our interests.

Most of these studies involve experiments with one AI model at a time. But we’ve reached a point where human-AI interaction will no longer be the only kind of interaction involving artificial intelligence.

We’re in the early days of AI agents: more advanced versions of models like ChatGPT and Gemini that can do things for users, like browsing the web, shopping online, and writing code. Inevitably, these agents will end up meeting other AI agents, and they will have to socialize in a safe way.

That was the premise of a new study from City St George’s, University of London, and the IT University of Copenhagen. Different AIs will inevitably interact, and the researchers wanted to see how those interactions would play out.

They devised a game that mimics human speed dating. Groups of AI agents were given a simple task: agree on a common single-letter name. It took the AIs only about 15 rounds to reach a consensus, whether the experiment involved 24 agents or as many as 200, and whether they could choose from 10 letters or the full alphabet.

The “speed-dating” game itself was simple. Two AIs were paired, and each was told to pick a letter as a name. If both agents picked the same letter, each earned 100 points; if they picked different letters, each lost 50 points.

Once the first round was over, the AIs were re-paired at random, and the game continued. Crucially, each model could only remember its last five interactions. By round six, therefore, an agent would no longer recall the letter chosen in its first pairing.
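To make the mechanics concrete, here’s a minimal Python sketch of that loop. To be clear, this is not the researchers’ code: a simple frequency heuristic (repeat whichever letter you’ve seen most lately) stands in for the LLMs’ actual reasoning, and every parameter, including the `committed` option used further down, is illustrative.

```python
import random
from collections import Counter

def simulate(n_agents=24, letters="ABCDEFGHIJ", rounds=30,
             memory_len=5, committed=0, target="Z", seed=None):
    """Toy version of the naming game. Agents are randomly re-paired each
    round and pick whichever letter they've seen most often in their last
    `memory_len` interactions; the first `committed` agents always answer
    `target` (a stand-in for the determined minority discussed below)."""
    rng = random.Random(seed)
    history = [[] for _ in range(n_agents)]   # letters each agent has seen

    def pick(agent):
        if agent < committed:                 # stubborn agents never waver
            return target
        seen = Counter(history[agent])
        return seen.most_common(1)[0][0] if seen else rng.choice(letters)

    for _ in range(rounds):
        order = list(range(n_agents))
        rng.shuffle(order)                    # "speed dating": fresh random pairs
        for a, b in zip(order[::2], order[1::2]):
            pa, pb = pick(a), pick(b)
            # The +100/-50 scoring is implicit here: remembering both picks
            # is what nudges agents toward whatever letter is spreading.
            for agent, own, other in ((a, pa, pb), (b, pb, pa)):
                history[agent] = (history[agent] + [own, other])[-2 * memory_len:]

    tally = Counter(pick(a) for a in range(n_agents))
    return tally.most_common(1)[0][0], tally

if __name__ == "__main__":
    winner, tally = simulate(seed=1)
    print("population converged on:", winner, dict(tally))
```

Even with this crude stand-in for an LLM, random drift plus limited memory tends to pull the whole toy population toward a single letter within a few dozen rounds.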

The researchers found that by round 15, the AIs would settle on a common name, much like humans settle on communication and social norms. Speaking to The Guardian, the study’s senior author, City St George’s professor Andrea Baronchelli, offered a human social norm we’ve recently established by consensus:

“It’s like the term ‘spam’. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email,” the professor said. He also explained that the AI agents in the study are not copying a leader. Each one coordinates only within its current pair, the one-on-one date, trying to land on the same name as its partner.

That AI agents eventually coordinate wasn’t the study’s only conclusion. The researchers also found that the models formed collective biases. Although asking for a single-letter name should make the initial choice essentially random, some AI models gravitated toward certain letters. This, too, mimics the biases we humans carry into everyday life, including our communication and social norms.
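The toy sketch above can at least illustrate how such a collective bias is measured: run many independent populations and tally which letter each one converges on. For the unbiased toy agents the tally comes out roughly flat; the study’s point is that LLM populations skewed toward particular letters even though no individual model was told to prefer one.

```python
# Tally the winning letter across 200 independent toy populations.
# A roughly flat tally means no collective bias; the study found that
# LLM populations instead gravitated toward particular letters.
from collections import Counter
wins = Counter(simulate(seed=s)[0] for s in range(200))
print(sorted(wins.items(), key=lambda kv: -kv[1]))
```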

Even more interesting is that a small group of determined AI agents could eventually convince the larger group to adopt the smaller group’s letter as the shared name.

This is also relevant to human social interactions, showing how a committed minority can sway public opinion once its beliefs reach a critical mass.
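The same sketch can probe that tipping point, with the caveat that the minority sizes below are illustrative, not the critical mass the study measured: lock a few agents onto a letter nobody else starts with and grow their number until the rest of the population flips.

```python
# Grow a stubborn minority that always answers "Z" (a letter outside the
# usual pool) and watch for the point where the toy population tips to it.
for k in (1, 2, 4, 6, 8):
    winner, _ = simulate(committed=k, target="Z", seed=0)
    print(f"{k} committed agents -> consensus on {winner!r}")
```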

These conclusions are especially important for AI safety and, ultimately, for our safety.

In real life, AI agents interact with each other for different purposes. Imagine your AI agent wants to make a purchase from my online store, where my AI agent acts as the seller. Both of us will want everything to be secure and fast. But if one of our agents misbehaves and somehow corrupts the other, whether by design or accident, this can lead to a slew of unwanted results for at least one of the parties involved.

The more AI agents are involved in any sort of social interaction, each acting on a different person’s behalf, the more important it is for all of them to continue to behave safely while communicating with each other. The speed-dating experiment suggests that malicious AI agents with strong opinions could eventually sway a majority of others.

Imagine a social network populated by humans and attacked by an organized army of AI profiles tasked with proliferating a specific message. Say a nation-state is trying to sway public opinion with the help of bot profiles on social networks. A strong, uniform message, relentlessly disseminated by rogue AIs, would eventually reach the regular AI models people use for everyday tasks, which might then echo it, unaware they’re being manipulated.

This is just speculation from this AI observer, of course.

Also, as with any study, there are limitations. For this experiment, the AIs were given specific rewards and penalties. They had a direct motivation to reach a consensus as fast as possible. That might not happen as easily in real-life interactions between AI agents.

Finally, the researchers used only models from Meta (Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct) and Anthropic (Claude-3.5-Sonnet). Who knows how their specific training might have impacted their behavior in this social experiment? Who knows what happens when you add other models to this speed-dating game?

Interestingly, the older Llama 2 version needed more than 15 dates to reach a consensus. It also required a larger minority to overturn an established name.

The full, peer-reviewed study is available in Science Advances.

Chris Smith, Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.