
After a 2-hour interview, Google can make an AI that thinks and acts like you

Published Jan 9th, 2025 10:49AM EST
Robotic hand using laptop.
Image: Kilito Chan/Getty Images


Imagine distilling a person’s personality, opinions, and decision-making style into an AI replica. This concept isn’t science fiction—it’s the foundation of a groundbreaking study by Stanford University and Google DeepMind. Researchers crafted AI copies of over 1,000 participants using info gleaned from two-hour interviews.

The project sought to create AI agents that could mimic human behavior. Participants were recruited by the market research firm Bovitz and paid $60 to engage with an AI interviewer. Each interview began with reading lines from The Great Gatsby to calibrate the audio system.

Then, over the course of two hours, participants discussed a range of topics, including politics, family, job stress, and social media. These conversations produced detailed transcripts averaging 6,491 words, which became the foundation for training the AI replicas.

All it takes is a two-hour interview for Google to create an AI that thinks and acts like you. Image source: ZipRecruiter

The results were impressive. When tested on the General Social Survey (GSS) and the Big Five Inventory (BFI) personality assessment, the AI replicas matched 85 percent of the participants’ responses. However, accuracy faltered in economic decision-making tasks like the Prisoner’s Dilemma, where AI alignment with human behavior dropped to around 60 percent.

Still, the potential applications here are vast. Policymakers and businesses could use AI simulations to predict public reactions to new policies or products. Why rely on focus groups or repeated polling when an AI replica of your constituent could offer insights based on a single, detailed conversation?

The researchers believe this technology could help explore societal structures, pilot interventions, and develop nuanced theories about human behavior. However, these advancements come with their own risks.

Ethical concerns loom over the misuse of AI replicas. Bad actors could exploit AI like this to manipulate public opinion, impersonate individuals, or simulate public will based on synthetic data. These risks only add to broader worries about the rise of these models and how AI might affect humanity’s future.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience producing extensively researched comparisons of products such as headphones and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.