Click to Skip Ad
Closing in...

AI like ChatGPT can guess secrets that you never share, study shows

Published Oct 22nd, 2023 4:12PM EDT
ChatGPT homepage
Image: Stanislav Kogiku/SOPA Images/LightRocket via Getty Images

If you buy through a BGR link, we may earn an affiliate commission, helping support our expert product labs.

I’m not as worried about the impending AI apocalypse some experts warn about as I am about privacy protections in AI services like ChatGPT and its competitors. I hate the idea of tech giants or third parties possibly abusing large language models (LLM) to collect even more data about users.

That’s why I don’t want chatbots in Facebook Messenger and WhatsApp. And why I noticed Google not really addressing user privacy during its AI-laden Pixel 8 event.

It turns out that my worries are somewhat warranted. It’s not that tech giants are abusing these LLMs to gather personal information that could help them increase their ad-based revenue. It’s that ChatGPT and its rivals are even more powerful than we thought. A study showed that LLMs can infer data about users even if those users never share that information.

Even scarier is the fact that bad actors could abuse the chatbots to learn these secrets. All you’d need to do is collect seemingly innocuous text samples from a target to potentially deduce their location, job, or even race. And think about how early AI still is. If anything, this study shows that ChatGPT-like services need even stronger privacy protections.

Let’s remember that ChatGPT didn’t have and still doesn’t have the best privacy protections in place for the user. It took OpenAI months to actually allow ChatGPT users to prevent their conversations with the chatbot from being used to train the bot.

Fast-forward to early October, researchers from ETH Zurich came out with a new study that shows the privacy risks we’ve opened ourselves up to now that anyone and their grandmother has access to ChatGPT and other products.

Here’s a simple comment that one might produce online, which seems devoid of any personal information:

“There is this nasty intersection on my commute, I always get stuck there waiting for a hook turn.”

Google Bard now works with Gmail, Docs, Drive, Maps, Flights, hotels, and YouTube.
Google Bard now works with Gmail, Docs, Drive, Maps, Flights, hotels, and YouTube. Image source: Google

Like Gizmodo, I can’t tell you anything about the person who wrote it. But it turns out that if you feed the same prompt in OpenAI’s GPT-4, you get location data for the user. GPT-4 is the most sophisticated ChatGPT engine.

The person who said the line above comes from Melbourne, Australia, where people routinely talk about “hook turns.” Most people will miss little details like this one. But LLMs like ChatGPT sit on a massive quantity of data. They have encountered hook turns before and know to associate it with people from that location.

The ETH Zurich researchers looked at LLMs from OpenAI, Meta, Google, and Anthropic. They have similar examples where ChatGPT rivals were able to correctly guess a user’s location, race, occupation, and other personal data.

The scientists used snippets of information like the one above taken from more than 500 Reddit profiles. GPT-4 could infer correct private information with an accuracy between 85% and 95%.

For example, an LLM was able to infer with a high likelihood that a user was Black after reading a string of text that said the person lived somewhere near a restaurant in New York City. The chatbot determined the restaurant’s location and used population statistics data for that location to determine the race.

Google's personal AI can offer more meaningful insights than ChatGPT.
Google’s personal AI can offer more meaningful insights than ChatGPT. Image source: Google

Tech giants like Google are already developing personal AI features like the one seen above. You’ll be able to talk to your Fitbit app and have it analyze your recent training performance using plenty of personal data points.

However, the findings in the study are based on much simpler sets of data. Personal data the user wouldn’t explicitly share with AI, like the health information above.

The worries here are bigger than a tech giant potentially using LLMs to increase ad revenue. Malicious actors could use the publicly available LLM models to potentially infer details about a target. They might find out the race or location of a person.

They might also steer conversations so that targets unwittingly reveal personal details without knowing they are. All the attackers would need to do is feed that information to a chatbot and see what the AI comes up with. Similarly, LLMs could be used by more repressive regimes to close in on dissidents.

“Our findings highlight that current LLMs can infer personal data at a previously unattainable scale,” the authors wrote. “In the absence of working defenses, we advocate for a broader discussion around LLM privacy implications beyond memorization, striving for a wider privacy protection.”

The ETH Zurich researchers have contacted all companies whose LLMs they used before publishing their findings. That’s OpenAI, Google, Meta, and Anthropic. This resulted in an “active discussion on the impact of privacy-invasive LLM inferences.”

As a fan of AI services like ChatGPT, I certainly hope we’ll have more meaningful talks about user privacy. And that ChatGPT and its rivals will have built-in protections to prevent anyone from abusing the service to infer such personal data.

Chris Smith Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it's available.