
The Meta AI app is currently going viral for all the wrong reasons

Published Jun 13th, 2025 4:52PM EDT
Meta AI iPhone app screenshots from App Store listing.
Image: App Store


Meta launched the Meta AI app in late April to take on ChatGPT and other chatbots. Unlike rival apps, Meta AI comes with social features that nobody asked for. But Meta's desire for Meta AI users to share their chats via a social feed isn't surprising. Social media is how Meta makes its money, and all of its apps are social apps. Bringing a social element to an AI chatbot could still work in Meta's favor.

However, that's hardly the case right now. Meta AI has gone viral this week for a serious problem. Rather than discussing a unique Meta AI feature that makes the chatbot a must-have AI product, people are talking about the wildly inappropriate chats taking place on the platform, which some users are publishing by mistake for everyone else to see.

Sharing AI chats is optional, but it looks like plenty of users don't realize what they're doing, or they don't care. Whatever the case, the Meta AI chats that have appeared on social media are deeply disturbing. They show what can go wrong when a company building frontier AI experiences doesn't handle user privacy correctly. Meta could do a better job of informing users that the "Share" button posts the Meta AI chat to the public Discover feed.

According to TechCrunch, around 6.5 million people have installed the standalone Meta AI app. The figure comes from app analytics firm Appfigures, not Meta, and it's hardly a user base that a company like Meta can brag about. Then again, Meta AI is a standalone app. It wasn't embedded in a more popular app like Instagram or WhatsApp, so it's up to users to seek it out and install it.

Before rolling the app out, Meta largely focused on forcing Meta AI experiences into all its social apps, including WhatsApp, Messenger, Instagram, and Facebook. That’s why Meta can say Meta AI has 1 billion monthly users.

The standalone Meta AI app has yet to achieve such reach. But even so, 6.5 million isn’t a small number. It shows that some people are genuinely interested in the Meta AI chatbot experience. However, not all of them know how to protect their privacy.

I haven't tried Meta AI, nor am I likely to install the app anytime soon. But privacy is one of my main concerns when it comes to AI products, and Meta could do a better job here. I haven't personally stumbled upon private Meta AI chats shared online by users who don't know (or don't care) how the social aspect works, but there are plenty of examples out there.

Here’s a take from TechCrunch:

Flatulence-related inquiries are the least of Meta’s problems. On the Meta AI app, I have seen people ask for help with tax evasion, if their family members would be arrested for their proximity to white-collar crimes, or how to write a character reference letter for an employee facing legal troubles, with that person’s first and last name included. Others, like security expert Rachel Tobac, found examples of people’s home addresses and sensitive court details, among other private information.

It keeps going, too. Here’s what Gizmodo found in the Discover feed, which is where the Meta AI chats go if you don’t know what you’re doing and press the Share button:

In my exploration of the app, I found seemingly confidential prompts addressing doubts/issues with significant others, including one woman questioning whether her male partner is truly a feminist. I also uncovered a self-identified 66-year-old man asking where he can find women who are interested in “older men,” and just a few hours later, inquiring about transgender women in Thailand.

Andreessen Horowitz partner Justine Moore posted screenshots of Meta AI chats in the Discover feed, summarizing some of what she saw in an hour of browsing:

  • Medical and tax records
  • Private details on court cases
  • Draft apology letters for crimes
  • Home addresses
  • Confessions of affairs…and much more!

None of these topics should be broached in your conversations with any AI model, whether it's Meta AI, ChatGPT, or a chatbot from one of the countless other AI startups.

What you can do

If you or someone you love is using Meta AI, you should ensure the privacy settings are set correctly. Gizmodo, which hilariously advises users to get their parents off of Meta AI, lists the steps needed to prevent Meta AI chats from making it to the Discover feed:

  1. Tap your profile icon at the top right.
  2. Tap “Data & Privacy” under “App settings.”
  3. Tap “Manage your information.”
  4. Then, tap “Make all your prompts visible to only you.”
  5. If you’ve already posted publicly and want to remove those posts, you can also tap “Delete all prompts.”

Also, don’t tap the Share button if you want to keep an AI conversation private. As TechCrunch and Justine Moore point out, the Meta AI chats are not public by default. But some people press the Share button in chats, unaware they’re sharing them with everyone else on the platform.

I'll also remind you that Meta uses the public posts you share on its social networks to train its AI. You might want to opt out of that if you haven't done so already. And if you don't want Meta AI to use that public information for more personalized responses, you'll want to opt out of that too.

Finally, remember that it’s not just older, less tech-savvy people using Meta AI in ways that might be inappropriate. You’ll want to check on your teens as well and see what sort of chats they might have with Meta AI.

Chris Smith Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.