
Google AI Overviews strike again following the fatal Air India crash

Published Jun 13th, 2025 12:17PM EDT
Image: Google announced AI Overviews for Google Search at I/O 2024. Credit: Google Inc.


Google said at I/O 2025 that AI Overviews are quite popular with users, but I’ve always found them to be the worst kind of AI product. Google is forcing AI results onto as many Google Search queries as it can simply because it can, not because users actually want AI Overviews in Search.

AI Mode, by contrast, is generative AI in Google Search done the right way. It lives in its own tab, so using it is an intentional choice by the user to enhance the Search experience with a Gemini-powered chat.

The reason I don’t like AI Overviews being forced on users so aggressively is their well-known accuracy problems. We’ve learned the hard way that AI Overviews hallucinate badly; the glue-on-pizza incident won’t be forgotten anytime soon. While Google has improved AI Overviews since then, the AI-powered Search results still make mistakes.

The latest one concerns the fatal Air India crash from earlier this week. Some people who rushed to Google Search to find out what happened saw an AI Overview claiming that an Airbus operated by Air India crashed on Thursday, soon after takeoff.

Some AI Overviews even mentioned the type of plane, an Airbus A330-243. In reality, it was a Boeing 787.

I’ve said more than once that Google should abandon AI Overviews. The glue-on-pizza hallucinations were one thing. They were funny. Most people probably realized the AI made a mistake. But this week’s hallucination is different. It spreads incorrect information about a tragic event, and that can have serious consequences.

The last thing we want from genAI products is to be misled by false information, yet that’s exactly what AI Overviews do when they hallucinate. It doesn’t matter how rare these issues are. One mistake like the one involving the Air India crash is enough to cause real harm.

This isn’t just about Google’s reputation. Airbus could be directly impacted. Imagine investors or travelers making decisions based on that search result. Sure, they could seek out real news sources. But not everyone will bother to verify the snippet at the top of the page.

Google’s disclaimer that “AI responses may include mistakes” isn’t enough. Not everyone notices, or even reads, that fine print.

At least Google corrected this hallucination and gave Ars Technica the following statement:

As with all Search features, we rigorously make improvements and use examples like this to update our systems. This response is no longer showing. We maintain a high quality bar with all Search features, and the accuracy rate for AI Overviews is on par with other features like Featured Snippets.

I’ll also point out that not every AI Overview necessarily listed Airbus as the crashed plane. Results vary depending on what you ask and how you phrase it, so some users might have gotten the correct answer on the first try. We don’t know how many times the Airbus detail appeared by mistake.

AI Overviews might make similar mistakes on topics beyond tragic news events. We have no way of knowing how often they hallucinate, no matter what Google says about accuracy.

If you’ve been following AI developments over the past few years, you probably have a sense of why these hallucinations happen. The AI doesn’t think like a human; it generates the most statistically plausible text from the sources it draws on. If those reports mention both Airbus and Boeing, it can blend the details and attribute the crash to the wrong manufacturer.

And it’s not just AI Overviews. We’ve seen other genAI tools hallucinate too. Research has even shown that the most advanced ChatGPT models hallucinate more than earlier ones. That’s why I always argue with ChatGPT when it fails to give me sources for its claims.

But here’s the big difference. You can’t opt out of AI Overviews. Google has pushed this AI search experience on everyone without first ensuring the AI doesn’t hallucinate. AI Mode, by contrast, is a much better use of AI in Search. It can genuinely improve the experience.

I’ll also add that instead of talking about AI Overviews and their hallucinations, I could be praising a different AI initiative from Google. DeepMind is using AI to forecast hurricanes, which could be incredibly helpful. But here we are, focusing on AI Overviews and their errors, because misleading users with AI is a serious problem. Hallucination remains an AI safety issue that nobody has solved yet.

Chris Smith, Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.