3 Uncomfortable Truths About Using Google Gemini

Google Gemini, like other generative AI assistants, has gained a lot of popularity over the last two years, largely due to its inclusion in some of the company's most popular consumer-facing products, such as AI Mode in Google Search and Gemini Live in Android Auto. It's available on most platforms and can help you code, solve complex problems, or even generate some unbelievable images via the Nano Banana generator.

However, Gemini is far from perfect, and just as there are uncomfortable truths related to ChatGPT, Google's AI assistant has its own downsides that every Gemini user should keep in mind. It's not always accurate and can hallucinate, much like most other generative AI models. There are also privacy trade-offs, as human reviewers go through some of the chats people have with Gemini to improve the model. Additionally, Gemini has been accused of bias and, at times, of overcorrecting for it. So, if you're serious about using Google Gemini, it's important to understand its trade-offs and hidden realities.

Privacy trade-offs

While it's easy to get enamored with the convenience Gemini offers, it's equally important to consider the privacy risks. As Gemini's privacy policy explicitly states, Google collects pretty much everything you say to or share with the AI assistant, including your prompts (both written and spoken) and any files, photos, screens, and videos you share with it. It also collects information about the devices on which you use the Gemini app. While much of that data will likely never be seen by human eyes, Google notes in the same policy that human reviewers (some of whom don't even work for Google) review a portion of the collected data, including your chats.

So, while Google takes steps to anonymize the collected data, the risk remains: if you share sensitive documents or conversations with Gemini, another human may end up seeing them, not just an AI. There are options to opt out of this data collection, but even if you do, Google keeps the data for 72 hours for safety, security, and feedback processing. Moreover, opting out only applies to future conversations, not past ones. And although you can delete past conversations, Google may retain them for up to three years if they were already flagged by a human reviewer.

AI hallucinations

With Gemini-powered AI Mode front and center in Google Search and AI Overviews often showing up above web results, you are bound to come across information provided by Gemini whether you want to or not. However, it's important to remember that Gemini's responses aren't always accurate, and this is no secret: every conversation with Gemini carries a persistent footnote warning, "Gemini can make mistakes, so double-check it." Despite the warning, the confidence with which Gemini and other AI chatbots spew misinformation can be dangerous if you are not careful, and the problem is so prevalent that these often-shared inaccuracies and falsehoods have earned their own moniker: AI hallucinations.

Well-known Gemini hallucinations include the suggestion to use non-toxic glue to keep cheese from sliding off a pizza and the recommendation to eat one rock a day for minerals and vitamins. Currently, there is no way to entirely avoid these hallucinations. The only real safeguard is to treat any factual claim Gemini makes with a grain of salt, so if you are using Gemini for anything other than creative tasks, it's best to verify the information yourself.

Overcorrection of bias

In trying to avoid the long-standing issue of racial and gender bias in AI models, Gemini has often fallen into the opposite trap: overcorrection. This is not a minor quirk; it can produce results that are both offensive and inaccurate. The most infamous examples surfaced in 2024, when users asked Gemini's image generation tool for images of historical figures and were met with racially diverse Founding Fathers, Asian Vikings, and people of color in 1940s German soldier uniforms.

This overcorrection stems from a "hard-coded" push to avoid showing only white men in positions of authority, but it can end up rewriting history. Although Google apologized for the offensive imagery and rolled out fixes meant to prevent similar incidents, there is no guarantee you won't see an equally heavy-handed approach to avoiding bias in the future.
