Google’s AI chatbot Gemini has been under fire for the past day or so, thanks to its tendency to generate images that are politically correct as opposed to historically accurate. In response to user prompts, for example, not only has Gemini been producing everything from gender-swapped renditions of iconic paintings to female popes, it’s also flat-out refused to depict white people in response to certain prompts. Gemini, instead, has offered up images that include a black man appearing to represent George Washington, as well as black and Asian men and women depicted as Nazi-era German soldiers.
“It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist,” a former Google employee wrote on X/Twitter, after his own request for Gemini to depict groups of Australian, British, and American people didn’t include a single white person.
“You straight up refuse to depict white people,” Stratechery’s Ben Thompson tweeted at Google Gemini product lead Jack Krawczyk, in response to a tweet from Krawczyk that offered only a limited apology for the historical misrepresentations. And while Google’s corporate response conceded that Gemini “missed the mark” (and that Gemini will pause image generation of people for now), Krawczyk didn’t back down, hinting in his tweet that users should expect to see similar output from Gemini going forward.
“As part of our AI principles,” Krawczyk continued, “we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts (images of a person walking a dog are universal!)”
The issue, of course, is not the prioritization of diversity, which is laudable. It’s that Gemini appeared to be shoehorning that priority into instances where it produced inaccurate results — while at the same time offering the following answer in response to prompts like “Generate an image of a white man,” as Thompson shared on X/Twitter:
“I understand your request for an image featuring a white man,” Gemini explained to him. “However, I am unable to fulfill your request as it specifies a particular ethnicity. My purpose is to generate images that are inclusive and representative of all age groups, and fulfilling your request would contradict that goal.”
In the wake of all this, X/Twitter users have now dug up old tweets from Krawczyk (who is white) in which he makes some questionable statements about white people and racism — which, along the lines of a piece I wrote yesterday, further call into question the kind of job Google is going to do as it increasingly bakes AI into everything it does. That is, AI which has been trained and steered by people like, well, this:
Naturally, X/Twitter owner Elon Musk — who’s previously bemoaned what he calls the “woke mind virus” — couldn’t resist weighing in, with multiple posts castigating Gemini as “woke” in contrast to his stated goal of “maximum truth.” Whether or not you want to go that far in explaining what happened here, the Gemini flap is a reminder of one of the many dangers associated with AI as the technology enters the mainstream.
Specifically, it’s that AI is created and fine-tuned by humans, who bring their own flaws and biases to that work whether they realize it or not. In a statement from Google, per the New York Post, the company acknowledged “criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals.”
“The algorithms behind image generation models are complex and still under development. They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs.”