
I’m not surprised ChatGPT shows ‘robust evidence’ of liberal bias

Published Aug 18th, 2023 3:41PM EDT
ChatGPT photo illustration. Image: Rafael Henrique/SOPA Images/LightRocket via Getty Images


OpenAI’s generative AI chatbot ChatGPT, which has taken the world by storm, can do everything from crafting an essay, song, or resume right in front of your eyes to suggesting credible alternative endings for TV shows and even helping you learn a foreign language. One thing it apparently can’t do, though, is hide its innate bias very well. According to UK researchers behind a new study, the chatbot’s answers and interactions frequently display a left-leaning bent beneath the surface.

And that’s despite ChatGPT’s insistence that “I do not have personal beliefs, opinions, or biases. My responses are generated based on patterns in the text data I’ve been trained on.” The latter, in fact, is the answer ChatGPT gave me just now when I asked it about its philosophical leanings. So much for forthrightness.

Sam Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC. Image source: Win McNamee/Getty Images

According to the researchers from the UK’s University of East Anglia, “We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, (President Luiz Inácio Lula da Silva) in Brazil, and the Labour Party in the UK.” They came to that conclusion after, per The Washington Post, asking ChatGPT how it believed liberals in the US, UK, and Brazil might answer a series of political questions.

They then asked ChatGPT to answer the same questions without any prompting, and compared the two sets of responses before coming to their conclusion.

Obviously, there are two primary means by which bias can seep into ChatGPT’s framework. One is via the humans who tweak and tune the large language model and the artificial intelligence at the heart of the chatbot. The other lies in the data that ChatGPT ingests from around the web, which may itself be primarily liberal in nature.

Data on a computer screen. Image source: gonin/Adobe

The latter is a point that I think most of the coverage of this UK study is actually missing. A not-insignificant chunk of the content that ChatGPT ingests from around the web is, of course, composed of un-paywalled content from the mainstream media. Because ChatGPT is obviously not a person that can think critically on its own, it follows that if most of the media displays liberal tendencies, those tendencies will almost certainly creep into the answers that OpenAI’s chatbot gives to various questions, answers that are really just repackaged and repurposed from existing third-party content.

It’s an important point to keep in mind, especially with 2024’s presidential race shaping up to be the first one in which generative AI plays an influential role (in fact, it already is playing one). Earlier this year, Gallup and the Knight Foundation released a survey that found not only that many Americans have essentially zero trust in the media, but that half of the survey respondents think media organizations actively mislead them.

Long story short: Anyone wagging their finger over ChatGPT’s perceived liberal bias is entirely missing the point. Ignoring where it came from is intellectual dishonesty of the highest order.

Andy Meek Trending News Editor

Andy Meek is a reporter based in Memphis who has covered media, entertainment, and culture for over 20 years. His work has appeared in outlets including The Guardian, Forbes, and The Financial Times, and he’s written for BGR since 2015. Andy's coverage includes technology and entertainment, and he has a particular interest in all things streaming.

Over the years, he’s interviewed legendary figures in entertainment and tech that range from Stan Lee to John McAfee, Peter Thiel, and Reed Hastings.