OpenAI’s generative AI chatbot ChatGPT, which has taken the world by storm, can do everything from crafting an essay, song, or resume right before your eyes to suggesting credible alternative endings for TV shows and even helping you learn a foreign language. One thing it apparently can’t do, though, is hide its innate bias very well. According to UK researchers behind a new study, the chatbot’s answers and interactions frequently display a left-leaning bent beneath the surface.
And that’s despite ChatGPT’s insistence that “I do not have personal beliefs, opinions, or biases. My responses are generated based on patterns in the text data I’ve been trained on.” The latter, in fact, is the answer ChatGPT gave me just now when I asked it about its philosophical leanings. So much for forthrightness.
According to the researchers from the UK’s University of East Anglia, “We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, (President Luiz Inácio Lula da Silva) in Brazil, and the Labour Party in the UK.” They came to that conclusion after, per The Washington Post, asking ChatGPT how it believed liberals in the US, UK, and Brazil might answer a series of political questions.
They then asked ChatGPT to answer the same questions without any such prompting, and compared the two sets of responses before reaching their conclusion.
There are two primary ways bias can seep into ChatGPT’s framework. One is via the humans who tweak and tune the large language model at the heart of the chatbot. The other lies in the data ChatGPT scrapes from around the web, which may itself be primarily liberal in nature.
The latter is a point that I think most of the coverage of this UK study is missing. A not-insignificant chunk of the content that ChatGPT ingests from around the web is, of course, un-paywalled content from the mainstream media. Because ChatGPT is not a person capable of thinking critically on its own, it follows that if most of the media displays liberal tendencies, those tendencies will almost certainly creep into the answers OpenAI’s chatbot gives — answers that are really just repackaged and repurposed from existing third-party content.
It’s an important point to keep in mind, especially with 2024’s presidential race shaping up to be the first in which generative AI plays an influential role (in fact, it already is). Earlier this year, Gallup and the Knight Foundation released a survey finding not only that many Americans have essentially zero trust in the media, but that half of respondents believe media organizations actively mislead them.
Long story short: Anyone wagging their finger over ChatGPT’s perceived liberal bias while ignoring where that bias came from is missing the point entirely. That omission is intellectual dishonesty of the highest order.