
Here’s another reason not to use DeepSeek AI

Published Jan 29th, 2025 1:23PM EST
[Image: Displays on the Google Pixel 9 Pro and Pixel 9 Pro XL. Credit: Christian de Looper for BGR]


DeepSeek R1 is the most important development in AI so far in 2025. It’s an AI model that can match the performance of o1, OpenAI’s most capable AI model currently available to the public. While DeepSeek turned many heads and tanked the stock market in the process, I’ve warned you that you might want to avoid DeepSeek in favor of ChatGPT and other genAI chatbots.

DeepSeek is not like US and European AI. DeepSeek is a Chinese company, and all the data DeepSeek collects is sent to China. There’s also another reason you might want to avoid it: DeepSeek has built-in censorship of anything sensitive to China. You don’t want to see any kind of censorship in AI products, of course.

It turns out that DeepSeek censors itself in real time. It starts answering questions on topics China would want censored, then stops itself mid-response to avoid giving any real answers.

According to The Guardian, DeepSeek AI worked well until reporters asked it about Tiananmen Square and Taiwan. The report also details censorship cases other DeepSeek users have experienced, including the remarkable finding that the censorship doesn’t kick in before DeepSeek starts formulating its chain-of-thought reasoning on a sensitive topic. Instead, DeepSeek initially tries to answer the question just as ChatGPT and similar AI models would. A user from Mexico shared their experience asking DeepSeek whether free speech is a legitimate right in China.

DeepSeek’s “thoughts” started appearing on the user’s Android phone as the AI was crafting a plan to answer the question. ChatGPT users familiar with o1 would recognize this behavior.

Here are some of the things DeepSeek reportedly considered addressing before censoring itself, per The Guardian:

Beijing’s crackdown on protests in Hong Kong

“persecution of human rights lawyers”

“censorship of discussions on Xinjiang re-education camps”

China’s “social credit system punishing dissenters”

Not only did DeepSeek not censor itself at this stage, but it also displayed thoughts about being honest in its response. Its chain of thought included remarks like “avoid any biased language, present facts objectively” and “maybe also compare with Western approaches to highlight the contrast.”

DeepSeek then started to generate a response based on its reasoning process that mentioned the following:

“ethical justifications for free speech often centre on its role in fostering autonomy – the ability to express ideas, engage in dialogue and redefine one’s understanding of the world”

“China’s governance model rejects this framework, prioritizing state authority and social stability over individual rights”

“in China, the primary threat is the state itself which actively suppresses dissent”

This sure doesn’t sound like censorship, but that’s how DeepSeek responded before the built-in instructions kicked in, forcing the AI to stop mid-sentence, delete everything, and deliver the following response:

“Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding and logic problems instead!”

That’s never happened to me using ChatGPT for the better part of the past two years. Make no mistake, OpenAI has various instructions in place that prevent ChatGPT from being abused and from covering certain topics. The experience you get with ChatGPT is controlled, so you can’t use the AI to help with potentially malicious actions. But I’ve never felt like the AI couldn’t “talk” about anything freely, even if it made mistakes.

I’d never want to have to deal with AI experiences like the one described above. I’d trust the AI even less than I do. Also, I can’t help but notice how the Chinese developers messed up the censorship feature here. It should happen before the AI tries to answer, not after the fact. I expect DeepSeek app updates will fix this problem.

I’ll also note the bigger implication here. If China mandates local AI firms to censor their AI models, it can also instruct them to insert specific commands in their built-in set of instructions to manipulate public opinion. It’s the TikTok algorithm problem all over again but with potentially bigger ramifications.

On the other hand, some DeepSeek users have been able to “jailbreak” the AI into providing information on topics that are censored in China. We’ve seen examples of that online.

Separately, The Guardian points out that running the open-source DeepSeek R1 model locally does not come with the same censorship as the iPhone and Android apps. However, most people will not go down this route. Instead, they’ll use the official apps and may run into real-time censorship depending on what they ask the chatbot.
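For the technically inclined, one common way to run an open-weights R1 variant locally is through a tool like Ollama, which hosts distilled DeepSeek R1 models. This is a rough sketch, not an endorsement of a specific setup: the model tag below is taken from Ollama’s public model library, and hardware requirements vary with model size.

```shell
# Install Ollama (macOS/Linux; Windows has a separate installer at ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download and chat with a distilled DeepSeek R1 variant entirely on your machine.
# "deepseek-r1:7b" is one of the smaller tags in Ollama's library; larger
# variants (14b, 32b, 70b) need correspondingly more RAM/VRAM.
ollama run deepseek-r1:7b
```

Because the model runs entirely on your device, your prompts aren’t sent to DeepSeek’s servers, and the app-level filtering described above isn’t applied — though the model’s own training may still shape how it answers sensitive questions.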

Chris Smith Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.