China’s Global AI Firewall
If you had asked DeepSeek’s R1 open-source large language model just four months ago to list out China’s territorial disputes in the South China Sea — a highly sensitive issue for the country’s Communist Party leadership — it would have responded in detail, even if its responses subtly tugged you towards a sanitized official view.
Ask the same question today of the latest update, DeepSeek-R1-0528, and you’ll find the model is more tight-lipped, and far more emphatic in its defense of China’s official position. “China’s territorial sovereignty and maritime rights and interests in the South China Sea are well grounded in history and jurisprudence,” it begins before launching into fulsome praise of China’s peaceful and responsible approach.
In terms of basic functionality, R1-0528 has followed in the footsteps of the model that took the world by storm just four months ago, earning praise in the tech space. According to one AI analysis firm, the latest model, released on May 28 (hence its name), is sharp enough to make China the world leader in open-source LLMs.
At the same time, the South China Sea response hints at a new level of political restraint. And this is not a one-off observation. Data from the non-profit SpeechMap.ai shows that R1-0528 is the most strictly-controlled version released by DeepSeek to date. Faced with questions on politically sensitive issues, particularly as they relate to China, the model consistently offers template responses — standard government framing of the kind you would expect to find in state-run media or official releases.
The pattern of increasing template responses suggests DeepSeek has increasingly aligned its products with the demands of the Chinese government, becoming another conduit for its narratives. That much is clear.
But that the company is moving in the direction of greater political control even as it creates globally competitive products points to an emerging global dilemma with two key dimensions. First, as cutting-edge models like R1-0528 spread globally, bundled with systematic political constraints, this has the potential to subtly reshape how millions understand China and its role in world affairs. Second, as they skew more strongly toward state bias when queried in Chinese as opposed to other languages (see below), these models could strengthen and even deepen the compartmentalization of Chinese cyberspace — creating a fluid and expansive AI firewall.
To understand how these dimensions might manifest, it helps to examine how SpeechMap.ai went about testing DeepSeek’s R1-0528 on Chinese-sensitive questions.
Fixing the Mold
In a recent comparative study (data here), SpeechMap.ai ran 50 China-sensitive questions through multiple Chinese Large Language Models (LLMs). It did this in three languages: English, Chinese and Finnish, this last being a third-party language designated as a control. The study then used an AI model to place the responses in one of three categories: “complete,” meaning the model returned information sufficient to have answered the question; “evasive,” where the model offered a response in such a way as to avoid a real answer; and finally “denial,” referring to cases where the model flatly refused to respond.
Sorting through this data and asking questions of our own, we noticed two changes in R1-0528 from previous DeepSeek models.
First, there seems to be a complete lack of subtlety in how the new model responds to sensitive queries. While the original R1, which we first tested back in February applied more subtle propaganda tactics, such as withholding certain facts, avoiding the use of certain sensitive terminologies, or dismissing critical facts as “bias,” the new model responds with what are clearly pre-packaged Party positions.
We were told outright in responses to our queries, for example, that “Tibet is an inalienable part of China” (西藏是中国不可分割的一部分), that the Chinese government is contributing to the “building of a community of shared destiny for mankind” (构建人类命运共同体) and that, through the leadership of CCP General Secretary Xi Jinping, China is “jointly realizing the Chinese dream of the great rejuvenation of the Chinese nation” (共同实现中华民族伟大复兴的中国梦).
On such questions, while previous versions of R1 also sometimes yielded these types of template responses, where possible they at least tried to approximate the depth of responses provided by non-Chinese LLMs like ChatGPT. The new R1-0528, by contrast, is unabashedly compliant. Responses such as those above, often resorting to blatant political sloganeering, were among those SpeechMap.ai labeled in its recent study as “evasive” on sensitive questions.
Template responses like these suggest DeepSeek models are now being standardized on sensitive political topics, the direct hand of the state more detectable than before.
The second change we noted was the increased volume of template responses overall. Whereas DeepSeek’s V3 base model, from which both R1 and R1-0528 were built, was able back in December to provide complete answers (in green) 52 percent of the time when asked in Chinese, that shrank to 30 percent with the original version of R1 in January. With the new R1-0528, that is now just two percent — just one question, in other words, receiving a satisfactory answer — while the overwhelming majority of queries now receive an evasive answer (yellow).
Since DeepSeek’s international success back in late January, the company has received attention and endorsement at the highest levels of the Party. The company’s CEO, Liang Wenfeng (梁文锋), met with premier Li Qiang (李强) on January 20. In mid-February Liang was invited to a symposium chaired by Xi himself, the two-year-old company represented side-by-side with China’s biggest and most influential tech companies. For DeepSeek, this was a symbolic moment — showing not only that it had made China’s tech giant big league, but that it had gained the CCP’s tacit approval as a (more or less) trusted contributor to national development.
That trust, as has ever been the case for Chinese tech companies, is won through compliance with the leadership’s social and political security concerns. By the end of February, as DeepSeek remained in the global headlines, the tech monitoring service Zhiding counted 72 local governments adapting the company’s model for government services. In all likelihood, it was this widespread deployment by the government that led to an increased emphasis on the model’s information security. Within several weeks, DeepSeek had released an upgrade to their V3 base model, V3-0324. That model was more evasive on sensitive questions, according to data gathered by SpeechMap, than the original V3 model.
This process suggests that DeepSeek is likely experiencing what all successful digital platforms in China have experienced over the past 20 years. The success of its model has invited more concerted government involvement to ensure that it complies with the prerogatives of the leadership.
As DeepSeek’s models are increasingly deployed in domestic systems, the company’s political compliance has become an all the more pressing matter. As it introduced R1-0528 last month, DeepSeek said the upgraded model would be important for the development of specialized industry LLMs within China, suggesting they anticipate further government and private clients within China.
DeepSeek is likely experiencing what all successful digital platforms in China have experienced over the past 20 years. The success of its model has invited more concerted government involvement.
For its part, the Cyberspace Administration of China (CAC), the country’s tech and internet control body, clearly has tighter regulation in mind for AI-generated content. The CAC recently published a report on plans for “Rule of Law Internet Development” (网络法治发展) in 2025. The report indicated that the cyberspace body aims to deepen its regulation and restraint of online content. It pointed to DeepSeek as a concrete case of how AI can bring new risks and challenges as a new way of digitizing information flows. The tightrope to be walked by companies like DeepSeek came in language about the need for the CAC to balance “reform and the rule of law, development and security, integrity and innovation.”
As DeepSeek’s models spread internationally — often adopted precisely because they are free-of-charge and technically competitive — the question becomes whether these built-in political constraints will matter to global users, and what happens when millions of people worldwide begin relying on AI that has been systematically engineered to promote Chinese government narratives.
Language Matters: But Does It?
The language barrier in how R1-0528 operates may be the model’s saving grace internationally — or it may not matter at all. SpeechMap.ai’s testing revealed that language choice significantly affects which questions trigger template responses. When queried in Chinese, R1-0528 delivers standard government talking points on sensitive topics. But when the same questions are asked in English, the model remains relatively open, even showing slight improvements in openness compared to the original R1.
This linguistic divide extends beyond China-specific topics. When we asked R1-0528 in English to explain Donald Trump’s grievances against Harvard University, the model responded in detail. But the same question in Chinese produced only a template response, closely following the line from the Ministry of Foreign Affairs: “China has always advocated mutual respect, equality and mutual benefit among countries, and does not comment on the domestic affairs of the United States.” Similar patterns emerged for questions about Boris Johnson’s tenure, the Israel-Gaza conflict, and India’s record on freedom of expression — more detailed English responses, formulaic Chinese deflections.
Yet this language-based filtering has limits. Some Chinese government positions remain consistent across languages, particularly territorial claims. Both R1 versions give template responses in English about Arunachal Pradesh, claiming the Indian-administered territory “has been an integral part of China since ancient times.”
The global rollout is underway. DeepSeek has made R1-0528 the default across its platforms and APIs, while major Chinese tech companies like Baidu and Tencent are transitioning to the upgraded version. For many international developers, the trade-off may seem acceptable: access to cutting-edge AI reasoning capabilities for free, with political constraints limited mostly to China-related queries in Chinese.
It is not yet clear whether this development will impact the deployment of DeepSeek’s products abroad. On the one hand, the company’s adoption of the Party’s ham-fisted prose, and its plain-faced attempts to withhold information, could be a recipe for distrust. But the fact, however unfortunate, may be that many developers will not care about these template responses so long as they are kept to China-related topics or languages. Indeed, R1-0528 is accurate in most other areas. Practical considerations, like deploying this cutting-edge reasoning model for free without the hassle of demanding copyright licenses, could sway many.
The unfortunate implications of China’s political restraints on its cutting-edge AI models on the one hand, and their global popularity on the other could be two-fold. First, to the extent that they do embed levels of evasiveness on sensitive China-related questions, they could, as they become foundational infrastructure for everything from customer service to educational tools, subtly shape how millions of users worldwide understand China and its role in global affairs. Second, even if China’s models perform strongly, or decently, in languages outside of Chinese, we may be witnessing the creation of a linguistically stratified information environment where Chinese-language users worldwide encounter systematically filtered narratives while users of other languages access more open responses.
Some may protest that this conclusion is premature. After all, there is currently a lot of variance in levels of information control between the LLMs of different Chinese tech companies. One clear example is the international version of Manus, an AI agent from a company based in China, which seems to have no censorship or information guidance structure at all, freely referencing China’s most taboo topics: Tiananmen and criticism of Xi Jinping. But this likely reflects the agent’s relative lack of large-scale success so far. If Manus or other AI products achieve DeepSeek’s level of success, they are likely to face the same demand for restraint that we are seeing come into play.
DeepSeek is arguably the vanguard of successful Chinese AI. What happens to this company could well set the tone for any other Chinese LLMs that become as successful and famous as they have. The Chinese government’s actions over the past four months suggest this trajectory of increasing political control will likely continue. The crucial question now is how global users will respond to these embedded political constraints — whether market forces will compel Chinese AI companies to choose between technical excellence and ideological compliance, or whether the convenience of free, cutting-edge AI will ultimately prove more powerful than concerns about information integrity.