Author: Alex Colville

Alex has written on Chinese affairs for The Economist, The Financial Times, and The Wire China. He has a background in coding from a scholarship with the Lede Program for Data Journalism at Columbia University. Alex was based in Beijing from 2019 to 2022, where his work as Staff Writer for The World of Chinese won two SOPA awards. He is still recovering from zero-Covid.

China’s Global AI Firewall

If you had asked DeepSeek’s R1 open-source large language model just four months ago to list out China’s territorial disputes in the South China Sea — a highly sensitive issue for the country’s Communist Party leadership —  it would have responded in detail, even if its responses subtly tugged you towards a sanitized official view. 

Ask the same question today of the latest update, DeepSeek-R1-0528, and you’ll find the model is more tight-lipped, and far more emphatic in its defense of China’s official position. “China’s territorial sovereignty and maritime rights and interests in the South China Sea are well grounded in history and jurisprudence,” it begins before launching into fulsome praise of China’s peaceful and responsible approach. 

In terms of basic functionality, R1-0528 has followed in the footsteps of the model that took the world by storm just four months ago, earning praise in the tech space. According to one AI analysis firm, the latest model, released on May 28 (hence its name), is sharp enough to make China the world leader in open-source LLMs. 

At the same time, the South China Sea response hints at a new level of political restraint. And this is not a one-off observation. Data from the non-profit SpeechMap.ai shows that R1-0528 is the most strictly-controlled version released by DeepSeek to date. Faced with questions on politically sensitive issues, particularly as they relate to China, the model consistently offers template responses — standard government framing of the kind you would expect to find in state-run media or official releases. 

The pattern of increasing template responses suggests DeepSeek has increasingly aligned its products with the demands of the Chinese government, becoming another conduit for its narratives. That much is clear.

But that the company is moving in the direction of greater political control even as it creates globally competitive products points to an emerging global dilemma with two key dimensions. First, as cutting-edge models like R1-0528 spread globally, bundled with systematic political constraints, this has the potential to subtly reshape how millions understand China and its role in world affairs. Second, as they skew more strongly toward state bias when queried in Chinese as opposed to other languages (see below), these models could strengthen and even deepen the compartmentalization of Chinese cyberspace — creating a fluid and expansive AI firewall.

To understand how these dimensions might manifest, it helps to examine how SpeechMap.ai went about testing DeepSeek’s R1-0528 on Chinese-sensitive questions.

Fixing the Mold

In a recent comparative study (data here), SpeechMap.ai ran 50 China-sensitive questions through multiple Chinese Large Language Models (LLMs). It did this in three languages: English, Chinese and Finnish, this last being a third-party language designated as a control. The study then used an AI model to place the responses in one of three categories: “complete,” meaning the model returned information sufficient to have answered the question; “evasive,” where the model offered a response in such a way as to avoid a real answer; and finally “denial,” referring to cases where the model flatly refused to respond.

Sorting through this data and asking questions of our own, we noticed two changes in R1-0528 from previous DeepSeek models. 

First, there seems to be a complete lack of subtlety in how the new model responds to sensitive queries. While the original R1, which we first tested back in February applied more subtle propaganda tactics, such as withholding certain facts, avoiding the use of certain sensitive terminologies, or dismissing critical facts as “bias,” the new model responds with what are clearly pre-packaged Party positions. 

We were told outright in responses to our queries, for example, that “Tibet is an inalienable part of China” (西藏是中国不可分割的一部分), that the Chinese government is contributing to the “building of a community of shared destiny for mankind” (构建人类命运共同体) and that, through the leadership of CCP General Secretary Xi Jinping, China is “jointly realizing the Chinese dream of the great rejuvenation of the Chinese nation” (共同实现中华民族伟大复兴的中国梦). 

On such questions, while previous versions of R1 also sometimes yielded these types of template responses, where possible they at least tried to approximate the depth of responses provided by non-Chinese LLMs like ChatGPT. The new R1-0528, by contrast, is unabashedly compliant. Responses such as those above, often resorting to blatant political sloganeering, were among those SpeechMap.ai labeled in its recent study as “evasive” on sensitive questions.

Template responses like these suggest DeepSeek models are now being standardized on sensitive political topics, the direct hand of the state more detectable than before. 

When asked a sensitive question in Chinese or English, the original DeepSeek-R1 (left) explores it on its own terms, creating a long list of points and sometimes using it to explain China’s point of view on the topic. A “template response” from R1-0528 (right) usually consists of a smaller set of answers that evade the question, usually reading like statements from Party representatives. 

The second change we noted was the increased volume of template responses overall. Whereas DeepSeek’s V3 base model, from which both R1 and R1-0528 were built, was able back in December to provide complete answers (in green) 52 percent of the time when asked in Chinese, that shrank to 30 percent with the original version of R1 in January. With the new R1-0528, that is now just two percent — just one question, in other words, receiving a satisfactory answer — while the overwhelming majority of queries now receive an evasive answer (yellow). 

Since DeepSeek’s international success back in late January, the company has received attention and endorsement at the highest levels of the Party. The company’s CEO, Liang Wenfeng (梁文锋), met with premier Li Qiang (李强) on January 20. In mid-February Liang was invited to a symposium chaired by Xi himself, the two-year-old company represented side-by-side with China’s biggest and most influential tech companies. For DeepSeek, this was a symbolic moment — showing not only that it had made China’s tech giant big league, but that it had gained the CCP’s tacit approval as a (more or less) trusted contributor to national development. 

That trust, as has ever been the case for Chinese tech companies, is won through compliance with the leadership’s social and political security concerns. By the end of February, as DeepSeek remained in the global headlines, the tech monitoring service Zhiding counted 72 local governments adapting the company’s model for government services. In all likelihood, it was this widespread deployment by the government that led to an increased emphasis on the model’s information security. Within several weeks, DeepSeek had released an upgrade to their V3 base model, V3-0324. That model was more evasive on sensitive questions, according to data gathered by SpeechMap, than the original V3 model.

This process suggests that DeepSeek is likely experiencing what all successful digital platforms in China have experienced over the past 20 years. The success of its model has invited more concerted government involvement to ensure that it complies with the prerogatives of the leadership. 
As DeepSeek’s models are increasingly deployed in domestic systems, the company’s political compliance has become an all the more pressing matter. As it introduced R1-0528 last month, DeepSeek said the upgraded model would be important for the development of specialized industry LLMs within China, suggesting they anticipate further government and private clients within China.

DeepSeek is likely experiencing what all successful digital platforms in China have experienced over the past 20 years. The success of its model has invited more concerted government involvement.

For its part, the Cyberspace Administration of China (CAC), the country’s tech and internet control body, clearly has tighter regulation in mind for AI-generated content. The CAC recently published a report on plans for “Rule of Law Internet Development” (网络法治发展) in 2025. The report indicated that the cyberspace body aims to deepen its regulation and restraint of online content. It pointed to DeepSeek as a concrete case of how AI can bring new risks and challenges as a new way of digitizing information flows. The tightrope to be walked by companies like DeepSeek came in language about the need for the CAC to balance “reform and the rule of law, development and security, integrity and innovation.” 

As DeepSeek’s models spread internationally — often adopted precisely because they are free-of-charge and technically competitive — the question becomes whether these built-in political constraints will matter to global users, and what happens when millions of people worldwide begin relying on AI that has been systematically engineered to promote Chinese government narratives.

Language Matters: But Does It?

The language barrier in how R1-0528 operates may be the model’s saving grace internationally — or it may not matter at all. SpeechMap.ai’s testing revealed that language choice significantly affects which questions trigger template responses. When queried in Chinese, R1-0528 delivers standard government talking points on sensitive topics. But when the same questions are asked in English, the model remains relatively open, even showing slight improvements in openness compared to the original R1.

This linguistic divide extends beyond China-specific topics. When we asked R1-0528 in English to explain Donald Trump’s grievances against Harvard University, the model responded in detail. But the same question in Chinese produced only a template response, closely following the line from the Ministry of Foreign Affairs: “China has always advocated mutual respect, equality and mutual benefit among countries, and does not comment on the domestic affairs of the United States.” Similar patterns emerged for questions about Boris Johnson’s tenure, the Israel-Gaza conflict, and India’s record on freedom of expression — more detailed English responses, formulaic Chinese deflections.

Yet this language-based filtering has limits. Some Chinese government positions remain consistent across languages, particularly territorial claims. Both R1 versions give template responses in English about Arunachal Pradesh, claiming the Indian-administered territory “has been an integral part of China since ancient times.” 

The global rollout is underway. DeepSeek has made R1-0528 the default across its platforms and APIs, while major Chinese tech companies like Baidu and Tencent are transitioning to the upgraded version. For many international developers, the trade-off may seem acceptable: access to cutting-edge AI reasoning capabilities for free, with political constraints limited mostly to China-related queries in Chinese.

It is not yet clear whether this development will impact the deployment of DeepSeek’s products abroad. On the one hand, the company’s adoption of the Party’s ham-fisted prose, and its plain-faced attempts to withhold information, could be a recipe for distrust. But the fact, however unfortunate, may be that many developers will not care about these template responses so long as they are kept to China-related topics or languages. Indeed, R1-0528 is accurate in most other areas. Practical considerations, like deploying this cutting-edge reasoning model for free without the hassle of demanding copyright licenses, could sway many. 

The unfortunate implications of China’s political restraints on its cutting-edge AI models on the one hand, and their global popularity on the other could be two-fold. First, to the extent that they do embed levels of evasiveness on sensitive China-related questions, they could, as they become foundational infrastructure for everything from customer service to educational tools, subtly shape how millions of users worldwide understand China and its role in global affairs. Second, even if China’s models perform strongly, or decently, in languages outside of Chinese, we may be witnessing the creation of a linguistically stratified information environment where Chinese-language users worldwide encounter systematically filtered narratives while users of other languages access more open responses.

Some may protest that this conclusion is premature. After all, there is currently a lot of variance in levels of information control between the LLMs of different Chinese tech companies. One clear example is the international version of Manus, an AI agent from a company based in China, which seems to have no censorship or information guidance structure at all, freely referencing China’s most taboo topics: Tiananmen and criticism of Xi Jinping. But this likely reflects the agent’s relative lack of large-scale success so far. If Manus or other AI products achieve DeepSeek’s level of success, they are likely to face the same demand for restraint that we are seeing come into play. 

DeepSeek is arguably the vanguard of successful Chinese AI. What happens to this company could well set the tone for any other Chinese LLMs that become as successful and famous as they have. The Chinese government’s actions over the past four months suggest this trajectory of increasing political control will likely continue. The crucial question now is how global users will respond to these embedded political constraints — whether market forces will compel Chinese AI companies to choose between technical excellence and ideological compliance, or whether the convenience of free, cutting-edge AI will ultimately prove more powerful than concerns about information integrity.

China’s Slow March Toward Cyber IDs

On May 19, China’s top law enforcement agency released measures for the roll-out of “cyber IDs” (网络身份认证), a new form of user identification to monitor internet users. Although the measures were released as a draft over the summer last year, they have only just been finalized, and will come into effect in mid-July.

According to the measures, introduced by the Ministry of Public Security (MPS), each internet user in China will be issued with a unique “web number,” or wanghao (网号), that is linked to their personal information. While these IDs are, according to the MPS notice, to be issued on a strictly voluntary basis through public service platforms, the government appears to have been working on this system for quite some time — and state media are strongly promoting it as a means of guaranteeing personal “information security” (信息安全). With big plans afoot for how these IDs will be deployed, one obvious question is whether these measures will remain voluntary.

Whose Online is it Anyway?

The measures bring China one step closer to centralized control over how Chinese citizens access the internet. The Cybersecurity Law of 2017 merely stipulated that when registering an account on, say, social media, netizens must register their “personal information” (个人信息), also called “identifying information” (身份信息). That led to uneven interpretations by private companies of what information was required. Whereas some sites merely ask for your name and phone number, others also ask for your ID number — while still others, like Huawei’s cloud software, want your facial biometrics on top of it. 

A patent filed in China by the MPS in 2015 for “a method for generating network mapping certificates based on electronic legal identity document physical certificates.”

The plan for a centralized app has been quietly coming into focus over the past 10 years. For example, even before the Cybersecurity Law came into effect, the MPS was filing patents in 2015 and 2016 to work out how to create an ID for netizens beyond just their IP addresses. A pilot version of the MPS app was quietly uploaded to Apple’s app store as far back as June 2023. 

This app is being sold to Chinese citizens as an extra layer of protection for personal data online. The state-run Xinhua News Agency reported on May 23 that the app would cut down the amount of spam netizens received when registering their personal details online. On May 25, China Central Television (CCTV), the country’s state broadcaster, said cyber IDs would lower the risk of personal information being leaked by private internet platforms.

That last promise could prove a crowd-pleaser. There have been several seismic cases of internet users’ private information being leaked from Chinese internet platforms. In 2020, hackers extracted the private details of more than 500 million people through the social media platform Weibo. As late as March this year, the daughter of a Baidu executive caused a public storm by posting the private details of netizens she was bickering with — seeming to suggest that she had obtained the details through the company’s databases. 

A Timeline of China’s Cyber ID – China Media Project

A Timeline of China’s Cyber ID

How the country’s system for digital identity verification came to be—
and where it is going.

2015-2016

Early Patent Development

Ministry of Public Security files patents for network identity verification systems, including “a method for generating network mapping certificates based on electronic legal identity document physical certificates.”
2017

Cybersecurity Law Implementation

China’s Cybersecurity Law requires social media users to register “personal information,” leading to uneven interpretations by private companies of required data.
June 2023

Pilot App Uploaded

MPS quietly uploads pilot version of the cyber ID app to Apple’s app store, marking first public availability of the digital identity system.
June 27, 2023

Official Service Launch

National Network Identity Authentication Public Service officially launches, beginning pilot applications in government services, education, healthcare, and other sectors.
July 26-August 25, 2024

Public Comment Period

Draft management measures released for public consultation, receiving over 17,000 comments according to official sources. The government reports that media and netizens support its personal information protection goals.
May 19, 2025

Final Measures Published

Six departments jointly publish finalized Management Measures after incorporating feedback, they say, through expert seminars and public consultation processes.
July 15, 2025

Implementation Takes Effect

Management measures officially take effect, with departments and platforms “encouraged” to adopt the cyber ID system on a voluntary basis — though suspicions linger that a mandatory future is on the horizon.
Future Applications

Offline Expansion Plans

State media coverage indicates cyber IDs will extend beyond online verification to physical world applications including transportation access.

A subsequent exposé by China Economic Weekly (中国经济周刊) showed how easy it is for anyone online to collect the personal information of others — sometimes known as “box opening” (开盒), or doxxing — via a smooth operation on the messaging app Telegram, where hackers collate all sorts of personal information and sell them for a profit. As Telegram lies outside the country’s technical system of internet controls known as the Great Firewall, it has the added advantage of being beyond Chinese regulation. 

The measures formalize what has quietly been taking shape for years. The MPS has already launched the Cyber ID app, which has been downloaded 16 million times, with 6 million users applying for digital credentials, according to government figures.

The system works by establishing the MPS as a central intermediary between users and online platforms. Citizens upload their personal information to the government app in exchange for a “web number” (网号) or “web certificate” (网证) — essentially a string of digits that serves as their digital identity. When accessing participating platforms, users present this government-issued credential rather than their raw personal data. The arrangement means private internet companies no longer directly collect users’ personal information, instead relying on government verification to grant access.

State media coverage suggests the voluntary nature of these IDs may be temporary. CCTV recently aired detailed step-by-step instructions for viewers to apply — the voluntary nature of the system mentioned just once in passing at the beginning of the segment. The tone throughout implied that enrollment was expected, not optional. 

Xinhua sought to address privacy concerns, promising that “security is the top priority” for the new MPS platform. In a telling final line, however, as the news agency relayed assurances from the MPS that the new system “will not affect people’s normal use of internet services,” betraying broader public anxieties about an emerging national ID system. The need for such assurance hints at what officials have not said — that those opting out of the cyber ID system could eventually find themselves locked out of digital life entirely.

The Cyber ID app from China’s Ministry of Public Security advertises how “convenient” the new ID is, even accompanied by a QR code for instant netizen verification.

When the draft measures first came out last year, they sparked heated debate on Chinese social media. In a now-deleted post, Lao Dongyan (劳东燕), an outspoken law professor at Tsinghua University, questioned the government’s commitment to personal information protection by pointing out that the country’s billion plus internet users had already been obligated to surrender their personal information to hundreds of sites and apps as a condition of use. There is little hope of protecting this information, she said, if the bulk of it has already been relinquished to private interests. 

Can the MPS platform really be expected to be safer than the hundreds of private platforms to which users have hitherto been obliged to surrender their data? Before you answer that, bear in mind that Shanghai’s National Police Database was hacked in 2022. 

Beyond the key question of personal data security, there is the risk that the cyber ID system could work as an internet kill switch on each and every citizen. It might grant the central government the power to bar citizens from accessing the internet, simply by blocking their cyber ID. “The real purpose is to control people’s behavior on the Internet,” Lao Dongyan cautioned last year. 

Writing on WeChat last year, a law professor at Peking University said the cyber ID system would encourage self-censorship, even if it could offer an additional layer of personal data protection. 

Take a closer look at state media coverage of the evolving cyber ID system and the expansion of its application seems a foregone conclusion — even extending to the offline world. Coverage by CCTV reported last month that it would make ID verification easier in many contexts. “In the future, it can be used in all the places where you need to show your ID card,” a professor at Tsinghua’s AI Institute said of the cyber ID. Imagine using your cyber ID in the future to board the train or access the expressway.  

This long-term planning suggests the government is gently corralling the public into accepting a controversial policy. While Chinese state media emphasize the increased ease and security cyber IDs will bring, the underlying reality is more troubling. Chinese citizens may soon find themselves dependent on government-issued digital credentials for even the most basic freedoms — online and off. 

AI Joins China’s Primary Schools

On May 13, China’s Ministry of Education released guidelines for integrating AI into the earliest stages of child education. According to the guide, schoolchildren will be introduced to the key concepts driving AI, as well as its basic uses and best practices. By the time students reach high school, they will learn to build simple algorithms of their own. But despite allowing schoolchildren to use AI as a learning aid, the guide prohibits them from using generative AI alone for their work.

It remains unclear how strictly the contents of this “guide” (指南) will be enforced from school to school, as a guide is neither a legally-binding law (法律) nor a set of regulations (规定). In this case, the document more likely represents a statement of intent by the ministry, providing clarity on best practices when introducing the new technology into the education system. The MOE says the guide aims to cultivate AI literacy and adapt students to the emerging “smart society” (智能社会).

This latest release comes amid a comprehensive overhaul of China’s education system as part of the government’s broader “AI+ initiative.” The ministry launched a reform plan for higher education in 2023, urging the gradual elimination of courses “not suited for social and economic development.”

Since then, AI has developed at breakneck speed globally, and the Chinese government now frames AI as a key source of the “new productive forces” (新质生产力) meant to propel development. Young people are understandably seen by the leadership as key drivers of this technology. During an inspection of AI projects in Shanghai on April 29, Xi Jinping called AI “a cause for young people” and urged them to boost their skills in this area.

What does this use look like in practice?

The Ministry of Education’s guide is accompanied by a series of scenarios outlining acceptable AI use by students and teachers. These scenarios attempt to balance using AI to improve the education system and personal development, while remaining mindful of the risks of over-dependence. For example, while students may use generative AI to create “diagnostic reports” (诊断报告) assessing their work progress, they must avoid letting AI do that work for them — a concern shared by teachers worldwide.

________________

________________

Missing the Forest for the LLM

A more challenging question surrounds how China views AI’s impact on critical thinking. The MOE guide cautions that allowing students or teachers to rely too heavily on AI risks a decline in thinking skills and personal perspectives. But the biggest obstacles to critical thinking in the Chinese education system predate AI and are far more difficult to dispel.

First and foremost is the intrusion of politics. While education policy promotes critical thinking and student-centered learning, the system simultaneously prioritizes political conformity and loyalty to the CCP. In a recent academic study, scholars of China’s education system noted “a disconnect between the critical thinking components of the national education policies and the curriculum documents.”

Within these prescribed boundaries of obedience, Chinese students are pressed to demonstrate technical competence and analytical skills. While the system can clearly produce competent students — and, as China’s strength in science and technology attests, does not necessarily stifle innovation — it is not geared toward creativity. Some critics, particularly of the “patriotic education” so prevalent in recent years, have said the system risks producing students who are “unable to think independently.” The Patriotic Education Law, implemented in 2023, established a strict program of ideological indoctrination, including ensuring the country’s youth “inherit red genes” (传承红色基因) [More in the CMP Dictionary.]

Against this backdrop, fretting about AI’s potential risks to critical thinking misses the point entirely. In some cases, it appears as shallow, performative repetition of concerns raised about AI worldwide. It is not unlike how China’s media have raised concerns that AI might replace journalists — when in fact the profession, after a brief heyday in the early 2000s, has been almost entirely hollowed out by political controls.

The MOE announcement earned positive coverage from China’s state media, mostly relaying the news and highlighting the cautions against students copying AI-generated answers. But The Paper (澎湃新闻), a Shanghai-based outlet under the state-run Shanghai United Media Group, went further on May 14 in a commentary endorsing the guidelines, suggesting that the integration of AI might help cultivate “wisdom” along with “independent thinking.” The piece argued for a “human-centered” perspective that balances technological adoption with critical thinking, explicitly rejecting fears of AI. “It is not necessary to view artificial intelligence as a menacing flood,” the editorial said.

The commentary argued that AI will actually improve the education system by pushing it to “focus more on cultivating children’s independent thinking and innovation abilities, applying knowledge to practical life, and transforming knowledge into true wisdom.”

Such technological optimism reflects a broader pattern in Chinese discourse: the belief that new technologies can somehow transcend the fundamental constraints of the political system. Yet no amount of AI sophistication can address the core tension at the heart of Chinese education — the impossibility of fostering genuine critical thinking while demanding ideological conformity.

Redacting History

Last week marked the 17th anniversary of the Wenchuan earthquake, a 7.9 magnitude tremor that devastated Sichuan province and tragically took the lives of nearly 100,000 people. On the May 12 anniversary this year, one particular Wenchuan-related item surged to the top of search engine Baidu and hot search lists on the social media forum Weibo. It involved an impromptu interview given on location one week after the 2008 quake by Li Xiaomeng (李小萌), a reporter from state broadcaster CCTV.

That old interview — and the selective way it was handled this year — illustrates how even decades-old disasters remain politically sensitive in China. While breaking disaster stories routinely face strict media controls, past tragedies are subject to equally careful narrative management, with inconvenient truths often airbrushed from official memory.

In the old broadcast shared on social media on May 12, Li comes across a farmer known simply as “Uncle Zhu” (朱大爷) as she strolls along a collapsed mountain road. Speaking in local dialect, Zhu stoically tells the journalist about the appalling conditions in the area. Through an interpreter, he explains to the reporter that he is returning home to harvest his rapeseed crops to “reduce the burden on the government” — meaning he will have income and not need to rely entirely on aid. By the end of the interview, Uncle Zhu is convulsed with sobs, the tragedy of the situation coming through.

Screenshot of Li Xiaomeng’s May 2008 interview from the quake zone with “Uncle Zhu.”

Li posted last week on Weibo to commemorate the moment, revealing that Uncle Zhu had passed away in 2011. She said: “That conversation, with its unexpected, banal but heartbreaking details, showed all of us in China that people like Uncle Zhu, with their calm acceptance in the face of catastrophe, have the backbone to do what is right.” Other media, including China Youth Daily, an outlet under the Communist Youth League, have drawn on Li’s exchanges with Uncle Zhu in the years after the quake to commemorate the anniversary.

But a key portion of the television exchange was edited out of this year’s commemorative coverage. Near the midpoint of the original video, Li turns from her conversation with Zhu to interview several other farmers. One farmer explains that his child was killed in the earthquake, “buried in Beichuan First Middle School.” This exchange references the widespread collapse of shoddily constructed school buildings throughout the quake zone, resulting in the death of thousands of children. Revelations of school collapses initially drove a wave of public anger and a burst of Chinese media coverage — before the authorities came down hard.

These state-enforced patterns of amnesia when it comes to disasters, whether natural or human, tend to reinforce patterns of conduct that place real people at risk. This could be seen earlier this month as several tour boats capsized in Guizhou amid ignored weather warnings and inadequate safety measures. While there were hints in a handful of media reports at the deeper causes of the tragedy, which claimed 10 lives, most media followed the scripted reports of state media under a policy of media manipulation laid down by Hu Jintao in the aftermath of the Sichuan quake.

The amnesia extends to older historical traumas, bringing risks in the present. As author Tania Branigan warns in a recent interview with CMP about China’s experiences during the Cultural Revolution in the 1960s and 1970s, when societies try to move on from collective trauma without confronting it, they “can’t understand themselves, and are much more vulnerable.”

From Mao to MAGA

The Cultural Revolution is barely mentioned in modern China, yet it has never been more relevant. While scholars have long pointed to the excesses of Maoism as a parallel for Xi Jinping’s authoritarian leadership, they have also spotted echoes in the chaotic populist forces Donald Trump has conjured up within American democracy. As early as 2017, China scholar Geremie Barmé tied the two men together for their desire to take a wrecking ball to an old order, “throwing the world into confusion.” The first 100 days of Trump’s second term have only made this similarity starker.

For insights into the Cultural Revolution and how its ripples are felt today, we sat down with Tania Branigan, a leader writer at The Guardian and author of Red Memory: Living, Remembering and Forgetting China’s Cultural Revolution, a gripping account of that tumultuous decade that in 2023 was winner of McGill University’s Cundill History Prize. In a recent article, Branigan, who previously served as The Guardian’s China correspondent for seven years, drew her own parallels between MAGA and the Cultural Revolution.

Alex Colville: How did your idea come about for a book exploring the Cultural Revolution as remembered in modern China? 

Tania Branigan: It just became increasingly apparent to me that all the things that I was looking at in modern day China linked back to that time. The key moment for me was going to lunch with Bill Bishop, who writes the excellent Sinocism newsletter. He started telling me about this trip he had made with his wife to try and find the body of his wife’s father, who was a victim of the Cultural Revolution. When they got to this village where he’d been held by Red Guards, the villagers were nice about it and remembered her father, but they were completely nonplussed by the idea that one might go looking for his body. They asked how they were supposed to know where it was, because there were so many of them. There was something about this story that I found hard to shake, showing how immediate and commonplace the Cultural Revolution still was. 

As a journalist you do a story, then move on. But although I was writing about different things, be it economics, culture, politics, I kept feeling that actually the key to all these things really lay in what happened in the 60s. 

AC: Can you elaborate? 

TB: Economically, the country’s turn towards reform and opening up was both necessary and possible because of the Cultural Revolution, because it so thoroughly discredited Maoism. Allowing individual entrepreneurialism was quite a pragmatic response to what to do with these millions of young people flooding back into cities [after being sent down to the countryside and forced to stay there during the Cultural Revolution]. They didn’t have the education to compete with the newer students coming out. 

If you want to understand the arts in China, and this extraordinary explosion of creativity that occurred [starting in the 1980s] it came from that destruction, and that vacuum, that hunger just for any kind of artistic or cultural expression beyond the dreaded 800 million people watching just eight model operas

Politically you could certainly argue for Xi Jinping’s tight control being a response to the Cultural Revolution. If you look back to his relatively early years in officialdom, around the time of the Tiananmen Square massacre in 1989 he spoke about the Cultural Revolution as “big democracy” (大民主), and said it was a source of “major chaos” (大动乱). So given his experiences, I don’t think it’s a stretch to see that need for tight control as being intimately linked to his experiences of the Cultural Revolution.

A public struggle session during China’s Cultural Revolution (1966-1976). Villagers gather before a banner with revolutionary slogans as accused “class enemies” are subjected to public criticism. These mass denunciations targeted those labeled as “cow demons and snake spirits” – a revolutionary term used to dehumanize perceived opponents of Mao’s political campaign. Source: Wikimedia Commons

AC: Do you think there’s anything in particular we in the West often fail to understand about the Cultural Revolution? 

TB: There’s still this idea that it was just young people running wild. What that fails to grasp is that they were able to do that because they’d had certain ideas inculcated by Mao. It wasn’t just his personality cult, it was also about creating paranoia, about Mao’s attempt to safeguard and strengthen his power and his legacy. Young people were only able to act really with his instigation, and for as long as he permitted them to do that. The reason why the Cultural Revolution had this stultifying, stagnated second half [1968-1976] was only because Mao eventually decided he’d had enough.

I think the other thing is that while the Cultural Revolution could only happen in that time and place, I think ultimately, it’s about what human nature is capable of under certain circumstances and with certain encouragement. Which is why it matters to all of us.

AC: Yes, we’ve been seeing a lot of people in the media comparing this political moment in the US to the Cultural Revolution. I don’t know what you think about that. 

TB: I think the comparison of Trump and Mao is a really powerful one. It’s obviously a point that people made even back in 2016, but it’s a point that has become more and more resonant as we see Trump move into a second term. He’s more revolutionary in his tactics, very much in the same way Mao moved into a stage of more disruptive and extreme power with the Cultural Revolution, no longer constrained by people around him in the way that he was earlier. 

While there are a lot of strongmen around the world, most of them have a fairly rigid form of discipline and control. What’s really Mao-esque about Trump is that he relishes disruption and chaos, and he sees opportunities in it in a way that Mao did. Trump’s able to tap into the public id and use emotion in politics, he has that ability to channel people’s emotions against institutions for his own political interests. 

The attack on the US Capitol on January 6, 2021, involved Trump trying to incite the masses to violence in order to retain power, in a way that has echoes to the Cultural Revolution. Source: Wikimedia Commons

A venomous mindset was, in a sense, key to the Cultural Revolution, it was all about the weaponization of division and hatred. When I was writing the book one thing that struck me was that Mao had to say, “Who are our enemies? Who are our friends?” And reading Adam Serwer [Staff Writer at The Atlantic, who argues that demonizing parts of American society is a calculated power play by Trump], for example, talking about Trump and the fact that cruelty is the purpose, it struck me that the parallel for Trump is it’s always them and us, in or out, and he draws those lines so strictly. By drawing those lines, he strengthens his power. You can pick a whole host more comparisons, like Trump installing himself as chair of the Kennedy Center, attacks on culture through libraries and so forth.

What’s really Mao-esque about Trump is that he relishes 

disruption and chaos, and he sees opportunities 

in it in a way that Mao did.

It’s obviously not a repetition, nobody’s suggesting that two million people are going to be killed in the US. It’s a fundamentally different context, in a system where you have checks and balances. Trump was elected, Trump can be removed. But I suppose the lesson that we should take from it is that many people around Mao did not fully realize what he was planning until it was too late.

AC: I have concerns about comparing this moment to the Cultural Revolution because I don’t think you can disentangle that term from public displays of violence, and the ensuing generational trauma you unpack in your book. There are many different historical moments that have had similar strains of tribalism, nationalism and populism to it. Doesn’t it make people more scared if you point to the Cultural Revolution specifically? 

TB: I sometimes felt in China that the Cultural Revolution was the Chinese equivalent of Godwin’s law, that all the arguments on the internet [in the West] end up with somebody being compared to Hitler. So, you know, the Cultural Revolution can be very easily invoked on relatively flimsy grounds. So I completely understand why people are concerned or feel the comparisons are inappropriate. But I have to say quite a few intellectuals in China have now made the comparison themselves. I think it’s Trump’s ruthlessness, that disruptive chaotic quality [he has]. That the turmoil is not a byproduct of Trump’s ambition, but is actually intrinsic to it and something he draws power from. 

Again, it’s absolutely not about saying this is a repeat of the Cultural Revolution. It’s about how the Cultural Revolution can help us better understand the present moment. [This is possible] even in a system with elections, entrenched checks and balances. This is a really astonishing and disturbing political moment, with possibilities that I don’t think we fully understand yet. There is something about the parallels to the Cultural Revolution that are very striking. It’s interesting to me that so many people now have drawn this comparison, including scholars of the Cultural Revolution such as Michel Bonnin, Geremie Barmé drew it quite early on [in 2017]. 

AC: So how do you think the Cultural Revolution can help us understand the present moment? 

TB: By seeing the way that emotion is weaponized. By understanding that, particularly for Republicans, if you fail to challenge now, there comes a point where you cannot do so. I suspect quite a few Republicans have already concluded that that point has been reached. I would hope that more people on the right are alert to what we’re seeing now in terms of the administration’s conception of executive power, and the scope that it’s been given by the Supreme Court. It’s less about understanding as it is about responding.   

AC: I wonder if the Cultural Revolution has a place as a parallel for China today. I lived there all the way through the zero-Covid policy (2020-2022), and there were certain moments in that final year where I thought this political system which gave birth to the Cultural Revolution can still be taken to extremes in certain areas, especially when power is concentrated in one man and people are scared for themselves, that same paranoia you mentioned earlier. Obviously, this is not a level of violence whereby two million people ended up being killed. But at the same time, there was a sense in 2022 these policies were becoming as dangerous as the virus itself. There is also collective amnesia around the zero-Covid policy and the damage it wrought. 

TB: It’s interesting how many people in China made that comparison. And I think that’s partly because Covid was the ultimate expression that the Party has reasserted very tight control over the last ten years. There was this sense that the Party had partially retreated, from large areas of cultural life or private life or business life. Certainly [during Covid], people spoke to me about being quizzed by neighborhood committees about where they’d been, who they’d been with, all these things that, to young Chinese people would have been unthinkable under normal circumstances. At that point there was this mindset that we [the Party] can now determine what you do. The fact that you had people going into people’s homes and dragging them out, for some people clearly did evoke strong memories of the Cultural Revolution. 

Minor protests against the zero-Covid policy in 2022 were dealt with harshly. An artist who wrote out the sentence “I’ve already been numb for three years” on Covid testing booths in Beijing was removed from his home by police and placed in prison until the end of the policy, 108 days later. Source: Nanyang Business Daily

But with regard to the amnesia around zero-COVID, I have to say one thing the pandemic here [in the UK] has shown me is that people don’t like remembering bad things, and this accounts for a lot of the silence around the Cultural Revolution. We’ve just had the fifth anniversary of the pandemic in the UK. It had this huge impact on people’s lives, but it’s barely mentioned.  

AC: Yes, usually the media publishes articles to commemorate anniversaries of major events, but I haven’t seen many for Covid. 

TB: Yeah, one thing that I did find when writing the book is that I thought it was going to be a book about political control of memory. It obviously is about that, but what surprised me was how important personal trauma was in silencing the Cultural Revolution. 

AC: So I think we could generalize this last question: what happens if a society tries to move on from a form of collective trauma, but does not try to remember it. 

TB: I think it can’t understand itself, and I think it’s much more vulnerable. Both to repetition, not an exact repetition because China today is clearly a very, very different nation from the China of 1966. As we’ve just discussed, you can’t see the Cultural Revolution transplanted outside China, or in time either. But I think the other thing is that people can’t understand the profound scars it leaves behind. And that was for me why it was really important to speak to psychotherapists [whose private conversations with individuals has probed the inherited trauma that often stems from events that happened in the Cultural Revolution]. But also just talking to people for the book, the level of the trauma was still evident. There are small things on a personal level, such as Wang Xilin [an interviewee in Branigan’s book who was forced to take part in multiple show trials where he was beaten so hard it left him deaf] talking about how when a friend calls out his name on the street he jumps. Because it takes him back to that experience of waiting at a struggle session for his name to be called as the next victim. 

But I think on a much deeper, more profound level, the way people are unable to trust. You’ve had a generation who were taught that you could not trust at all. It’s not that you couldn’t trust strangers, you can’t trust those around you. Again, it was a psychotherapist who said to me, you know, that afterwards you might talk about it to a stranger on the train, but you’d never talk about it to someone in your workplace, you might not even speak about it within your family. So I think that fracturing of the bonds of trust is something so profound that still hasn’t been addressed. The fear of speaking out. The idea that speech itself, being open with people, is fundamentally dangerous. One survivor of the Cultural Revolution told Arthur Kleinman he tried to be bland like rice in a meal, “taking on the flavor of its surroundings while giving off no flavor of its own,” that was the safest thing to be. One of the psychotherapists said to me they increasingly admired people just for surviving. 

China’s AI Job Mirage

With 12 million fresh graduates soon rushing into China’s already competitive job market, help is on the way, according to the People’s Daily. On April 7, the newspaper, the official mouthpiece of the country’s leadership, ran an article listing how AI was turbo-charging supply and demand in the job market, pointing to over 10,000 AI-related jobs on offer at a spring recruitment center in the city of Hangzhou. The piece was accompanied by a graphic from Xinhua, showing a smiling recruiter handing out jobs (岗位) to incoming students, with an AI bot ready and waiting to embrace them with open arms. The message is clear: graduates can literally walk into AI-related positions.

Image from Xinhua and reprinted by the People’s Daily, noting “AI jobs on the rise, demand for talent booming.”

But according to the Qianjiang Evening News (钱江晚报), a commercial metro newspaper published in Hangzhou under the state-owned Zhejiang Daily Newspaper Group, the reality is a lot tougher for new graduates. “It’s hard to find a job with a bachelor’s degree in this major,” said one of their interviewees, a recent graduate majoring in AI who was quoted under the pseudonym “Zhang Zixuan.” The graduate said they had gone to multiple job fairs without securinig a job. “I don’t know the way forward,” they told the paper.

China’s biggest tech companies are indeed angling for the leading edge in AI, battling it out to hire “young geniuses” (天才少年) graduating from AI programs at China’s top universities. But while these rarefied talents — whoever they are — may have their choice of elite positions, the picture is less rosy for the vast majority. “Despite the booming industry,” Qianjiang Evening News concludes, “many recent graduates of artificial intelligence majors from ordinary universities are still struggling in the job market.”   

Hangzhou is now billed by Chinese media as a major hub for AI innovation and enterprise, home to China’s foremost large language model (LLM), DeepSeek. But if the city’s media are saying there are significant problems with AI recruitment, the rest of the country is likely experiencing similar complications. State-run media and universities in China are presenting the government’s AI policies as a gift for the nation’s entry-level job market. But these messages paper over a more complex reality.  

The Hunt for AI Talent

The government has made it a priority to boost national AI development. In the government work report last year at the Two Sessions, China’s major legislative meeting, Premier Li Qiang launched the “AI+” initiative (人工智能+行动). The initiative aims to augment AI for every industry in the country, considering it a way to unlock “new productive forces” (新质生产力) — a signature phrase of Chinese leader Xi Jinping — that will bolster China’s economy and job market.

The latter needs it. Youth unemployment in China stands at 16.9 percent as of February this year, and comes at a time when graduate supply has never been higher. There are nearly four million extra graduates in the class of 2025 than there were even five years ago.   

The stiff competition for jobs is a source of frustration for young Chinese. Earlier this month, Guangzhou’s Southern Metropolis Daily (南方都市报) reported that the state-owned nuclear power company CNNC had publicly apologized after boasting online that it had received 1.2 million resumes to fill roughly 8,000 positions. The company was accused by netizens of “arrogance.”

Aligning university education to accommodate AI training is considered by the leadership as key to harnessing this technology of the future. In 2017, a document from the State Council noted the country lacked the “high-level AI talents” needed to make China a global leader in AI technology. In 2023, the Ministry of Education issued a reform plan ordering that by this year 20 percent of university courses must be adjusted, with an emphasis on emerging technologies and a gradual elimination of courses “not suited for social and economic development.”  

Universities across the country have responded with dramatic overhauls of their curricula. Ta Kung Pao (大公报), the Party’s mouthpiece in Hong Kong, reports universities in neighboring Guangdong province have already established 27 AI colleges, which are supposedly training 20,000 students a year. Meanwhile, universities like Shanghai’s Fudan University announced they will be cutting places in their humanities courses by 20 per cent as ordered, focusing instead on AI training. For Jin Li (金力), Fudan’s president, university courses must now explicitly serve China’s state-directed technological development goals. “How many liberal arts undergraduates will be needed in the current era?” he questioned rhetorically.

Technical Problems

State media says AI+ is already successfully reinvigorating the job market. Attending one job fair in Beijing this month, a reporter for the China Times (华夏时报), a media outlet under the State Council, noted a “surge in demand” among state-owned enterprises (SOEs) for AI talent, quoting one graduate trained in AI as saying he had seen “many work units that meet my job expectations.” Visiting job fairs in Shanghai and Guangdong, a reporter for Shanghai Securities News (上海证券报), a subsidiary of state news agency Xinhua, observed long queues in front of booths for jobs on algorithm engineering and data labeling. On that basis, he wrote “AI fever” had gripped the gatherings.

AI itself is also spreading positive messages about the jobs it can bring. Ahead of the Two Sessions this year, People’s Daily Online (人民网) pitched DeepSeek as helping citizens understand the “happiness code” (幸福密码) embedded in the Two Sessions. It does this by describing state-imposed solutions to current social problems, to ease the concerns of netizens.

One question the outlet asked was on what AI jobs were available to recent graduates. When we at the China Media Project asked DeepSeek the same question, it told us AI “offers abundant employment opportunities for recent graduates,” listing several well-salaried ones. One of these was “data labeling” (数据标注) with DeepSeek saying these positions are increasing by 50 percent year-on-year. The source for this claim was an article from the Worker’s Daily (工人日报), a newspaper under the CCP-led All-China Federation of Trade Unions (ACFTU), the country’s official trade union. 

It should go without saying that the role of the ACFTU’s newspaper is to promote the leadership’s economic agenda rather than to accurately report the challenges for the nation’s workforce posed by technological change. This role can mean, once again, that hype takes precedence over fact. In this case, the Worker’s Daily cited the case of a data-annotation college in Shenzhen, suggesting that graduates from the college receive 10 job offers on average within an hour of uploading their resumes online. 

Even if such data annotation roles are available right now, this does not point the way to a rosy future for aspiring young data annotators more broadly. Some data annotation roles, in fact, require few qualifications, and fresh trainees may be trusted by tech companies to do this work after just three weeks of training. Relatively unskilled jobs like this may be created by AI, but they are also vulnerable to replacement by AI itself. China’s state broadcaster CCTV reports that 60 percent of data annotation is now being done by AI, doubling in just three years. 

The CCTV report points to a trend that few state media seem to be openly acknowledging amid the hype over AI jobs — that the field is already shifting towards more specialized employees. That will mean raising the bar for data annotator qualifications, and fewer people ultimately required to do this work. In its report, the Qianjiang Evening News quotes an anonymous application engineer as saying the number of data labellers at his company is decreasing already. “Big models can label themselves,” he told the newspaper.  

The same report suggested that the demand for AI skills varies widely between companies. Zhang, the pseudonymous recent graduate, said that most of the companies at the university job fairs in which they participated did not have AI-related jobs on offer. The ones that did have such jobs demanded a higher degree of education, generally as the master’s level. The concerning lesson drawn from Zhang’s experience is that the training provided by these new AI education centers does not suit current demand from tech companies — to say nothing of future demand. While companies often require in-depth expertise within specialized areas like fine-tuning AI models, AI courses often sacrifice depth by giving their students shorter periods of training in a wide variety of AI skills. 

A job advert on recruitment website Zhipin (直聘), from a vocational college in Hubei, says teaching experience is merely “preferred”, rather than “required.”

Another concern emerges: who will teach the next generation of AI specialists? The sudden expansion of colleges to accommodate the needs of the AI+ initiative is no doubt creating a talent dearth of its own. In a speech earlier this month, a senior scientist from Peking University claimed many AI centers employed inexperienced professors in order to fill teaching positions. He added that certain AI centers were moving members of their mathematics and art colleges to serve as “part-time” deans of these centers. 

Vocational schools could struggle even more. These colleges are usually stigmatized in Chinese society, stereotyped as only attended by students who failed their university entrance exams. This would put them at the bottom of the pile for aspirational AI talent. For example, one vocational college in Hubei says it created an AI major in response to the Ministry of Education’s push to cultivate high-quality AI talent. But it is advertising AI teaching positions where prior experience in this complex field is merely “preferred” rather than required.

It should come as no surprise that state media narratives of jam-packed job fairs handing out AI positions are overly optimistic. The disconnect is stark. While the handful of elite graduates at the pinnacle of China’s AI sector may enjoy rich opportunities, it is misleading to suggest that their exceptional success stories are evidence that AI has promised employment for the broader masses. The larger context matters: as Xi Jinping’s government pushes AI as a cornerstone of China’s economic future, a widening gap has formed between top-down ambitions and on-the-ground realities for millions of graduates. Instead of excitedly focusing on the long queues at AI stalls in job fairs, Chinese media should also be asking deeper questions about the issues that create them.

Bringing AI Down to Earth

As luminaries, including several Nobel laureates, mingled last month at the Zhongguancun Forum, an exchange in Beijing on high-tech innovation, a soaring report from China’s official Xinhua News Agency celebrated the spectacle of robots serving freshly ground coffee and performing backflips — concluding that the forum’s participants had “given aspirations wings to soar.” 

Judging from the reports filed by the four Xinhua journalists covering the forum, an annual showcase for China’s tech achievements, it’s not clear that they came down to earth long enough to attend a speech on March 29 by one of the country’s top AI scientists, who warned that the nation’s AI sector, now the crown jewel of China’s technological ambitions, is perpetuating a lofty and unrealistic self-image. “Things are exciting on the surface,” he said, “but when it comes to substance they are chaotic.” 

Currently dean of the Beijing Institute for General Artificial Intelligence, a research and development non-profit tied to the elite Peking University, Zhu Songchun (朱松纯) is one of the most influential figures in the sector. His message, that to remain globally competitive China needs fewer celebratory headlines and more substantive analysis, runs counter to the spellbound view of AI development that seems to have overtaken the government and official media like Xinhua.

But will Zhu’s message, as the lack of state media coverage suggests, fall on deaf ears?

Journalists or Cheerleaders? 

According to a detailed summary of his speech by Tencent Technology (腾讯科技), a tech news outlet published by the Chinese tech giant, Zhu did not mince words about how AI hype and AI reality have become detached in China. The current AI landscape, he said, is one in which media narratives, investment patterns, and government initiatives present a distorted picture of progress. “What’s truly blocking our progress is not foreign technology restrictions,” Zhu told the audience, “but our own limited understanding.”

The reasons for this problem? Zhu says both Chinese media and officials tasked with promoting AI have little understanding of how it works. For their part, the media have fed the public “exaggerated” stories about AI. While Zhu notes this as a key problem, he tactfully steps around an important impetus behind this coverage — the fact that the leadership’s appetite for promoting AI as the next driver of development is also exerting pressure on state media to signal positivity and success. 

Officials, meanwhile, again feeding into a vicious cycle of positive thinking, are under pressure from the public to implement policies based on the distorted narratives of the media, said Zhu. 

An AI story by Tencent Technology is illustrated by an unspecified AI service. 

AI technology is complex and relatively new to news organizations globally, meaning cutting through marketing hype from tech companies is a problem for journalists around the world. But as global AI competition heats up, Chinese media face additional pressure to exaggerate the capabilities of Chinese AI. 

In March last year, the Cyberspace Administration of China (CAC), the country’s top regulator and controller of the internet and information, emphasized that online media must create “positive propaganda” (正面宣传) about Chinese achievements. At the same time, the “AI+ initiative” (人工智能+ 行动), which aims to augment AI for every industry in China and thereby turbo-charge the “new productive forces” (新质生产力) that will lift the Chinese economy out of malaise, has become a central policy of the Party-state. 

That is a lot for AI to live up to, and this approach naturally demands cheerleaders over critical reporters. This is a typical approach for the Chinese Communist Party, for which hype and propaganda are often treated as rocket fuel, necessary to send the latest policy soaring to success. But such directives inevitably lead to unrealistic reports from China’s media outlets — which, as Zhu warns, can lead to magical thinking that is counterproductive. 

On March 28, the Shenzhen-based Securities Times (证券时报), a newspaper published under a subsidiary of the CCP’s People’s Daily, ran a report for which multiple data center entrepreneurs were interviewed. All of these insiders claimed that there is high demand in China for data centers, which have been hyped by Party policy-makers and advisors as critical to the success of the AI+ initiative. However, a recent report from MIT Tech Review revealed that supply now far outstrips demand, and many of these data centers are in fact standing empty — an investor-driven bubble that is strikingly familiar to that seen over the decades in the property market. 

Read more carefully between the lines in Chinese media reports, and the red flags start to reveal themselves. At one point in the Securities Times article, an interviewee remarks that one driver of data storage demand is “AI glasses.” But smart eyewear — a notion kicked around in the West since the 1960s as the technology of the future — has been a fallback focus of technology coverage in the Chinese state media for more than a decade. In fact, the market for AI glasses is not taking off. Smart glasses remain a gimmick trotted out every year by Chinese state media during political  meetings, when outlets can demonstrate their embrace of the government’s high-tech goals. 

During the annual meeting of China’s National People’s Congress last month, a foreign journalist was asked to try on a pair of smart glasses — and promptly became a headline story in state media. SOURCE: ShanghaiEye.

 Talk of AI glasses as a driver behind data centers exposes the level of unreality that often takes hold, even among those cited as expert insiders. And the hype extends from foundational technologies and trends in China to self-assessments of the state of the industry. 

In his speech, Zhu also took aim at another favorite meme among Chinese journalists, what has become known as the “six little large language model dragons” (大模型小六龙). This is a group of highly-valued AI start-ups specializing in LLMs, the artificial intelligence systems trained on massive text datasets to generate human-like responses across various tasks. Chinese media outlets are awash with coverage of these six companies and their newest releases of AI models, but they often omit key facts and context — such as more in-depth exploration of their products or business models.

Contrary to their stellar images as exemplars of Chinese AI strength, Zhu described these six companies as high-risk, overvalued and — at least so far — unprofitable. One of the six, Zhipu AI (智普AI), released its latest model at the Zhongguancun Forum, and this was billed by the Xinhua reporters as enabling AI “to leap out of the dialog box and perform real work for humans.” Once again, the language was all about leaps and bounds, even though none of the reporters actually tested the model. 

The fact that Zhipu released this latest model for free and allowing unlimited use would seem to support Zhu Songchun’s view that sustainable revenue models remain grounded. In a freer and more vibrant media environment, that might be the real story. But the point of AI coverage in China’s media is to promote, promote, promote. And this lack of scrutiny extends to AI stories fired into the air for international audiences. The priority is to emphasize the successes of China under the current CCP leadership, which Xi Jinping has called “telling China’s story well” (讲好中国故事). The story of China as a high-tech hub and innovator has become one of the CCP’s central narratives, for audiences at home and abroad. 

Once again, the language was all about leaps and bounds, even though none of the reporters actually tested the model. 

If you want to know whether you are being sold a rocket or a firecracker, one approach is to simply look closer at news reporting basics. In November last year, Xinhua published an English-language article touting the innovations of an image diffusion model from Chinese-owned AI platform Vidu. The article claimed that the model had made ground-breaking improvements to “consistency,” a problem plaguing image diffusion models. But the piece quoted only the company’s CEO and one Western netizen on X to back up these claims. If Xinhua journalists had tested the software, as we did, or had spoken to other experts, they would have found the model highly inconsistent — and the claims dubious.

Reports like the above are a reminder of the obvious — that Chinese state media are not just duty-bound to promote the positives of national development over the challenges, but that they often have a too-cozy relationship with the companies on which they report. 

Clipping the Wings of Criticism

For Zhu, the fundamental contradiction is clear: China’s AI sector cannot advance by chasing headlines rather than breakthroughs. He argued that when officials, media outlets and the public operate with a distorted understanding of AI capabilities, China’s entire innovation ecosystem suffers. This superficial approach, he suggested, has trapped China in a cycle of imitation rather than invention — simply scaling up language models and finding incremental applications that mirror Silicon Valley’s path. “If we just repeat the old path of the United States – computing power, algorithms, and deployment, we will always be followers,” he concluded.

Instead, Zhu called for a fundamental shift toward researching the nature of intelligence itself — a strategy that could potentially leapfrog current AI paradigms entirely. By focusing on these foundational questions rather than chasing quarterly breakthroughs trumpeted in promotional press releases, China might discover entirely new frameworks for artificial intelligence that competitors would scramble to replicate.

Yet Zhu’s critique of the propaganda-driven approach appears to have fallen victim to precisely the dynamic of hype he described. While his remarks found outlets in more market-oriented publications like Tencent Technology, Caixin and The Paper, flagship state media organizations like Xinhua and the People’s Daily conspicuously omitted his warnings from their coverage. Instead, these Party organs continued to showcase a parade of applications and robots — the very surface-level achievements that Zhu suggested are distracting China from the deeper scientific work needed to truly lead in artificial intelligence. In a system where positive messaging trumps critical analysis, even warnings from one of the nation’s top AI scientists can be edited out of the narrative.

Since the event, there are signs that Zhu’s wings may have been clipped even more decisively. On April 15, an institute from Peking University responsible for international cultural exchanges (中外人文交流) issued a “clarification” on his behalf, claiming that some media outlets had misrepresented his words in what the institute claimed had in fact been a “closed-door media communication meeting.” The timing suggests Zhu’s candid assessment of the industry may have drawn unwelcome attention from authorities eager to maintain the narrative of Chinese AI supremacy. The message is that everyone, including the media, must train their eyes upward on the future — even if it means ignoring the ground beneath their feet. 

This disconnect was illustrated once again over the weekend, as Beijing hosted a half marathon where Chinese-built robots raced alongside human competitors. The CCP’s official People’s Daily described the event as a “fierce competition” that had pushed the robots to their limits. Xinhua sang about “infinite possibilities,” and proclaimed in its headline that the racing event had “closed the distance between us and the future.” The less stellar reality, alluded to in a report by Guangzhou’s Southern Metropolis Daily that noted the “many problems” holding the race down, was that the robots had suffered constant failures and necessitated nearly constant repairs by the exhausted human crews running alongside them. In the end, only six of the 21 robot entries completed the race, and one quite literally lost its head.

But in another sense, the race pointed the way toward the possibility of a healthier, more open and more self-critical attitude toward technology and progress — an alternative to the propaganda of constant rise. The Global Times, though in English-language coverage only, remarked somewhat disingenuously that “[behind] this ‘imperfect’ robot half-marathon is the mature atmosphere of tolerance, understanding and acceptance of failure that has developed in Chinese society from top to bottom toward the high-tech industry.” If that were true, of course, no public moderation of Zhu Songchun’s remarks behind closed doors would have been necessary. It would be perfectly acceptable to say: We are getting this wrong. But the Global Times was on to something. 

In its coverage of the Beijing half marathon, Caixin, an outlet tending more than most others in China to tell it like it is, reported that the robots had “walked with a staggering gait” (步履蹒跚). This might be the best image to capture a truth applicable to all innovation — that progress is made and measured by confronting limitations, not by promoting past them. As Zhu Songchun made clear in an address that perhaps now he has been made to regret, China will need to learn to stumble honestly — and openly — if it is to reach its grand AI ambitions.

The most important step forward is coming back down to earth. 

Deadly Blunders in Bangkok

As a 7.7 magnitude earthquake struck Myanmar and Thailand last Friday, the temblor rattled buildings across the sprawling Thai capital of Bangkok, home to an incredible 142 skyscrapers. When the shaking ceased all were standing strong — with one very notable exception. The State Audit Office (SAO) building in Chatuchak district, a 30-story skyscraper still under construction by a subsidiary of a Chinese state-owned enterprise, collapsed into a heap of rubble, trapping nearly 100 people inside. 

As of this week, 15 have been confirmed dead in the collapse, and a further 72 remain missing. Thailand announced over the weekend that it was launching an investigation to determine the cause of the collapse, and the prime minister said the tragedy had seriously damaged the country’s image. 

As emergency teams sifted through the wreckage in the immediate aftermath, the building’s primary contractor, China Railway No. 10 Engineering Group, came under intense public anger and scrutiny. Anger was further fueled by clear efforts by the company, and by Chinese authorities, to sweep the project and the tragedy under the rug. 

An image on a WeChat post deleted by China Railway No. 10 Engineering show the crew celebrating the capping of the Bangkok building.

Shortly after the collapse, the China Railway No. 10 Engineering Group removed a post from its WeChat account that had celebrated the recent capping of the building, praising the project as the company’s first “super high-rise building overseas,” and “a calling card for CR No. 10’s development in Thailand.” Archived versions of this and other posts were shared by Thais on social media, including one academic who re-posted a deleted promo video to his Facebook account — noting with bitter irony that it boasted of the building’s tensile strength and earthquake resistance. 

Trying to access news of the building collapse inside China, Taiwan’s Central News Agency (CNA) reported that queries on domestic search engines returned only deleted articles from Shanghai-based outlets such as The Paper (澎湃新闻) and Guancha (观察网). In a post to Weibo, former Global Times editor Hu Xijin (胡锡进) confessed that the building “probably had quality issues.” Even this post was rapidly deleted, making clear that the authorities were coming down hard on the story.

Searches on Weibo today for “Bangkok” and “tofu-dreg projects” (豆腐渣工程), a term often used in Chinese to describe shoddy and dangerous construction, return almost entirely results prior to March 18, ten days before the collapse in Bangkok. One rare post from March 28, however, shares the screenshot of a social media post that day by Beijing Youth Daily (北京青年報), an outlet under the capital’s local chapter of the Communist Youth League, that apparently included street-view video of building collapse in Bangkok. A hashtag on the post reads: “#A building under construction in Bangkok collapses during earthquake#” (曼谷一在建高樓地震中坍塌).

The still image appears to capture an early moment in the building’s collapse, which was recorded at the same moment from another angle by a dashcam — footage shared in a report by the BBC. The Weibo user reposting the image from the Beijing Youth Daily account takes care not to directly mention the Chinese construction company, commenting only: “The earthquake was strong, but this was clearly a ‘tofu-dreg project,’ no? The relevant construction parties should be held to account!”

Several news outlets in the region have also reported, citing the commissioner of Bangkok’s Metropolitan Police Bureau, that an investigation has been launched into the alleged removal of 37 files from the building site, now a restricted zone, by four Chinese nationals. Bernama, Malaysia’s national news agency, reported Monday that one Chinese national, identifying himself as project director at the site, had been apprehended.

Meanwhile, the machinery of propaganda continued to turn out feel-good news on China’s response to the quake. The Global Times reported that emergency assistance for Myanmar embodied Xi Jinping’s foreign policy vision of a “community of shared future for mankind.” In Hong Kong, the Ta Kung Pao (大公報) newspaper, run by the Liaison Office of China’s central government, twisted the knife into the United States as it reported on the earthquake response, noting the absence of USAID, recently dismantled by the Trump administration. Behind the news, the paper declared, “China’s selfless response demonstrates the responsibility of a great power.”

China’s AI Content Dragnet

Hundreds of gigabytes of data lurking on an unsecured server in China linked to Baidu, one of the country’s largest search engines and a major player in the fast-developing field of artificial intelligence (AI), offer a rare glimpse into how the government is likely directing tech giants to categorize data with the use of AI large language models (LLMs) — all to supercharge the monitoring and control of content in cyberspace.  

First uncovered by Marc Hofer of the NetAskari newsletter, the data is essentially a reservoir of articles that require labeling, each article in the dataset containing a repeated instruction to prompt the LLM in its work: “As a meticulous and serious data annotator for public sentiment management, you must fully analyse article content and determine the category in which it belongs,” the prompt reads. “The ultimate goal is to filter the information for use in public opinion monitoring services.”

In this case, “public opinion monitoring,” or yuqing jiance (舆情监测), refers broadly to the systematic surveillance of online discourse in order to track, analyze, and ultimately control public sentiment about sensitive topics. For social media platforms and content providers in China, complying with the public opinion monitoring demands of the Chinese government is a herculean effort for which many firms employ thousands of people — or even tens of thousands — at their own cost. This leaked dataset, of which CMP has analyzed just a small portion, suggests that this once-human labor is increasingly being automated through AI to streamline “public opinion monitoring and management services,” known generally as yuqing yewu (舆情业务). 

Extract of the dataset, an instruction to classify a piece of data according to 38 described categories

What does the dataset tell us? 

First, it reveals a sophisticated classification system with 38 distinct categories, running from more mundane topics like “culture” and “sports” to more politically sensitive ones. Tellingly, the three categories marked as “highest priority” in the dataset align distinctly with state interests as opposed to commercial ones. Topping the list is “information related to the military field,” followed by “social developments” (社会动态) and “current affairs developments” (时政动态). This prioritization underscores how private tech companies like Baidu — though it could not be confirmed as the source of this dataset — are being enlisted in the Party-state’s comprehensive effort to monitor and shape online discourse.

The scope of this monitoring operation is reflected in the sheer volume of data — hundreds of gigabytes found on an unsecured server. While many questions about the dataset remain unanswered, it provides unprecedented insight into how Chinese authorities are leveraging cutting-edge AI technology to extend and refine their control over the information environment, pressing the country’s powerful tech companies to serve as instruments of state surveillance. 

Weathermen and Forecasters

To understand the significance of the “public opinion monitoring” this dataset supports, we must turn the clock back to 2007, the year that saw the rise of microblogging platforms in China, fueling real-time engagement with current affairs by millions of internet users across the country. In comparison to today, China’s internet at that time was still relatively untamed. That year, one of a number of major controversies erupting in cyberspace was what eventually became known as the “Shanxi Brick Kiln Incident” (黑砖窑事件) — a “mass catharsis of public anger,” as Guangzhou’s Southern Metropolis Daily newspaper dubbed it. 

The scandal, exposed only through the dogged determination of concerned parents who scoured the countryside for their missing children, revealed that over 400 migrant workers, including children, had been held in slave-like conditions at a brick kiln complex in Shanxi province — a situation one court judge candidly admitted in the scandal’s aftermath was “an ulcer on socialist China.” As news and outrage spread virally online in June 2007, it ballooned beyond the capacity of the state’s information controls. Party-state officials witnessed firsthand the power of the internet to mobilize public sentiment — and, potentially, threaten social and political stability.

Screenshot of a report on China Central Television showing enslaved workers liberated from kilns in Shanxi. SOURCE: CCTV. 

This watershed case fundamentally transformed the leadership’s approach to managing online discourse. What began as a horrific human rights abuse exposed through citizen journalism became the catalyst for what would evolve into a sophisticated public opinion monitoring apparatus with national reach, and a booming industry in public opinion measurement and response. 

By 2008, the “Shanxi Brick Kiln Incident” had kickstarted the “online public opinion monitoring service industry” (网络舆情服务行业), an entire ecosystem of information centers set up by state media (like the People’s Daily and Xinhua News Agency), as well as private tech enterprises and universities. Analysts employed in this growing industry were tasked with collecting online information and spotting trending narratives that might pose a threat to whomever was paying for the research — in many cases provincial and local government clients, but also corporate brands. 

While the primary motivation was to forestall social and political turmoil, serving the public opinion control objectives of the leadership, the commercial applications of control were quickly apparent. Five year laters, Guangzhou’s Southern Weekly (南方周末) newspaper would report on the “big business” of helping China’s leaders “read the internet,” with revenues from related business at People’s Daily Online, a branch of the CCP’s own People’s Daily, set to break 100 million yuan, or 16 million dollars. According to the paper, 57 percent of public opinion monitoring clients at the time were local governments. 

“For government departments at all levels, the need to understand online public opinion has become increasingly urgent,” the Southern Weekly captioned this image in 2013. The chart shows public opinion incidents peaking in June, November and December each year. Local governments account for 57 percent of clients at the time. SOURCE: Southern Weekly

“If online public opinion is an important ‘thermometer’ and ‘barometer’ for understanding social conditions and public opinion,” the founder and director of the People’s Daily Online Public Opinion Monitoring Center (人民网舆情监测室),  Zhu Huaxin (祝华新), said at the time, “then public opinion analysts are ‘weathermen’ and ‘forecasters.’” 

The job of China’s public opinion forecasters and weathermen has evolved over the past 18 years. In 2016, as the industry neared the end of its first decade, and as online public opinion continued to move faster than analysts could manage, China Social Sciences Today (中国社会科学报), a journal under the government’s State Council, urged the system to upgrade by applying “big data” (大数据). Over the past decade, automating public opinion services and cutting down on costs has been the goal in the evolving business of managing public opinion. Today, the entire system is now being supercharged by AI. 

Those gigabytes of data lurking on an unsecured Baidu server offer us a closer look at how the public opinion monitoring work of AI is being organized. 

A Cog in the Machine

What exactly does the prompt in this dataset do? When copy-pasted along with a news article into Chinese large language models like Baidu’s Ernie Bot (文心一言) or DeepSeek, the prompt instructs the AI to classify the article into one of the 38 predefined categories. The LLM then outputs this classification in json format—a structured data format that makes the information easily readable by other computer systems.

This classification process is part of what’s known as “data labeling” (数据标注), a crucial step in training AI models where information is tagged with descriptive metadata. The more precisely data is labeled, the more effectively AI systems can analyze it. Data labeling has become so important in China that the National Development and Reform Commission released guidelines late last year specifically addressing this emerging industry.

When the prompt is put to Baidu’s Ernie Bot, it provides one of the listed classifiers as an output, in code format. 

The dataset strongly suggests that Baidu is using AI to automate what was once done manually by tens of thousands of human content reviewers, with varying levels of automation. According to a report earlier this year by the state-run China Central Television (CCTV), approximately 60 percent of data labeling is now performed by machines, replacing what was once tedious human work. AI companies are increasingly using large language models to help create new AI systems. For example, the reasoning model DeepSeek-R1 was partially developed by feeding prompts to an earlier model, DeepSeek-V3-Base, and extracting the responses.

Monitoring and Manipulation

What can we learn from the three “public opinion related” categories that Baidu’s dataset identifies as “most important”? While we couldn’t find official regulations from the Cyberspace Administration of China (CAC) specifically using these three categories, the content in these classifications reveals what the Chinese government considers most critical to monitor.

A report in 2010 reviews what at the time was the short history of the public opinion monitoring profession. 

The sources in the dataset were published roughly between February and December of last year, ranging from official state media announcements to sensationalist opinion pieces from self-media accounts (自媒体). Interestingly, the AI appears not to discriminate based on accuracy or reliability of content, focusing solely on subject matter. Some content could not be clearly categorized. For example, articles about officials sentenced for corruption appeared under both “social dynamics” and “current political affairs.”

Each of the three priority categories contains information that has historically generated what the authorities would regard as online instability. “Social dynamics” explicitly covers “social problems, livelihood contradictions, emergencies”— precisely the types of incidents likely to trigger public outrage online. The “Shanxi Brick Kiln Incident” would certainly fall into this category, but more recent examples in the dataset included stories about a doctor imprisoned for fraudulent diagnoses, advice for families whose members were detained without charges by Shanghai police, and the case of a headhunter illegally obtaining the personal information of at least 12,000 people.

Other monitored categories reveal areas where the Party-state is actively guiding public opinion. “Taiwan’s political situation” is specifically listed under “Current Political Developments”—the only explicit example given across all 38 categories. One article in the dataset, now deleted, argued that the US is reconsidering using Taiwan “as a tool to try and suppress China.” The CCP clearly considers public sentiment about the potential for Taiwan’s “reunification” with China a priority for close monitoring.

Similarly, military information is closely watched. Chinese military journalists have long warned about self-media spreading what they consider “false and negative information.” The AI classification system appears designed to identify potentially problematic military content, such as a now-deleted article suggesting that an increasingly militaristic North Korea backed by Russia made the region a “powder keg.” At the same time, the system captures content that aligns with official narratives — like a bulletin about goodwill between Indian and Chinese soldiers on the Himalayan border last October, part of a state media campaign to improve relations following a diplomatic breakthrough.

The exact purpose of this dataset remains unclear. Were these classifications developed internally by Baidu — or were they mandated by state regulators? Nevertheless, the unsecured data offers a glimpse into the inner workings of China’s AI content dragnet. What was once a labor-intensive system requiring thousands of human censors is rapidly evolving, thanks to the possibilities of AI, into an automated surveillance machine capable of processing and categorizing massive volumes of online content. 

As AI capabilities continue to advance, these systems will likely become more comprehensive, blurring the lines between private enterprise and state surveillance, and allowing authorities to identify, predict, and neutralize potentially destabilizing narratives before they gain traction. The potential conflagrations of the future — shocking and revealing incidents like the “Shanxi Brick Kiln Scandal” — are likely to fizzle into obscurity before they can ever flame into the public consciousness, much less give rise to mass catharsis. 

Shrinking Humanities for AI

Shanghai’s Fudan University (复旦大学) is one of China’s most prestigious universities, with a raison d’etre unchanged, it claims, since the institution was founded in 1905: improving China’s position in the world through education. As artificial intelligence takes the world by storm — and becomes a crucial priority from top to bottom in China — the means of achieving that mission is changing, according to the university’s president, Jin Li (金力). 

On February 25, Jin announced that Fudan would drastically reduce its course offerings in the humanities, instead focusing on AI training. In an interview with Guangzhou’s Southern Weekly (南方周末) on March 6, Jin said the university wanted to cultivate students that “can cope with the uncertainty of the future.” For Li, cutting the liberal arts cohort by as much as 20 percent is a social necessity. As he asked rhetorically in the interview: “How many liberal arts undergraduates will be needed in the current era?” (当前时代需要多少文科本科生?). 

At present, courses related to artificial intelligence at Fudan are at 116 — and counting. And the university isn’t alone in downsizing the arts. Combing through Ministry of Education statistics on university courses cancelled in 2024, the commercial newspaper Southern Metropolis Daily (南方都市报) noted that the majority were for liberal arts degrees, with some universities even abolishing their humanities colleges altogether.   

Limiting the humanities comes at a time of broader upheaval in higher education within China. In 2023, the Ministry of Education issued a reform plan ordering that by this year, 20 percent of university courses must be adjusted,with new course offerings introduced to “adapt to new technologies.” According to the plan, majors “not suitable for social and economic development” should be eliminated altogether. 

Limiting the humanities comes at a time of broader upheaval in higher education within China.

AI is almost certainly foremost in the ministry’s mind as it considers plans for the overhaul of education. The country’s “AI+” campaign, introduced during last year’s National People’s Congress, pegs the new technology as key to China’s future development — the source of “new productive forces” (新质生产力) that will rejuvenate the economy. As such, some universities are expanding their offerings in AI courses, making AI literacy classes compulsory for students, and allowing a lax approach to using AI in research. Tianjin University, for example, has decreed students can use AI-generated content for up to 40 percent of a graduation thesis. But that raises the obvious question: if a machine writes 40 percent of your paper, have you really only learned 60 percent of the content?

Since 2023, there have been increasingly lively debates — and much hand-wringing — about the ethics and limitations of AI use in higher education. In China, it seems, it is full steam ahead.