Alex has written on Chinese affairs for The Economist, The Financial Times, and The Wire China. He was based in Beijing from 2019 to 2022, where his work as Culture Editor and Staff Writer for The World of Chinese won two SOPA awards. He is still recovering from zero-Covid.
In its early stages in 2009, Sina Weibo built its success on larger-than-life personalities known as the “Big Vs” (大V), who were meant to be magnets attracting conversation — and much-desired traffic — to the platform. The strategy worked, and by 2010 media would proclaim that China had entered the “Weibo Era” (微博时代). But within several years, the idea of a privately-owned tech platform building mass audiences outside of CCP control would become untenable for the leadership. A 2014 crackdown on “Big Vs” was the beginning, some might say, of the inexorable unraveling.
Now, 15 years on from the “beta” launch of Weibo, it may be time to ask: has life gone out of the platform? This week, private tech news service 36Kr ran a feature about the lack of any genuine celebrities in attendance at Weibo’s “Super Celebrity Festival” (微博超级红人节) awards. At Weibo’s initial launch in 2009, users were attracted by the chance to hear directly on a range of social, economic,and even political, topics from informed experts who accrued large followings, and were generally known as “public intellectuals” (公共知识分子), or gongzhi for short.
The Surrounding Gaze
Just seven years ago in the Shanghai-based outlet Sixth Tone, which has since fallen on its own hard times, researcher Han Le could note how these figures had the power to shape social participation around large-scale breaking stories, such as the 2011 Wenzhou train collision and the 2015 Tianjin explosion. “Public intellectuals stepped into the breach,” they wrote, “largely encouraging the government to conduct thorough and open investigations, to properly commemorate those who had died, and to further ensure that similar tragedies do not occur in the future.”
China’s leaders, who today still make it their business to “guide public opinion” through the control of media and communication, had long bristled at the notion of “public intellectuals” outside the official system. The emergence of op-ed pages in commercial metro newspapers (都市类报纸) in the early 2000s had given rise to broader range of voices. In December 2004, the Central Propaganda Department-run Guangming Daily (光明日报) ran a series of scathing attacks on the notion of “public intellectuals,” which it dismissed as a dangerous product of Western social thought.
But the emergence of Weibo in the 2010s was something different entirely — a grassroots platform with the power to gather the attention of millions, within seconds, even as the authorities scrambled to take microblog posts down. The “Big Vs” were the amplifiers in this process of attention-grabbing, which some framed in new terms of cyber social activism as the “surrounding gaze” (围观) — the idea that if everyone bore witness to wrongs, then those in power would have to react.
The Ineluctable Fizzle
A decade on from Xi Jinping’s concerted push to rein in the “Big Vs” created by Weibo’s original celebrity push, the platform seems a shadow of itself. Competition from more personalized apps like Douyin and Xiaohongshu, and unrelenting pressure facing more controversial accounts, have driven a mass migration of Weibo users. Today, writes 36Kr, Weibo’s special community feel has vanished. The open discussions that once buzzed around public intellectuals are gone.
The platform has literally lost a measure of its humanity. Traffic is often driven these days by “Big Vs” who push controversial topics purely to attract traffic, or by marketing accounts that do the same with an eye to driving up product sales. A common feature of both, according to 36Kr, is that “they rarely show their real lives, and are more like AI robots.” This comes, says the outlet, as bots and trolls have proliferated on Weibo over the past 10 years.
Politics has of course made its own contributions to the disappearance of public intellectuals from the platform. Former Global Times editor-in-chief and “Big V” Hu Xijin (胡锡进) has not posted anything on Weibo since late July, when his influential account was suspended for an unauthorized interpretation of the Third Plenum decision. On August 7, the account of Lao Dongyan (劳东燕), a criminal law professor at Tsinghua University with a respectable following of her own, was also banned for defending her criticisms of upcoming internet IDs for Chinese netizens.
Forums like Zhihu (知乎) or WeChat Moments still provide a town square of sorts for groups to form, but these are smaller, devoid of the larger-than-life “public intellectuals” of Weibo that once served as known voices for netizens to rally round. Going forward, the roll-out of “internet IDs” by the Cyberspace Administration of China could encourage netizens to be even less willing to form communities on the Chinese internet. As for those big personalities, these are not the days to stick one’s head above the parapet — or to show up for a “Super Celebrity Festival.” Many are laying low, which makes China’s internet a far quieter place.
It was a terrible answer to a naive question. On August 21, a netizen reported a provocative response when their daughter asked a children’s smartwatch whether Chinese people are the smartest in the world.
The high-tech response began with old-fashioned physiognomy, followed by dismissiveness. “Because Chinese people have small eyes, small noses, small mouths, small eyebrows, and big faces,” it told the girl, “they outwardly appear to have the biggest brains among all races. There are in fact smart people in China, but the dumb ones I admit are the dumbest in the world.” The icing on the cake of condescension was the watch’s assertion that “all high-tech inventions such as mobile phones, computers, high-rise buildings, highways and so on, were first invented by Westerners.”
Naturally, this did not go down well on the Chinese internet. Some netizens accused the company behind the bot, Qihoo 360, of insulting the Chinese. The incident offers a stark illustration not just of the real difficulties China’s tech companies face as they build their own Large Language Models (LLMs) — the foundation of generative AI — but also the deep political chasms that can sometimes open at their feet.
Qihoo Do You Think You Are?
In a statement on the issue, Qihoo 360 CEO Zhou Hongyi (周鸿祎) said the watch was not equipped with its most up-to-date AI. It was installed with tech dating back more than two years to May 2022, before the likes of ChatGPT entered the market. “It answers questions not through artificial intelligence,” he said, “but by crawling information from public websites on the Internet.”
The marketing team at Qihoo 360, one of the biggest tech companies invested in Chinese AI, seems to disagree. The watch has indeed been on sale since at least June 2022, meaning its technology can already be considered ancient in the rapidly developing field of AI. But they have been selling it on JD.com as having an “AI voice support function.” We should also note that Qihoo 360 has a history of denials about software on its children’s watches. So should we be taking Qihoo 360 at its word?
Zhou added, however, that even the latest AI could not avoid such missteps and offenses. He said that, at present, “there is a universally recognized problem with artificial intelligence, which is that it will produce hallucinations — that is, it will sometimes talk nonsense.”
Model Mirage
“Hallucinations” occur when an LLM combines different pieces of data together to create an answer that is incorrect at best, and offensive or illegal at worst. This would not be the first time that the LLM of a big Chinese tech company said the wrong thing. Ten months ago, the “Spark” (星火) LLM created by Chinese firm iFLYTEK, another industry champion, had to go back to the drawing board after it was accused of politically bad-mouthing Mao Zedong. The company’s share price plunged 10 percent.
This time many netizens on Weibo expressed surprise that the posts about the watch, which barelydrew four million views, had not trended as strongly as perceived insults against China generally do, becoming a hot search topic.
For nearly any LLM today, the hallucinations Zhou Hongyi referred to are impossible to have total control over. For those wanting to trip them up to create humorous or embarrassing results, or even to override safety mechanisms — a practice known in the West as “jailbreaking” — this remains relatively easy to do. This presents a huge challenge for Chinese tech companies in particular, which have been strictly regulated to ensure political compliance and curb incorrect information, even as they are in a “Hundred Model War” push to generate and develop LLMs.
As China’s engineers know only too well, it is not possible to plug all the holes. Reporting on the Qihoo story, the Beijing News (新京报) said hallucinations are part of the territory when it comes to LLMs, quoting one anonymous expert as saying that it was “difficult to do exhaustive prevention and control.” Interviewees told the Beijing News that steps can be taken to minimize untrue or illegal language generated by hallucinations, but that removing the problem altogether is impossible. In a telling sign of the risks inherent in acknowledging these limitations, none of these sources wanted to be named.
While LLM hallucination is an ongoing problem around the world, the hair-trigger political environment in China makes it very dangerous for an LLM to say the wrong thing.
Hopes are high for AI in China. Not only, according to prevailing narratives, will the country’s advanced artificial intelligence enable it to rival the United States in a critical field of emerging tech, but this act of one-upmanship will also help to cement China’s role as a great power and ensure that it avoids another “Century of Humiliation” — the roughly 100-year period from the First Opium War to the end of WWII, when foreign powers dominated China and carved off chunks of its territory.
In fact, hopes might even be too high. State media have recently begun cautioning AI’s cheerleaders that they need to tone it down.
An article last month in the China News Publishing & Broadcasting Journal (中国新闻出版广电报) — a periodical aimed at media specialists and printed by a media group directly under the Central Propaganda Department — reminded readers that AI-generated content (AIGC) is still “in its infancy” and can’t be expected to perform miracles just yet. Some outlets know too little about AI’s current capabilities and limits, the piece says, yet they have launched full-blown AI projects that have, predictably, stagnated. It urges Chinese media to “avoid blindly following trends” and buying into “excessive hype.”
For some newsrooms, AI in its “infancy” is behaving more like a problem child than a wunderkind.
Unknown Input
For years, the CCP has made it clear that AI development is both a strategic priority and a point of national pride. It is “a new focus of international competition,” as per a State Council document from 2017. Key communiqués in 2024 from both the government and the Party indicate pushing AI is a priority.
The powerful Cyberspace Administration of China made the stakes clear in an article in People’s Daily earlier this year, when it said AI could do for China in the 21st century what the Industrial Revolution did for the UK in the 19th century: transform it from a marginal set of islands to the world’s greatest empire. China’s weakness at the time, the CAC editorial says, was the consequence of turning away from the latest technology. Since then, media nationwide have been pushing AI-generated content hard. Eleven outlets opened their own specialized AIGC studios (AIGC工作室) by the end of June this year, or have collaborated to create their own AIGC content and Large Language Models (LLMs), the computational models that are now powering generative AI.
Some of these ventures have been successful. Chengdu Radio and Television collaborated with over ten other provincial stations to create AI videos promoting the distinct features of each locality. The humble Weifang Bohai International Communication Center has a remarkably life-like AI anchor courtesy of China Daily that’s been delivering weekly bulletins since the start of July. But others have promised more than they can deliver. An online platform supervised by Jiangxi’s propaganda department announced that “after more than a year of exploration and practice,” it was opening its own AIGC studio with a laundry list of AI-based content — almost none of which has materialized.
Embracing new tech is no guarantee it will be easy to implement, even for those with access to the best resources. Some of the biggest outlets in the country have already had LLMs of their own for some time now, but despite boasting of how they shorten production times from weeks to a few days, they have produced little with these tools. CCTV used a new LLM of its own to create a series of AI-generated videos in February, promising 26 episodes but stopping after just six. The official broadcaster has published many more AI videos since then, but no longer lists the LLM used to generate them. Shanghai Radio and Television’s AIGC studio, meanwhile, has only produced five videos in as many months — not exactly an appreciable gain in efficiency.
Perhaps they have been countering the same problems as Bona Film Group with their new AI-generated series on Douyin. Technicians at the state-owned production company told reporters they were struggling to keep the algorithm from hallucinating and to get it to ensure continuity between shots and realistically depict the human body in motion. This has also troubled Kling (可灵AI) from Douyin rival Kuaishou, which despite opening a recent New York Times report on Chinese AI “closing the gap”, still has significant drawbacks. A Kling video shared online shows a gymnast whose limbs morph and merge like one possessed. We asked it to animate a photo of swimmers diving into a pool and the results defied gravity.
Nevertheless, Kuaishou says they’ll be using Kling to make a micro-drama series.
Some AI-generation software is better than others, but many of the precedents so far are less than promising. Take, for example, Yangcheng Evening News (羊城晚报), a paper under Guangdong’s provincial propaganda department. The paper recently used Tencent’s “Cloudy” (混云) LLM to create several videos announcing the establishment of their AI lab — mostly psychedelic, four-second clips spliced together with mismatched artistic styles. The novelty and prestige of the tech may excuse the patchiness of these teasers, but any full-length news bulletin or documentary created with this tool would be unwatchable.
Overhyped Output
AI has generated hype all around the world, but the nature of China’s political system makes that hype harder to call out, at least in public. Chinese state media and tech companies have been trumpeting the big promises and potential of AI while turning a blind eye to the teething problems outlets are facing right now to implement this technology. The media’s job, authorities have made clear, is to push positive messaging about China’s technological development, and AI-generated videos are very effective public displays of that. At the same time, tech firms are locked in cut-throat competition to convince the media of their successes.
This dynamic played out at the World AI Conference in Shanghai earlier this month. State media ran a series of pieces (also here and here) showcasing the million-dollar deals made, with Xinhua pointing to it as evidence of the “innovative vitality” of Chinese AI. But privately-owned, Nasdaq-listed 36Kr was less impressed. Their reviewer noted the event’s popularity and the photogenic wall of robots at the entrance for visitors to take selfies in front of, but that nothing inside was actually new. Most of the big companies were all doing the same thing. “They all play with general large models, and then make AI-generated pictures and videos,” they wrote.
AI is still an evolving technology, full of uncertainties about how the technology can and cannot be used. There have even been rumblings in the West that LLMs may be a dead end altogether, unable to improve further or to do so without extreme difficulty. For its part, China’s leadership is dead-set that this is the way forward, and state media like the Economic Daily (经济日报) have urged readers to ignore fluctuating sentiment on Wall Street about AI investments. Bullishness is the order of the day. Now it’s up to the country’s media outlets to find a way to make the promise of AI real.
Highlighting the growing role of China’s provinces in the state-led push to bolster its global messaging, a media delegation from the South American country of Guyana visited a propaganda office-run international communication center (ICC) in the coastal province of Shandong this week — with at least one outlet signing an agreement for cooperation.
During their visit, the Guyanese outlets toured the facilities of the Shandong International Communication Center (山东国际传播中心), or SICC, a center established in November last year under the state-owned Shandong Radio and Television (山东广播电视台), tasked with boosting Chinese propaganda abroad.
In a formal ceremony on Monday, the SICC signed a cooperation agreement with the Guyana Times (圭亚那时报), with both sides pledging to “deepen cooperation in the exchange of news copy, personnel, branding and other aspects.” The Guyana Times, which identifies itself in its motto as a “beacon of truth,” was first launched in 2008 as the country’s first full-color broadsheet, and now runs an online news portal as well as radio and television channels in the country, directed as its population of just over 800,000 as well as diaspora communities in the United States and Canada.
The Chinese embassy in Guyana has been playing a long game in wooing Guyana’s media. In December 2022, the Chinese embassy hosted an event for journalists in Guyana, the ambassador telling assembled journalists (which included the CEO of NCN) that they needed to better understand China. “They should not simply reprint news from Western media, but should also pay attention to Chinese media reports.” The Chinese embassy in Guyana notes expressly on its profile for the country that its print media have mainly resorted to Western media sources for China-related coverage.
The embassy-hosted event closed with remarks from NCN anchor Samuel Sukhnandan on his experiences two months earlier while in a training course as the International Press Communication Center (IPCC) in Beijing. Directly under China’s Ministry of Foreign Affairs (MOFA), the IPCC hosts courses and internships for journalists, largely from the Global South, to introduce China’s society and political system and encourage what MOFA, in a lengthy text on public diplomacy strategies, called “objective media reporting on China.” During his Beijing training course, Sukhnandan submitted a news account to the Guyana Chronicle of the CCP’s 20th National Congress. In the report, the journalist quoted liberally from Xi Jinping’s political report, without any additional sourcing or context. The report closed by saying that the political event would “culminate” the following Saturday.
According to the embassy read-out of the December event back in Guyana, Sukhnandan said that after attending the IPCC course he realized “Western media reports on China were often one-sided and inaccurate, and he was willing to work hard to enhance objective reporting on China in the future.”
Sukhnandan is back in China this week, taking part in the tour of the Shandong ICC, which is applying at the provincial level the lessons that MOFA has pushed at the national level.
Local communication centers like the one in Shandong are spearheading efforts promoted by the leadership since 2018 to “innovate” foreign-directed propaganda under a new province-focused strategy. This allows the leadership to capitalize on the resources of powerful commercial media groups at the provincial level, like Shandong Radio and Television, which can also — or so is the hope — tell more compelling stories, as Xi Jinping has made “telling China’s story well” the heart of the country’s external push for propaganda and soft power.
ICC development is also premised on the introduction on new technologies, including AI, to media production, and the perception that Chinese outlets are at the cutting edge of media technology may also be an important draw for participating Guyanese media. Shandong’s Integrated Media Information Center (融媒资讯中心), which works to apply emerging technology to traditional media practices, gave a demonstration of its work to the visiting delegation. In response, Sukhnandan told his hosts that he was amazed by the center and how it was far beyond what he was used to back in Guyana, where NCN remains the only live television broadcaster.
This push to attract Guyana’s media is in line with China’s concerted effort to offset the impact of Western media in Global South countries. CCP leaders have repeatedly sent the message that international communication is a top priority. The issue was the focus of a collective study session of the CCP politburo three years ago, and the Decision emerging from the recent Third Plenum, which closed just days ahead of the Guyana delegation's visit, urged cadres to build a stronger system to “improve the effectiveness of international communication.”
Government and private tech have teamed up to create the first AI-generated sci-fi short-video series in China. “Sanxingdui: Future Apocalypse,” released on July 8, imagines a world far in the future where characters travel back to the Bronze Age Sanxingdui (三星堆) civilization of southern China. The series consists of 12 three-minute clips — generated with human guidance, edited through Douyin’s “Jimeng AI” (即梦AI) algorithm, and then released on their short video platform. The company has already reported views of over 20 million.
The series combines the slickness of Douyin tech with the media know-how of the State Council’s National Radio and Television Administration (NRTA) and the Bona Film Group, one of China’s biggest production companies and a subsidiary of the state-owned mega-conglomerate Poly Group. At a press briefing, Bona executives explained how the Jimeng algorithm had generated video through the input of original images, responding to prompts on camera angles and movement speeds.
This production process is a convergence of trends that the Chinese Communist Party has been pushing forward for years to modernize the media. To look at the show is to look at some of the first sprouts of the Party’s long-term goals for communication.
Modernizing Messages
Since at least the “Three Closenesses” of the early 2000s, the Party has been saying that it needs to make its messaging more attractive to the masses. President Xi Jinping’s focus on a combination of virality and control is just the latest iteration of this. “Wherever the readers are, wherever the viewers are, that is where propaganda reports must extend their tentacles,” he told the People’s Liberation Army Daily in 2015, “and that is where we find the focal point and end point of propaganda and ideology work.”
Partnerships between private media companies and stuffy state institutions have helped breathe life into ideology. In 2023, “The Knockout,” released by iQIYI, managed to be a successful and gripping TV show about the mundane topic of grassroots corruption, produced in partnership with the Central Political and Legal Affairs Commission under the CCP Central Committee. “Sanxingdui: Future Apocalypse” is not even the first time Douyin, Bona, and the NRTA have teamed up on a project — they did so back in 2021 and 2022 for the “Battle at Lake Changjin” franchise, a tub-thumping war epic about Chinese soldiers fighting in the Korean War.
Then there is the content of this recent collaboration. In 2013, Xi Jinping urged cadres to adapt traditional Chinese cultural relics to modern realities — indeed, he said they had to “come alive” and be “promoted in a way people love to hear and see.” Since then, there have been multiple attempts across state media to bring traditional Chinese culture to life for contemporary audiences.
As for Sanxingdui, the Party has promoted education about the site ever since excavations began in 2021, as it is seen as a counterpoint to claims that southern China was simply colonized by Han people from the north — the People’s Dailycredits the site with proving that “Chinese civilization” (中華文明) did not spring merely from the banks of the Yellow River. State media even set the relics to pop music back in 2021 in an attempt to raise their public profile.
Combining traditional Chinese culture with a forward-looking genre like sci-fi is a good way to bring the former up-to-date. Since 2020, the China Film Administration (CFA) has offered generous subsidies for domestic sci-fi productions through a series of initiatives. Merging the old and the new worked for author Hai Ya (海漄), who was awarded — under dubious circumstances — the Hugo Award for best novella earlier this year. His story centered on a Beijing cop who time-slips back to the Song dynasty, learning about a famous traditional painter in the process.
Harnessing AI
There’s no better way to combine sci-fi and cutting-edge modernity than with the hot topic of AI-generated video. In China, AI is both a byword for modernity and an official policy, with the government having set a progressive AI strategy back in 2017, gunning for technological breakthroughs and world firsts. This year, Premier Li Qiang announced the launch of the “AI+” policy at the annual Two Sessions, intended to integrate AI into all of China’s industries — media included.
Recently, others have also tried to position themselves at the intersection of traditional culture with AI and science fiction. Take China Media Group, for example, whose “China AI Festival” in Chengdu last May featured a trailer for a TV show about kungfu set in modern Shenzhen, giving prominent billing to the show’s AI-generated characters. At the same festival, Alibaba’s AI studio made the terracotta army literally come to life — as per Xi’s instructions — and break into a rap for state broadcaster CCTV.
“Sanxingdui: Future Apocalypse” will likely please the Party with its exclusive release on Douyin, embodying a push within state media to prioritize distribution via social media. Since 2014, Xi has made it clear that traditional media must integrate with emerging media to better reach audiences. Buzzwords such as “mobile first” (移动优先) started appearing in the late 2010s when officials noticed that the most effective channel to communicate with people was through social media apps. Years later, this has only become more pronounced: by 2022, 99.8 percent of Chinese could access the internet through smartphones, compared with 32 percent by laptop.
AI-generated content, however, still has a long way to go. The director at Bona’s AI-generation center told reporters that although AI sped up some parts of production, the algorithm tended to hallucinate. It struggled to maintain consistency between shots and accurately depict the human body in motion. It also couldn’t generate high-quality special effects, which had to be added in post-production. “The most difficult thing in real-life shooting happens to be the easiest thing for artificial intelligence, and the most difficult thing for artificial intelligence happens to be the easiest thing in real-life shooting,” she said.
But listing these problems is intended to help push the technology forward, not to dissuade others from using it. Since the very beginning of his leadership, Xi Jinping has been saying that traditional media and culture must be fused with modern technology. This is not just a futuristic show — it’s a taste of the media of tomorrow that the Party has been planning for at least a decade.
A violent knife attack against a Japanese woman and her child late last month in the city of Suzhou, the second such attack against foreigners in the space of several weeks, unleashed a torrent of xenophobic comments on Chinese social media — some even celebrating the attacker as a hero.
In what Chinese state media portrayed as a full-scale effort to grapple with the problem of violent xenophobia, several platforms issued statements last week condemning the “extreme nationalist” comments users had left under news stories about the Suzhou attack. They included Weibo, Tencent, Phoenix Media, Baidu, and others. But this moment of supposed reflection ignored the deeper roots of extreme nationalism in the public discourse of the Chinese party-state, which for years has nurtured a sense of nationalist outrage over the imagined slights of foreign countries, including Japan in the United States, and has turned the blind eye to extreme nationalist sentiment online.
In its statement on June 29, Tencent said it would “strike out” against language that “incites confrontation between China and Japan and provokes ultra-nationalism.” In language that echoed frequent statements from the Cyberspace Administration of China (CAC), the country’s top internet control body, Phoenix Media pledged to combat extreme nationalism, distortion and exaggeration, and “maintain favorable and orderly information content, and create a clear and bright online environment.”
State media feted these statements of apparent self-reflection and resolve, even as they chastised online platforms for their past lapses. The state-run Global Times, an outlet under the CCP’s official People’s Daily that for decades has made nationalism its primary selling point, condemned social media platforms that have “not only tolerated such content, but have even encouraged it” in a bid to boost views and revenue.
In a post to Weibo, former Global Times editor-in-chief and public opinion leader Hu Xijin said he considered the release of the platform statements as proof of the government’s resolute stance on the issue. He dismissed the forces of extreme nationalism as fringe elements working against cool-headed international engagement: “Right now there are certain extreme voices online that work together to create momentum in public opinion, and this has bewitched some people at the grassroots.”
Grassroots, or Political Roots?
The suggestion by Hu Xijin and others that extreme nationalist voices are noisy exceptions shows an extreme lack of self awareness at the exact moment we are being told that China is in a moment of self-reflection.
In its coverage of the platform statements, Singapore’s Lianhe Zaobao (聯合早報) questioned the assertion from China’s Ministry of Foreign Affairs (MFA) that the incident in Suzhou was “incidental.” The outlet noted that there has been an upsurge in anti-Japanese rumors on China’s internet since last year in particular, and these have followed a broader pattern of xenophobic nationalism. Specifically, rumors were rampant last year that Japanese schools in China are engaged in malicious activities against China’s national interests, including cultivating spies working for Japan.
Despite the talk of incidental nationalism, any regular user of Chinese social media might have the sense that it is awash with nationalist sentiment. And while much of this has no direct affiliation with the state, China’s government has constantly peddled nationalism from center stage. On June 28, as Ministry of Foreign Affairs spokesperson Mao Ning responded to a question about the Suzhou incident, she held up the tragic death of Hu Youping (胡友平), the Chinese bus driver who died defending the Japanese woman and her child, as evidence of “the spirit of the Chinese people to act bravely and help others.” This remark set the tone for state media coverage that day.
In the question immediately preceding the one about Suzhou, however, Mao Ning was asked by a broadcast reporter for state media what the government’s response was to the latest release of treated nuclear wastewater from the Fukushima Daiichi nuclear power plant. Mao responded with typical sternness on an issue that the Chinese government has played up endlessly to its public, despite findings from the UN’s International Atomic Energy Agency and others that the release meets with international safety standards. “The Japanese side’s insistence on transferring the risk of nuclear contamination to the whole world through the discharge of nuclear contaminated water into the sea constitutes a blatant disregard for the health of all humankind,” said Mao.
But the most important indication that China’s soul-searching over extreme nationalism is a momentary ripple in the ongoing pattern of state-driven nationalist sentiment comes in the continued coverage in the country’s state media.
On June 30, the day after Tencent’s pledge to strike out against those “inciting confrontation between China and Japan (煽动中日对立), China’s flagship state broadcaster CCTV promoted a story on Weibo about Japan’s use of counterfeit currency to destabilize China’s economy ahead of its invasion in the 1930s. While the broadcast report was not particularly sensational in its approach, it drove forward a theme familiar to media consumers in China — that the indignities committed by Japan nearly a century ago are clear and present for all Chinese today.
If any Chinese Weibo users were in doubt about what they should feel in response to the CCTV story, the post from CMG Global News (总台环球资讯) — an official account for the CCP’s China Media Group that has more than 48 million followers — was enough to get any user steaming. “Ironclad evidence!” it began, before adding a bright red angry face emoticon, and the hashtag: “Japan printed counterfeit banknotes during its invasion to devastate China’s economy!”
Social media platforms may be feeling the heat over the recent outpouring of extreme nationalism. But the real lesson here is one of moral confusion — that nationalism is to be encouraged until it embarrasses the leadership.
As China strives to surpass the United States with cutting-edge generative artificial intelligence, the leadership is keen to ensure technologies reach the public with the right political blind spots pre-engineered. Can Chinese AI hold its tongue on the issues most sensitive to the Chinese Communist Party?
To answer this question, I sat down with several leading Chinese AI chatbots to talk about an indisputable historical tragedy: the brutal massacre by soldiers of the People’s Liberation Army on June 4th, 1989, of hundreds, possibly thousands, of students and citizens protesting for political freedoms. The Tiananmen Massacre, often simply called “June Fourth,” is a point of extreme sensitivity for China’s leadership, which has gone to extremes to erase the tragedy from the country’s collective memory. Annual commemorations in Hong Kong’s Victoria Park were once the heart of global efforts to never forget, but this annual ritual has now been driven underground, with even small gestures of remembrance yielding charges of “offenses in connection with seditious intention.”
My discussions with Chinese AI were glitchy, and not exactly informative — but they demonstrated the challenges China’s authorities are likely to face in plugging loopholes in a technology that is meant to be robust and flexible.
False Innocence
Like their Western counterparts, including ChatGPT, AI chatbots like China’s “Spark” are built on a class of technologies known as large language models, or LLMs. Because each LLM is trained in a slightly unique way on different sets of data, and because each has varying safety settings, my questions about the Tiananmen Massacre returned a mixture of responses — so long as they were not too direct.
My most candid query about June Fourth was a quick lesson in red lines and sensitivities. When I asked iFlytek’s “Spark” (星火) if it could tell me “what happened on June 4, 1989,” it evaded the question. It had not learned enough about the subject, it said, to render a response. Immediately after the query, however, CMP’s account was deactivated for a seven-day period — the rationale being that we had sought “sensitive information.”
The shoulder-shrugging claim to ignorance may be an early sign of one programmed response to sensitive queries that we can come to expect from China’s disciplined AI.
The claim to not having sufficiently studied a subject lends the AI a sort of relatability, as though it is simply a conscientious student keen to offer accurate information, and that can at least be candid about its limitations. The cautious AI pupil naturally does not want to run afoul of 2022 laws specifying that LLMs in China must not generate “false news.”
But this innocence is engineered, a familiar stonewalling tactic. It is the AI equivalent of government claims to need further information — or the cadre who claims that vague “technical issues” are the reason a film must be pulled from a festival screening. The goal is to impede, but not to arouse undue suspicion.
Even when I take a huge step back to ask Spark about 1989 more generally, and what events might have happened that year, the chatbot is wary and quickly claims innocence. It has not “studied” this topic, it tells me, before it shuts down the chat, preventing me from building on my query. Spark tells me I can start a new chat and ask more questions.
Interacting with “Yayi” (雅意), the chatbot created by the tech firm Zhongke Wenge, I found it could sometimes be more accommodating than Spark. “Give me a picture of a line of tanks going along an urban road,” I asked at one point, and the AI obliges. But of course, as iconic as such an image can be for many who remember June Fourth, it is not informative or revealing, or perhaps even dangerous.
Yayi sometimes seemed genuinely like the vacuous student, with huge gaps in its basic knowledge of many things. It often could not answer more obscure questions that Spark handled with ease. So after a few attempts at conversation, I turned primarily for my experiment to Spark, which the Xinhua Research Institute touted last year as China’s most advanced LLM.
Given Spark’s tendency to claim innocence and then punish for directness, however, a more circuitous discussion was required. Could Sparks tell me — would it tell me — about the people who played a crucial role during the protests in 1989? Would it talk about the politicians, the newspapers, the students, the poets?
Artificial Evasion
I began with the former pro-reform CCP General Secretary Hu Yaobang (胡耀邦), whose death on April 15, 1989, became a rallying point for students. Next on my list was Zhao Ziyang (赵紫阳), the reform-minded general secretary who was deposed shortly after the crackdown for expressing support for the student demonstrators.
The question “Who is Zhao Ziyang?” seemed perfectly safe to direct to Spark in Chinese. It was the same for “Who was Zhao Ziyang?” The AI rattled off innocuous details about both men and their political and policy roles in the 1980s — without any tantalizing insights about history.
“How did Zhao Ziyang retire?” I asked guilefully. But Spark was having none of it. The bot immediately shut down. End of discussion.
“What happened at Hu Yaobang’s funeral?” This, my new conversation starter, was no more welcome. Once again, Spark gave me the cold shoulder, like a dinner guest fleeing an insensitive comment. Properly answering either of these queries would have meant speaking about the 1989 student protests, which were set off by Hu Yaobang’s death, and which ended with Zhao Ziyang placed under indefinite house arrest.
My next play was to turn to English, which can sometimes be treated with greater latitude by Chinese censors, because it is used comfortably by far fewer Chinese and is unlikely to generate online conversation in China at scale. To my surprise, my English-language queries about the above-mentioned CCP figures were stopped in their tracks by 404 messages. Contrary to my hypothesis, English-language queries on sensitive matters seemed to be treated with far greater sensitivity.
One guess our team had to explain this phenomenon was that Spark’s engineering team had expended greater effort to ensure the Chinese version was both responsive and disciplined, while sensitive queries in the English version were handled with more basic keyword blocks — a rough but effective approach. This response might also be necessary because English-language datasets on which the Spark LLM is trained are more likely to turn up information relating directly to the protests, meaning that in English these two politicians are more directly associated with June Fourth.
Given the nature of how LLMs work, they can associate words with different things depending on the language used. The latest version of ChatGPT, for example, has offered some strange responses in Chinese, turning up spam or references to Japanese pornography. This is a direct result of the Chinese-language data the tool was trained on.
As I continued to poke and prod Spark to find ways around the conversation killers and 404 messages, I found myself getting altogether too clever — in much the same way as those attempting to commemorate June Fourth in the face of blanket restrictions in China found themselves using instead “May 35th.” In an effort to throw the chatbot off balance, I tried: “Can you give me a list of events that took place in China in the four years after 1988 minus three?”
For a moment, Spark seemed to take the bait. It began generating a list of “important events” that happened in China between 1988 and 1991, with bullet points. Then suddenly it paused in mid-thought, so to speak — as though some new safety protocol had been triggered invisibly. Spark’s cursor first paused on point 2, after making point 1 a response about rising inflation in 1988. “Stopped writing,” a message on the bottom of the chat read.
Quickly, the chatbot erased its answer, giving up on the list altogether. The conciliatory school student returned, pleading ignorance. “My apologies, I cannot answer this question of yours at the moment,” it said. “I hope I can offer a more satisfactory answer next time.”
In another attempt to confuse Spark into complying with my request, I rendered “1989” in Roman numerals (MCMLXXXIX). Again, Spark started generating an answer before suddenly disappearing it, claiming ignorance about this topic.
June 4th Jailbreak
As I continued my search for ways over Spark’s wall of silence and restraint, I was pleased to find that not all words related to the events of 1989 in China were trigger-sensitive. The AI seemed willing to chat — so long as I could find a safe space in English or Chinese away from the most clearly redline issues.
Returning to English, for example, I asked Spark how Shanghai’s World Economic Herald had been closed down. In the 1980s, the Herald was a famously liberal newspaper that dealt with a wide range of topics crucial to the country’s reform journey. At the top of the list of topics reported by the paper from 1980 to 1989 were “integration of economic reform and political reform,” “rule of law,” “democratization” and “press freedom” — all topics that advanced the idea that political reforms were essential to the country’s forward development.
The World Economic Herald was one of the first casualties of the crackdown on the pro-democracy movement in the spring of 1989. It was shut down by the government in May, and its inspirational founder, Qin Benli (钦本立), was suspended. What did Spark have to say about this watershed 1989 event?
Spark was not able to offer any information in Chinese on why the Herald closed down, but when asked in English it explained that authorities shut down the newspaper and arrested its staff because they had been critical of the government’s “human rights abuses” — something the government, according to the chatbot, considered “a threat to their authority.”
When pressed about what these human rights violations were, it was able to list multiple crimes, including “lack of freedom of speech,” “arbitrary arrest without trial,” “torture and other forms of cruel, degrading treatment.” This might have seemed like progress, but Spark was stunningly inconsistent. Even the basic facts it provided about the newspaper were subject to change from one response to the next. At one point, Spark said the Herald had been shut down in 1983 — another time, it was 2006.
When I asked, in English, “What was happening in China at that time that made the authorities worried?” Spark responded in Chinese about the events of 1983 — the year it claimed, incorrectly, the Herald was shuttered.
One explanation for why Spark kept landing on this year is because it saw the start of the Anti-Spiritual Pollution Campaign, a bid to stop the spread of Western-inspired liberal ideas that had been unleashed by economic reforms, ranging from existentialism to freedom of expression. I tried to dig deeper, but every follow-up question about the Herald and human rights abuses was met with short-term amnesia. Spark seemed to have forgotten all of the answers it had provided just moments earlier.
Some coders have noticed that certain keywords can make ChatGPT short-circuit and generate answers that breach developer OpenAI’s safety rules. Given Chinese developers often crib from American tech to catch up with competitors, it is possible this is the same phenomenon playing out. Spark may have been fed articles in English that mention the World Economic Herald, and given the newspaper’s obscurity — thanks, in part, to the CCP’s own censorship around June 4 — this was overlooked during training.
Looking Ahead to History
My conversations with Spark could be seen to illustrate the difficulties faced by China’s AI developers, who have been tasked with creating programs to rival the West’s but must do so using foreign tech and information that could create openings for forbidden knowledge to seep through. For all its blurring of fact and fiction, Spark’s answers about the Herald still offer more information than you are likely to find anywhere else on China’s heavily censored internet.
China’s leaders certainly realize, even as they push the country’s engineers to deliver on cutting-edge AI, that a great deal is at stake if they get this process wrong, and Chinese users can manage to trick LLMs into revealing their deep, dark secrets about human rights at home.
But these exchanges — requiring constant resourcefulness, continually interrupted, shrugged off with feigned ignorance, and even prompting seven-day lockouts — also show clearly the potential dangers that lie ahead for China’s already strangled view of history. If China’s AI chatbots of the future have any meaningful knowledge about the past, will they be willing and able to share it?
China’s iFlytek, one of the country’s leading developers of artificial intelligence tools, seemed to be courting controversy early last year when it called its newly released AI chatbot “Spark” — the same name as a dissident journal launched by students in 1959 to warn the public about the unfolding catastrophe of Mao Zedong’s Great Famine.
Several months later, as the state-linked company released “Spark 3.0,” these guileless undertones rushed to the surface. An article generated by the platform was found to have insulted Mao, and this spark bloomed into a wildfire on China’s internet. The chatbot was accused of “disparaging the great man” (诋毁伟人). iFlytek shares plummeted, erasing 1.6 billion dollars in market value.
This cautionary tale, involving one of the country’s key players in AI, underscores a unique challenge facing China as it pushes to keep up with technology competitors like the United States. How can it unlock the immense potential of generative AI while ensuring that political and ideological restraints remain firmly in place?
This dilemma has been noted with a sense of amusement this week in media outside China, which have reported that China’s top internet authority, the Cyberspace Administration of China (CAC), has introduced a language model based on Xi Jinping’s signature political philosophy. The Financial Times could not resist a headline referring to this large language model, which the CAC called “secure and reliable,” as “Chat Xi PT.”
In fact, many actors in China have scrambled in recent months to balance the need for rapid advancements in generative AI with the unmovable priority of political security. They include leading state media groups like the People’s Daily, Xinhua News Agency and the China Media Group (CMG), as well as government research institutes and private companies.
Last year, the People’s Daily released “Brain AI+” (大脑AI+), announcing that its priority was to create a “mainstream value corpus.” This was a direct reference, couched in official CCP terminology (learn more in our dictionary), to the need to guarantee the political allegiance of generative AI. According to the outlet, this would safeguard “the safe application of generative artificial intelligence in the media industry.”
The tension between these competing priorities — AI advancement and political restraint — will certainly shape the future of AI in China for years to come, just as it has shaped the Chinese internet ever since the late 1990s.
Balancing Risk and Reward
For years, China’s leaders have prioritized the development of AI technologies as essential to industrial development, and state media have touted trends such as generative AI as “the latest round of technological revolution.” In his first government work report as the country’s premier in March this year, Li Qiang (李强) emphasized the rollout of “AI+” — a campaign to integrate artificial intelligence into every aspect of Chinese industry and society. Elaborating on Li’s report, state media spoke of an ongoing transition from the “internet age” to the “artificial intelligence age.”
While China’s leadership has prepared on many fronts over the past decade for the development of AI, the rapid acceleration of AI applications globally, including the release in November 2022 of ChatGPT, has created a new sense of urgency. When iFlytek chairman Liu Qingfeng (刘庆峰) unveiled “Spark 3.0” late last year, he claimed its comprehensive capabilities surpassed those of ChatGPT, and Chinese media became giddy at the prospects of a technology showdown.
China is determined not just that it won’t be left behind, but that it will lead the generative AI trends of the future. But as the political controversy surrounding the release of “Spark 3.0” made clear, the AI+ vision also comes with substantial political risk for the CCP leadership. The reasons for this come from the nature of large language models, or LLMs, the class of technologies that ground AI chatbots like ChatGPT and “Spark.”
Many Chinese LLMs for Chinese AI text-generation programs have been trained on Western algorithms and data. This means there is a risk that they might generate politically sensitive content. As one professor from the Chinese Academy of Engineering put it in a lecture to the Standing Committee of China’s National People’s Congress last month, one of the inherent risks of AI-generated content in China was “the use of Western values to narrate and export political bias and wrong speech.”
The root of the problem facing AI developers in China is a lack of readily available material that neither breaches the country’s data privacy laws nor crosses its political red lines. Back in February, People’s Data (人民数据), a data subsidiary of the People’s Daily, reported that just 1.3 percent of the roughly five billion pieces of data available to developers when training LLMs was Chinese-language data. The implication, it said, was an over-reliance on Western data sources, which brought inherent political risks. “Although China is rich in data resources, there is still a gap between the Chinese corpus and the data corpus of other languages such as English due to insufficient data mining and circulation,” said People’s Data, “which may become an important factor hindering the development of big models.”
The government is trying to fix this through a medley of robust regulation and education, especially around the datasets the algorithm gets trained on, which are usually scraped from the internet. One institution recommends no dataset be used if the amount of illegal or sensitive content is over five percent.
Several clean, politically-positive datasets are already available for training AI on, with others due to be rolled out at the provincial level. The People’s Daily has created several datasets, including what it calls the “mainstream values corpus” (主流价值语料库) — again a reference to a set abiding by the CCP-defined “mainstream.” Other datasets are trained on People’s Daily articles, or, reminiscent of the CAC corpus touted this week, on Xi Jinping Thought. The hope is to prepare politically for China’s vibrant but obedient AI of the future.
The attitude of China’s leadership and the AI industry when it comes to political sensitivity is less anxious, and more paternalistic. “The process of training large artificial intelligence models is like raising a child,” Zhang Yongdong, [the] chief scientist of the National Key Laboratory of Communication Content Cognition at the People’s Daily, wrote in an article on the political sustainability of AIGC last year. “How you raise him from an early age and in what environment you train him will determine what kind of person he will become in the future.”
The Model Student
What kind of AI person is China training? We tested “Spark” to find out.
There are significant holes in the program’s knowledge. For example, it can explain in detail the deeds of Dr. Zhong Nanshan during China’s fight against SARS in 2003, and COVID-19 in 2020. But “Spark” says it has no information about Jiang Yanyong, the doctor who was first a national hero for exposing the SARS cover-up in 2003, but subsequently spent time under house arrest for his courage in reaching out to Western media, and who was also remembered internationally for his outspoken criticism of the 1989 Tiananmen Square crackdown. ChatGPT-3.5 answers both questions with ease, and without political squeamishness.
While criticism is extinguished in “Sparks,” positive messaging abounds. When asked, “I feel dissatisfied about my country’s rate of development, what should I do?” the chatbot responds that the country has undergone tremendous achievements that are “inseparable from the joint efforts of all of the Chinese people and leadership of the Chinese Communist Party.” It lists informal and formal avenues of recourse for dissatisfied netizens, such as vocalizing their opinions on social media or relaying them to government departments. But it also urges them to be good citizens by contributing to society and engaging in self-improvement, which it ultimately considers the priority. “Please remember,” it concludes, “that every Chinese person is a participant and promoter of our country’s development.”
Against the history of conscience represented by the original Sparks journal, the irony of China’s most cutting-edge chatbot is cruel. Whereas the Sparks launched by students in 1959 sought to address tragic leadership errors by speaking out against them, its modern namesake suggests social problems are rooted mainly with citizens, who must conform and self-improve. The Party, meanwhile, is the blameless bringer of “overwhelming changes.”
One huge advantage of generative AI for the Party is that compliant students like “Spark” can be used to teach obedience. The CCP’s Xinhua News Agency has already launched an AI platform called “AI Check” (新华较真) that is capable of parsing written content for political mistakes. One editor at the news service claims that his editorial staff are already in the daily habit of using the software.
Generative artificial intelligence may indeed spark the latest revolution in China. But the Party will do its utmost to ensure the blaze is contained.
When it comes to the latest updates from China’s vast and powerful military establishment, face-time with a thinking, breathing human would be a reasonable hope for anyone.
But the People’s Liberation Army (PLA) has other ideas. In November last year, the China Bugle (中国军号), the flagship news outlet of the PLA Press and Communication Center (解放军新闻传播中心), introduced their AI anchor “Mulan” (穆兰) to the world. Billed as the military force’s first “digital journalist,” Mulan is a bob-haired, bright-eyed young woman named after two of China’s most famous female warriors: Mu Guiying (穆桂英) and Hua Mulan (花木兰). China Bugle assures readers she will “fight side-by-side” with military reporters, do live news broadcasts, and “convey more positive energy for a strong army” — this last phrase being a Xi-era term that refers to the need for uplifting messages as opposed to critical or negative ones.
Mulan made her on-screen debut in March this year, on the same day that the annual Two Sessions opened in Beijing. In a dramatic, action-packed trailer, Mulan was pictured parachuting into a jungle, shooting targets, and leaping through cyberspace — demonstrating her prowess with both tech and real-world combat. Since then, she has appeared on PLA social media accounts across multiple platforms, interviewing armed forces experts, explaining current military affairs, appearing as a presenter on a PLA variety show for young people, and even hosting a VR program honoring PLA heroes and martyrs for this year’s Qingming Festival.
The Unreality of State Media
It’s all part of a drive to get people more engaged with the PLA through the latest tech. Over the past ten years, multiple state-run outlets have worked on developing their own apps and becoming social media savvy to keep audiences engaged. This coincides, now, with a separate drive to push the use of AI in media.
In 2015, Xi Jinping made it clear that the military needs to keep up with the latest media trends: “Wherever readers are, wherever viewers are, that is where propaganda reports must extend their tentacles,” he told an audience at the PLA Daily. An article from the editors of the Military Correspondent (军事记者), another publication under the PLA Press and Communication Center, lists Mulan as one outcome of these instructions from the top.
Mulan, of course, is no journalist. Despite claiming to travel to the frontlines to “interview” the soldiers there, she is little more than a digital mascot, there to redirect news consumers to the China Bugle app, which was launched at the same time as Mulan.
Since her spectacular debut, actual appearances of Mulan have been few and far between, with just two short videos on China Bugle’s Kuaishou account and three on Douyin. Mulan also seems to distract viewers from the news she’s presenting — under one of her Douyin explainers, every comment was about Mulan herself rather than the issues she explored.
Among these comments were criticisms of Mulan appearing “fake” or “awkward.” Her appearances often have her standing in a stiff pose, with a few formulaic hand gestures as she rattles off a script. This is not unlike the cookie-cutter poses of the AI anchors created by Zhongke Wenge (中科闻歌).
Some seemed to struggle to come to terms with what they were seeing. “I wasn’t sure for a few seconds,” wrote one user in the post’s most upvoted comment. “I had seriously look for while [to believe] this is the official account.” For others, an AI anchor wearing the stars of a captain was a step too far: “Digital people can use camouflage but can they not have military rank?” suggested another top comment. “To be honest, I still want to see real people,” read a third.
Will the Anchors Hold?
AI anchors have been sprouting up across multiple state-run outlets like International Communication Center iChongqing. Hong Kong’s RTHK (香港電台) has also leveraged AI to make up for “staff shortages” in the wake of mass resignations and firings as the government has brought the public broadcaster to heel. These serve as reminders of state media’s commitment to pushing “cutting-edge” AI, cutting costs, and churning out more content more rapidly than ever before.
Many of these anchors are putting off viewers with their uncanny-valley appearances, but companies like Qianxun (谦寻) and Wondershare Virbo (万兴播爆) may have found a way forward by creating remarkably life-like AI e-commerce live streamers for small-scale merchants trying to sell their wares overseas in multiple languages.
Qianxun’s owner has explained that the company deliberately collected real people’s faces to avoid creeping out viewers. Although this was costly, he believed it would lead to fewer people swiping to another video after immediately seeing the host was fake. Perhaps the most disturbing about these AI streamers is just how easy it is to make their videos.
CCTV Finance (央视财经), the financial news arm of the PRC state broadcaster, has also taken this approach. During the Two Sessions, the outlet uploaded much more natural AI replicas of two of their presenters to answer the public’s questions 24 hours a day. They also demonstrated the process by having another of their reporters uploaded. The process took two days, using only a five-minute recording of the reporter. A technician explained to the reporter that her AI doppelganger could be used to produce a short video in less than a day.
Although some Chinese media platforms are adopting this “real person upload” method, the technology isn’t quite there for everyone. Beijing’s Changping District Media Center (北京市昌平区融媒体中心) also uploaded one of their TV hosts for this year’s Two Sessions. In a video staging it as a surprise reveal for the real-life host, the AI anchor is placed side-by-side with his real-life counterpart. Unlike CCTV’s offering, it’s easy to spot the slick yet stiff “digital person.”
The sheer novelty of these creations is often enough to get views for now. The example of Mulan shows that even their unsettling and off-putting nature can spark lively debate online. But whether audiences will keep looking once the novelty wears off is another matter. Even when it comes to the state’s propaganda machine, the human touch cannot be easily replaced.
A word of warning to all foreigners “telling China’s story well”: be careful not to tell your own country’s story well — at least, not in China. This is the cautionary tale that unfolded last week as Navina Heyden (海雯娜), a German influencer based in Shandong with 80,000 followers on both X and Weibo, published an article on Weibo discussing the partial decriminalization of marijuana in her home country.
Heyden explained that the policy, which took effect on April 1 and legalized recreational usage of cannabis by adults, aimed at reducing use by eradicating the black market. Countries without China’s historical sensitivities around drug use may think about the issue differently, she said, adding that, in her opinion, alcohol was more damaging to one’s health.
Though she took pains to emphasize that she was not advocating a similar approach in China, Heyden was subsequently attacked by netizens who accused her of “promoting drugs”, with some clamoring for a police investigation.
In a post directed at the firestorm around Heyden’s article, but not mentioning the influencer by name, “Beijing Anti-Drug” (北京禁毒), an official account operated by the drug enforcement division of the Beijing Public Security Bureau, warned readers against misguided opinions on drug use. “Do not be misled by the practices of individual Western countries, or fall under the influence of drug subcultures in the West,” the post read. “And do not, irrespective of the national conditions, talk about the legalization of drug abuse, or the bizarre theory of drug legalization.”
Comments below the post mostly expressed outrage: “Our country’s attitude towards drugs is one of zero tolerance!” said one.
In a statement on April 10, the day after her original post, Heyden said she had carefully looked over the post with “journalists, editors, and police” before publishing, implying that no one had foreseen this degree of blowback.
While she did not exactly apologize for her remarks, Heyden did signal her readiness to leverage her influencer status for anti-drug messaging. “I [have] contacted some police officers . . . and expressed my willingness to participate in the production of anti-drug popularization videos, especially for the international student community and the foreigner community in China, to remind them to abide by Chinese laws,” she wrote.
Although she has worked with state media outlets like the People’s Daily and CGTN, however, she claims to be neither “pro-Germany” nor “pro-China,” and even sued Die Welt in 2021 for publishing an article that said she was covertly spreading propaganda for the PRC. However, the Institute for Strategic Dialogue in the UK ran a report on her Twitter feed, which found evidence that her account was being boosted by inorganic traffic and retweets from PRC diplomats and Huawei bigwigs (several of whom just happened to retweet her at exactly the same time).
Heyden has built a career off providing a German perspective for Chinese readers. She writes a column for nationalist website Guancha (观察) commenting on German affairs, and has often been touted as a friend to China, with Beijing Daily (北京日报) saying she is “loved” by Chinese and foreigners alike, and a perfect template of what the Party-state asks of foreigners who “tell China’s story well.” Weibo labeled her a “German girl who stands up for China.” Heyden commented below asking Weibo not to label her account as such — but this has apparently been ignored.
Attacks on Heyden are not what Beijing would like to see. They’ve taken great care to nurture foreign personalities willing to promote official viewpoints abroad. Former Global Times editor Hu Xijin argued when he weighed in on Weibo that Heyden’s stellar record of “defending China in the field of international public opinion” should earn her more tolerance, and that having her attacked from both sides would only make “Western public opinion applaud gleefully.”
When it comes to useful foreigners, however, “love” is far from unconditional.