Author: Alex Colville

Alex has written on Chinese affairs for The Economist, The Financial Times, and The Wire China. He was based in Beijing from 2019 to 2022, where his work as Culture Editor and Staff Writer for The World of Chinese won two SOPA awards. He is still recovering from zero-Covid.

For State Media, Copycats are No Joke

Earlier this month, the People’s Daily astonished millions of online readers in China by weighing in on a petty dispute between two celebrities. The article, which accused an actress of grabbing publicity by slandering her ex-boyfriend, was an odd change of character for the Chinese Communist Party mouthpiece. Speculation raged about what this aberration could mean.

There was just one problem — the article was a complete fake. And within hours, a new question loomed: How did this happen?

Convincing as it was — with an apparently genuine People’s Daily Online URL, look and layout — the piece wasn’t written by People’s Daily at all. The next day, the media group weighed in to disavow the article, saying it was a “copycat” (套牌) that had cloned its news pages. It went on to say this was not an isolated incident, and voiced concern that the impersonation of official news outlets, apparently a rather widespread phenomenon, could “trigger a crisis of trust” in the country’s Party-run news outlets.

In fact, the issue has little or nothing to do with trust — and everything to do with power. The lesson: monopolize access to speech and information, and those eager to be heard will find a way to borrow your privilege. 

To Fake It, Officially Make It

In this case, the actors doing the borrowing seem to have been “fan circles” (饭圈), or “fandoms,” a repeated source of trouble on China’s internet. These fan groups can become tribal in their devotion to the celebrity figures they have rallied around, and go to great lengths to defend them. In July 2020, after two fandom camps started a disruptive online war, the government really did step in, with the Cyberspace Administration of China (CAC) promising to stop fan circles from “tearing each other apart.”

The fake article appearing online in China on October 3, 2024 — shown here with a “fake” label — was indistinguishable from a real People’s Daily Online post.

This time, the trigger seems to have been a trivial matter of a celebrity couple splitting up, the woman alleging her partner had cheated on her multiple times. In a post on Weibo, the woman accused her partner’s fan circle of being behind the fake People’s Daily article, trying to get revenge for her accusations.  

When investigating these accusations, a reporter from The Beijing News (新京报), one of the country’s most-read commercial newspapers, unearthed a thriving underground industry where clients could pay for a clone of any website they wanted — and noted that 200,000 clones of Chinese websites were found in 2020 alone. 

As some of these pages are hosted on servers outside China, there is not much the authorities can do, apparently, to remove them. The reporter found multiple fake websites of official government platforms, including one based in Singapore posing as a Hubei government portal. The site (which is still active) looks similar enough to the real thing to fool people. 

In an official commentary, The Beijing News went on to say that websites of government institutions and official media “have become the hardest hit” by counterfeit websites, given their position of authority within China’s information flow. They don’t seem to be wrong. The CAC’s “Joint Rumor Refutation Platform” (联合辟谣平台) documented eight cases of netizens forging official government websites or documents to release false information in September, and five so far for October. 

Guiding Pains

In China’s news environment, the copycat phenomenon risks damaging a carefully curated system, official media say. “At worst, they undermine the credibility of authoritative institutions and disrupt the public opinion ecology,” explains The Beijing News, an outlet that, while commercial in outlook, is also directly under Beijing’s propaganda office. 

An October 4 commentary — this time real — at People’s Daily Online targets the fake People’s Daily Online article appearing the day before.

For decades, the Chinese Communist Party has taken great pains to shape the country’s public opinion ecology so that it is the only authority for information (and, as in the case of the joint rumor refutation platform, gets to dictate what is and is not accurate information). The Party also sets the tone for how events are to be viewed — and how they are subsequently covered by domestic media. In the case of the stabbing of a Japanese child in Shenzhen this September, for example, state media waited until the Ministry of Foreign Affairs (MFA) spokesperson had passed judgment on the incident — dismissing, essentially, any suggestion that it had systemic causes in government-fueled anti-Japanese sentiment — before directly parroting the statement.

The Chinese government is known itself to peddle misinformation, or to allow it to thrive online, when it suits its domestic or geopolitical agenda. As official outlets are still considered a source of (at least official) truth by many Chinese, their words are an effective tool to sway public opinion — a textbook definition, in fact, of how the leadership has defined the media’s role since 1989.

When Ministry of Foreign Affairs spokespeople and official news outlets alleged in 2020 that Covid-19 had originated from the US-run Fort Detrick (in reaction to accusations Covid had originated from a lab in Wuhan), this was taken on faith as fact by many ordinary Chinese citizens. 

Given the increased reliance on official sources as the core of information flow in China, it only makes sense for netizens to tap into this source of authority — which has taken the place of genuine trust. How can netizens not resort to official copycats when this is the most surefire way of being taken seriously and having an impact? 

So long as the confusion of power and trust is endemic in China, with the CCP defined as the only possible source of truth, the problem of official copycatting will likely persist. 

For its part, the People’s Daily, the victim of the fraud in this latest instance, remains in denial about the fact that what it calls “trust” is actually power — and that this is the root of the problem. “Once this kind of trust crisis is there,” the outlet said ominously the day after the spoof went viral, “it will be difficult to go back, and there will be long-term negative impacts on the network ecology.”

Making Deepfakes With Chinese AI

To many, 2024 is the Year of Democracy, a time when billions of people will go to the polls in over 65 elections across the world, giving us the biggest elections megacycle so far this century. To others, it’s the Year of AI, when the rapidly developing technology truly went mainstream. And for autocracies like China, some worry it could be both — a year of unbridled opportunities to use AI to manipulate the outcomes of some of the world’s most consequential elections.

“Generative AI is a dream come true for Chinese propagandists,” wrote Nathan Beauchamp-Mustafaga, Senior Policy Researcher at RAND, in November last year. He predicted the PRC would quickly adopt AI technology and “push into hyperdrive China’s efforts to shape the global conversation.” A few months later, AI-generated content from China attempted to influence Taiwan’s presidential elections by slandering leaders of the ruling Democratic Progressive Party, which champions Taiwan’s autonomy and separate identity.

Since then, China’s AI capabilities have been rapidly developing. Tech firms have rolled out an array of video generation tools — freely available, easy to use, and offering increasing levels of realism. These could allow any actors, government-affiliated or private, to generate their own deepfakes. But despite concerns about the potential for AI-generated fake news from China, there has been little investigation into what is currently possible. 

At CMP, we tried to generate our own deepfakes to find out. In the end, we found that the process is simple and fast — with results that, while imperfect, suggest the potential implications of AI for external propaganda and disinformation from China are immense.

Working Out the Fakes

We gave ourselves a few ground rules. Firstly, the tools had to be free and simple to use. This would cater to the lowest common denominator: an ordinary Chinese netizen without coding skills or funding, just a desire to spread disinformation. It’s likely that government-affiliated groups would be far more professional — assigned a budget, VPN, higher-quality tools, and technical expertise — but we wanted to determine how low the bar currently is. Our question: What is possible in China today with the minimum amount of effort and resources? This is also why we limited ourselves to tools available on the Chinese internet, within the Great Firewall.

With US presidential elections just around the corner, we set out to create a video of Republican candidate Donald Trump announcing that, if elected, he would withdraw the US from NATO.

Screenshot of an early attempt at a script, which included Donald Trump recognizing Taiwanese nationhood.

For the script, we turned to Zhongke Wenge (中科闻歌), an AI company chaired by a member of the National Development and Reform Commission. The platform has a setting that generates scripts for short videos. We soon had a minute-long script, including lines for our fake Trump. 

We then turned to ElevenLabs, an AI voice generation company, to narrate the script. Despite being a US platform, ElevenLabs is available within China, according to Chinese firewall database GFWatch. It’s not far-fetched at all to combine Chinese and non-Chinese tools: social network analytics company Graphika identified one pro-China influence campaign as using AI-anchor tools from UK company Synthesia, whose tools are also available within China.

For Trump’s voice, a search on Chinese video streaming platform Bilibili yielded a video on voice cloning websites. This led us to Fish Audio, a website available in China that has a ready-made Trump voice cloner. All we had to do was write out what we wanted Trump to say.

For the video footage, we experimented with several tools, but were dissatisfied with the vast majority. In the end, we chose Chinese AI startup Minimax’s “Hailuo AI” (海螺AI), partly because it generated the most realistic video clips but also because the company seems to be angling it towards professional creators, providing guides on how to produce ads with their software. We bookended the video with template title and end cards from FlexClip, an open-access website also available over the Firewall. Finally, we stitched the footage and audio together using Jianying (剪映), the Chinese version of ByteDance’s CapCut video editing app.
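For readers curious about the mechanics, the final assembly step does not even require an editing app. Below is a minimal sketch in Python, assuming ffmpeg is installed, of how the short generated clips and the cloned-voice narration might be stitched together — an approximation of what we did by hand in Jianying. The file names are hypothetical placeholders.

```python
# A minimal sketch of the assembly step, approximating what a GUI editor
# like Jianying does. Assumes ffmpeg is on PATH and that the clips share
# one codec; clip_01.mp4 ... clip_12.mp4 and narration.mp3 are
# hypothetical placeholder files.
import subprocess

# Twelve five-second generated clips make roughly one minute of footage.
clips = [f"clip_{i:02d}.mp4" for i in range(1, 13)]

# ffmpeg's concat demuxer reads the clip list from a text file.
with open("clips.txt", "w") as f:
    f.writelines(f"file '{clip}'\n" for clip in clips)

# Step 1: join the clips into a single video track without re-encoding.
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "clips.txt", "-c", "copy", "joined.mp4"],
    check=True,
)

# Step 2: lay the cloned-voice narration over the joined footage.
subprocess.run(
    ["ffmpeg", "-y", "-i", "joined.mp4", "-i", "narration.mp3",
     "-map", "0:v", "-map", "1:a", "-c:v", "copy", "-shortest",
     "final.mp4"],
    check=True,
)
```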

And here it is, our end result. We estimate the total time spent to be an hour at most.

The Results Are In

We set out to create footage of Trump announcing his intentions to withdraw from NATO and to recognize Taiwan as an independent country. Videos have been doing the rounds on Douyin that use voice and lip-synching technology to make him sing songs in Chinese. But this is still complex, requiring knowledge of developer tools and access to more sophisticated technology. Quality is also an issue. Many Chinese video generation websites — including Vidu, Kling, and CogVideoX — generated hallucinatory, unusable footage. When we attempted to animate images of Joe Biden or Donald Trump using Vidu and CogVideoX, their faces transformed completely. Hopefully, our video still feels fake enough that most viewers would not be fooled.


Chinese companies have put a number of safeguards in place for AI-generated video, but these are easy to get around. New standards proposed by the government in September require that any AI-generated content carry a prominent visual watermark, and Chinese experts have argued this is an effective way to combat AI disinformation. But circumventing this is easy. All one has to do is screen-record the video, cropping out the watermark. Another question we have over the medium to long term is whether the new standards will apply at all in cases where AI-generated content is intended for China’s own state-backed disinformation. Will China abide, in practice, by the spirit of its Global AI Governance Initiative?

Censored words can also be overridden with some creative thinking. Although Hailuo AI refuses to generate videos that have “Donald Trump” in the prompt, it decided that our prompt for “President of the United States” meant Trump, and generated footage of him. It also refused to generate content for “Taiwan” but allowed the island’s historic name, “Formosa.”

Sensitive terms, however, are constantly being updated. The video was quickly removed from our Hailuo AI account, and when we tried to use “Formosa” again a few weeks later, it was refused. 

The work itself was tedious. Five seconds of video from Minimax required five minutes to generate, so a news bulletin of one minute would require at least an hour of button-pressing — assuming you were happy with the footage that resulted. The same prompt generates different results each time, so simple things like continuity between shots were problematic. In our video, Trump looks slightly different in each shot and the backgrounds behind him and the journalists do not match. Sometimes prompts using “President of the United States” generated videos of Barack Obama instead of Trump.

This echoes what the director of the state-affiliated Bona Film Group said in July about a series of short films the group had generated using AI. She lamented the difficulty of maintaining continuity between shots: “The most difficult thing in real-life shooting happens to be the easiest thing for artificial intelligence, and the most difficult thing for artificial intelligence happens to be the easiest thing in real-life shooting.” 

Although we initially planned for a minute-long video, the amount of tedium involved — endlessly pushing buttons, writing prompts, waiting five minutes, and checking the results — would be enough to tempt an ordinary netizen working in their free time to cut corners.

Whether or not our endeavor impresses you, these videos represent only the possibilities at this moment in time. China’s AI is advancing at an incredible speed. Hailuo was released just over a month ago, and already the state-run China Central Television (CCTV) has used it — alongside Vidu and Kling — in a video marking this month’s National Day. If Chinese AI software is nearing standards for national broadcast, this suggests it is approaching a level of believability suitable for deepfakes.

So, could a Chinese netizen produce a believable deepfake with relative ease? Our experiment suggests the answer is yes — but with a lot of caveats. As AI tech improves, however, the list of drawbacks will surely shorten. In a matter of months, the tools available to deepfakers, including those backed by determined states, could be sufficiently sophisticated to have impacts that are all too real. 

After a Savage Attack, Brutal Silence

The brutal killing of a Japanese schoolboy in the Chinese city of Shenzhen last week has made headlines across the world. The wider context of the tragedy — that it happened on the anniversary of the “Mukden Incident” that began Japan’s invasion of China nearly a century ago, and just months after another nearly deadly attack on a Japanese mother and her child in another city — raises serious questions about how it might be linked to decades of anti-Japanese education, entertainment and cultural conditioning in China. 

But these are serious questions China’s media are not asking, or cannot ask.  

How the media in China have reported the incident domestically (or not) is an unfortunate reminder not just of how stringent controls have become, but also how detrimental this atmosphere has been to discussion of the darker undercurrents of contemporary Chinese society.

From the early stages of the incident, key details were missing. The police report from Shenzhen did not mention the boy’s nationality, age, or where the attack took place. Instead, news filtered into China through overseas media. Some of the earliest reporting of the response from the Japanese government inside China came from the WeChat account of Nikkei Asia. Several reports published on the day of the incident that noted the statement from Japan, including from Caixin and from Shanghai’s Guancha, have been scrubbed from the internet, yielding 404 errors. Another report from the news portal NetEase, which seemed to have included some on-the-ground reporting from Shenzhen, was also taken down.

Screenshot of the Caixin website on September 18, with a notice saying that the webpage “does not exist or has been deleted.”

In all likelihood, reports from outlets like the above were removed by the authorities because they jumped the gun, not waiting for an official news release (通稿) from Xinhua News Agency. Generally, for such sensitive stories, more compliant media know that protocol demands that they wait for official word. State media, therefore, kept silent on the issue until after Lin Jian (林剑), a spokesperson for China’s Ministry of Foreign Affairs (MFA), held a press conference late on September 18, and again on September 19.

During the second press conference, journalists from Nikkei Asia linked the Shenzhen attack to the June attack against the Japanese mother and her son in Suzhou, widening the context of the Shenzhen case. Lin insisted, however, that this was an isolated incident. “According to the information we have so far, this is an individual case,” he told reporters. “Similar cases can happen in any country.” 

That China’s foreign affairs ministry was out in front of Xinhua pointed to the sensitivity of the story, and its possible international impact. State media could now follow up on the story, but they limited themselves entirely to the MFA remarks. Hunan Daily, for example, the official mouthpiece of the province’s CCP leadership, quoted Lin Jian verbatim, offering no additional details or context. The same was true of Shanghai’s The Paper, published by the state-owned Shanghai United Media Group, and other provincial-level dailies such as Guizhou Daily.

A more detailed report from local newspaper Shenzhen Special Zone Daily (深圳特区报), published on September 20 and shared by Yicai and other online outlets, illustrated another time-worn propaganda approach — the single-minded focus on official action and heroism. The report focused on the valiant attempts by the emergency services to rescue the boy. There was again no link made to the previous stabbing in Suzhou. The attack, as the paper stated prominently at the top of its report — closely following the MFA line — was an “isolated case.”

The only touch of compassion in the report came in the form of a few quotes from locals laying flowers at the school. “No matter what the child’s nationality is,” one note at the scene reportedly read, “since he lives and studies in Shenzhen, he is a child of Shenzhen.”

Silencing Hints of Humanity

In fact, signs that the public was interested in and willing to discuss the broader context of the killing, as well as its causes and implications, were evident early on. Articles on WeChat linking the Shenzhen and Suzhou cases, as well as mentioning that images of the Japanese school’s entrance had been circulating on social media before the attack, were removed from the popular platform.

While such nuanced voices were stopped in their tracks, posts that urged online caution about the sensitivity of the September 18 anniversary were allowed to go viral. One example was an article on Baidu about netizen outrage over an internet vlogger who had her account suspended for allegedly disparaging the anniversary by calling it “June 18.” 

While silence about context was enforced on the day of the killing, state-affiliated media outlets readied the next, predictable phase — the demonizing of context itself. A commentary posted by Guancha on September 19 said that any netizens blaming the attack on patriotic education or “hate propaganda” would “produce a destructive effect” on society.

The Shenzhen attack is a sensitive story on a number of fronts for China. For starters, the government — which has touted increasing foreign visits as a mark of economic turnaround — is wary of frightening away foreign tourists, businesspeople, and investors. The attack, the third high-profile assault on foreigners in China in recent months, risks undermining the leadership’s message that China is open and ready to engage again with the world following the pandemic downturn. 

The attack also risks undermining the simplistic narrative, advanced by state media, that China is fundamentally a society encouraging tolerance among civilizations — which has lately been a key pillar of what the leadership calls “Xi Jinping Thought on Culture.” The case tells us that despite China’s rhetoric of civilizational tolerance, the country has its own share, like perhaps any country, of individuals capable of violent xenophobia. 

But the most sensitive aspect of this story, the most dangerous question that can be asked, is why. Why is China experiencing such violent attacks, and against the Japanese in particular? The answer to that question is no doubt complex. And yet, as netizens made clear in their early, stillborn conversations on the Shenzhen attack, the role of China’s officially encouraged culture of xenophobic ire — a culture of “toxic nationalism” — is a serious issue that needs to be addressed.

The brutal truth behind this savage attack is that this problem will not go away until the antipathy at its root, present in the media discourse of the state as much as in the heart of the attacker, can be faced head on.

How China Thinks About AI Safety

Matt Sheehan is a Fellow at the Carnegie Endowment for International Peace, specializing in China’s AI safety and governance. His latest paper highlights concerns within China’s government about regulating Artificial Intelligence. In July, he appeared on a panel at Shanghai’s World AI Conference (WAIC), a meeting of China’s top AI entrepreneurs, engineers, and lawmakers that was heavily publicized by state media. Alex Colville spoke with Matt to take the temperature of China’s AI industry and discuss the PRC’s current positions on AI safety and deepfakes.

Alex Colville: What is the current discussion on AI safety among PRC elites? 

Matt Sheehan: There’s been a huge change in the last 18 months or so. If you’d asked me two years ago “How salient is AI safety in China’s AI industry and policy?” I would have said that it’s something you’ll hear people talk about in private sometimes, but it’s very much not in mainstream policy discussion and it’s not even very well represented in the mainstream scientific and technical discussion. From, say, 2021 through mid-2023, AI regulation was a public opinion management issue, implemented at the department or ministry level, particularly by the Cyberspace Administration of China. Before the rise of ChatGPT, the only part of the government making reference to safety issues was the Ministry of Science and Technology, which had made explicit references to the loss of human control over AI systems. But then over the course of 2023 we started to see AI safety rise to prominence, especially in the elite scientific community.

We’ve seen a number of public statements from China’s most prominent computer scientists saying they share a lot of the international concerns about catastrophic risks from loss of control of unaligned AI systems. The question has been if that elite scientific conversation has been making its way into the elite levels of the CCP. Beginning with the Interim Measures for Generative AI [regulations issued by the CAC to monitor AI-generation services, in effect since August 2023], AI governance has become something the Politburo and the Central Committee want to be more directly involved in. They’ve shifted away from viewing AI governance as primarily a public opinion management issue that can be left to the CAC, and now view it as an issue of national power and geopolitical positioning.

I think in the past couple of months we’ve started to see some of the strongest indications that AI safety is getting a hearing at this level. In the recent Third Plenum Decision, a section on public security and public safety included the call for creating an “AI safety supervision and regulation system.” This is the first time we saw a direct reference to AI safety — used in reference to public safety rather than content safety — appear in a policy document at this high a level. It’s up for debate what exactly that supervision and regulation system might look like.

AC: How have China’s views on AI safety evolved compared to the views of the US and EU?  

MS: In the past when the CCP has talked about “safety” (安全) as it relates to AI, they mostly have been talking about ideological security, political stability, content security, and content safety. What’s happening now in the last year is that we’re starting to see them use the term “AI safety” (人工智能安全) in a way that does seem more aligned with Western usage of the term, more tied to potential catastrophic risks. In both the Third Plenum Decision and some of the explainer documents released afterward, they tend to include AI safety in a category of large-scale industrial safety or use terms that in the past used to refer to large-scale industrial accidents. So whereas for a while I’d say China and the West were talking about fundamentally different things even when we appeared to be using the same language of “safety,” there’s now this emerging area where our scientific communities might be talking about the same thing. 

AC: When we say elite levels of the CCP, we’re talking about a group of socially conservative men in their mid-60s, only a few of whom have been educated abroad or have STEM-related educations. What do you think is their understanding of AI and its capabilities?

MS: The answer is we don’t really know. You can learn a little bit by who they are talking and listening to. For example, Andrew Yao is probably the most respected computer scientist in China. He personally received a letter from Xi Jinping congratulating him on his 20 years working in China. He won the Turing Award in the United States and returned to China around 2000. He has this class called the “Yao Class” (姚班), which a lot of the major AI scientists or entrepreneurs in China went through. 

Andrew Yao. Source: Wikimedia Commons.

I’d say he’s one of the most important and listened to computer scientists within the halls of government. He’s become a very vocal proponent of AI safety being a serious issue that needs to be acted on. I haven’t seen his name in one of the Study Sessions of the Politburo, but I would not be at all surprised if he presents at those kinds of levels. We have to infer whether or not those messages are getting from him into the higher levels of the Chinese government, but at least circumstantially we can see some evidence of that.

The only full study session of the Politburo dedicated just to AI, back in 2018, was led by Gao Wen (高文), another relatively elite scientist who has also been talking more about AI safety and alignment recently. Peking and Tsinghua University professors have definitely been consulted on the regulations. There are also the heads of state-sponsored AI labs: Zhang Hongjiang (张宏江) of the Beijing Academy of Artificial Intelligence and Zhou Bowen of the Shanghai AI Lab.

AC: Watching Chinese reactions to the EU’s new AI Act has been interesting. An article from China’s AI Industry Alliance (AIIA) has one consultant talking about how Chinese laws will be “small incisions” (小切口法) versus the “horizontal legislation” (横向立法) in the EU’s Act. How do the EU and China differ in the way they create AI legislation? 

MS: I think how the regulatory architecture is built out is one of the biggest differences between China and the EU. From the start, the EU went for one large horizontal regulation that’s supposed to cover the vast majority of AI systems all in one go, and within that you just tier them by risk. The Chinese approach has been not to have a comprehensive law, but to try to attack it quickly at the departmental level with targeted application-specific regulations, specifically for recommendation algorithms [in effect since March 2022], deep synthesis [in effect since January 2023], and the generative AI interim measures. They didn’t start from the technology, they started from a problem as they saw it. They thought recommendation algorithms were threatening the ability to dictate the news agenda, while deep fakes and generative AI would threaten social stability. So they worked backwards from the problem to create regulations specifically targeted toward it. 

AC: China’s own AI Law has been in unofficial draft form since August 2023. Do you think this is something the leadership is still pursuing?

MS: A move towards a national AI law would be more aligned with the EU, but it’s unclear if they’re going to end up pursuing that. Policymakers often say their approach to AI should be “small, fast, flexible” (小快灵). They’re worried about some long process that ends up being immediately out of date. The EU did a lot of work on their AI Act, but then in the final stages ChatGPT came out, and they needed to do some significant rethinking of how to deal with foundation models. China released its deep synthesis regulation five days before ChatGPT came out, and ChatGPT totally shook up what was in that regulation. So they just went right back to the drawing board and a few months later had their generative AI regulation.

It’s very up in the air how specific, binding, and constraining the AI law would be if it came out. I think for a little while there was a lot of momentum and the thought was that the law would be pushed through very quickly. But it seems like the government has pumped the brakes a bit. They’ve got three different AI regulations already and currently want to be more pro-innovation — maybe they think they don’t need to roll out an entirely new national law on this right away. That makes sense from a number of perspectives: do you want to codify things in a law when the technology changes quickly? I think it’s most likely we won’t see the AI law text itself for another two years, maybe more, but they could surprise us on this.

Meanwhile, they’re also going to apply targeted standards to different industries, helping compliance with current regulations or to quickly integrate AI into different industrial processes. There are compliance standards about how to implement the generative AI regulation, exactly how to test your generative AI systems, and what performance criteria they need to meet in order to be compliant. 

AC: We’ve been finding a lot of variance on how closely current regulations are followed, for example on putting digital watermarks on AI-generated videos and whether models can create deepfakes of politicians. How standardized is the field at the moment, and is there going to be any pushback for not following standards and regulations?

MS: It seems like classic Chinese regulation, where they throw a lot down on paper, but then selectively enforce it to achieve their ends, and maybe one day have a big crackdown on non-watermarked content. [Note: Since this interview, the CAC has released a draft standard fleshing out the Interim Measures for Generative AI, listing in detail where and how watermarks should be used.] I don’t think a national AI law necessarily solves that. In a lot of ways, I think their management of AI-generated content is going to be like the broader never-ending game of whack-a-mole that governs the CAC’s relationship with big internet platforms, where they constantly go to companies and tell them how to cover an event they’re worried about. The rules require conspicuous watermarks on content that might “mislead the public,” and the point of the watermarks is to make sure that you’re not having people deceived in ways that are criminal or harmful to the party’s interests. There’s going to be a lot of AI-generated content that is not really that harmful, so you don’t actually need visible watermarks on every piece of AI-generated content. Maybe the CAC will choose to selectively enforce this, where if a company allows a misinformation or disinformation video to go viral on the platform, that company will be punished in line with the generative AI regulation requirements for not having a watermark. But if you just have online influencers choosing to replicate themselves so they don’t have to record new videos every day, that might not be something the CAC sees as a problem.

Deepfakes would not fall under the AI safety (人工智能安全) concept we’re seeing in policy documents or discussed at the International Dialogues on AI Safety, which is quite specifically referring to AI escaping human control or very large-scale catastrophic risks from the most powerful AI models. So that’s a much more forward-looking and speculative set of concerns the elite level of politicians are worried about, which is pretty different from complying with the generative AI regulations. To them, being able to generate deepfakes from one diffusion model but not another sounds like a compliance issue for the CAC. Maybe one company is either not as good at detecting unacceptable content, or they’re not putting much effort into it.

Matt Sheehan speaking on a panel at WAIC with AI policy scholar Zhang Linghan (张凌寒). Image courtesy of Matt Sheehan and Tsinghua University’s I-AIIG.

AC: We noticed you spoke at a panel for this year’s World AI Conference in Shanghai. What was that like?

MS: WAIC was a massive event. It’s always a big production, but this year was really the return to pre-Covid form, and you could see there was a concerted effort by organizers to use the event as a way of saying “we’re back.” They wanted foreigners to be there. They really wanted to tell the outside world that China is open, China is responsible with AI, and China is leading in some areas.

AC: Were there any moments that stood out for you?

MS: I was talking to some of the entrepreneurs who were impacted by the Interim Measures for Generative AI about how intensively it is still enforced. Prior to going to WAIC, it looked to me like the CAC had decided to be much more accommodating to AI companies. They really watered down the text of the generative AI regulation, between the draft and final versions, to make it more favorable to businesses. That was because they got pushback from other parts of the bureaucracy, who have an eye on the economy and competing with the US, saying “we cannot sacrifice our AI industry entirely at the altar of censoring what comes out of language models.”

In general, I think the CAC’s in a bit of an awkward position because the zeitgeist has shifted so much from the time of the tech crackdown [late 2020 – late 2023] to a focus on economic development, getting AI companies to thrive. So prior to the WAIC, I expected them to be a little bit more hands-off with AI companies when it comes to implementing the regulation. But being there and talking to folks I realized that this is all relative. Yes, they’ve eased up a bit from early 2023, but they’re still very, very hands-on with constantly testing the models, and re-adjusting every day, every week, what is unacceptable content.

AC: How much hype do you think there is in China’s presentation of its home-grown AI? 

MS: Hype is not China-specific, and every AI company in the world has been overhyping their products for the most part. Maybe in China there’s a little bit of an extra layer of showmanship just because it’s their tendency when it comes to these types of business products. I think in some ways, 2023 was all about catching up with Large Language Models once ChatGPT came out. Everyone was just competing on these performance metrics, making money wasn’t hugely necessary. Now there’s an increasing sense of needing to find a way to sell this service, to create applications that can make money. That’s probably going to be a little bit less grandiose in some ways, but it might actually put the industry on more stable footing going forward. I think the industry in China faces tons of problems with financing. There’s a very good chance we’re in the midst of a big AI bubble (or at least a large model bubble) that’s going to pop, with a lot of the biggest and best AI startups today going bankrupt in a few years. But that’s not unique to China, it might just be more acute in China than in other places.

Responding to AI Risks

The hefty document emerging from a much-awaited political meeting in Beijing back in July covered an expansive range of areas, from leadership and long-term governance to “comprehensively deepening reform” — but offered, as commentators noted, few specifics. This week, we may have greater clarity on at least one priority area: artificial intelligence (AI).

The Third Plenum decision, released on July 22, made it clear that AI safety has moved rapidly up the Party’s agenda. Both members of the powerful Politburo of the Chinese Communist Party (CCP) and prominent Chinese scientists have acknowledged in recent months that while AI has the potential to revolutionize China’s economy and geopolitical position, it could also have disastrous impacts on humanity. 

The “Decision” called on the party leadership to “improve the development and management mechanism of generative artificial intelligence.” In line with this goal, the document mentioned plans to create an “AI safety supervision and regulation system” (人工智能安全监管制度) — a response first mooted in October last year in China’s Global AI Governance Initiative. But with the AI sector undergoing breakneck development in China, what would such a system look like? 

On Monday, a special committee under the Cyberspace Administration of China (CAC) dealing with AI released an initial draft of what is being called the “AI Safety Governance Framework” (人工智能安全治理框架). Not only does it lay out in detail a swathe of AI-related risks the CAC is looking out for, but it also points to possible solutions to deal with these risks — with everyone from developers to netizens having a role to play.

Prevention and Response

The office in question, the “National Technical Committee 260 on Cybersecurity” (TC260), is charged within the CAC with liaising with industry experts to create IT standards for cybersecurity. Essentially, TC260 must work out how to ensure cybersecurity policies from the top are fleshed out for industry professionals to follow. Last year, for example, they published standards on exactly how to create a “clean” dataset for generative AI models (with all sensitive political content removed), in compliance with a new set of CAC measures on generative AI.
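To make concrete what a standard like that asks of engineers, here is a minimal sketch of blocklist-based corpus screening — the basic mechanic behind a “clean” training dataset. The blocklist terms, file names, and JSONL layout are hypothetical; the actual TC260 standard specifies its own keyword libraries, sampling rules, and pass-rate thresholds.

```python
# A minimal sketch of blocklist-based corpus screening. The blocklist
# terms, file names, and JSONL layout are hypothetical placeholders;
# the real TC260 standard defines its own keyword libraries and
# sampling thresholds.
import json

BLOCKLIST = {"placeholder_sensitive_term_1", "placeholder_sensitive_term_2"}

def is_clean(text: str) -> bool:
    """Reject any record containing a blocklisted term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

kept = total = 0
with open("raw_corpus.jsonl", encoding="utf-8") as src, \
     open("clean_corpus.jsonl", "w", encoding="utf-8") as dst:
    for line in src:
        total += 1
        # Keep only records whose "text" field passes the screen.
        if is_clean(json.loads(line)["text"]):
            dst.write(line)
            kept += 1

print(f"kept {kept} of {total} records")
```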

China’s new “AI Safety Governance Framework,” released this week by the CAC.

After laying out the general principles of AI security, the document is structured around preventive and countermeasure actions for a series of bolded points of risk (风险分类). Types of risk are divided into two overarching categories — endemic AI risks (人工智能内生安全风险) and AI application risks (人工智能应用安全风险). As the terms suggest, the first category deals with the risks inherent to AI by its very nature, while the second deals with the technology’s possible misuse or abuse with harmful outcomes.

The framework introduced by TC260 this week considers a host of AI-based risks, including AI becoming autonomous and attempting to seize control from humans. “With the rapid development of AI technology, it is not ruled out that AI can independently acquire external resources, reproduce itself, generate self-awareness, and seek external power, and bring the risk of seeking to compete with humans for control,” the document reads. This more dramatic scenario, however, comes as the final flourish on a lengthy list of risks that many AI experts globally would recognize.

The more practical, current risk scenarios detailed in the document include such things as the use of AI in criminal activities, the inadvertent release of state secrets, misinformation through AI hallucination, the deepening of racial and gender discrimination, and external AI-related risks such as the “malicious” blocking by other states of the global AI supply chain. The document also raises the concern that AI might “challenge the traditional social order” by subverting general understandings around issues like employment, childbirth, and education. On this last point, here is an example of how the document lays out and addresses such risks:

The risk of challenging the traditional social order. The development and application of artificial intelligence may bring about significant changes in the means of production and production relations, accelerate the reconstruction of traditional industry models, subvert the traditional concepts of employment, reproduction and education, and challenge the stable operation of the traditional social order.

The document follows on from these risks by listing out a series of both “technical response measures” (技术应对措施) and “comprehensive governance measures” (综合治理措施). For example, in outlining responses to the question of data security — referring in this instance to users’ personal data — the document notes the need to follow “safety rules for data collection and use, and personal information processing” in the course of “the collection, storage, use, processing, transmission, provision, disclosure, and deletion of training data and user interaction data.” And in other cases, the responses point to the further need for other concrete mechanisms to deal with underlying risks. In its response to “network domain risk” (网络域风险), for example, the document notes the need to “establish a security protection mechanism to prevent the output of untrustworthy results due to interference and tampering during the operation of the model.”

Endemic Uncertainties

One source of generative AI risk that comes through at a number of points in the CAC document concerns the “poor explainability” (可解释性差的风险) of the decisions AI makes. “The internal operation logic of AI algorithms represented by deep learning is complex, and the reasoning process is in black and gray box mode,” the document says, “which may lead to output results that are difficult to predict and accurately attribute, and if there are any anomalies, it is difficult to quickly correct them and trace them back to the source.” At issue here is the basic nature of neural networks, which makes it virtually impossible to identify and repair the “thought” processes of generative AI. The result is the AI black box — a general and unpredictable source of risk (a gray box is when a developer partially knows the arrangement of the neural network, but not everything). Even as the CCP hopes to harness AI for what it calls “high-quality development,” these inherent uncertainties are a frustration the authorities are keen to anticipate and resolve.

The black box exacerbates two other problems TC260 identifies: “hallucination,” when an AI model presents a garbled, inaccurate answer as fact, and “poisoned” training data, when an AI model says something politically or socially harmful because of the data on which it was trained. Such content, warns the CAC group, could lead to problems like fake news, racially discriminatory language, and personal data theft. It might also compromise “ideological security” (意识形态安全). But these problems are extremely difficult to eliminate. As Qihoo 360 CEO Zhou Hongyi (周鸿祎) acknowledged last month, eradicating hallucinations in AI’s current set-up is impossible.

The “AI safety supervision and regulation system” recognizes that these endemic issues might be exploited by human beings using generative AI. The CAC document says protection mechanisms for AI models must be put in place to ensure that harmful prompt words do not generate “illegal and harmful content.” Poking around with Chinese LLMs at the China Media Project, we have often discovered how easy it can be to generate content the PRC would deem to be politically harmful with the help of AI — as when we got iFLYTEK’s model to hallucinate when discussing the Tiananmen Massacre. 

Finding Solutions

On the question of what can be done to minimize the risks that come with generative AI, TC260 offers a long list of suggestions. 

Some are simple and sensible: China should work to clean up AI datasets, raise public awareness about the dangers of AI, and ensure users do not rely solely on AI to inform their decisions. Others are more wishful. TC260 says it wants to improve the “explainability and predictability” of AI, essentially eradicating AI’s black box. This is not currently possible given how LLMs have been built — a fact the office acknowledges further down in the document as it urges further research and development in this area. Perhaps just as unrealistic is the CAC’s suggestion that “the public should carefully read the product service agreement before use” — something few users anywhere in the world actually do. 

Many of these recommendations are nothing new. TC260 have themselves already created a risk management process to eliminate sensitivities when training Large Language Models, while state media have been raising awareness of the hazards of AI for some time. Other solutions have only just started being rolled out by the CAC, such as a “self-discipline initiative” in late August, designed to raise awareness in the industry about the importance of data security, model compliance, and ethical standards.

One solution that could be crucial is for the authorities to actually enforce rules already on the books that can have a real impact — and that are already emerging as standard practice elsewhere. Nearly two years ago, in November 2022, the CAC released rules requiring digital watermarks on AI-generated video content. These rules have not been uniformly observed as AI companies have focused on revenue generation, and the authorities seem for now to be looking the other way. Some major AI-generation video companies will remove all watermarks as an incentive for paid subscriptions. Despite the lack of enforcement on the labeling issue, the CAC document released this week flags the difficulty of identifying deepfakes as a crucial area of risk, and acknowledges that prior regulation has been inadequate. “We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels,” the document concludes.
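For a sense of how lightweight explicit labeling is on the production side, here is a minimal sketch that burns a persistent caption into a video using ffmpeg’s drawtext filter (assuming an ffmpeg build with libfreetype). The wording and placement are hypothetical, not those mandated by any CAC standard.

```python
# A minimal sketch of explicit labeling: burning a persistent caption
# into a video with ffmpeg's drawtext filter (requires an ffmpeg build
# with libfreetype). The label wording, size, and placement here are
# hypothetical, not the requirements of any CAC standard.
import subprocess

label = "AI-generated content"  # placeholder wording for an explicit label
drawtext = (
    f"drawtext=text='{label}':fontcolor=white:fontsize=24:"
    "box=1:boxcolor=black@0.5:x=10:y=h-th-10"  # semi-opaque box, bottom-left
)

subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-vf", drawtext,
     "-c:a", "copy", "labeled.mp4"],
    check=True,
)
```

The catch, of course, is that a burned-in label is only as durable as the pixels it covers — which is why enforcement, not tooling, is the harder problem.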

TC260’s framework is not a codified document or binding law. It is more a roadmap — or even a wishlist — for how authorities want the tech industry to think about AI safety governance. The details will be thrashed out later, subject to constant revision and elaboration through subsequent CAC notices. This has been the practice with internet control and regulation for decades. Right now, China’s strategy on AI regulation is to make “small incisions” (小切口法) in the form of standards and guidelines, adding a level of flexibility for a rapidly changing technology that they consider lacking in a one-size-fits-all AI law like the European Union’s Artificial Intelligence Act.

The CAC framework will certainly have multiple updates, this week’s being only the “first version.” This list of risks and responses, like the tech, is liable to change fast.

Farewell, Microblog

In its early stages in 2009, Sina Weibo built its success on larger-than-life personalities known as the “Big Vs” (大V), who were meant to be magnets attracting conversation — and much-desired traffic — to the platform. The strategy worked, and by 2010 media would proclaim that China had entered the “Weibo Era” (微博时代). But within several years, the idea of a privately-owned tech platform building mass audiences outside of CCP control would become untenable for the leadership. A 2014 crackdown on “Big Vs” was the beginning, some might say, of the inexorable unraveling. 

Now, 15 years on from the “beta” launch of Weibo, it may be time to ask: has life gone out of the platform? This week, private tech news service 36Kr ran a feature about the lack of any genuine celebrities in attendance at Weibo’s “Super Celebrity Festival” (微博超级红人节) awards. At Weibo’s initial launch in 2009, users were attracted by the chance to hear directly from informed experts on a range of social, economic, and even political topics. These experts accrued large followings and were generally known as “public intellectuals” (公共知识分子), or gongzhi for short.

The Surrounding Gaze

Just seven years ago in the Shanghai-based outlet Sixth Tone, which has since fallen on its own hard times, researcher Han Le noted how these figures had the power to shape social participation around large-scale breaking stories, such as the 2011 Wenzhou train collision and the 2015 Tianjin explosion. “Public intellectuals stepped into the breach,” they wrote, “largely encouraging the government to conduct thorough and open investigations, to properly commemorate those who had died, and to further ensure that similar tragedies do not occur in the future.”

Chinese-American businessman Charles Xue (薛蛮子), a “Big V,” became one of the first to fall in Xi Jinping’s crackdown on influencers in 2014. ABOVE: Screenshot of his forced confession, broadcast on CCTV’s official Xinwen Lianbo program.

China’s leaders, who today still make it their business to “guide public opinion” through the control of media and communication, had long bristled at the notion of “public intellectuals” outside the official system. The emergence of op-ed pages in commercial metro newspapers (都市类报纸) in the early 2000s had given rise to a broader range of voices. In December 2004, the Central Propaganda Department-run Guangming Daily (光明日报) ran a series of scathing attacks on the notion of “public intellectuals,” which it dismissed as a dangerous product of Western social thought.

But the emergence of Weibo in the 2010s was something different entirely — a grassroots platform with the power to gather the attention of millions, within seconds, even as the authorities scrambled to take microblog posts down. The “Big Vs” were the amplifiers in this process of attention-grabbing, which some framed in new terms of cyber social activism as the “surrounding gaze” (围观) — the idea that if everyone bore witness to wrongs, then those in power would have to react. 

The Ineluctable Fizzle

A decade on from Xi Jinping’s concerted push to rein in the “Big Vs” created by Weibo’s original celebrity push, the platform seems a shadow of itself. Competition from more personalized apps like Douyin and Xiaohongshu, and unrelenting pressure facing more controversial accounts, have driven a mass migration of Weibo users. Today, writes 36Kr, Weibo’s special community feel has vanished. The open discussions that once buzzed around public intellectuals are gone. 

The platform has literally lost a measure of its humanity. Traffic is often driven these days by “Big Vs” who push controversial topics purely to attract traffic, or by marketing accounts that do the same with an eye to driving up product sales. A common feature of both, according to 36Kr, is that “they rarely show their real lives, and are more like AI robots.” This comes, says the outlet, as bots and trolls have proliferated on Weibo over the past 10 years.

Politics has of course made its own contributions to the disappearance of public intellectuals from the platform. Former Global Times editor-in-chief and “Big V” Hu Xijin (胡锡进) has not posted anything on Weibo since late July, when his influential account was suspended for an unauthorized interpretation of the Third Plenum decision. On August 7, the account of Lao Dongyan (劳东燕), a criminal law professor at Tsinghua University with a respectable following of her own, was also banned for defending her criticisms of upcoming internet IDs for Chinese netizens.

Forums like Zhihu (知乎) or WeChat Moments still provide a town square of sorts for groups to form, but these are smaller, devoid of the larger-than-life “public intellectuals” of Weibo who once served as known voices for netizens to rally round. Going forward, the roll-out of “internet IDs” by the Cyberspace Administration of China could make netizens even less willing to form communities on the Chinese internet. As for those big personalities, these are not the days to stick one’s head above the parapet — or to show up for a “Super Celebrity Festival.” Many are lying low, which makes China’s internet a far quieter place.

China’s AI Hallucination Challenge

It was a terrible answer to a naive question. On August 21, a netizen reported a provocative response when their daughter asked a children’s smartwatch whether Chinese people are the smartest in the world. 

The high-tech response began with old-fashioned physiognomy, followed by dismissiveness. “Because Chinese people have small eyes, small noses, small mouths, small eyebrows, and big faces,” it told the girl, “they outwardly appear to have the biggest brains among all races. There are in fact smart people in China, but the dumb ones I admit are the dumbest in the world.” The icing on the cake of condescension was the watch’s assertion that “all high-tech inventions such as mobile phones, computers, high-rise buildings, highways and so on, were first invented by Westerners.”

Qihoo 360’s smartwatch.

Naturally, this did not go down well on the Chinese internet. Some netizens accused the company behind the bot, Qihoo 360, of insulting the Chinese. The incident offers a stark illustration not just of the real difficulties China’s tech companies face as they build their own Large Language Models (LLMs) — the foundation of generative AI — but also the deep political chasms that can sometimes open at their feet.

Qihoo Do You Think You Are? 

In a statement on the issue, Qihoo 360 CEO Zhou Hongyi (周鸿祎) said the watch was not equipped with its most up-to-date AI. It was installed with tech dating back more than two years to May 2022, before the likes of ChatGPT entered the market. “It answers questions not through artificial intelligence,” he said, “but by crawling information from public websites on the Internet.” 

The marketing team at Qihoo 360, one of the biggest tech companies invested in Chinese AI, seems to disagree. The watch has indeed been on sale since at least June 2022, meaning its technology can already be considered ancient in the rapidly developing field of AI. But they have been selling it on JD.com as having an “AI voice support function.” We should also note that Qihoo 360 has a history of denials about software on its children’s watches. So should we be taking Qihoo 360 at its word?

A screenshot of the watch from 360’s self-operated store on JD.com, with “AI Voice support” in the bottom-right corner.

Zhou added, however, that even the latest AI could not avoid such missteps and offenses. He said that, at present, “there is a universally recognized problem with artificial intelligence, which is that it will produce hallucinations — that is, it will sometimes talk nonsense.”

Model Mirage

“Hallucinations” occur when an LLM combines different pieces of data together to create an answer that is incorrect at best, and offensive or illegal at worst. This would not be the first time that the LLM of a big Chinese tech company said the wrong thing. Ten months ago, the “Spark” (星火) LLM created by Chinese firm iFLYTEK, another industry champion, had to go back to the drawing board after it was accused of politically bad-mouthing Mao Zedong. The company’s share price plunged 10 percent.

This time, many netizens on Weibo expressed surprise that the posts about the watch — which drew barely four million views — had not trended as strongly as perceived insults against China generally do, never becoming a hot search topic.

For nearly any LLM today, the hallucinations Zhou Hongyi referred to are impossible to control completely. For those wanting to trip models up to create humorous or embarrassing results, or even to override safety mechanisms — a practice known in the West as “jailbreaking” — this remains relatively easy to do. This presents a huge challenge for Chinese tech companies in particular, which are strictly regulated to ensure political compliance and curb incorrect information, even as a “Hundred Model War” pushes them to develop and release LLMs at speed.

As China’s engineers know only too well, it is not possible to plug all the holes. Reporting on the Qihoo story, the Beijing News (新京报) said hallucinations are part of the territory when it comes to LLMs, quoting one anonymous expert as saying that it was “difficult to do exhaustive prevention and control.” Interviewees told the Beijing News that steps can be taken to minimize untrue or illegal language generated by hallucinations, but that removing the problem altogether is impossible. In a telling sign of the risks inherent in acknowledging these limitations, none of these sources wanted to be named. 

While LLM hallucination is an ongoing problem around the world, the hair-trigger political environment in China makes it very dangerous for an LLM to say the wrong thing.

China’s AI Hype Gets a Reality Check

Hopes are high for AI in China. Not only, according to prevailing narratives, will the country’s advanced artificial intelligence enable it to rival the United States in a critical field of emerging tech, but this act of one-upmanship will also help to cement China’s role as a great power and ensure that it avoids another “Century of Humiliation” — the roughly 100-year period from the First Opium War to the end of WWII, when foreign powers dominated China and carved off chunks of its territory.

In fact, hopes might even be too high. State media have recently begun cautioning AI’s cheerleaders that they need to tone it down.

An article last month in the China News Publishing & Broadcasting Journal (中国新闻出版广电报) — a periodical aimed at media specialists and printed by a media group directly under the Central Propaganda Department — reminded readers that AI-generated content (AIGC) is still “in its infancy” and can’t be expected to perform miracles just yet. Some outlets know too little about AI’s current capabilities and limits, the piece says, yet they have launched full-blown AI projects that have, predictably, stagnated. It urges Chinese media to “avoid blindly following trends” and buying into “excessive hype.”

For some newsrooms, AI in its “infancy” is behaving more like a problem child than a wunderkind.

Unknown Input

For years, the CCP has made it clear that AI development is both a strategic priority and a point of national pride. It is “a new focus of international competition,” as per a State Council document from 2017. Key communiqués in 2024 from both the government and the Party indicate that pushing AI remains a priority.

The powerful Cyberspace Administration of China made the stakes clear in an article in People’s Daily earlier this year, when it said AI could do for China in the 21st century what the Industrial Revolution did for the UK in the 19th century: transform it from a marginal set of islands into the world’s greatest empire. China’s weakness at the time, the CAC editorial says, was the consequence of turning away from the latest technology. Since then, media nationwide have been pushing AI-generated content hard. By the end of June this year, eleven outlets had opened their own specialized AIGC studios (AIGC工作室) or collaborated to create their own AIGC content and Large Language Models (LLMs).

Jiangxi’s propaganda department used AI to create an audiobook about “Old Auntie” Gong Quanzhen (龚全珍), a Communist Party “moral exemplar” who passed away last year.

Some of these ventures have been successful. Chengdu Radio and Television collaborated with over ten other provincial stations to create AI videos promoting the distinct features of each locality. The humble Weifang Bohai International Communication Center has a remarkably life-like AI anchor courtesy of China Daily that’s been delivering weekly bulletins since the start of July. But others have promised more than they can deliver. An online platform supervised by Jiangxi’s propaganda department announced that “after more than a year of exploration and practice,” it was opening its own AIGC studio with a laundry list of AI-based content — almost none of which has materialized.

Embracing new tech is no guarantee it will be easy to implement, even for those with access to the best resources. Some of the biggest outlets in the country have already had LLMs of their own for some time now, but despite boasting of how they shorten production times from weeks to a few days, they have produced little with these tools. CCTV used a new LLM of its own to create a series of AI-generated videos in February, promising 26 episodes but stopping after just six. The official broadcaster has published many more AI videos since then, but no longer lists the LLM used to generate them. Shanghai Radio and Television’s AIGC studio, meanwhile, has only produced five videos in as many months — not exactly an appreciable gain in efficiency.

Perhaps they have been encountering the same problems as Bona Film Group with its new AI-generated series on Douyin. Technicians at the state-owned production company told reporters they were struggling to keep the algorithm from hallucinating, to ensure continuity between shots, and to realistically depict the human body in motion. This has also troubled Kling (可灵AI) from Douyin rival Kuaishou, which, despite featuring in the opening of a recent New York Times report on Chinese AI “closing the gap,” still has significant drawbacks. A Kling video shared online shows a gymnast whose limbs morph and merge like one possessed. We asked it to animate a photo of swimmers diving into a pool and the results defied gravity.

Nevertheless, Kuaishou says it will be using Kling to make a micro-drama series.

Some AI-generation software is better than others, but many of the precedents so far are less than promising. Take, for example, Yangcheng Evening News (羊城晚报), a paper under Guangdong’s provincial propaganda department. The paper recently used Tencent’s “Hunyuan” (混元) LLM to create several videos announcing the establishment of its AI lab — mostly psychedelic, four-second clips spliced together in mismatched artistic styles. The novelty and prestige of the tech may excuse the patchiness of these teasers, but any full-length news bulletin or documentary created with this tool would be unwatchable.

Overhyped Output

AI has generated hype all around the world, but the nature of China’s political system makes that hype harder to call out, at least in public. Chinese state media and tech companies have been trumpeting the big promises and potential of AI while turning a blind eye to the teething problems outlets now face in implementing the technology. The media’s job, authorities have made clear, is to push positive messaging about China’s technological development, and AI-generated videos are very effective public displays of that. At the same time, tech firms are locked in cut-throat competition to convince the media of their successes.

Humanoid robots on display at the WAIC. Photo: VCG

This dynamic played out at the World AI Conference in Shanghai earlier this month. State media ran a series of pieces showcasing the million-dollar deals made, with Xinhua pointing to the event as evidence of the “innovative vitality” of Chinese AI. But privately-owned, Nasdaq-listed 36Kr was less impressed. Its reviewer noted the event’s popularity, and the photogenic wall of robots at the entrance for visitors to take selfies in front of, but found that nothing inside was actually new. Most of the big companies were doing the same thing. “They all play with general large models, and then make AI-generated pictures and videos,” the reviewer wrote.

AI is still an evolving technology, full of uncertainty about how it can and cannot be used. There have even been rumblings in the West that LLMs may be a dead end altogether, unable to improve further, or at least not without extreme difficulty. For its part, China’s leadership is dead-set that this is the way forward, and state media like the Economic Daily (经济日报) have urged readers to ignore fluctuating sentiment on Wall Street about AI investments. Bullishness is the order of the day. Now it’s up to the country’s media outlets to find a way to make the promise of AI real.

How to Push China’s Narrative Abroad

Highlighting the growing role of China’s provinces in the state-led push to bolster its global messaging, a media delegation from the South American country of Guyana visited a propaganda office-run international communication center (ICC) in the coastal province of Shandong this week — with at least one outlet signing an agreement for cooperation. 

Members of the Guyana media delegation came from several major outlets, including the country’s state-owned television and radio broadcaster, the National Communications Network (NCN), as well as Stabroek News, Kaieteur News, and the Guyana Times.

During their visit, the Guyanese outlets toured the facilities of the Shandong International Communication Center (山东国际传播中心), or SICC, a center established in November last year under the state-owned Shandong Radio and Television (山东广播电视台), tasked with boosting Chinese propaganda abroad.

In a formal ceremony on Monday, the SICC signed a cooperation agreement with the Guyana Times (圭亚那时报), with both sides pledging to “deepen cooperation in the exchange of news copy, personnel, branding and other aspects.” The Guyana Times, which identifies itself in its motto as a “beacon of truth,” was first launched in 2008 as the country’s first full-color broadsheet, and now runs an online news portal as well as radio and television channels, directed at the country’s population of just over 800,000 as well as diaspora communities in the United States and Canada.

The Chinese embassy in Guyana has been playing a long game in wooing the country’s media. In December 2022, the embassy hosted an event for journalists in Guyana, with the ambassador telling those assembled (a group that included the CEO of NCN) that they needed to better understand China. “They should not simply reprint news from Western media, but should also pay attention to Chinese media reports.” The embassy notes expressly on its profile for the country that Guyana’s print media have mainly resorted to Western sources for China-related coverage.

The embassy-hosted event closed with remarks from NCN anchor Samuel Sukhnandan on his experiences two months earlier in a training course at the International Press Communication Center (IPCC) in Beijing. Directly under China’s Ministry of Foreign Affairs (MOFA), the IPCC hosts courses and internships for journalists, largely from the Global South, to introduce China’s society and political system and encourage what MOFA, in a lengthy text on public diplomacy strategies, called “objective media reporting on China.” During his Beijing training course, Sukhnandan submitted a news account of the CCP’s 20th National Congress to the Guyana Chronicle. In the report, he quoted liberally from Xi Jinping’s political report, without any additional sourcing or context, and closed by saying that the political event would “culminate” the following Saturday.

According to the embassy read-out of the December event back in Guyana, Sukhnandan said that after attending the IPCC course he realized “Western media reports on China were often one-sided and inaccurate, and he was willing to work hard to enhance objective reporting on China in the future.” 

Sukhnandan is back in China this week, taking part in the tour of the Shandong ICC, which is applying at the provincial level the lessons that MOFA has pushed at the national level.

Local communication centers like the one in Shandong are spearheading efforts promoted by the leadership since 2018 to “innovate” foreign-directed propaganda under a new province-focused strategy. This allows the leadership to capitalize on the resources of powerful commercial media groups at the provincial level, like Shandong Radio and Television, which can also — or so is the hope — tell more compelling stories, as Xi Jinping has made “telling China’s story well” the heart of the country’s external push for propaganda and soft power.

ICC development is also premised on the introduction of new technologies, including AI, to media production, and the perception that Chinese outlets are at the cutting edge of media technology may be an important draw for participating Guyanese media. Shandong’s Integrated Media Information Center (融媒资讯中心), which works to apply emerging technology to traditional media practices, gave a demonstration of its work to the visiting delegation. In response, Sukhnandan told his hosts he was amazed by the center, which was far beyond what he was used to back in Guyana, where NCN remains the only live television broadcaster.

This push to attract Guyana’s media is in line with China’s concerted effort to offset the impact of Western media in Global South countries. CCP leaders have repeatedly sent the message that international communication is a top priority. The issue was the focus of a collective study session of the CCP Politburo three years ago, and the Decision emerging from the recent Third Plenum, which closed just days ahead of the Guyana delegation’s visit, urged cadres to build a stronger system to “improve the effectiveness of international communication.”

When Worlds Collide

Government and private tech have teamed up to create the first AI-generated sci-fi short-video series in China. “Sanxingdui: Future Apocalypse,” released on July 8, imagines a world far in the future in which characters travel back to the Bronze Age Sanxingdui (三星堆) civilization of southern China. The series consists of 12 three-minute clips, generated with human guidance, edited through Douyin’s “Jimeng AI” (即梦AI) algorithm, and then released on the company’s short-video platform. Douyin has already reported more than 20 million views.

The series combines the slickness of Douyin tech with the media know-how of the State Council’s National Radio and Television Administration (NRTA) and the Bona Film Group, one of China’s biggest production companies and a subsidiary of the state-owned mega-conglomerate Poly Group. At a press briefing, Bona executives explained how the Jimeng algorithm had generated video through the input of original images, responding to prompts on camera angles and movement speeds.

This production process is a convergence of trends that the Chinese Communist Party has been pushing forward for years to modernize the media. To look at the show is to look at some of the first sprouts of the Party’s long-term goals for communication.

Modernizing Messages

Since at least the “Three Closenesses” of the early 2000s, the Party has been saying that it needs to make its messaging more attractive to the masses. President Xi Jinping’s focus on a combination of virality and control is just the latest iteration of this. “Wherever the readers are, wherever the viewers are, that is where propaganda reports must extend their tentacles,” he told the People’s Liberation Army Daily in 2015, “and that is where we find the focal point and end point of propaganda and ideology work.”

Partnerships between private media companies and stuffy state institutions have helped breathe life into ideology. In 2023, “The Knockout,” released by iQIYI, managed to be a successful and gripping TV show about the mundane topic of grassroots corruption, produced in partnership with the Central Political and Legal Affairs Commission under the CCP Central Committee. “Sanxingdui: Future Apocalypse” is not even the first time Douyin, Bona, and the NRTA have teamed up on a project — they did so back in 2021 and 2022 for the “Battle at Lake Changjin” franchise, a tub-thumping war epic about Chinese soldiers fighting in the Korean War.

Then there is the content of this recent collaboration. In 2013, Xi Jinping urged cadres to adapt traditional Chinese cultural relics to modern realities — indeed, he said they had to “come alive” and be “promoted in a way people love to hear and see.” Since then, there have been multiple attempts across state media to bring traditional Chinese culture to life for contemporary audiences.

As for Sanxingdui, the Party has promoted education about the site ever since a new round of excavations began there in 2020, as it is seen as a counterpoint to claims that southern China was simply colonized by Han people from the north — the People’s Daily credits the site with proving that “Chinese civilization” (中华文明) did not spring merely from the banks of the Yellow River. State media even set the relics to pop music back in 2021 in an attempt to raise their public profile.

Combining traditional Chinese culture with a forward-looking genre like sci-fi is a good way to bring the former up-to-date. Since 2020, the China Film Administration (CFA) has offered generous subsidies for domestic sci-fi productions through a series of initiatives. Merging the old and the new worked for author Hai Ya (海漄), who was awarded — under dubious circumstances — the Hugo Award for best novella last year. His story centered on a Beijing cop who time-slips back to the Song dynasty, learning about a famous traditional painter in the process.

Harnessing AI

There’s no better way to combine sci-fi and cutting-edge modernity than with the hot topic of AI-generated video. In China, AI is both a byword for modernity and an official policy, with the government having set out a staged national AI strategy back in 2017, gunning for technological breakthroughs and world firsts. This year, Premier Li Qiang announced the launch of the “AI+” policy at the annual Two Sessions, intended to integrate AI into all of China’s industries — media included.

Recently, others have also tried to position themselves at the intersection of traditional culture with AI and science fiction. Take China Media Group, for example, whose “China AI Festival” in Chengdu last May featured a trailer for a TV show about kungfu set in modern Shenzhen, giving prominent billing to the show’s AI-generated characters. At the same festival, Alibaba’s AI studio made the terracotta army literally come to life — as per Xi’s instructions — and break into a rap for state broadcaster CCTV.

“Sanxingdui: Future Apocalypse” will likely please the Party with its exclusive release on Douyin, embodying a push within state media to prioritize distribution via social media. Since 2014, Xi has made it clear that traditional media must integrate with emerging media to better reach audiences. Buzzwords such as “mobile first” (移动优先) started appearing in the late 2010s, when officials noticed that the most effective channel for communicating with people was social media apps. Years later, this has only become more pronounced: by 2022, 99.8 percent of China’s internet users were going online by smartphone, compared with 32 percent by laptop.

State media launched a coordinated campaign in the late 2010s to migrate to social media platforms, and have adapted their messaging to suit the medium, releasing short videos with cutesy aesthetics.

AI-generated content, however, still has a long way to go. The director at Bona’s AI-generation center told reporters that although AI sped up some parts of production, the algorithm tended to hallucinate. It struggled to maintain consistency between shots and to accurately depict the human body in motion. It also couldn’t generate high-quality special effects, which had to be added in post-production. “The most difficult thing in real-life shooting happens to be the easiest thing for artificial intelligence, and the most difficult thing for artificial intelligence happens to be the easiest thing in real-life shooting,” she said.

But listing these problems is intended to help push the technology forward, not to dissuade others from using it. Since the very beginning of his leadership, Xi Jinping has been saying that traditional media and culture must be fused with modern technology. This is not just a futuristic show — it’s a taste of the media of tomorrow that the Party has been planning for at least a decade.