
How China Thinks About AI Safety

Sep 19, 2024
Alex Colville

China’s attitudes towards AI may be at a turning point, with officials not only pushing it up the list of priorities but also coming to terms with the existential risks it may pose.


Matt Sheehan is a Fellow at the Carnegie Endowment for International Peace, specializing in China’s AI safety and governance. His latest paper highlights concerns within China’s government about regulating artificial intelligence. In July, he appeared on a panel at Shanghai’s World AI Conference (WAIC), a meeting of China’s top AI entrepreneurs, engineers, and lawmakers that was heavily publicized by state media. Alex Colville spoke with Matt to take the temperature of China’s AI industry and discuss the PRC’s current positions on AI safety and deepfakes.

Alex Colville: What is the current discussion on AI safety among PRC elites? 

Matt Sheehan: There’s been a huge change in the last 18 months or so. If you’d asked me two years ago “How salient is AI safety in China’s AI industry and policy?” I would have said that it’s something you’ll hear people talk about in private sometimes, but it’s very much not in mainstream policy discussion, and it’s not even very well represented in the mainstream scientific and technical discussion. From, say, 2021 through mid-2023, AI regulation was a public opinion management issue, implemented at the department or ministry level, particularly by the Cyberspace Administration of China (CAC). Before the rise of ChatGPT, the only part of the government making reference to safety issues was the Ministry of Science and Technology, which had made explicit references to the loss of human control over AI systems. But then over the course of 2023 we started to see AI safety rise to prominence, especially in the elite scientific community.

We’ve seen a number of public statements from China’s most prominent computer scientists saying they share a lot of the international concerns about catastrophic risks from loss of control of unaligned AI systems. The question has been whether that elite scientific conversation has been making its way into the elite levels of the CCP. Beginning with the Interim Measures for Generative AI [regulations issued by the CAC to monitor AI generation services, in effect since August 2023], AI governance has become something the Politburo and the Central Committee want to be more directly involved in. They’ve shifted away from viewing AI governance as primarily a public opinion management issue that can be left to the CAC, and now view it as an issue of national power and geopolitical positioning.

I think in the past couple of months we’ve started to see some of the strongest indications that AI safety is getting a hearing at this level. In the recent Third Plenum Decision, a section on public security and public safety included a call for creating an “AI safety supervision and regulation system.” This is the first time we have seen a direct reference to AI safety, used in the sense of public safety rather than content safety, appear in a policy document at this level. It’s up for debate what exactly that supervision and regulation system might look like.

AC: How have China’s views on AI safety evolved compared to the views of the US and EU?  

MS: In the past, when the CCP talked about “safety” (安全) as it relates to AI, they were mostly talking about ideological security, political stability, content security, and content safety. Over the last year, we’ve started to see them use the term “AI safety” (人工智能安全) in a way that does seem more aligned with Western usage of the term, more tied to potential catastrophic risks. In both the Third Plenum Decision and some of the explainer documents released afterward, they tend to include AI safety in a category of large-scale industrial safety, or use terms that have historically referred to large-scale industrial accidents. So whereas for a while I’d say China and the West were talking about fundamentally different things even when we appeared to be using the same language of “safety,” there’s now this emerging area where our scientific communities might be talking about the same thing.

AC: When we say elite levels of the CCP, we’re talking about a group of socially conservative men in their mid-60s, only a few of whom have been educated abroad or have a STEM background. What do you think is their understanding of AI and its capabilities?

MS: The answer is we don’t really know. You can learn a little bit from who they are talking and listening to. For example, Andrew Yao is probably the most respected computer scientist in China. He personally received a letter from Xi Jinping congratulating him on his 20 years working in China. He won the Turing Award in the United States and returned to China in 2004. He has this class called the “Yao Class” (姚班), which a lot of the major AI scientists and entrepreneurs in China went through.

Andrew Yao. Source: Wikimedia Commons.

I’d say he’s one of the most important and most listened-to computer scientists within the halls of government. He’s become a very vocal proponent of the view that AI safety is a serious issue that needs to be acted on. I haven’t seen his name in one of the Study Sessions of the Politburo, but I would not be at all surprised if he presents at those kinds of levels. We have to infer whether or not those messages are getting from him into the higher levels of the Chinese government, but at least circumstantially we can see some evidence of that.

The only full study session of the Politburo dedicated just to AI, back in 2018, was presented by Gao Wen (高文), another relatively elite scientist who has also been talking more about AI safety and alignment recently. Peking and Tsinghua University professors have definitely been consulted on the regulations. There are also the heads of state-sponsored AI labs: Zhang Hongjiang (张宏江) of the Beijing Academy of Artificial Intelligence and Zhou Bowen of the Shanghai AI Lab.

AC: Watching Chinese reactions to the EU’s new AI Act has been interesting. An article from China’s AI Industry Alliance (AIIA) has one consultant talking about how Chinese laws will be “small incision” laws (小切口法) versus the “horizontal legislation” (横向立法) of the EU’s Act. How do the EU and China differ in the way they create AI legislation?

MS: I think how the regulatory architecture is built out is one of the biggest differences between China and the EU. From the start, the EU went for one large horizontal regulation that’s supposed to cover the vast majority of AI systems all in one go, and within that you just tier them by risk. The Chinese approach has been not to have a comprehensive law, but to attack it quickly at the departmental level with targeted, application-specific regulations: for recommendation algorithms [in effect since March 2022], deep synthesis [in effect since January 2023], and the generative AI interim measures. They didn’t start from the technology; they started from a problem as they saw it. They thought recommendation algorithms were threatening the party’s ability to dictate the news agenda, while deepfakes and generative AI would threaten social stability. So they worked backwards from the problem to create regulations specifically targeted toward it.

AC: China’s own AI Law has been in unofficial draft form since August 2023. Do you think this is something the leadership is still pursuing?

MS: A move towards a national AI law would be more aligned with the EU, but it’s unclear if they’re going to end up pursuing that. Policymakers often say their approach to AI should be “small, fast, flexible” (小快灵). They’re worried about some long process that ends up being immediately out of date. The EU did a lot of work on their AI Act, but then in the final stages ChatGPT came out, and they needed to do some significant rethinking of how to deal with foundation models. China released its deep synthesis regulation five days before ChatGPT came out, and ChatGPT totally shook up what was in that regulation. So they just went right back to the drawing board and, a few months later, had their generative AI regulation.

It’s very up in the air how specific, binding, and constraining the AI law would be if it came out. I think for a little while there was a lot of momentum, and the thinking was that this law would be pushed through very quickly. But it seems like the government has pumped the brakes a bit. They’ve got three different AI regulations already and currently want to be more pro-innovation, so maybe they think they don’t need to roll out an entirely new national law on this right away. That makes sense from a number of perspectives: do you want to codify things in a law when the technology changes quickly? I think it’s most likely we won’t see the AI law text itself for another two years, maybe more, but they could surprise us on this.

Meanwhile, they’re also going to apply targeted standards to different industries, either to help with compliance with current regulations or to quickly integrate AI into different industrial processes. There are compliance standards on how to implement the generative AI regulation, exactly how to test your generative AI systems, and what performance criteria they need to meet in order to be compliant.

AC: We’ve been finding a lot of variance in how closely current regulations are followed, for example on putting digital watermarks on AI-generated videos and on whether models can create deepfakes of politicians. How standardized is the field at the moment, and will there be any pushback against companies that don’t follow standards and regulations?

MS: It seems like classic Chinese regulation, where they throw a lot down on paper, but then selectively enforce it to achieve their ends, and maybe one day have a big crackdown on non-watermarked content. [Note: Since this interview, the CAC has released a draft standard fleshing out the Interim Measures for Generative AI, listing in detail where and how watermarks should be used.] I don’t think a national AI law necessarily solves that. In a lot of ways, I think their management of AI-generated content is going to be like the broader never-ending game of whack-a-mole that governs the CAC’s relationship with big internet platforms, where they constantly go to companies and tell them how to cover an event they’re worried about. The rules require conspicuous watermarks on content that might “mislead the public,” and the point of the watermarks is to make sure that you’re not having people deceived in ways that are criminal or harmful to the party’s interests. There’s going to be a lot of AI-generated content that is not really that harmful, so you don’t actually need visible watermarks on every piece of AI-generated content. Maybe the CAC will choose to selectively enforce this, where if a company allows a misinformation or disinformation video to go viral on the platform, that company will be punished in line with the generative AI regulation’s requirements for not having a watermark. But if you just have online influencers choosing to replicate themselves so they don’t have to record new videos every day, that might not be something the CAC sees as a problem.

Deepfakes would not fall under the AI safety (人工智能安全) concept we’re seeing in policy documents or discussed at the International Dialogues on AI Safety, which is quite specifically referring to AI escaping human control or very large-scale catastrophic risks from the most powerful AI models. So that’s a much more forward-looking and speculative set of concerns the elite level of politicians are worried about, which is pretty different from complying with the generative AI regulations. To them, being able to generate deepfakes from one diffusion model but not another sounds like a compliance issue for the CAC. Maybe one company is either not as good at detecting unacceptable content, or they’re not putting much effort into it.

Matt Sheehan speaking on a panel at WAIC with AI policy scholar Zhang Linghan (张凌寒). Image courtesy of Matt Sheehan and Tsinghua University’s I-AIIG.

AC: We noticed you spoke at a panel for this year’s World AI Conference in Shanghai. What was that like?

MS: WAIC was a massive event. It’s always a big production, but this year was really the return to pre-Covid form, and you could see there was a concerted effort by organizers to use the event as a way of saying “we’re back.” They wanted foreigners to be there. They really wanted to tell the outside world that China is open, China is responsible with AI, and China is leading in some areas.

AC: Were there any moments that stood out for you?

MS: I was talking to some of the entrepreneurs who were impacted by the Interim Measures for Generative AI about how intensively it is still being enforced. Prior to going to WAIC, it looked to me like the CAC had decided to be much more accommodating toward AI companies. They really watered down the text of the generative AI regulation between the draft and the final version to make it more favorable to businesses. That was because they got pushback from other parts of the bureaucracy, which have an eye on the economy and on competing with the US, saying “we cannot sacrifice our AI industry entirely at the altar of censoring what comes out of language models.”

In general, I think the CAC’s in a bit of an awkward position because the zeitgeist has shifted so much from the time of the tech crackdown [late 2020 – late 2023] to a focus on economic development and getting AI companies to thrive. So prior to the WAIC, I expected them to be a little bit more hands-off with AI companies when it comes to implementing the regulation. But being there and talking to folks, I realized that this is all relative. Yes, they’ve eased up a bit from early 2023, but they’re still very, very hands-on, constantly testing the models and re-adjusting, every day, every week, what counts as unacceptable content.

AC: How much hype do you think there is in China’s presentation of its home-grown AI? 

MS: Hype is not China-specific; every AI company in the world has been overhyping their products for the most part. Maybe in China there’s a little bit of an extra layer of showmanship, just because that’s the tendency when it comes to these types of business products. I think in some ways, 2023 was all about catching up with large language models once ChatGPT came out. Everyone was just competing on performance metrics, and making money wasn’t hugely necessary. Now there’s an increasing sense of needing to find a way to sell this service, to create applications that can make money. That’s probably going to be a little bit less grandiose in some ways, but it might actually put the industry on more stable footing going forward. I think the industry in China faces tons of problems with financing. There’s a very good chance we’re in the midst of a big AI bubble (or at least a large model bubble) that’s going to pop, with a lot of the biggest and best AI startups today going bankrupt in a few years. But that’s not unique to China; it might just be more acute in China than in other places.


Alex Colville

Alex has written on Chinese affairs for The Economist, The Financial Times, and The Wire China. He was based in Beijing from 2019 to 2022, where his work as Culture Editor and Staff Writer for The World of Chinese won two SOPA awards. He is still recovering from zero-Covid.

Matt Sheehan

Matt Sheehan is a Fellow at the Carnegie Endowment for International Peace, specializing in China’s AI safety and governance. He lived and worked in China for six years before becoming a fellow at MacroPolo, where he specialized in technology issues in the PRC.