With nearly constant notices these days of new rules, draft rules, and clean-up campaigns, these are busy times for the Cyberspace Administration of China (CAC), the country’s top internet and information control body.
In the most recent development, the Financial Times reported Tuesday that the CAC will require licenses for the release of generative artificial intelligence systems. This should not come as a surprise. As CMP wrote back in April, following the release of draft rules for AI applications that emphasized “socialist core values,” the CCP has long prepared for the harnessing of AI developments. Just as was the case for traditional media outlets as they commercialized and developed in the 1990s, licensing is an obvious and crucial line of control for generative AI.
Released on Monday, the CAC’s list of 13 new rules to strengthen oversight of so-called “self-media” or “we-media” (自媒体), individual user accounts on social media platforms like WeChat that publish self-produced content, is more of a mixed bag.
It might be easiest to think of this latest round of regulations as a set of upgrades to the existing regime of controls — like a set of patches or fixes. These fixes are both regulatory and political in nature. While some could be applied to curtail conduct in cyberspace that is truly harmful to the public, many work solely to enforce Party dominance of information.
Let’s go quickly through a number of the regulations.
The Power of Impersonation
The first of the 13 regulations concerns “imitative and counterfeit behavior” (假冒仿冒行为). It deals specifically with the problem of accounts that resemble or mimic official accounts operated by CCP organs, government agencies, official state media, and military offices. This type of behavior was specifically mentioned in the draft rules earlier this year, with the CAC announcing at the time that it had shut down more than 100,000 fake accounts impersonating Party-state media and news anchors alone.
As we noted in “Fake News on the Front Line,” the astonishing extent of such fake official accounts, which probably just scratches the surface, can best be explained as an unintended result of CCP controls on information. As we wrote, “association with the institutions of power is really the only way to ensure that what one says, however false and self-serving, will be heard, perpetuated — and protected.”
The new rules seek to curb this behavior by mandating “manual review” (人工审核) of any registered accounts on social media platforms that contain apparent references to official organs. It will likely be an uphill battle, given that controls themselves have created strong market demand for the privileges that come through association with official bodies. So long as the CCP and its institutions have a monopoly on expression, accounts seeking to profit and survive will find ways to join in that privilege.
Credentials for Containment
The second regulation offers a prime example of how what might at first seem like regulatory moves in the public’s interest are hobbled by a control mentality. This deals with the “strengthening of credential display” (强化资质认证展示).
Specifically, self-media dealing with areas such as finance, education, healthcare, and the law, which might be regarded as strictly professional fields, are now required to be “strictly verified” (严格核验) by platform providers. This means that their “service qualifications” (服务资质), “professional qualifications” (职业资格), or “professional background” (专业背景) must be verified and clearly specified.
If that sounds fair to you, and you imagine it might offer suitable protection for consumers who deserve reliable information, consider the recent case of 8am HealthInsight (八点健闻), a “think-tank-type media” (智库型媒体) specializing in health coverage that was shut down by the authorities earlier this month for unspecified violations.
Launched in 2019, 8am HealthInsight was long known for its strong coverage of health issues in China, including planned reforms to national healthcare. In late 2022, the account, founded by former journalists working on the health beat, was named one of China’s “ten most innovative” information products. An introduction from Shanghai’s The Paper said the account “aims to provide professional and credible industry information for China’s medical and healthcare community.”
It is probable that 8am HealthInsight was shut down because it fell afoul either of the authorities directly, or of key vested interests in the healthcare industry in China who were unhappy with the account’s reporting. In either case, its closure is a clear loss for China’s public in terms of reliable health-related information and consumer protections.
The CAC’s new “credential display” requirement does not at all guarantee that the public will have access to better information. What it does is remove the filter provided by professional journalism and information services more broadly. This is a typical autocratic move, one that has flourished in the digital era, allowing the party-state to communicate its “authoritative” information directly to the public.
8am HealthInsight and accounts like it will be disqualified because their reporting teams and content creators — however professional they are where it counts for consumers, in the reliability of information — will be unable to pass the strict verification test. The regulations will instead leave the field open only to licensed medical professionals and service providers in the healthcare industry, many of whom serve the government’s interests or their own commercial interests.
Imagine, for example, a self-media account with healthcare service qualifications running misleading information about its treatments or other products. That is not a problem under these regulations, because what matters to the authorities is that the account is credentialed. Credibility and authority are subordinated to this credentialing process. Why? Because those who are credentialed can be trusted politically. That, in any case, is the operating assumption.
Requiring credentials, particularly for these sensitive areas, means those operating self-media accounts will already be within Party-controlled professional regimes. To understand how professional organizations in China work to control and restrain members, not to ensure professionalism, we recommend our recent analysis in “Comedy, Under the Watchful Eye of the State.”
Sources of Contention
Related to this notion that the “authoritative” and credible are ultimately matters of political trust, the third regulation from the CAC this week demands that self-media clearly label sources of information when releasing news related to domestic and international current affairs, public policies, and incidents in society — and that platforms insist on strict labeling standards. This is one point that Caixin Global mentioned prominently in its report on the rules, and this makes sense given the outlet’s own history with official restrictions on sourcing.
In principle, being clear about the sourcing of information is a basic matter of professionalism. It makes sense, right? Knowing where your information comes from should be as essential as knowing where the food you put in your body comes from. But the treatment of this issue historically by the authorities makes it clear that the real priorities lie elsewhere.
When the CAC, for example, issued its updated October 2021 list of officially authorized domestic sources whose content internet news providers may repost and reprint, Caixin was removed from the list despite being generally regarded as one of China’s most professional and reliable journalism outfits. The real question dealt not with Caixin’s authority, but with whether it could be trusted, when necessary, not to report the truth.
The requirement on sourcing is likely to be applied in an expedient manner, with the assumption that official Chinese sources such as Xinhua News Agency and other state and provincial media (all licensed and connected to the system) are “authoritative” sources. If your source is a foreign news outlet such as the Associated Press, you are not going to label that source, because accessing and using it at all is not permitted. You will have to either leave the source unlabeled, and therefore risk falling afoul of the regulation, or avoid posting the information at all.
If platforms strictly enforce this requirement on source labeling, this will strongly favor the use of official sources, which again will achieve the overarching political goal of reinforcing a party-state monopoly on information.
The question of authority again comes to the fore in the next three CAC regulations, which apply foggy content standards whose enforcement will be the prerogative of the content platforms hosting individual accounts. Because their primary interest is in compliance in order to maintain their business positions, we can safely assume that platforms will apply a sledgehammer approach as opposed to a surgical one. Making actual determinations to protect content that might not be a violation would be expensive, after all.
Platforms are required under the fourth regulation to “strengthen truthfulness management” (加强信息真实性管理). What does that mean? They are to crack down on practices such as “making something out of nothing” (无中生有), “interpreting out of context” (断章取义), “distorting the facts” (歪曲事实), and (this has to be a favorite) “patchwork editing” (拼凑剪辑).
The first three of these concepts have routinely been applied in China as sledgehammer attacks on information that is authentic and factual. In the Global Times, for example, it is Western media and the US Congress that have “made something out of nothing” in the case of human rights abuses in Xinjiang. In May last year, the CCP’s flagship People’s Daily attacked the American magazine Foreign Affairs for “distorting the facts” and “interpreting out of context” regarding the status of Taiwan — the terms appearing side-by-side in the report.
In the right environment, in which the principles of professionalism and accuracy are upheld, such terms might point to a real debate about fake news and misinformation. But the CCP’s primary interest, salient in every declaration of media policy, is to uphold what it calls “correct guidance of public opinion,” meaning that information must follow the Party line. We have to assume, therefore, that it is the Party’s standard of the truth that platforms will be pressed to enforce.
Similarly, regulations five and six deal with misleading “disputed information” (争议信息), and with rumors.
The CCP has a long history of using the accusation of “rumor” to target inconvenient truths, when in fact, as the communications scholar Hu Yong (胡泳) wrote more than 12 years ago, controls on information are one of the primary reasons why rumor runs rampant on the internet and social media. “Under the policy of ‘correct guidance of public opinion,’ the traditional media only selectively report major social and political events, and the standards are entirely within their hands,” Hu wrote. “Whatever is regarded as negative, destructive, causing chaos, or smearing is not permitted, and everything that is regarded as positive, constructive, encouraging and praising is openly proclaimed.”
The regulations on “false information” and “rumors” are certain to fall into the same political trap as the stipulations in the fourth regulation, to the detriment of real protections for information consumers.
The vagaries of CAC regulation continue in the seventh item on the list, dealing with the “regulation of account business activity” (规范账号运营行为). This seems precisely the kind of area where a regulator, as opposed to a political actor, might do some good in protecting information consumers. But the language is again hopelessly vague. Platforms must prevent accounts from “gathering negative information” (集纳负面信息), “hyping hot incidents in society” (蹭炒社会热点事件), and “commercializing disasters and accidents” (消费灾难事故).
Any one of those might describe some form of real abuse detrimental to the public. But their real intent is in fact to criminalize any genuine interest the public might have in consuming independent information on news stories and broad issues of social concern. If audiences show a keen interest in a given topic, or if they crave information on a breaking story, why shouldn’t self-media accounts compete to gather information on these “hot incidents”? Why must related reporting or opinion be seen as “negative”?
The answers to these hypothetical questions are surely obvious to our readers. When China’s cyber chieftains muddle the issue, however, it’s important to be absolutely clear: These regulations are hopelessly entangled with strict political controls on information, the overriding objective being to defend the Party’s interests — and this fact subverts any value they might otherwise have as regulations to protect the public interest.