
Reading between the lines, a dry little patent document released by the Fujian Police Academy in December last year offers a small window onto the future of authoritarianism.
The academy, which operates directly under the Fujian provincial government and conducts research to improve public security mechanisms, proposes a new method for detecting abnormal gatherings of people that could escalate into “potential mass incidents” (潜在群体性事件), an oft-used bureaucratic euphemism for collective protests, riots, demonstrations, strikes, and other forms of organized public unrest. The academy’s method feeds data from sound sensors, cameras, and official reports into an AI system that flags an incident as soon as it starts to develop, giving the police advance warning. If the system overlooks an incident, the video footage and recordings it missed are reviewed to improve detection in future. This is machine learning in the service of AI-based surveillance.
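The patent publishes no code, but the loop it describes is simple: fuse several surveillance signals into a score, alert when the score crosses a threshold, and queue missed incidents for review. Purely as illustration, and with every function name, weight, and threshold invented here rather than taken from the patent, that loop might be sketched as:

```python
# Illustrative sketch only: a toy version of the detect-and-retrain loop the
# patent describes. All names, weights, and thresholds are assumptions.

def crowd_score(sound_level, camera_count, report_count):
    """Fuse three hypothetical normalized signals (0.0-1.0) into one score."""
    return 0.4 * sound_level + 0.4 * camera_count + 0.2 * report_count

def flag_incident(signals, threshold=0.7):
    """Alert as soon as the fused score crosses the threshold."""
    return crowd_score(**signals) >= threshold

def review_missed(actual_incidents, flagged_incidents):
    """Feedback step: incidents the system overlooked go into a review queue,
    whose footage would be used to retune the weights or threshold."""
    return [i for i in actual_incidents if i not in flagged_incidents]
```

The interesting design choice is the last function: the system's failures become its training data, which is what makes this machine learning rather than a fixed alarm.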
This patent is just the tip of the iceberg. Throughout the past year, institutions across China, both private and state-owned, have proposed variations of the same system: taking big data from China’s extensive surveillance system — including input from street cameras and satellites, noise sensors, social media posts, as well as reports from social services — and feeding it into AI models to aid predictive policing. This is part of the government’s vision of a fusion of human and machine response, making for a more robust domestic security system.
The trend does not bode well for the most vulnerable sections of Chinese society.
Hue and CrAI
In 2024, Premier Li Qiang introduced the country’s flagship domestic AI policy (“the AI+ initiative”), aiming to expand AI use in every sector of the economy and society. His Government Work Report noted that AI could swiftly modernize “social governance” (社会治理), a broad official concept encompassing the mechanisms the state uses to monitor, manage, and contain social unrest. Since the start of 2025, multiple Chinese institutions have pursued AI systems that serve this purpose, many capitalizing on information sourced by China’s “grid workers” (网格员), typically paid community-level workers who monitor assigned neighborhood grids and report information and incidents to local authorities in real time through a dedicated app.

A variety of companies are working out how to empower this system through AI. Huawei, for example, has filed a patent for a neural network that pinpoints the exact locations of photographs taken and uploaded by grid workers, and can even turn the scenes depicted in the photos into 3D models. A research unit under the Jiangxi provincial government has laid out an AI-driven vision of urban management in which incidents are predicted from data uploaded by grid workers on portable “smart terminals.”
Using AI to improve the information flows between grid workers and government reflects Xi’s vision of enlisting ordinary citizens in grassroots stability maintenance, part of the concept of the “Fengqiao Experience” — a Maoist-era model of grassroots conflict resolution that Xi has actively revived. In August 2025, the State Council stated that the “AI+ initiative” would include building a “pluralistic co-governance” security system, in which AI and humans work together for a stronger national security system, including through “early warning systems.” Rather than representing a technological break with the past, AI in this context may serve primarily to entrench governance ideas that are six decades old.
While some institutions are making use of Chinese AI models for these projects, Western ones are also being considered. In August 2025 Guizhou Normal University suggested using OpenAI’s GPT models as a “core reasoning tool” in a system to predict “social governance incidents” based on reports of an individual’s “personality traits,” “long-term emotional states” or “degree of exposure to negative cultural influences.” The patent does not specify how data on “negative cultural influences” would be collected, though any such system would depend on extensive pre-existing surveillance infrastructure. While OpenAI has banned individual Chinese users from accessing its products since 2024, businesses in China can still access OpenAI models through Microsoft Azure.
Open-source models are another option. A private company in Shenzhen has proposed using a model from Meta’s Llama family to monitor social media for “negative sentiment” as part of a tool to detect urban safety risks. Llama is open-source, allowing anyone to download the model for free. The patent cites monitoring natural disasters and urban infrastructure as the primary use case, but the system’s architecture would be equally applicable to monitoring political unrest. However, the increasing capability of home-grown Chinese models, along with the risk of data leaks that comes with entrusting information to Western AI models, makes a local model the more likely choice: the patents contain multiple references to DeepSeek, Baidu’s Ernie models, and iFlytek’s Spark models.
Gridlocked
How would these inventions impact society? The systems described in these patents would likely fall hardest on the most vulnerable members of Chinese society. The algorithms are built around catch-all risk categories commonly associated with violent or disorderly behavior, with little apparent regard for individual circumstances. Guizhou’s criteria for assessing an individual’s danger level include a “criminal record, drug abuse record, serious mental illness,” as well as tense relationships with family members. It is not clear how the algorithm would make allowances for, say, those whose criminal record consists of minor offences rather than major ones, or whose family relationships are tense because they live with abusive parents or spouses.
These systems also offer authorities a chance to exert greater control over a process that has persistently caused them trouble: petitioning. The Southwestern University of Political Science and Law in Chongqing has created a risk monitoring system specifically targeting petitioners, individuals seeking redress for a wrong done to them by a local cadre or a peer. Petitioners are frequently driven to increasingly desperate acts after years spent navigating a grievance system that rarely produces results — a dynamic that authorities have long treated as a public order problem rather than a governance failure.
The invention would see sensors and cameras placed in spaces where citizens meet officials, alerting police when noise sensors and facial recognition software detect heightened emotion. But the algorithm is also programmed to take “Life Observations” into account. Subjects are considered high risk if they have spread inflammatory comments on social media more than three times in one month, have lacked steady employment for over a year, have no social security, are homeless, or are reported as “not going out [of the house] for a long time (≥ 7 days).”
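Stripped of its sensors, the “Life Observations” component reduces to blunt threshold rules. Purely as illustration (the field names and the any-single-flag logic below are assumptions, not details from the patent), those criteria might be encoded like this:

```python
# Illustrative sketch only: the patent's "Life Observations" criteria
# rendered as simple if-then rules. All field names are invented.

def life_observation_flags(person):
    """Return the list of risk criteria a person record trips."""
    flags = []
    if person.get("inflammatory_posts_per_month", 0) > 3:
        flags.append("inflammatory social media comments")
    if person.get("months_unemployed", 0) >= 12:
        flags.append("no steady employment for over a year")
    if not person.get("has_social_security", True):
        flags.append("no social security")
    if person.get("homeless", False):
        flags.append("homeless")
    if person.get("days_without_going_out", 0) >= 7:
        flags.append("not going out for a long time")
    return flags

def is_high_risk(person):
    # A single tripped flag suffices -- note that nothing in these rules
    # can represent individual circumstances or context.
    return len(life_observation_flags(person)) > 0
```

Laid out this way, the problem the article identifies is visible in the code itself: unemployment, homelessness, and staying indoors are treated as interchangeable proxies for danger.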
Taken together, these patents sketch an emerging architecture for how AI is being enlisted to strengthen China’s domestic security systems. Whether all of these systems will ever reach full deployment remains an open question. What is clear is that AI is being systematically integrated into China’s grassroots surveillance infrastructure.
Patently Surveillance
Further AI-related Digital Governance & Monitoring Systems Patents
Henan Songshan Laboratory
Sichuan University
Zhejiang Provincial Post & Telecom
Inspur Software Technology
Junzhuo Technology Group
Fujian Jieyun Software