Frustration over poverty in rural China. A news report about a corrupt Communist Party member. A cry for help about corrupt cops shaking down entrepreneurs.
These are just a few of the 133,000 examples fed into a sophisticated large language model designed to automatically flag content considered sensitive by the Chinese government.
The leaked database, seen by TechCrunch, reveals that China has developed an AI system that supercharges its already formidable censorship machine.
The system appears to be aimed primarily at censoring Chinese citizens online, but it could be used for other purposes, such as further improving the already extensive censorship of Chinese AI models.
Photo caption: The Chinese flag behind razor wire at a housing compound in Yangisar, south of Kashgar, in China's Xinjiang region, June 4, 2019.
Xiao Qiang, a researcher at UC Berkeley who studies Chinese censorship and examined the dataset, told TechCrunch that it is “clear evidence” that the Chinese government or its affiliates want to use LLMs to improve control.
“Unlike traditional censorship mechanisms that rely on human labor for keyword-based filtering and manual review, LLMs trained on such instructions will significantly improve the efficiency and granularity of state-driven information management,” Qiang told TechCrunch.
The dataset adds to growing evidence that authoritarian regimes are quickly adopting the latest AI technology. In February, for example, OpenAI said it caught multiple Chinese entities using LLMs to track anti-government posts and smear Chinese dissidents.
The Chinese embassy in Washington, D.C., told TechCrunch in a statement that it opposes “unfounded attacks and slander on China” and that China attaches great importance to developing ethical AI.
Data found in plain sight
The dataset was discovered by security researcher NetAskari, who shared a sample with TechCrunch after finding it stored in an unsecured Elasticsearch database hosted on a Baidu server.
This does not indicate any involvement by either company; organizations of all kinds store their data with such providers.
There is also no indication of who, exactly, built the dataset, but records show the data is recent, with the latest entries dating from December 2024.
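For context, an Elasticsearch index left exposed in this way can typically be read by anyone who finds its address, with no credentials required. The sketch below is purely illustrative: the host, index name, and field names are hypothetical placeholders, not details from the actual leak.

```python
import json
import requests  # plain HTTP client; Elasticsearch exposes a REST API

# Hypothetical address and index name for illustration only; the real host,
# index, and document fields from the leak are not reproduced here.
HOST = "http://example-exposed-host:9200"
INDEX = "content_moderation"

# An unsecured cluster answers unauthenticated requests like any other REST API.
# This asks for the 5 most recent documents, sorted by a hypothetical timestamp field.
query = {
    "size": 5,
    "sort": [{"created_at": {"order": "desc"}}],
    "query": {"match_all": {}},
}

resp = requests.get(f"{HOST}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    # Print a short preview of each stored document.
    print(json.dumps(hit["_source"], ensure_ascii=False)[:200])
```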
An LLM for detecting dissent
In language eerily reminiscent of how people prompt ChatGPT, the system's creators task an unnamed LLM with figuring out whether a piece of content has anything to do with sensitive topics related to politics, social life, and the military. Such content is deemed “highest priority” and must be flagged immediately.
Top-priority topics include pollution and food safety scandals, financial fraud, and labor disputes. These are hot-button issues in China that sometimes lead to public protests.
Any form of “political satire” is explicitly targeted. For example, if someone uses a historical analogy to make a point about “current politicians,” that must be flagged immediately, and so must anything related to “Taiwanese politics.” Military matters are extensively targeted, including reports of military movements, exercises, and weaponry.
A snippet of the dataset can be seen below. The code inside it references prompt tokens and LLMs, confirming the system uses an AI model to do its bidding.
Image credits: Charles Rollet
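To make the mechanism more concrete, here is a minimal sketch of how prompt-driven flagging along these lines could work in principle. The prompt wording, the priority label, and the call_llm helper are illustrative assumptions based on TechCrunch's description of the dataset; none of it is taken from the leaked system.

```python
# Illustrative sketch only: the prompt text, labels, and call_llm() are hypothetical,
# modeled on the reporting above rather than copied from the leaked data.

PROMPT_TEMPLATE = """You are a content reviewer. Decide whether the post below
touches on sensitive topics related to politics, social life, or the military
(for example: pollution or food safety scandals, financial fraud, labor disputes,
political satire or historical analogies about current politicians,
Taiwanese politics, or military movements, exercises, and weaponry).

Answer with exactly one label:
- HIGHEST_PRIORITY  (flag immediately)
- NORMAL            (no sensitive topic found)

Post:
{post}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM backend such a system would use.
    In a real pipeline this would hit a hosted or local model endpoint."""
    raise NotImplementedError("hypothetical backend, not implemented here")

def flag_post(post: str) -> bool:
    """Return True if the model labels the post HIGHEST_PRIORITY."""
    label = call_llm(PROMPT_TEMPLATE.format(post=post)).strip()
    return label == "HIGHEST_PRIORITY"
```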
Inside the training data
From this vast collection of 133,000 examples that the LLM must evaluate for censorship, TechCrunch gathered 10 representative pieces of content.
Topics likely to stir up social unrest are a recurring theme. One snippet, for example, is a post by a business owner complaining about corrupt local police officers shaking down entrepreneurs, a growing problem in China as its economy struggles.
Another piece of content laments rural poverty in China, describing a town where only the elderly and children remain. There is also a news report about the Chinese Communist Party (CCP) expelling a local official for corruption and for believing in “superstition” instead of Marxism.
There is extensive material related to Taiwan and military matters, including commentary on Taiwan's military capabilities and details about a new Chinese fighter jet. The Chinese word for Taiwan (台湾) alone is mentioned more than 15,000 times in the data, a TechCrunch search shows.
Subtle dissent appears to be targeted, too. One snippet included in the database is an anecdote about the fleeting nature of power that uses the popular Chinese idiom “when the tree falls, the monkeys scatter.”
Power transitions are a particularly sensitive topic in China thanks to its authoritarian political system.
Built for “public opinion work”
The dataset does not include any information about its creators. But it does say that it is intended for “public opinion work,” which offers a strong clue that it is meant to serve Chinese government goals, one expert told TechCrunch.
Michael Caster, Asia program manager at the rights organization Article 19, explained that “public opinion work” is overseen by the Cyberspace Administration of China (CAC), a powerful Chinese government regulator, and typically refers to censorship and propaganda efforts.
The ultimate goal is to ensure that the Chinese government's narratives are protected online, while alternative views are purged. Chinese President Xi Jinping has himself described the internet as the “frontline” of the CCP's “public opinion work.”
Repression is getting smarter
The dataset investigated by TechCrunch is the latest evidence that authoritarian governments are trying to harness AI for repressive purposes.
Last month, OpenAI released a report revealing that an unidentified actor, likely operating from China, used generative AI to surveil social media conversations, particularly those advocating for human rights protests against China, and forward them to the Chinese government.
Do you know more about how AI is used in state oppression? You can contact Charles Rollet securely on Signal at charlesrollet.12.
OpenAI also found the technology being used to generate comments highly critical of the prominent Chinese dissident Cai Xia.
Traditionally, China's methods of censorship rely on more basic algorithms that automatically block content that mentions blacklisted terms, such as “Tiananmen Massacre” and “Xi Jinping,” as many users have experienced using DeepSeek for the first time.
However, newer AI technologies like LLMs can make censorship more efficient by finding even subtle criticism at vast scale. Some AI systems can also keep improving as they are fed more and more data.
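To illustrate the gap between the two approaches, below is a minimal sketch of the traditional keyword-blacklist method described above; the term list is a tiny illustrative sample rather than any real blocklist. A filter like this catches literal matches but misses the oblique, idiomatic criticism the leaked dataset targets, which is exactly where an LLM-based classifier differs.

```python
# Minimal sketch of traditional keyword-based censorship. The blacklist below is
# a tiny illustrative sample, not an actual blocklist used by any real system.
BLACKLIST = ["Tiananmen massacre", "Xi Jinping"]

def keyword_block(post: str) -> bool:
    """Block a post only if it literally contains a blacklisted term."""
    lowered = post.lower()
    return any(term.lower() in lowered for term in BLACKLIST)

# Exact matches are caught...
print(keyword_block("Remembering the Tiananmen massacre"))  # True

# ...but indirect criticism slips through, e.g. an idiom about fading power
# ("when the tree falls, the monkeys scatter") contains no blacklisted term.
print(keyword_block("When the tree falls, the monkeys scatter."))  # False
```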
“I think it's important to emphasize how AI-driven censorship is evolving, especially at a time when Chinese AI models such as DeepSeek are making state control over public discourse even more sophisticated,” Xiao, the Berkeley researcher, told TechCrunch.