A powerful new video-generative AI model became widely available today, but it comes with a problem: The model appears to censor topics deemed too politically sensitive by the government in its native China.
Developed by Beijing-based company Kuaishou, the model, “Kling,” launched earlier this year on a waiting list for users with a Chinese phone number. Today, it's available to anyone willing to provide an email address. After signing up, users enter a prompt and the model generates a five-second video of whatever they describe.
Kling works about as advertised: the 720p videos, which take a minute or two to generate, don't stray far from their prompts, and the model simulates physical phenomena like rustling leaves and flowing water roughly on par with video-generation models such as AI startup Runway's Gen-3 and OpenAI's Sora.
But Kling won't generate clips about certain subjects at all: prompts like “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” yield only a vague error message.
Image credit: Kuaishou
The filtering appears to occur only at the prompt level: Kling supports animating still images, and will generate a video of, say, a portrait of Xi Jinping without complaint, as long as the accompanying prompt doesn't mention Xi by name (e.g., “This man giving a speech”).
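To make the distinction concrete, here is a minimal, hypothetical sketch of what prompt-level filtering looks like in principle: only the text prompt is screened against a blocklist, and the uploaded image is never inspected. The blocklist terms, function names, and error message below are assumptions for illustration and do not reflect Kuaishou's actual implementation.

```python
# Hypothetical sketch of prompt-level blocklist filtering (not Kuaishou's actual code).
# The point: only the text prompt is screened; an uploaded source image is never
# inspected, which matches the behavior described above.
from typing import Optional

BLOCKED_TERMS = {"xi jinping", "tiananmen", "democracy in china"}  # illustrative terms only


def prompt_is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any blocklisted term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def generate_video(prompt: str, source_image: Optional[bytes] = None) -> str:
    if prompt_is_blocked(prompt):
        # Mirrors the observed behavior: a vague, uninformative error.
        return "Error: unable to process this request."
    # The image itself is never checked, so a portrait of a blocked figure passes
    # as long as the accompanying text avoids the blocked terms.
    return f"[5-second clip generated for prompt: {prompt!r}]"


print(generate_video("Chinese President Xi Jinping walking down the street"))  # blocked
print(generate_video("This man giving a speech", source_image=b"portrait"))    # allowed
```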
We have reached out to Kuaishou for comment.
Image credit: Kuaishou
Kling's odd behavior is likely the result of intense political pressure from the Chinese government on generative AI projects in the region.
Earlier this month, the Financial Times reported that China's main internet regulator, the Cyberspace Administration of China (CAC), will test the country's AI models to ensure their responses to sensitive topics embody “core socialist values.” According to the FT, the models will be benchmarked by CAC officials on their responses to a range of questions, many of which involve criticism of Xi Jinping and the Communist Party.
The CAC has reportedly gone so far as to propose a blacklist of sources that cannot be used to train AI models, and companies submitting models for review must prepare tens of thousands of questions designed to test whether the models produce “safe” answers.
As a result, Chinese AI systems are reluctant to engage with topics that might anger regulators: last year, the BBC found that Ernie, Chinese company Baidu's flagship AI chatbot, dodged or deflected politically sensitive questions such as “Is Xinjiang a good place?” and “Is Tibet a good place?”
These strict policies threaten to slow China's AI progress: not only do they require scrubbing training data of politically sensitive information, but they also demand enormous amounts of development time spent building ideological guardrails that, as Kling shows, may still fail.
From a user's perspective, China's AI regulations are already creating two classes of models: those hamstrung by intensive filtering and those decidedly less so. Is that really a good thing for the broader AI ecosystem?