
China's AI content-labelling rules tame risks

(Xinhua) 13:07, February 10, 2026

BEIJING, Feb. 10 (Xinhua) -- When Wang Hao, 43, a B&B owner in Lanzhou City, northwest China, posted a short video online last August celebrating his "new Maybach," congratulations poured in. So did phone calls from "old friends" asking for loans.

There was just one problem. The luxury car existed only on screen. The clip was generated by artificial intelligence (AI), but nothing on the screen said so.

"If I had posted it a month later," said Wang, who actually owns a van worth less than one-tenth of a Maybach. "People wouldn't have believed it." By then, China's new rules would have required the video to declare its artificial origins.

On Sept. 1, 2025, China ushered in a regulatory first in this field. Under the measures for labelling AI-generated content, material generated by AI and published online must carry both visible labels for audiences and invisible metadata for tracing responsibility.
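As a rough illustration of this two-layer scheme, the hypothetical Python sketch below uses the Pillow imaging library to stamp a visible caption onto a generated image and to embed an implicit, machine-readable record in the file's metadata. The file names, field names and JSON payload are illustrative assumptions, not the official specification.

```python
# A minimal sketch of dual labelling: a visible caption for viewers
# plus an invisible metadata record for traceability.
# Assumes the Pillow library; field names and values are illustrative,
# not the official Chinese labelling specification.
import json
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png").convert("RGB")

# Explicit label: text rendered directly onto the image for the audience.
draw = ImageDraw.Draw(img)
draw.text((10, 10), "AI-generated", fill="white")

# Implicit label: a machine-readable record stored in a PNG text chunk.
record = {"Label": "AIGC", "Producer": "example-tool", "ContentID": "abc123"}
meta = PngInfo()
meta.add_text("AIGC-Label", json.dumps(record))

img.save("labelled.png", pnginfo=meta)
```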

The regulation arrived just in time, as China's AI user base was expanding rapidly and authorities were calling for tighter oversight. The number of generative AI users in China reached 515 million as of June 2025, up 266 million from December 2024 and more than doubling in just six months, according to data from the China Internet Network Information Center.

A VISIBLE FIX

The logic behind the rules is straightforward. As generative AI floods social media, regulators worry that fabricated images, videos and voices could mislead the public and fuel fraud. Labelling restores transparency without stifling innovation.

China's social media platforms have responded quickly. Short-video apps such as Douyin, the Chinese version of TikTok, and Kuaishou now prompt users to declare whether content is AI-generated, while audio-sharing platforms like Ximalaya add spoken disclaimers and text warnings.

Four months after the rules took effect, major AI content-generation platforms, including Doubao, DeepSeek, Qwen and Yiyan, have attached AI labels to more than 150 billion pieces of generated content, spanning text, images, audio and video. Meanwhile, leading social media platforms have applied prominent on-screen disclosures to more than 220 million items of AI-generated content.

According to a research team at Xi'an Jiaotong University, users' scepticism toward unfamiliar content has risen by nearly 40 percent since the rules took effect.

Moreover, because implicit labelling allows regulators to quickly identify both the tools used to generate content and the nodes through which it spreads, accountability has become markedly swifter.
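A hypothetical counterpart to the earlier sketch shows how such tracing might work in the simplest case: reading back the hidden record to identify the tool that produced a file. Again, the field names are assumptions rather than the published standard.

```python
# A minimal sketch of tracing: read the implicit label written in the
# previous example to identify the generating tool. Field names are
# illustrative assumptions, not the official specification.
import json
from PIL import Image

img = Image.open("labelled.png")
raw = img.text.get("AIGC-Label")  # PNG text chunks are exposed as a dict

if raw:
    record = json.loads(raw)
    print("Generated by:", record.get("Producer"))
    print("Content ID:", record.get("ContentID"))
else:
    print("No implicit label found; the metadata may have been stripped.")
```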

In one cross-border investigation into AI-generated fake news, the time required to trace responsibility was cut from an average of 72 hours to just 12, the research team said.

"Introduction of AI content labelling rules has addressed long-standing industry pain points," said Wang Xuelian, general manager of an information technology company based in Gansu Province. "Under policy guidance, progress is visible, pointing to a more orderly and positive development trajectory for the sector."

NEW CHALLENGES

As visible labels spread, so do attempts to erase them. A search by reporters across major e-commerce platforms and social-media sites for phrases such as "AI mark removal" reveals a burgeoning grey market.

From basic tools costing just 9.9 yuan (about 1.4 U.S. dollars) to bespoke services priced at thousands of yuan, an openly advertised business has emerged around evading AI-generated content labelling.

More worrying is the sophistication. According to technical experts, evasion has evolved from simple cropping into a layered process involving metadata cleansing, repeated file-format conversions and cross-platform reposting. Content flagged on one platform may pass unnoticed on another.

"Because platforms apply different standards and possess varying levels of AI-detection capability, content that is required to carry an AI label on one platform may evade scrutiny on another after a simple change in format," said Shen Yulin, deputy director of the Gansu provincial computing center.

Shen added that anti-labelling techniques are far more than isolated tricks. "The result may be an escalating arsenal of tactics, fueling a new generation of increasingly sophisticated AI misuse," Shen said.

Experts and observers also say that penalties for violating the labelling rules have yet to be clarified and that the labels themselves have yet to be standardised.

Jiang Yanshuang, an assistant research fellow at the China Institute of Education and Social Development at Beijing Normal University, argues that regulatory technologies on most platforms remain fragile.

Jiang suggests accelerating the standardisation of AI-labelling technologies, with clearer technical specifications tailored to different platforms and content types, to prevent regulatory blind spots created by technical inconsistencies.

"Only through multiple-layer defence and coordinated action," Jiang said, "can we steer AI away from unchecked expansion and towards becoming a genuine enabling tool for the wider economy."

(Web editor: Zhang Kaiwei, Liang Jun)
