China regulates AI human-like interaction services to protect minors
BEIJING, April 10 (Xinhua) -- China on Friday issued new rules regulating providers of AI systems that simulate human personality traits, thinking patterns and communication styles in continuous emotional interactions with users, placing strict safeguards on content for minors.
The interim measures, jointly released by the Cyberspace Administration of China along with four other government departments, aim to balance technological innovation with safety and public interest.
Under the new regulations, such services cannot generate content for minors that could encourage unsafe behavior, trigger extreme emotional responses or promote harmful habits that may affect their physical or mental well-being.
The rules also ban AI systems from producing content that encourages self-harm or suicide, uses verbal abuse, or induces emotional dependency that could distort users' real-life social relationships.
Authorities further prohibit the use of emotional manipulation to induce users to make irrational decisions or to infringe upon their legitimate rights and interests.
The framework comes as human-like AI interaction tools expand rapidly in China, with applications emerging in cultural communication, childcare and elderly companionship.
The rules emphasize a "development with security" approach, combining encouragement of innovation with tiered supervision, with the aim of guiding the sector toward "healthy and responsible" growth.
The rules will take effect on July 15, 2026.
Copyright © 2026 People's Daily Online. All Rights Reserved.