China drafts law regulating 'digital humans' and banning addictive virtual services for children
China is set to introduce a comprehensive legal framework for 'digital humans' and impose restrictions on virtual services that may harm children.
The News
China is set to introduce a comprehensive legal framework for "digital humans" and impose restrictions on virtual services that may harm children [1]. The draft law, currently under review by the National People's Congress (NPC), reflects growing concerns about the societal impact of AI technologies, particularly generative AI and virtual influencers [1]. The legislation targets the creation, deployment, and use of digital humans—including virtual idols, AI avatars, and synthetic media—while imposing strict limits on virtual environments and games that exploit psychological vulnerabilities to keep minors engaged [1]. Specifics remain unclear, but initial reports suggest creators must disclose AI-generated content and obtain user consent for data collection [1]. Virtual service providers could face penalties for designing platforms that intentionally encourage addictive behaviors in children [1]. This follows rapid growth in China’s digital human market, driven by advancements in generative AI and a developing metaverse ecosystem [1].
The Context
China’s regulatory intervention stems from a mix of technological progress, economic priorities, and social stability concerns [1]. The rise of digital humans has accelerated due to a large consumer base embracing novel entertainment formats and a regulatory environment that previously encouraged innovation [1]. These entities, often indistinguishable from real people, are now used in entertainment, advertising, education, and customer service [1]. Their development relies on generative adversarial networks (GANs) and transformer models, which enable realistic avatars with synthetic voices and movements [1]. However, the unchecked spread of these technologies, combined with increasingly sophisticated virtual environments, has raised alarms about their potential to manipulate users, especially children [1].
The focus on addictive virtual services aligns with global scrutiny of the psychological impact of digital platforms [2]. Intuit’s AI agents, which reached an 85% repeat usage rate by pairing AI with human expertise [2], illustrate how trust and perceived value drive adoption. Marianna Tessel, Intuit’s EVP and GM, described a “massive ask” from customers to combine AI with human oversight, underscoring that deploying AI is not enough; responsible use is critical [2]. This market-driven path contrasts with China’s preemptive regulatory approach, which prioritizes harm mitigation over unconstrained innovation [1]. The legislation also arrives amid global AI ethics debates and the troubled U.S. AI data center buildout [3, 4]. Tariffs on Chinese imports, intended to shield U.S. industries, have instead created supply chain bottlenecks for data center components and slowed the Trump administration’s initiative [4], underscoring the interconnectedness of global technology supply chains and the risks protectionist policies pose to domestic AI infrastructure [4].
Why It Matters
The draft law’s implications span developers, enterprises, and the broader AI ecosystem in China [1]. For AI engineers specializing in digital humans, the legislation introduces technical and legal complexities [1]. Compliance will require transparency mechanisms—clearly labeling AI-generated content—and robust consent systems for data collection [1]. These measures may increase development costs and slow innovation in the short term [1]. The need to disclose AI generation also poses technical challenges, as deepfakes and synthetic media become harder to detect [1]. Developing reliable detection tools will be essential for enforcing transparency [1].
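The draft law’s technical requirements have not been published, but a disclosure mechanism of the kind described above could be as simple as a machine-readable label shipped alongside each generated asset. The sketch below is purely illustrative (all names, such as `make_disclosure_label` and the model identifier, are hypothetical, and real compliance would follow whatever format regulators eventually specify); it shows one way to tie a disclosure record to the exact bytes of a piece of synthetic media using only the Python standard library:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_disclosure_label(content: bytes, generator: str) -> dict:
    """Build a machine-readable disclosure record for AI-generated media.

    The SHA-256 hash binds the label to the exact published bytes, so a
    verifier can detect if the labeled content was later altered; the
    remaining fields state how and when the asset was produced.
    """
    return {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: label a synthetic avatar image before publishing it.
fake_image = b"\x89PNG...synthetic avatar bytes..."
label = make_disclosure_label(fake_image, generator="example-avatar-model-v1")
sidecar = json.dumps(label, indent=2)  # sidecar file shipped with the media
```

In practice, industry efforts such as C2PA-style provenance manifests go further by cryptographically signing such records; the unsigned sketch above only conveys the basic shape of a transparency label.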
Enterprises and startups in digital human and virtual entertainment sectors face significant business model shifts [1]. Companies relying on addictive game mechanics or virtual environments for engagement will need to rethink strategies [1]. This could drive a shift toward educational or utility-focused applications of digital humans, rather than purely entertainment-driven ones [1]. Compliance costs—including legal fees, technology upgrades, and potential fines—will also strain smaller startups [1]. Conversely, companies prioritizing ethical AI development and user well-being may gain a competitive edge [1]. Intuit’s 85% repeat usage rate, achieved by keeping humans involved alongside AI [2], suggests that trust and value creation can yield sustainable business models [2]. The law thus creates a clear divide between “winners”—those adapting responsibly—and “losers”—those unable or unwilling to comply [1].
The Bigger Picture
China’s regulatory move aligns with global AI scrutiny but adopts a more proactive, interventionist approach than seen in most Western nations [1, 3]. While the EU codifies AI rules through frameworks like the EU AI Act and the U.S. continues to debate a federal framework, China’s approach is immediate and prescriptive [1, 3]. The U.S. struggles with its AI data center buildout, largely because tariffs have disrupted supply chains [4], highlighting the difficulty of fostering a domestic AI ecosystem under protectionist policies [4]. This contrasts with China’s historical emphasis on government-led investment in key technologies [1]. The law signals a potential shift toward prioritizing social stability and ethical considerations over laissez-faire innovation [1]. This could influence global AI regulation, shaping ethical guidelines for the industry [1]. The focus on digital human regulation is notable, as virtual influencers and AI avatars gain global popularity [1]. The law’s impact on the global market remains to be seen, but it may drive the development of industry-wide ethical standards [1]. Success will depend on balancing regulatory oversight with fostering innovation [1].
Daily Neural Digest Analysis
The mainstream narrative often frames China’s AI policies as solely driven by authoritarian control, overlooking genuine concerns about AI’s potential to exacerbate societal inequalities and psychological vulnerabilities [1]. While the government’s motives are complex, the law’s focus on protecting children from addictive virtual environments represents a legitimate effort to address a growing social issue [1]. The technical risk lies not only in enforcing the law—requiring advanced AI detection tools to identify non-compliant content—but also in potential unintended consequences [1]. Overly strict regulations could stifle innovation, driving digital human development underground and complicating oversight [1]. The U.S. experience with its AI data center initiative, hampered by protectionist policies [4], serves as a cautionary tale about the importance of fostering an open, collaborative AI ecosystem [4]. A critical question remains: Can China balance AI regulation to protect citizens while sustaining technological advancement? The answer will shape AI’s future, both in China and globally [1].
References
[1] Editorial_board — Original article — https://reddit.com/r/artificial/comments/1seqb6n/china_drafts_law_regulating_digital_humans_and/
[2] VentureBeat — Intuit's AI agents hit 85% repeat usage. The secret was keeping humans involved — https://venturebeat.com/orchestration/intuits-ai-agents-hit-85-repeat-usage-the-secret-was-keeping-humans-involved
[3] MIT Tech Review — The Download: AI’s impact on jobs, and data centres in space — https://www.technologyreview.com/2026/04/07/1135208/the-download-ai-impact-jobs-data-centres-space/
[4] Ars Technica — Trump ignores biggest reasons his AI data center buildout is failing — https://arstechnica.com/tech-policy/2026/04/sad-trumps-ai-data-center-push-is-failing-blame-his-own-tariffs/