New AI safety governance framework unveiled
The 2.0 version of the Artificial Intelligence Safety Governance Framework was released on Monday at the main forum of the 2025 Cybersecurity Week in Kunming, Yunnan province.
The framework was jointly developed by the National Computer Network Emergency Response Technical Team/Coordination Center of China (known as CNCERT/CC) and AI professional institutions, research institutes and industry enterprises, following the release of the 1.0 version in 2024.
According to a news release from the Cyberspace Administration of China, the 2.0 version builds on the first edition by integrating developments in AI technology and application practices, tracking risk changes, refining risk classifications and updating preventive measures.
The news release quoted an official from CNCERT/CC as saying the new version aligns with global AI development trends, balancing technological innovation with governance practices and deepening consensus on AI safety, ethics and governance.
The framework promotes the formation of a safe, trustworthy and controllable AI development ecosystem, establishing a collaborative governance model that spans borders, fields and industries, the official said.
The 2.0 version also supports advancing AI safety governance cooperation under multilateral mechanisms and promotes the inclusive sharing of technological achievements worldwide, according to the news release.