Update to China's AI framework
China introduces key management regimes in governance framework for AI safety
This article was published on Sept. 9, 2024 by MLex. MLex is a news service from LexisNexis with global news and analysis on regulatory obligations.
China’s national standard-setting body has incorporated a tiered and category-based management system for artificial intelligence applications and a traceability management system for AI services in a governance framework aimed at promoting AI safety.
The framework, released by TC 260 on the first day of China Cybersecurity Week from Sept. 9 to 15, was hailed by Gao Lin, Director of the Cyber Security Coordination Bureau of the Cyberspace Administration of China, as one of the "landmark achievements" in the government's efforts to advance AI governance work this year.
Gao noted that the framework has refined and deepened the concepts of tiered classification, agile governance, and shared governance proposed in the Global Governance Initiative China released last October (see here).
The framework sets a tolerant and prudent tone, upholding the equal importance of development and security while prioritizing the innovative development of AI. It analyzes safety risks arising from inherent technical flaws, as well as from abuse or malicious use of the technology, before proposing measures to guard against those risks.
The inherent safety risks could stem from models and algorithms due to low explainability, bias and discrimination, theft, tampering, or adversarial attacks, as well as from illegal data collection and use, improper content and poisoning in training data, unregulated data annotation, or data leakage.
To address such risks, the framework proposes technical measures such as implementing secure development norms, strictly screening training data, strengthening intellectual property protection, and appropriately disclosing the principles, capabilities, and risks of AI technology and products.
For application-related risks, technical measures include establishing security protection mechanisms, improving the ability to trace the end use of AI systems, and intensifying research and development of AI-generated content testing technologies.
The framework highlights comprehensive management measures beyond technical ones, such as a tiered and category-based management system for AI applications.
It proposes classifying and grading AI systems based on their features, functions, and application scenarios, as well as establishing a testing and assessment system for AI risk levels. To strengthen the management of AI end use, it proposes requirements on the adoption of AI technologies by specific users and in specific scenarios to combat abuse.
Additionally, a record-filing requirement should be introduced for AI systems whose computing and reasoning capacities have reached a certain threshold or that are applied in specific industries. Such systems should maintain safeguarding capabilities throughout design, research and development, deployment, application, and maintenance.
China currently employs a record-filing system for developers of large-scale models before allowing them to offer products or services. Developers need to complete record filing for algorithms and large language models with Internet regulators.
A traceability management system for AI services will also be established, with digital certificates used to label AI systems serving the public. Standards and regulations on AI output labeling will be formulated and introduced to clarify requirements for explicit and implicit labels, helping users identify and judge information sources and their credibility.
TC 260 issued a practical guide to the methods for labeling content in generative AI services last year (see here).
The framework also clarifies data security and personal information protection requirements at stages such as AI training, annotation, use, and output. It further pledges to create a responsible AI R&D and application system, ensure AI supply chain security, advance AI explainability research, and establish a shared information and emergency response regime.
Specific safety guidelines for AI development and application are also provided for China’s AI model and algorithm developers, AI service providers, users in key areas, and general users.