A further move in AI regulation - China enforces new rule

The Chinese cyberspace authority plans an enforcement campaign against the abuse of recommendation algorithms, targeting "information cocoons" and discriminatory treatment.

31 March 2022

Publication

The Cyberspace Administration of China (CAC) revealed the enforcement plan during a press conference held in mid-March 2022, stating that the campaign will be launched soon and run until the end of the year. This campaign, according to the CAC spokesperson, aims to urge relevant service providers to implement the requirements set out in the Administrative Provisions on Algorithmic Recommendations for Internet Information Services (Chinese version, the Algorithm Rule), which applies to online services deploying recommendation algorithms (Algorithm-recommended Services) provided within China. Key requirements under the Algorithm Rule include the following.

  • Transparent disclosure – service providers are required to inform users in a conspicuous way if algorithms are being used to push content to them, and to disclose the basic principles, purposes and mechanics of their Algorithm-recommended Services. These obligations reflect the call for transparency and explainability in artificial intelligence (AI) technology, which is becoming an increasing focus for companies and regulators worldwide. Further, service providers are required to offer users convenient opt-out channels or services that are not based on personalised recommendations.

  • Ethical use of recommendation algorithms – service providers are not allowed to use Algorithm-recommended Services to induce overconsumption, facilitate monopoly or unfair competition, push inappropriate or unhealthy content to children, etc. The Algorithm Rule also specifically requires service providers to cater to the rights and interests of elderly people and workers (for example, ride-hailing or delivery workers).

  • Strict oversight of services with “public opinion and social mobilisation capacities” – the Algorithm Rule introduces a multi-level security management system, taking into account the Algorithm-recommended Services’ public opinion and social mobilisation capacities, types of content, user scale, degree of data importance and intervention in user behaviour, etc. Services with “public opinion and social mobilisation capacities” – including social media, online forums, short video and webcasting platforms – are subject to the strictest oversight. They are required to file with the regulator via an online filing platform within ten working days of beginning to provide services, and to conduct security assessments.

The Algorithm Rule is not an isolated move but marks a further step in China’s regulation of AI. At state level, multiple AI-related policy plans, regulatory documents and national standards have been published in recent years. In September 2021, nine ministry-level regulators jointly published the Guiding Opinion on Enhancing Comprehensive Governance of Algorithms of Internet Information Service (Chinese version). It proposed establishing a governance framework for algorithms within three years, including promulgating relevant regulations and constructing systems for risk monitoring, security assessment, ethical review and filing management. In the same month, a commission under the Ministry of Science and Technology published a set of AI ethical guidelines (Chinese version), putting forward basic ethical requirements for AI and specifying that humans should maintain “full autonomous decision-making power” and that AI is forbidden to “harm public interests”.

Regional regulators are also active in this area. In July 2021, Shenzhen issued a draft rule on promoting the AI industry. The draft rule proposed hierarchical, classified and differentiated AI supervision, factoring in the risk levels, scenarios, spheres of influence and other specific conditions of different AI applications – an approach that echoes the spirit of the European Union’s draft AI Act (2021). We expect that Chinese legislators and regulators will continue to step up their efforts to bring AI under systematic regulation.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.