
EU AI Act unpacked: the spillover effect in Asia, Part 1: binding AI regulation in Asia

The GDPR has had a profound influence on the development of privacy laws in Asia over the past few years. It remains to be seen whether the EU AI Act will play a similar role in shaping AI regulation in the region.

While some influence from the EU AI Act is already detectable in China’s regulation of generative AI, and in proposals for comprehensive legislation in Thailand, South Korea and Vietnam, each of these regimes also has its own distinctive features.

In this three-part mini-series, we will look at the state of development of AI regulation in Asia, and how the core risk issues addressed in the AI Act are being dealt with in Asia.

Part one is below. Please follow the links for Parts two and three.

Part 2: guidelines and self-regulation for AI in Asia 

Part 3: the impact in Asia of the extra-territorial reach of the EU AI Act 

 

Part 1: binding AI regulation in Asia


China

China introduced the Interim Measures on Generative AI Systems in August 2023. Many of the obligations, for example to avoid discriminatory output, IP infringement and the misuse of personal data, resonate with the goals of the EU AI Act.

On the other hand, the Interim Measures additionally pursue national policy goals that are largely unique to China, such as obligations to uphold core socialist values and to avoid generating content that could incite subversion or disrupt economic or social order in the country. Providers of services that have the capacity to affect public opinion or achieve social mobilisation will need to file their algorithms with the Cyberspace Administration of China (the CAC) and undergo a security assessment.

According to a National Information Security Standardisation Technical Committee specification, data sets that contain more than five per cent of any of the 11 types of ‘illegal information’ and nine types of ‘unhealthy information’ specified in the Provisions on the Governance of the Online Information Content Ecosystem (in effect since March 2020) should not be used for training or fine-tuning AI. Examples of prohibited categories include content that subverts the national regime or undermines national unity, harms China’s honour or interests, disseminates rumours or otherwise disrupts economic or social order.

Training data will need to be screened before use. Although the specification emphasises diversity in the sources of training data, in practice only datasets sourced from behind the ‘Great Firewall’ or curated from other domestic sources may be able to comply with these stringent requirements.

Additionally, the CAC has issued enforceable measures regulating algorithmic recommendations and deepfakes.

India

India’s Ministry of Electronics and Information Technology (MeitY) issued advisories in December 2023 and twice in March 2024 to address the risk of misinformation posed by deepfakes and AI in the run-up to India’s general elections, held between April and June 2024. AI models/systems must not ‘permit any bias or discrimination or threaten the integrity of the electoral process’.

A requirement in the first March advisory for ‘under-tested’ or ‘unreliable’ AI models/systems to be approved by the government was withdrawn a few weeks later. Such systems must nevertheless be labelled with a disclaimer explaining the ‘inherent fallibility or unreliability of the output generated’.

Given that neither advisory explains what is meant by ‘under-tested’ or ‘unreliable’ AI systems, the disclaimer would either need to be adopted universally or some degree of liaison with the government would be inevitable. However, the sense is that the advisories may in fact have been intended to be largely declaratory in nature.

Incoming laws in China and India

Both countries have also announced that they are working on more comprehensive legislative proposals. China’s State Council’s 2024 Legislative Work Plan indicates that a draft AI law will be submitted for deliberation before the end of 2024. MeitY announced in March 2023 that a Digital India Act, intended to replace the existing IT Act, is being prepared and will, among other things, regulate high-risk AI systems. Neither draft has been published yet.

A preliminary proposal for China’s law put forward by a group of university scholars is understood to have a generally narrower scope than the EU AI Act while also departing from its approach in important ways. The proposal is also understood to address topics such as the permitted uses of data and ownership of intellectual property rights, which the EU AI Act does not.

In preparation for the Digital India Act, MeitY has constituted four committees on AI. Earlier in 2024, the committee examining legal and ethical issues recommended implementing comprehensive guidelines on ethical issues such as fairness, transparency and accountability in consultation with stakeholders, with incentives provided to promote compliance, for example by making compliance mandatory in government procurement rules. At the same time, the committee also recommended identifying gaps in existing regulatory schemes as applied to AI-enabled systems.

Other countries around Asia that are contemplating introducing binding AI laws include Vietnam, South Korea and Thailand.

Vietnam

Vietnam’s recently released draft Law on Digital Technology Industry would introduce a regulatory framework for the use of AI based on a risk classification that takes into account the impact on safety, security and legal rights and interests. The classification is currently being prepared by the Ministry of Information and Communications. Compliance responsibilities, management obligations and technical controls will be assigned based on risk level. Few details are available yet.

The draft law additionally prohibits seven AI use cases. These include AI systems that are intended to influence an individual’s behaviour without their knowledge, or which exploit weaknesses arising from a person’s age, disability or economic or social circumstances. The draft also prohibits the use of AI for social scoring that leads to detrimental or unfavourable treatment of individuals, either in unrelated social contexts or in a manner that is unjustified or disproportionate to the social behaviour recorded. Lastly, the draft law would also require AI output to be labelled as such.

South Korea

South Korea’s draft Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, first proposed in February 2023 and currently under review by the National Assembly of Korea, focuses on regulating a class of high-risk AI applications that could have a significant impact on safety, health or fundamental rights, while generally taking a more permissive approach to lower-risk systems. Examples of high-risk AI systems include those used in healthcare, transportation (including autonomous vehicles) and automated decision-making with a significant impact on individual rights or obligations.

Overall, South Korea’s draft AI regulation aims to take a more business-friendly approach than the EU AI Act. For example, the draft law does not require registration, conformity assessments or pre-approval from any government authority. While the draft law also lays down a statutory basis for issuing ethical guidance for AI, these guidelines are not understood to be mandatory.

Thailand

Thailand’s draft Royal Decree on AI System Service Business (the Royal Decree) is heavily influenced by an earlier version of the EU AI Act. 

As with the EU AI Act, the Royal Decree takes a risk-based approach, distinguishing between prohibited AI, high-risk AI (posing a risk of unfair discrimination or other adverse impact on individual rights and interests) and limited-risk AI. Since the proposed Royal Decree is based on an earlier draft of the AI Act, it does not seek to regulate general-purpose AI.

The draft Royal Decree prohibits four AI use cases, including the use of subliminal techniques to influence or change human behaviour and the use of real-time biometric identification in public spaces.

Providers of high-risk AI systems will be required to register with the government. In addition, the Royal Decree will require providers of high-risk AI systems that are based outside of Thailand to appoint a local representative with responsibility for compliance with the filing and document retention requirements. 

One novel feature of the Royal Decree is that it seeks to lay down minimum mandatory terms and conditions for AI services.

The timing of the adoption of the Royal Decree is, however, uncertain. Thai officials recently indicated that they may study regulatory developments in other countries before finalising the law.

Tags

ai, eu ai act, eu ai act series