[You can find all episodes of our EU AI Act unpacked blog series by clicking here.]
This is the second part of a three-part series. Please follow the links for parts one and three.
Singapore
In Singapore, the recent policy focus has been on promoting responsible AI development and adoption through guidelines, in particular for systems that involve the use of personal data.
For example, the Info-communications Media Development Authority (IMDA) issued a Model Artificial Intelligence Governance Framework (the AI Model Framework) in January 2019 (updated a year later). The AI Model Framework is founded on the two core principles that AI decision-making should be explainable, transparent, and fair, and that the protection of the well-being, safety and interests of humans should be the primary consideration in the design, development and deployment of AI.
The AI Model Framework provides guidance on measures that organisations should adopt in four key areas:
- Internal governance structures and measures: to include a multi-disciplinary governing body, with responsibility for (i) a clear allocation of roles and responsibilities for the ethical development and deployment of an AI system across the full team of people involved, and (ii) implementing suitable risk management controls and monitoring and reporting systems.
- Determining the level of human involvement in AI augmented decision-making: undertaking risk assessments to determine the necessary role for human involvement and oversight (within the categories of human-in-the-loop, human-over-the-loop and human-out-of-the-loop).
- Operations management: implementing measures to ensure good governance in the different phases of operating an AI system, including through good data selection and accountability practices (eg, establishing and recording data provenance, and periodically reviewing and updating datasets for accuracy and representativeness), refinement of models, carrying out repeatability assessments and regular model tuning.
- Stakeholder interaction and communication: consistency in disclosures of the use of AI (as well as the use of user inputs to train AI), the intended purpose of the system and the role of the AI system in decision-making processes. Providing mechanisms to collect feedback on the performance and output of AI systems.
In May 2024, the IMDA also issued a Model AI Framework for Generative AI (Generative AI Model Framework), specifically addressing generative AI systems. The Generative AI Model Framework highlights AI-related risks and outlines potential mitigation approaches. For example, it promotes the adoption of digital watermarking and cryptographic provenance to allow end-users to recognise AI-generated content.
The Generative AI Model Framework also indicates that the government intends to explore potentially establishing a third-party testing and accreditation mechanism.
Hong Kong
The Hong Kong government published an Ethical Artificial Intelligence Framework in July 2024. The framework lays down ethical principles for the design and development of AI applications, such as ensuring that output is not discriminatory and that organisations are able to explain AI decision-making processes in a clear and comprehensible manner. It also provides that the degree of human intervention should be based on the degree of ethical risk involved.
Further guidance issued by Hong Kong's Office of the Privacy Commissioner for Personal Data (the PCPD) (Guidance on Ethical Development and Use of AI, 2021) stresses the important role of data governance when using personal data to train, or as an input to, AI systems (another core theme of the EU AI Act), and specifically the need for:
- participation by top management in the internal governance structure and throughout the lifecycle of AI deployment - with monitoring and reporting systems to ensure that top management is aware of issues
- internal policies (directed especially at the acceptable use of personal data and at establishing a proper legal basis for that use)
- proper training for personnel in charge of overseeing AI decision-making (eg, to detect and rectify bias, discrimination and errors).
The PCPD proposes an AI Governance Committee comprising senior management and interdisciplinary collaboration (to include legal and compliance professionals) led by a C-level executive.
The Personal Data Protection Commission (PDPC) in Singapore issued similar guidance in March 2024 (Advisory Guidelines on the Use of Personal Data in Recommendation and Decision Systems). While less prescriptive than the guidance of the PCPD in Hong Kong, the Singapore guidance also emphasises the importance of taking decisions at an appropriately senior management level.
Similarly, while Singapore’s AI Model Framework does stress the need for the sponsorship, support and participation of the organisation’s top management (including its board of directors) in AI governance, it warns against complete reliance on a centralised governance mechanism. De-centralising governance is considered a more effective means of embedding ethical considerations into day-to-day decision-making at the operational level.
More recently, the PCPD issued guidelines for the procurement of AI systems from third party vendors. These guidelines focus on the procurement of generative and predictive AI systems from third party vendors and the use of personal data in the customisation and operation of third-party AI systems. The guidelines recommend, among other things:
- establishing multi-disciplinary internal governance committees with a direct line to top management
- implementing procurement policies to ensure that AI systems are only acquired from reputable suppliers
- conducting privacy impact assessments, security audits and ensuring human oversight based on the risks involved.
See our earlier briefing on the Hong Kong procurement guidelines.
South Korea
Guidance issued by the Personal Information Protection Commission (PIPC) in South Korea in July 2024 recommends organising an AI privacy committee around the chief privacy officer. The PIPC views this AI privacy committee as the appropriate focal point for ensuring the legality and safety of AI and data processing.
The AI privacy committee should be held responsible for ensuring compliance with data privacy laws in all aspects concerning AI, monitoring risk factors such as significant technological changes or concerns about privacy violations, and managing incidents, such as data breaches.
Japan
Japan is leading the Hiroshima AI Process, launched under its presidency of the G7 in May 2023, to introduce a harmonised global governance framework for AI.
It also issued its own AI Guidelines for Business in April 2024. The AI Guidelines for Business are non-binding and contain recommendations based on ten guiding principles. These include recommendations to implement:
- measures against bias, misinformation or disinformation
- ongoing monitoring and record keeping processes, including keeping logs of training processes
- ongoing training to enhance the AI literacy of users.
Advanced AI systems, such as GenAI, systems capable of operating without human intervention (eg, autonomous vehicles), adaptive AI and interactive AI (including advanced chatbots and virtual assistants), are additionally expected to comply with the Hiroshima Process International Guiding Principles for All AI Actors and the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems.
Japan is also reported to be considering introducing legally binding regulations on developers of large-scale AI systems.
ASEAN
In an effort to encourage the alignment and interoperability of AI frameworks across Southeast Asia, the Association of Southeast Asian Nations (ASEAN) issued an ASEAN guide on AI governance and ethics (ASEAN Guide) in February 2024. The ASEAN Guide was presented as the first intergovernmental common standard for AI defining the principles of good AI governance and providing guidance for policymakers in the region.
The ASEAN Guide is framed around seven guiding principles that closely track the OECD AI Principles first adopted in 2019. The more detailed recommendations are closely modelled on the Singapore AI Model Framework, although they are somewhat more granular.
Regarding the desired degree of centralisation or decentralisation of a governance structure, the ASEAN Guide advises that the appropriate balance between flexibility and rigidity needs to suit each organisation's own culture, corresponding to the level at which operational, business and process execution decisions are taken. The ASEAN Guide seeks to square the circle by designing in escalation mechanisms: 'where AI systems and use cases that are of higher risk are escalated to a governing body with higher authority for review and decision-making'.
The ASEAN Guide provides a risk assessment template to help deployers score against the requirements of the guide and to assess the probability, nature and severity of harm arising from the use of AI systems, as well as how many people could be affected. Also relevant are the reversibility of the harm and the ability of individuals to obtain recourse. The assessment should also scope the feasibility and practicability of human involvement in the decision-making.
The guide acknowledges that an assessment of the right level of human involvement is context dependent. The ASEAN Guide nevertheless recommends that all systems assessed as high-risk (ie, high severity and/or probability of harm) should be subject to a high level of human control, to ensure that the system is not able to independently make decisions that have unintended or dangerous outcomes.