On 11 June 2024, the Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong released the Artificial Intelligence: Model Personal Data Protection Framework (the AI Procurement Framework).
The AI Procurement Framework follows on from the PCPD’s Guidance on the Ethical Development and Use of Artificial Intelligence issued in August 2021, which applies to the in-house development of AI. In contrast, the non-binding AI Procurement Framework addresses:
- the procurement of AI systems from third party vendors; and
- the use of personal data to either customise or operate vendor-supplied AI systems.
Key points:
- The AI Procurement Framework applies to all types of AI: both generative AI systems (GenAI) and predictive AI systems (i.e., AI systems designed to analyse historical data to make forecasts or predictions about future trends or behaviour).
- AI strategy and governance: the AI Procurement Framework recommends establishing an internal AI governance committee to provide comprehensive oversight throughout the AI system’s lifecycle, as a key part of an overall governance strategy. The AI Procurement Framework stresses the importance of involvement by top management, with direct reporting lines to the board, and support by a cross-functional team comprising business operations, procurement, legal and cybersecurity (among other functions). The strategy should also comprise ethical guidelines that identify acceptable and unacceptable use cases for AI.
- Internal policies for the ethical procurement of third-party AI systems: to source AI systems only from reputable suppliers that follow international technical and governance standards (such as ISO or IEEE), and to test and audit AI systems for security and privacy risks.
- Risk assessment and human oversight: the PCPD recommends that organisations (through the internal governance committee) conduct privacy impact assessments (both during the procurement process and when significant updates are made to an existing AI system), which should take into account factors such as the type, volume, sensitivity and relevance of any personal data being processed. The AI Procurement Framework recommends human oversight as a risk mitigation strategy: the higher the risk, the greater the extent of human intervention required (a ‘human-in-the-loop’ approach). If the risk profile of an AI system cannot be adequately assessed, the AI Procurement Framework recommends the adoption of a ‘human-in-control’ system that allows for human intervention as and when needed.
- Data preparation and management: the PCPD emphasises the importance of managing personal data in a manner that ensures that input data fed into AI systems is accurate, complete and unbiased - to avoid flawed outputs. For example, organisations should implement data labelling and annotation processes to ensure that particular ethnic, gender or other groupings are neither under- nor over-represented in a dataset. The guidance recommends that datasets are also aggregated or rebalanced where required, so that the composition of the input data does not lead to biased output.
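The rebalancing step described above can be sketched, for illustration only, as a simple oversampling routine; the dataset, field names and target sizes below are hypothetical, not taken from the AI Procurement Framework:

```python
import random
from collections import Counter

def rebalance(records, group_key, seed=0):
    """Oversample under-represented groups so that each group appears
    as often as the largest one (illustrative sketch only)."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # top up with random duplicates until the group reaches the target size
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Toy dataset: group "B" is under-represented relative to group "A"
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
counts = Counter(rec["group"] for rec in rebalance(data, "group"))
print(counts)  # each group now has 8 records
```

In practice, organisations would more likely rely on established tooling for this (and would also need to weigh duplication against privacy-minimisation obligations), but the sketch illustrates the rebalancing idea the PCPD describes.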
- Reliability tests: to ensure that AI systems deliver consistent and replicable results with identical datasets. The extent of testing should correlate with the system’s risk level - for example, fully autonomous AI systems will require particularly rigorous validation processes.
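A minimal reliability check along these lines might run the same model repeatedly over an identical dataset and compare the outputs; this is an illustrative sketch, and `predict` stands in for whatever interface a vendor-supplied system actually exposes:

```python
from itertools import count

def reliability_check(predict, dataset, runs=2):
    """Run the same model on the same dataset several times and report
    whether the outputs are identical across runs (a basic replicability test)."""
    outputs = [tuple(predict(x) for x in dataset) for _ in range(runs)]
    return all(out == outputs[0] for out in outputs)

# A deterministic toy "model" passes the check
model = lambda x: x * 2
print(reliability_check(model, [1, 2, 3]))  # True

# A model whose output drifts between calls fails it
counter = count()
drifting = lambda x: next(counter)
print(reliability_check(drifting, [1, 2, 3]))  # False
```

For higher-risk or fully autonomous systems, this kind of check would be one small part of a much broader validation programme, as the AI Procurement Framework indicates.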
- AI Incident Response Plan: should be established to monitor, contain, and investigate AI incidents. The AI Procurement Framework clarifies that this plan is in addition to, and not a replacement for, an organisation’s data breach response plan.
- Engagement with individuals: organisations are urged to clearly and prominently disclose the use of AI and provide adequate information on the role of AI in their products or services, especially when the impact on individuals is significant. The PCPD recommends that organisations provide channels for individuals to give feedback, seek explanations and/or request human intervention when AI decisions significantly affect them. Where feasible, individuals can also be offered the option to opt out of AI systems altogether.