The Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong has recently issued a media statement outlining key personal data privacy and security risks associated with the use of OpenClaw and other agentic artificial intelligence (AI) tools, arising from their autonomous operation and high-level system access.
This blog post outlines key takeaways from the PCPD’s statement in the context of emerging regulatory responses to agentic AI in other jurisdictions in Asia and beyond.
PCPD’s agentic AI statement
The PCPD regards agentic AI as a new category of risk not typically associated with chatbots or other conventional AI tools. The risk arises from the unprecedented level of access and autonomy granted to these systems, combined with their rapid adoption, limited user understanding, and security controls that frequently have yet to be tested extensively at scale.
The PCPD notes that agentic AI presents heightened privacy and security risks because of the extensive access to systems, user credentials and data that agentic AI requires to carry out its tasks, increasing the risk of unauthorised access, data breaches and inadvertent data loss.
The PCPD also points to concerns that unvetted plugins or skills could embed malicious code, enabling account or system takeovers and compromising personal data. Where an agent is granted high-level, cross-system access, a single vulnerability may result in system-wide exposure, compounding the risk profile.
PCPD recommendations
The PCPD’s media statement recommends that organisations and individual users implement a number of practical measures, including:
- Limiting access rights, granting only minimum permissions and avoiding unnecessary processing of sensitive personal data
- Using only official and up-to-date AI programmes and plugins or skills obtained from trusted sources, and verifying their authenticity
- Adopting appropriate security safeguards, such as deploying agentic AI systems on separate devices or servers and strengthening network controls
- Conducting ongoing risk assessments with human oversight, and adopting a ‘human-in-the-loop’ approach that retains final human control over higher-risk decision-making, including the transmission of data (a minimal sketch of such a gate follows this list)
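To make the last recommendation concrete, the following is a minimal, illustrative sketch of a ‘human-in-the-loop’ gate in Python. The action names, risk categories and AgentAction structure are hypothetical assumptions for illustration, not taken from the PCPD’s statement or any particular agent framework.

```python
# Minimal sketch of a 'human-in-the-loop' gate for higher-risk agent
# actions, in the spirit of the PCPD's recommendations. All names
# (HIGH_RISK_ACTIONS, AgentAction, etc.) are hypothetical.
from dataclasses import dataclass

# Anything transmitting data off the local system is treated as higher-risk.
HIGH_RISK_ACTIONS = {"send_email", "upload_file", "transmit_personal_data"}

@dataclass
class AgentAction:
    name: str
    target: str
    payload_summary: str

def requires_human_approval(action: AgentAction) -> bool:
    """Route higher-risk actions to a human before execution."""
    return action.name in HIGH_RISK_ACTIONS

def execute(action: AgentAction) -> None:
    if requires_human_approval(action):
        # Block until a human explicitly confirms; default is to refuse.
        answer = input(
            f"Agent wants to {action.name} -> {action.target} "
            f"({action.payload_summary}). Approve? [y/N] "
        )
        if answer.strip().lower() != "y":
            print("Action refused by human reviewer.")
            return
    print(f"Executing {action.name} on {action.target}")

execute(AgentAction("transmit_personal_data", "crm.example.com", "3 customer records"))
```

Defaulting to refusal when no explicit approval is given keeps the gate fail-closed, consistent with the minimum-permissions posture the PCPD recommends.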
While no agentic AI-specific regulation has yet been introduced, the PCPD points to its AI Model Framework, issued in late 2024, alongside existing legislation such as the Personal Data (Privacy) Ordinance, as sources of current best practice.
In a separate article in the China Daily the following week, the PCPD referenced the Government’s 2025 Policy Address tasking the Department of Justice with coordinating a review of the wider legal landscape to support the development and wider application of AI, hinting that further regulation in this space may eventually be expected.
The PCPD’s media statement specifically mentions the open source AI agent OpenClaw, which has seen rapid uptake across China. Since its release in November 2025, researchers have identified multiple claimed security vulnerabilities, including a critical one-click ‘remote code execution’ flaw allowing attackers to take over another user’s system and run their own commands, as well as other ‘command injection’ vulnerabilities.
Additionally, around 12% of skills on OpenClaw’s public marketplace (ClawHub) have been found to contain malware. Cybersecurity firm Censys found over 21,000 OpenClaw instances exposed to the internet, many of them unintentionally leaking unencrypted API keys, login tokens and credentials.
Other emerging responses to agentic AI
China
Chinese government agencies and state-owned enterprises have warned staff against installing OpenClaw on work devices, citing security concerns, following repeated warnings from government regulators over inadvertent leakage, misuse and deletion of user data.
In late March 2026, the National Cybersecurity Standardisation Technical Committee issued draft guidelines on the deployment and use of OpenClaw-type AI tools. Key recommendations for individual users include:
- deploying OpenClaw on systems separate from personal or everyday devices and not exposed to the public internet; the guidelines explicitly recommend deployment in a cloud environment with appropriate security protections
- using only trusted, whitelisted plugins and skills, and keeping software up-to-date through official channels
- encrypting stored credentials and API keys (one possible approach is sketched after this list)
- limiting agent permissions to the minimum scope necessary, and requiring manual approval for high-risk actions
- monitoring API activity for unusual behaviour and setting resource usage caps
- adding protections such as prompt-injection safeguards
- limiting the input or retention of sensitive personal data.
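By way of illustration of the credential-encryption recommendation, the sketch below keeps an API key encrypted at rest using the third-party Python `cryptography` package. The file names and the key itself are hypothetical, and the draft guidelines do not prescribe any particular method; in practice the master key would live in an OS keystore or HSM rather than a local file.

```python
# One way to keep stored API keys encrypted at rest, as the draft
# guidelines recommend. Uses the third-party 'cryptography' package
# (pip install cryptography); file paths here are illustrative only.
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path("agent_master.key")   # in practice: an OS keystore or HSM
SECRETS_FILE = Path("credentials.enc")

def save_credential(api_key: str) -> None:
    master = Fernet.generate_key()
    KEY_FILE.write_bytes(master)
    SECRETS_FILE.write_bytes(Fernet(master).encrypt(api_key.encode()))

def load_credential() -> str:
    master = KEY_FILE.read_bytes()
    return Fernet(master).decrypt(SECRETS_FILE.read_bytes()).decode()

save_credential("sk-example-not-a-real-key")
print(load_credential())
```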
For enterprises, the guidelines call for formal governance, least-privilege access, active detection of unauthorised ‘shadow’ agents through network scanning, logging and review of all agent activity, central registers of approved agents, and staff training on risks such as prompt injection and credential leakage.
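As a rough illustration of checking agent activity against a central register, the sketch below flags log entries from agents not on an approved list. The log format, field names and agent identifiers are hypothetical assumptions, not drawn from the guidelines themselves.

```python
# Sketch of screening observed agent activity against a central register
# of approved agents, as the enterprise guidelines suggest. The log
# format and register contents are hypothetical.
APPROVED_AGENTS = {"finance-bot-01", "helpdesk-agent-02"}  # central register

observed_log_lines = [
    "2026-04-02T10:15:03Z agent=finance-bot-01 action=read_invoice",
    "2026-04-02T10:16:44Z agent=dev-test-claw action=open_socket",
]

def agent_id(line: str) -> str:
    # Each log line is assumed to carry an 'agent=<id>' field.
    return next(f.split("=", 1)[1] for f in line.split() if f.startswith("agent="))

for line in observed_log_lines:
    if agent_id(line) not in APPROVED_AGENTS:
        print(f"ALERT: unregistered 'shadow' agent detected: {line}")
```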
Singapore
Singapore’s Model Agentic AI Framework was released in January 2026 and builds on its existing governance-first approach. The framework is organised around four pillars:
- Upfront risk assessment, including selecting appropriate use cases for agentic AI, limiting tools available to agents, and setting clear operational boundaries to minimise the severity of mistakes or misuse
- Meaningful human accountability through clear allocation of responsibility and human-in-the-loop oversight for high-risk or irreversible actions
- Technical controls, including testing agents before release, and continuous security assurance and monitoring after deployment, with alerts and fail-safes (see the sketch after this list)
- End-user responsibility through training users and staff on proper use and risks, ensuring oversight and maintaining core human skills rather than over-relying on agents.
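The following is a minimal sketch, in the spirit of the technical-controls pillar, of a post-deployment fail-safe that halts an agent once anomalous actions cross a threshold. The monitor interface and threshold value are assumptions for illustration only, not part of the framework.

```python
# Sketch of a post-deployment fail-safe: raise alerts on anomalous agent
# actions and halt the agent once a threshold is crossed. The threshold
# and the monitor interface are hypothetical.
class FailSafeMonitor:
    def __init__(self, max_errors: int = 3):
        self.max_errors = max_errors
        self.errors = 0
        self.halted = False

    def record(self, ok: bool) -> None:
        if not ok:
            self.errors += 1
            print(f"ALERT: anomalous agent action ({self.errors}/{self.max_errors})")
        if self.errors >= self.max_errors:
            self.halted = True  # fail closed: stop all further actions
            print("FAIL-SAFE: agent halted pending human review")

monitor = FailSafeMonitor()
for outcome in [True, False, False, False]:
    if monitor.halted:
        break
    monitor.record(outcome)
```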
The framework is explicitly positioned as a “living” document to be updated as agentic AI technologies and practices evolve.
Australia
In October 2025, the New South Wales Government published non-binding guidance for public sector use of agentic AI. The guidance calls for named accountability for each agent, defined authority limits, continuous monitoring, robust data governance and effective fail-safe and incident response measures.
The guidance also includes screening checklists to assess the suitability of use cases, and readiness checklists covering governance, data protection, cost management, human-in-the-loop design, multi-agent risk management and workforce training.
United Kingdom
The Information Commissioner’s Office (ICO) published a report in January 2026 highlighting certain challenges posed by agentic AI around accountability, risk management and privacy-by-design.
The ICO emphasised that organisations remain fully responsible for UK GDPR compliance when developing, deploying or integrating agentic AI. It also identified novel risks, including unclear controller/processor roles across supply chains, expanded automated decision-making, excessive or unclear processing purposes, increased cyber risk and difficulties in exercising data subject rights.
While no specific recommendations have yet been made, the ICO has indicated further related guidance will follow in light of the Data (Use and Access) Act, starting with public consultations due to take place later this year.
Looking ahead
OpenClaw’s release and rapid adoption have seen researchers identify critical vulnerabilities even as the tool is deployed at scale. This has exposed limitations in existing regulatory frameworks, many of which were designed with a different class of AI systems in mind, and will likely serve as a stress test for them.
The PCPD’s media statement, together with the non-binding guidance issued in other jurisdictions, appears to be an initial sticking-plaster response aimed at raising risk awareness and promoting high-level mitigation measures.
Given the pace of development and adoption of agentic AI and the significance of its ramifications for data protection and data and system security, further guidance and rule setting can be expected.