Businesses across Europe are rapidly moving from experimental chatbots to autonomous AI agents that plan, reason, access internal data and execute real-world actions with limited human oversight. The data protection frameworks most organisations have in place were not designed for this. The Spanish Data Protection Agency (AEPD) has now published the first detailed regulatory guidance addressing this gap, and its analysis, grounded in the GDPR, is relevant well beyond Spain (see link).
Agentic AI and its impact on data protection
Agentic AI refers to AI systems – often built on large language models (LLMs) – that do more than generate text: they pursue defined objectives through multi-step planning, reasoning, tool use, and execution. These systems can decompose complex tasks into subtasks, invoke internal and external tools and services, maintain short and long-term memory, and can operate with limited or no human oversight. In multi-agent architectures, specialized agents collaborate, share context, and orchestrate workflows, increasing both capability and architectural complexity.
Increasingly, these systems rely on emerging interoperability standards such as the Model Context Protocol (MCP), which standardizes client-server connections between AI applications and external tools, data sources and workflows, and Agent-to-Agent (A2A), which governs how agents discover one another, exchange messages and coordinate tasks. These protocols are designed to make agentic systems more modular, scalable and interoperable. However, they also multiply the number of interfaces at which personal data may be accessed, shared or transformed.
In practice, a single user prompt (e.g., “reschedule all client meetings affected by the new travel policy”) can trigger a chain of calendar access, cross-system queries and outbound communications, each involving personal data, with no human decision at any individual step. From a data protection standpoint, this introduces risks that go beyond those associated with conventional generative AI. The automated interaction between components can amplify vulnerabilities and create an attack surface that is significantly broader than the sum of its parts.
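To make the fan-out concrete, the following is a minimal, purely illustrative sketch. The tool functions (`find_affected_meetings`, `reschedule`, `notify`) are hypothetical stand-ins, not any real agent framework or API; the point is that one prompt drives several automated steps, each of which touches personal data.

```python
# Illustrative sketch only: hypothetical tool functions showing how a
# single prompt fans out into several automated steps, each of which
# processes personal data, with no human decision at any individual step.

def find_affected_meetings(policy: str) -> list[dict]:
    # Hypothetical calendar query: returns meetings including attendee data.
    return [{"id": "m1", "attendee": "client@example.com"}]

def reschedule(meeting: dict) -> dict:
    # Hypothetical cross-system update of a calendar entry.
    return {**meeting, "status": "rescheduled"}

def notify(meeting: dict) -> str:
    # Hypothetical outbound communication to the attendee (personal data).
    return f"email sent to {meeting['attendee']}"

def handle_prompt(prompt: str) -> list[str]:
    """Decompose the prompt into subtasks and execute them in sequence."""
    audit = []
    for meeting in find_affected_meetings(prompt):
        updated = reschedule(meeting)   # personal data: calendar entry
        audit.append(notify(updated))   # personal data: email address
    return audit
```

Each step here is individually unremarkable; the data protection question arises from the chain as a whole, which is exactly the amplification effect described above.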
Key compliance considerations
The AEPD’s guidance identifies several areas where the introduction of agentic AI systems into personal data processing activities raises specific compliance questions.
Determining controller and processor roles: Agentic AI systems often rely on a chain of external services (LLM providers, cloud infrastructure, etc.), each of which may process personal data. The AEPD stresses that controllers must review these data flows and determine whether each third party acts as a processor, sub-processor, or independent controller. Where the entire agentic system is provided as a service, the service provider will typically qualify as a processor.
Transparency and information obligations: Where the deployment of agentic systems introduces new data recipients, additional automated decision-making, or changes to data retention periods, data subjects must be informed accordingly. The AEPD underscores that this obligation extends to both the individuals whose data is processed and the users (typically employees) who interact with the system.
Minimization, memory and data retention: A defining feature of agentic systems is their use of persistent memory to store context, user preferences and past interactions in order to improve performance. The AEPD flags this as a significant compliance risk, arguing that memory must be compartmentalized between different processing activities and users, subject to strict retention periods, and designed to support data subject rights, including access, rectification, and erasure.
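The memory requirements the AEPD describes can be sketched in code. The following is an illustrative design, not a reference implementation: entries are compartmentalized by user and processing activity, expire after a retention period, and can be erased wholesale for a given user to support the right to erasure. All class and method names are assumptions for illustration.

```python
import time

class AgentMemory:
    """Illustrative sketch of compartmentalized agent memory: entries are
    keyed by (user, processing activity), expire after a strict retention
    period, and can be erased per user to support data subject rights."""

    def __init__(self, retention_seconds: float):
        self.retention = retention_seconds
        self._store: dict[tuple[str, str], list[tuple[float, str]]] = {}

    def remember(self, user: str, activity: str, item: str) -> None:
        # Each (user, activity) pair is a separate compartment.
        self._store.setdefault((user, activity), []).append((time.time(), item))

    def recall(self, user: str, activity: str) -> list[str]:
        # Only the requesting user's compartment is visible, and expired
        # entries are dropped on read (strict retention).
        now = time.time()
        entries = [(t, i) for t, i in self._store.get((user, activity), [])
                   if now - t < self.retention]
        self._store[(user, activity)] = entries
        return [i for _, i in entries]

    def erase(self, user: str) -> None:
        # Right to erasure: remove every compartment belonging to the user.
        self._store = {k: v for k, v in self._store.items() if k[0] != user}
```

Keying the store by both user and processing activity prevents context stored for one purpose from silently flowing into another, which is the compartmentalization the AEPD calls for.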
Automated decision-making: Not every agentic workflow will constitute automated individual decision-making under Article 22 GDPR, but the AEPD warns that autonomous actions taken by agents, including those that affect individuals indirectly or through downstream processes, must be carefully assessed. Irreversible actions (such as data deletion or sending communications) will require particular attention.
Risk management and impact assessments: Incorporating agentic AI into existing processing activities can change the nature of that processing and may require a new risk assessment or data protection impact assessment. The AEPD highlights the “rule of 2”: without human oversight, an agent should never combine all three of the following risk factors: (1) process untrusted input, (2) access sensitive data, and (3) take autonomous action. No more than two of the three may be present at once.
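Because the rule of 2 is a simple quantified constraint, it lends itself to an automated guard. The sketch below (hypothetical names, not from the AEPD guidance) flags any agent step that combines all three risk factors so it can be routed to human review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StepRiskProfile:
    """Risk factors of a single agent step, per the AEPD's rule of 2."""
    untrusted_input: bool    # (1) processes untrusted input
    sensitive_data: bool     # (2) accesses sensitive data
    autonomous_action: bool  # (3) takes autonomous action

def requires_human_oversight(profile: StepRiskProfile) -> bool:
    """Rule of 2: if all three risk factors are present, the step must not
    run without human oversight. Up to two factors may coexist."""
    factors = [profile.untrusted_input,
               profile.sensitive_data,
               profile.autonomous_action]
    return sum(factors) >= 3
```

A guard like this can be evaluated before each tool invocation, turning a policy statement into an enforceable pre-execution check.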
Threats and safeguards
The AEPD lists threats specific to agentic systems, including prompt injection (both direct and indirect), memory poisoning, shadow leaks (the silent, incremental exposure of sensitive information through seemingly legitimate interactions), and compounding errors across long reasoning chains. What makes these threats distinctive is that they are often invisible to the end user and difficult to detect through conventional monitoring. Several do not depend on a technical vulnerability at all; instead, they manipulate the agent's normal behaviour, namely its ability to follow instructions, retrieve information and take action.
Practical safeguards include whitelisting permitted tools and services, filtering data flows between reasoning stages, pseudonymizing user interactions, implementing circuit breakers and hard step limits, and ensuring meaningful human oversight.
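Two of these safeguards, tool whitelisting and hard step limits with a circuit breaker, can be sketched together. This is an illustrative pattern under assumed names (`ALLOWED_TOOLS`, `MAX_STEPS`, `run_agent`), not a specific framework's API.

```python
class CircuitBreakerTripped(Exception):
    """Raised when the agent exceeds its hard step limit."""

# Whitelist of permitted tools and a hard step limit (illustrative values).
ALLOWED_TOOLS = {"calendar.read", "calendar.update"}
MAX_STEPS = 10

def run_agent(plan: list[str]) -> list[str]:
    """Execute a plan of tool names, skipping non-whitelisted tools and
    tripping a circuit breaker once the hard step limit is exceeded."""
    executed = []
    for step, tool in enumerate(plan, start=1):
        if step > MAX_STEPS:
            # Circuit breaker: halt runaway reasoning chains outright.
            raise CircuitBreakerTripped(f"step limit {MAX_STEPS} exceeded")
        if tool not in ALLOWED_TOOLS:
            # Whitelisting: refuse (or escalate for review) unknown tools.
            continue
        executed.append(tool)
    return executed
```

In a real deployment, a refused tool call would typically be escalated to a human reviewer rather than silently skipped; the skeleton only shows where those decision points sit.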
Practical implications
The AEPD’s guidance provides organisations with a framework for deploying agentic AI responsibly. The safeguards it sets out (from controller/processor mapping and memory compartmentalization to the “rule of 2”) may be treated as a compliance checklist for any business integrating these systems into personal data processing activities.
Although issued by the Spanish regulator, the analysis is grounded in the GDPR. The compliance challenges it identifies are not jurisdiction-specific. No other European supervisory authority has published guidance of comparable depth on agentic AI so far. Organisations deploying these systems under the GDPR should treat the AEPD’s framework not as a regional curiosity, but as the current benchmark – and a likely preview of what other regulators will expect.
