Italy has become the first EU Member State to adopt a comprehensive AI Framework Law. Approved on 17 September 2025, the statute is not a second AI Act but national scaffolding that sits alongside Regulation (EU) 2024/1689. It defines principles, allocates supervisory powers, adjusts rules in sensitive sectors and amends national legislation in areas left to Member State discretion, such as labour, healthcare, copyright and criminal law.
1. Core principles and scope
Italy’s AI Framework Law opens with a set of general principles designed to guide the adoption, development and deployment of AI. These provisions are deliberately high-level: they do not impose new compliance duties on providers or deployers of AI systems, but they set the legal and political tone for how AI is expected to operate in Italy.
The statute affirms an anthropocentric approach – AI must support human decision-making, respect fundamental rights and never displace human responsibility. Research, experimentation and use of AI must comply with constitutional rights, EU law and principles such as transparency, proportionality, accuracy, non-discrimination, gender equality, privacy and cybersecurity. Human oversight and the ability to intervene remain essential throughout the AI lifecycle.
A notable innovation is the express guarantee that AI use must not prejudice democratic debate. The law prohibits AI systems from being deployed in ways that undermine democratic processes or the free exchange of opinions, reflecting growing concerns over algorithmic amplification, deepfakes and disinformation.
Finally, the Law limits its own scope: it is not intended to create new obligations beyond the EU AI Act. Businesses should treat the AI Act as the source of compliance duties and view the Italian law as a framework that sets national guard-rails, allocates supervisory responsibilities and adds sectoral rules in health, labour, public administration, IP and criminal law.
2. Governance and institutional roles
The AI Framework Law draws the supervisory map for Italy, aligning national responsibilities with the EU AI Act while addressing domestic priorities. It defines who is in charge of oversight, coordination and enforcement.
At the centre is a new Coordination Committee established at the Office of the Prime Minister, responsible for designing and updating Italy’s national AI strategy and ensuring inter‑institutional consistency.
Two specialist agencies are designated as national AI authorities:
- AgID (Italy’s Digital Transformation Agency) acts as the notifying authority. It manages the accreditation and monitoring of conformity assessment bodies, promotes AI adoption, and oversees the operation of sandboxes.
- ACN (National Cybersecurity Agency) becomes the market-surveillance authority. It has investigative and sanctioning powers, particularly focused on security and resilience of AI systems, and serves as Italy’s single contact point with the EU.
Both authorities must cooperate with existing regulators: (i) AGCOM as Digital Services Coordinator under the DSA; (ii) the Garante for data protection; and (iii) Bank of Italy, CONSOB and IVASS in the financial sector.
3. Sectoral guard-rails
The AI Framework Law supplements the EU AI Act by introducing targeted national rules in areas of high social (and legal) sensitivity. These provisions are less about technical compliance duties and more about setting boundaries that reflect constitutional values and political priorities.
3.1. Healthcare and disability
AI is recognised as a valuable tool for prevention, diagnosis and treatment. AI systems may not determine or restrict access to healthcare on discriminatory grounds, and patients have a right to be informed when AI is used in their care. To foster innovation, the law qualifies certain uses of health data for AI research as being of “relevant public interest,” enabling the Minister of Health to issue implementing decrees that streamline secondary use of data for R&D. In parallel, a national AI platform for territorial care is entrusted to AGENAS (National Agency for Regional Health Services), but its outputs are limited to non-binding suggestions for clinicians.
3.2. Employment
Employers are required to ensure that workplace AI is safe, reliable and non-intrusive, and workers must be informed when such tools are deployed. This reflects and builds upon existing Italian labour law duties (in particular under Legislative Decree 152/1997). The law also creates a Labour AI Observatory within the Ministry of Labour to monitor AI’s impact, promote training and guide policy responses.
3.3. Public administration and justice
Public bodies may deploy AI to simplify procedures and improve efficiency, but accountability cannot be delegated. Transparency and traceability are expressly required. In the judiciary, the line is even clearer – judges retain exclusive decisional power. AI may support organisational or analytical tasks, but it may never replace judicial reasoning or interpretation of law. In addition, jurisdiction for disputes about the functioning of an AI system is assigned to the Tribunale (court of first instance), a deliberate move to centralise expertise and consistency.
3.4. Minors
The AI Framework Law goes further than the GDPR by setting specific AI-related consent rules for under-18s. Children under 14 may only access AI technologies with parental consent; those aged 14 to 17 may consent on their own, provided that the information given to them is clear and comprehensible.
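For product teams wondering how these two thresholds might translate into an access check, the sketch below illustrates the age bands in Python. The `User` fields and the `may_access_ai_service` function are hypothetical names invented for this example; it is a minimal sketch of the rule as described above, not a compliance recipe.

```python
from dataclasses import dataclass

# Age bands described in the Italian AI Framework Law (illustrative only).
PARENTAL_CONSENT_AGE = 14   # under-14s need parental consent
AGE_OF_MAJORITY = 18        # adults fall under ordinary consent rules

@dataclass
class User:
    age: int
    has_parental_consent: bool = False
    clear_information_acknowledged: bool = False  # clear, comprehensible notice shown

def may_access_ai_service(user: User) -> bool:
    """Hypothetical age gate mirroring the Law's consent bands for minors."""
    if user.age < PARENTAL_CONSENT_AGE:
        return user.has_parental_consent             # under 14: parental consent required
    if user.age < AGE_OF_MAJORITY:
        return user.clear_information_acknowledged   # 14-17: own consent, if properly informed
    return True                                      # 18+: outside the scope of this sketch

# A 13-year-old without parental consent is blocked; a properly informed 15-year-old is not.
assert not may_access_ai_service(User(age=13))
assert may_access_ai_service(User(age=15, clear_information_acknowledged=True))
```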
4. Intellectual property, content and criminal law
Alongside sectoral guard-rails, the AI Framework Law amends Italy’s intellectual property and criminal codes to address risks posed by synthetic content and AI-assisted creativity. These are not duplications of the EU AI Act but national adjustments in areas where Member States retain competence.
4.1 Copyright and text-and-data mining
The Law amends Italy’s Copyright Law (Law 633/1941) to reaffirm that protected works remain “works of the human intellect,” even if AI tools assisted in their creation. Protection applies only if the human author has made a genuine creative contribution.
The Law also sharpens the rules around text-and-data mining (TDM) for AI training. Developers are pointed back to the TDM exceptions and limitations derived from EU law (Arts. 70-ter and 70-quater of the Copyright Law). Crucially, the Law amends the criminal provisions (Art. 171 of the Copyright Law) to make unauthorised TDM a criminal offence, elevating what was previously a matter of civil liability into potential criminal exposure.
4.2 Criminal law innovations
The statute introduces a new standalone offence of unlawful dissemination of AI-generated or altered images, video or audio (i.e., deepfakes) where such material causes unjust harm. Codified as Art. 612-quater of the Criminal Code, it carries penalties of one to five years’ imprisonment.
More broadly, the AI Framework Law reshapes criminal liability through a layered approach. The use of AI can now operate as a general aggravating circumstance under the new Art. 61(11-decies) of the Criminal Code, which provides that any offence committed “by means of AI” in ways that increase insidiousness, hinder defence, or aggravate consequences attracts heavier penalties. Alongside this general rule, a series of special aggravating circumstances tighten penalties for specific crimes: (i) infringement of political rights, (ii) false personation, (iii) fraudulent price manipulation, (iv) fraud and computer fraud, and (v) financial offences including money laundering, use of illicit proceeds and self-laundering.
In financial markets, the Framework Law modifies both the Civil Code (Art. 2637) and the Consolidated Financial Act (Art. 185) to raise sanctions for market manipulation conducted through AI. Penalties include imprisonment from two to seven years, plus fines up to €6 million under the Consolidated Financial Act.
5. Economic development, public procurement signals, and national strategy
Italy’s AI Framework Law couples governance with a pro-investment and pro-procurement agenda. It is designed to influence three areas: national strategy, procurement choices, and the flow of public investment.
- National AI Strategy. The Office of the Prime Minister must prepare and keep updated a National AI Strategy, to be approved every two years, and must coordinate its implementation, reporting annually to Parliament. The strategy is expected to align incentives, skills and public-sector adoption, and to identify priority use-cases.
- Public procurement steer. Administrations are directed to tune e-procurement platforms so that they can give preference to AI solutions that: (i) keep strategic data processed and stored in Italian data centres, with disaster-recovery and business-continuity capabilities also located in Italy; and (ii) for generative AI, demonstrate high security standards and transparency in training methods (a schematic illustration of these criteria follows this list).
- Capital for scale-ups. Art. 23 authorises up to €1 billion (equity and quasi-equity) via the state venture vehicle to invest directly or indirectly in Italian AI/cybersecurity companies and enabling technologies (including quantum and telecoms such as 5G, mobile edge computing, open architectures and Web3), with a strong seed-to-scale-up bias and the possibility of backing national tech champions. Investments are channelled through Italy’s Venture Capital Support Fund framework.
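Purely as an illustration, and not as a description of any actual platform, the sketch below shows one way an e-procurement system could encode the two criteria in the procurement steer above as a ranking signal. The `AIOffer` fields, the `preference_score` function and the weights are all hypothetical, invented for this example.

```python
from dataclasses import dataclass

@dataclass
class AIOffer:
    vendor: str
    data_in_italian_dc: bool        # strategic data processed and stored in Italy
    dr_bc_in_italy: bool            # disaster recovery / business continuity in Italy
    is_generative: bool
    security_certified: bool        # meets a recognised security standard
    training_transparency: bool     # training methods documented

def preference_score(offer: AIOffer) -> int:
    """Hypothetical score for how closely an offer matches the Law's procurement steer."""
    score = 0
    if offer.data_in_italian_dc and offer.dr_bc_in_italy:
        score += 2                                   # localisation criterion (i)
    if offer.is_generative:
        score += int(offer.security_certified)       # generative-AI criterion (ii)
        score += int(offer.training_transparency)
    return score

offers = [
    AIOffer("Vendor A", True, True, True, True, True),
    AIOffer("Vendor B", False, True, True, True, False),
]
# Rank offers so those matching the localisation and transparency criteria come first.
ranked = sorted(offers, key=preference_score, reverse=True)
print([o.vendor for o in ranked])   # ['Vendor A', 'Vendor B']
```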
6. Delegated legislation
On its face, Italy’s AI Framework Law already does a great deal, but it also sets out a demanding programme of secondary legislation.
Within 12 months of entry into force, the Government must exercise three legislative delegations:
- Training data, algorithms and methods. The Government must adopt one or more legislative decrees establishing an “organic” national framework for the use of data, algorithms and mathematical methods to train AI systems. The delegation expressly covers: (i) the allocation of rights and obligations in the use of training data and algorithms; (ii) tort and injunctive remedies and sanctions for breach; and (iii) the exclusive jurisdiction of the enterprise courts over related disputes.
- Illicit uses of AI. In parallel, a second 12-month delegation empowers the Government to “adjust and specify” rules on unlawful development and use of AI systems, including interim and takedown measures to inhibit dissemination of illicit AI-generated content, backed by effective, proportionate and dissuasive sanctions.
- Alignment with the EU AI Act. A separate delegation requires legislative decrees to align national law with the EU AI Act, including: the conferral of inspection and sanctioning powers on the national authorities designated by the Law; amendments across sectoral regimes (banking, financial services, insurance, payments); the use of secondary rules by those authorities where appropriate; and implementation of the sanctioning architecture under Art. 99 of the EU AI Act.
Some ministerial measures arrive sooner. The Minister of Health has 120 days to issue a decree enabling simplified, GDPR-compliant pathways for AI/ML research data (including secondary use). The Ministry of Labour must establish the Labour AI Observatory within 90 days. Until the EU AI Act is fully implemented, any AI pilots in the ordinary courts require authorisation from the Ministry of Justice. AGENAS may also issue guidance on anonymisation and synthetic data in healthcare.
7. Conclusions
The Italian AI Framework Law is not a second compliance code layered on top of the EU AI Act. Its real significance lies in establishing the institutional architecture and in marking the points of legal and commercial friction that companies may encounter in Italy. Procurement priorities, duties towards workers and patients, safeguards for minors, and the elevation of copyright/TDM breaches into criminal territory are now part of the national baseline.
That said, companies may yet argue that national measures which go beyond, or are perceived as going beyond, the EU AI Act’s harmonised regime conflict with Union law, given the Act’s objective of ensuring the free movement of AI systems and limiting additional Member-State restrictions unless expressly authorised. Whether such challenges gain traction remains to be seen.