Overview
On September 17, 2025, the Italian Parliament approved a new law on artificial intelligence (Law no. 132/2025, the “AI Law”), the first of its kind in the EU. The AI Law sets out the core principles governing AI and its use, establishes the responsibilities for AI governance and supervision in Italy, and provides specific rules for critical sectors such as healthcare, public administration, justice, labour, IP and criminal law. The AI Law, which will enter into force on October 10, does not introduce any new compliance obligations beyond those established under Regulation 2024/1689 (the “EU AI Act”). Rather, it provides more detailed and sector-specific provisions that build upon the existing framework, particularly with respect to compliance, transparency, accountability, and governance.*
Further information
Principles underpinning the AI Law
Human-centered vision of AI
AI should enhance and not replace human decision-making and responsibility. AI systems must be designed to operate in compliance with fundamental rights, constitutional rights and EU law, while reflecting values such as transparency, fairness, safety, privacy, gender equality, and sustainability. The law also stresses that human oversight is not optional: people must remain in control, with the ability to understand, monitor, and intervene throughout the entire lifecycle of AI systems.
Protection of personal data
AI systems must handle data in a lawful, fair, and transparent way. Users must be clearly informed, in plain and accessible language, about how their data is being used, what risks may be involved, and how they can exercise their right to object. The AI Law also explicitly safeguards freedom of expression and media pluralism, ensuring that AI systems do not compromise the objectivity, completeness, impartiality, or fairness of information.
Protection of the democratic debate
The AI Law explicitly prohibits the use of AI in ways that could interfere with democratic institutions or distort public debate. This reflects growing concerns about the use of AI to spread disinformation, manipulate opinions, or undermine trust in democratic processes.
AI as a key lever for economic growth
Public authorities actively support the development and use of AI to improve human-machine interaction, including through robotics, across production sectors and organizational processes. They encourage AI adoption to boost productivity along the entire value chain and help launch new businesses. Their efforts focus especially on strengthening Italy’s industrial base, which relies heavily on micro, small, and medium-sized enterprises.
Sectoral provisions
In healthcare, AI systems must support, not replace, medical professionals. They can assist with prevention, diagnosis, and treatment under strict medical supervision and ethical oversight. Patients must be clearly informed whenever AI technologies are involved in their care, reinforcing transparency and trust. The AI Law also promotes the development and adoption of AI solutions that improve the quality of life for people with disabilities, enhancing accessibility, independent mobility, safety, and social inclusion, and enabling personalized life planning.
In the labor sector, AI must be used to genuinely improve working conditions by protecting employees’ physical and mental well-being, while also supporting performance and productivity. Its deployment must be safe, transparent, and respectful of human dignity and privacy. Workers have the right to know whenever AI technologies are used, whether in recruitment, evaluation, or daily operations. Equally important, AI systems must uphold the fundamental rights of workers and cannot be applied in ways that lead to discrimination based on gender, age, ethnicity, religion, sexual orientation, political beliefs, or personal, social, or economic conditions.
In the public sector, public administrations can adopt AI to improve efficiency, reduce the time needed to complete procedures, and enhance the quality and accessibility of services. AI serves only as a support tool: all decisions and official acts remain under the full responsibility of human officials. Public bodies must also implement technical, organizational, and training measures that promote responsible adoption and strengthen the skills of those who interact with these technologies.
In the justice system, AI can be used to support the organization of judicial services, streamline administrative tasks, and ease the workload of courts. However, its role remains strictly auxiliary. All decisions regarding the interpretation and application of the law, the evaluation of facts and evidence, and the decision-making process rest exclusively with judges. The Ministry of Justice is responsible for regulating the deployment of AI in this context, with specific training for judges to foster their awareness of AI risks and benefits.
Supervisory and enforcement authorities
Italy’s new AI Law outlines the bodies that oversee and enforce AI at the national level. It aligns domestic responsibilities with the EU AI Act, while also reflecting Italy’s strategic priorities. The following authorities are responsible for oversight, coordination, and enforcement in the field of AI:
- The Presidency of the Council of Ministers is responsible for the national AI strategy, which is to be approved on a biennial basis by the Interministerial Committee for Digital Transition.
- The newly established Coordination Committee oversees strategic guidance and promotes research, experimentation, development, adoption, and application of AI systems and models by public or private entities under state supervision or receiving public funding. Representatives of the national AI Authorities may be invited to the meetings.
- The Agency for Digital Italy (AgID) acts as the notifying authority. Its role includes promoting innovation and the development of AI. AgID is responsible for defining procedures and carrying out tasks related to the notification, evaluation, accreditation, and monitoring of entities appointed to verify the conformity of AI systems. AgID exercises these responsibilities in accordance with both national and EU legislation, ensuring that AI systems meet the required standards before entering the market or being deployed.
- The National Cybersecurity Agency (ACN) is designated as Italy’s surveillance and sanctioning authority for AI. It is responsible for overseeing AI systems in accordance with both national and EU legislation, to ensure their security and resilience. ACN also acts as the single point of contact with the European Union.
- The AI Law also requires close collaboration between the AI Authorities and existing regulators, including: the Bank of Italy; CONSOB (the National Commission for Companies and the Stock Exchange); IVASS (the Institute for the Supervision of Insurance); AGCOM (the Italian Communications Regulatory Authority); and the Data Protection Authority.
Changes to criminal and IP law
From the criminal law standpoint, the AI Law:
- Introduces a new offense for the unlawful dissemination of AI-manipulated content (so-called “deepfakes”), which may result in imprisonment.
- Establishes a general aggravating circumstance for crimes committed using AI tools, leading to higher penalties.
- Introduces special aggravating circumstances tightening the penalties for specific crimes – market manipulation, agiotage, infringement of political rights – to cover conduct committed through the use of AI systems.
- Introduces a new offense of unauthorized text-and-data mining (“TDM”), which consists of any automated technique aimed at analyzing text and data in digital form to generate information.
From a copyright perspective, the AI Law:
- Updates Article 171 of Law no. 633 of 22 April 1941 (“Italian Copyright Law”), punishing the reproduction or extraction of text or data from works or materials available online or in databases in violation of the Italian Copyright Law, even by means of AI systems.
- Explicitly clarifies that copyright protection applies to “works of human authorship” and to works created with the aid of AI tools, but only if they are the result of the author’s genuine intellectual labor. In other words, copyright does not extend to content generated solely by machines, without any creative human contribution.
The AI Law also allows TDM for the purpose of training AI models (including generative AI) if the user has lawful access. However, this must comply with Articles 70-ter and 70-quater of the Italian Copyright Law, which regulate exceptions and limitations, including the rightsholder’s opt-out mechanism. As mentioned above, the AI Law also criminalizes unauthorized TDM.
Changes to jurisdiction
The AI Law amends the Italian Code of Civil Procedure, assigning exclusive jurisdiction to the tribunals for cases concerning the operation of AI systems. The Justice of the Peace is explicitly excluded from handling such matters, centralizing competence in higher courts.
Legislative delegation
Within twelve months of the AI Law coming into effect, the government must issue one or more legislative decrees to:
- Align the Italian legal framework with the EU AI Act, including by granting the Authorities described in the previous section all the relevant supervisory, investigative and sanctioning powers provided for under the EU AI Act.
- Introduce new specific crimes related to the insufficient adoption or implementation of safety measures for the creation, circulation or use of AI systems that pose a risk to life, public safety, or the security of the state.
- Introduce measures to prevent and remove unlawfully generated AI content, backed by effective, proportionate, and dissuasive sanctions.
- Define rules governing both individual criminal liability and administrative liability of organizations, based on the actual level of control exercised over AI systems. This includes clarifying the criteria for attributing responsibility in cases involving unlawful conduct linked to AI, ensuring that liability reflects the real influence and oversight of the person or entity involved.
- As to damages caused by AI, allocate the burden of proof taking into account the type of AI system concerned, as classified under the EU AI Act. This approach seeks to guarantee fair compensation while recognizing the complexity of AI technologies.
Investment in AI and emerging technologies
To support the development of AI and cybersecurity companies, the government is providing up to €1 billion in public investment in “equity” and “quasi-equity” instruments, directly or indirectly, to companies. This will be targeted at: (i) innovative small and medium-sized enterprises with high growth potential, based in Italy and operating in the fields of artificial intelligence and cybersecurity or related technologies, including quantum technologies and telecommunications; and (ii) other highly innovative companies.
Implications for companies
While the debate on how to regulate AI is still ongoing, Italy has made its move by implementing new regulations that focus on its national priorities while complementing the EU AI Act.
Some of the changes have direct implications for companies, especially in criminal law. The AI Law amends certain predicate crimes capable of triggering the administrative liability of an entity where those crimes are committed or attempted, in Italy or abroad, in the interest or for the benefit of the company. Companies should therefore update and revise their organizational, management and control models under the 231/2001 Decree to reflect these changes and, more generally, to address AI-related risks, shielding the company from liability in case of AI-related crimes.
Companies will also need to monitor the approval of the decrees implementing the AI Law, which will address key issues such as compensation for damages caused by AI, including easing the burden of proof for the damaged party. These changes will need to be properly addressed to mitigate risks, including in contracts, which should reflect risk allocation and responsibilities and provide for indemnities or insurance coverage depending on the circumstances.
The coming changes will certainly have a significant impact and require careful consideration for all companies, not just those operating in the AI space.
*Trainee Valeria Kiseleva also contributed to this article.