ChatGPT: A GDPR-Ready Path Forward? - McDermott Will & Emery


On March 30, 2023, the Italian data protection authority, the Garante, issued a decision (the “Decision”) requiring OpenAI, the US-based company that develops and operates ChatGPT, to stop processing the personal data of Italian users through that online service. As a result, OpenAI suspended access to ChatGPT in Italy. Although the Decision is only interim, it is likely to have material implications not only in this specific case, but also more broadly for the use of ChatGPT and similar technologies in Italy, in the European Union (EU), and even globally.

In this article, we explore the Garante’s concerns as well as the conditions that ChatGPT must fulfill (following a second decision issued by the Garante) to be GDPR compliant. The conditions laid out by the Garante are likely to feature in future discussions about artificial intelligence implementation in any jurisdiction that has technology and/or data protection laws in place. We also explore what the Garante’s actions against ChatGPT may mean for companies that use these types of technologies, and whether there are risks in using these services while the Garante’s action against OpenAI is pending and while other data protection authorities (DPAs) in Europe and elsewhere are still assessing the technology.

In Depth

What Triggered the Garante’s Investigation?

The Garante’s investigation into ChatGPT was not, as some might think, started as a general investigation of the service’s compliance with GDPR. Rather, it was triggered by a personal data breach notified to the Garante on March 20, 2023, affecting ChatGPT users’ conversations and the payment information of subscribers to the service. The Garante took the breach notification as an opportunity, however, to also assess OpenAI’s compliance with GDPR, particularly with respect to the processing of personal data through ChatGPT.

What was the Content of the Garante’s Decision?

The Garante’s March 30 Decision found several GDPR compliance concerns.

First, the Garante found that the information required under Articles 13 (information to be provided where personal data is collected from the data subject) and 14 (information to be provided where personal data has not been obtained from the data subject) of GDPR was not provided to data subjects whose data is collected and further processed by OpenAI through ChatGPT.

Next, the ChatGPT service, which states it is reserved for users aged 13 and over, did not include an age verification mechanism in its registration process. The Garante considered that ChatGPT may therefore expose children using the service to responses inappropriate to their degree of development and self-awareness.

Further, the Garante asserted that the personal data processed by ChatGPT likely does not meet the GDPR’s accuracy obligation under Article 5(1)(d), since the information made available by the service does not always match the factual circumstances. That Article requires personal data to be accurate and, where necessary, kept up to date, and requires that every reasonable step be taken to ensure that personal data that is inaccurate, having regard to the purposes for which it is processed, is erased or rectified without delay.

Finally, the Garante determined that the legal basis on which OpenAI collected and processed personal data for the purpose of training the underlying algorithms was not sufficiently and clearly determined.

The Garante’s Decision does not include all the GDPR compliance concerns others have raised with these types of technologies. Privacy activists have, for example, also questioned whether and how OpenAI can comply with the GDPR’s data subject rights, such as the right to be forgotten, the right to rectification, and the right of access. Both the French and Spanish DPAs have stated that they will investigate. For the sake of harmonization, the European Data Protection Board has recently announced the creation of a task force to consider ChatGPT’s GDPR compliance.

Was the Garante’s Action and Decision Foreseeable?

Probably. The Court of Justice of the EU’s Schrems II ruling in July 2020 highlighted the enforcement role of EU member state DPAs, and since then those authorities have been increasingly active both in fining per se GDPR violations and in putting commonly used and evolving digital technologies under heavy scrutiny to confirm they align with essential EU data privacy principles.

Indeed, in line with significant fines issued by the Greek, French, and UK DPAs, in May 2022 the Garante fined US-based Clearview AI EUR 20,000,000 for unlawfully extracting facial images from public web sources and matching them with its biometrics database. Months later, in July 2022, the Garante – in congruence with the Spanish DPA – issued a warning to TikTok over its handling of personal data used for targeted advertising, after the China-based social network announced its intention to serve ads to users aged 18 and over based on legitimate interest rather than informed consent.

Does the Garante’s Decision Leave Italy and, Ultimately, the EU as a Whole at a Disadvantage in Its Adoption of ChatGPT?

With ChatGPT’s GDPR compliance currently in doubt in Italy, and possibly in other EU countries, the EU appears to be at a disadvantage. While the EU stalls on this privacy issue, other countries will continue exploring the potential of this technology and may advance beyond the EU.

This gives rise to pressing considerations for the EU. The Garante’s action forces the EU to give immediate attention to how these technologies can meet GDPR requirements and what must change to ensure they do so going forward. The action may also have compelled the European Data Protection Board to form its ChatGPT task force so that GDPR concerns across the EU are considered and addressed before the technology takes off (more than it already has) and becomes too integrated into European business and society.

The European Parliament is currently debating an AI regulation that may not come into force for several years; the Garante’s action may also compel the European Parliament to accelerate its regulatory efforts. Overall, the Garante’s action appears to have sped up the EU’s review of artificial intelligence technologies, which may mean the EU takes the lead on regulation in this sector.

Meanwhile, other countries have issued proposals on how they will regulate artificial intelligence, with the United Kingdom issuing its policy document “AI Regulation: A Pro-Innovation Approach” on March 29, 2023, and the UK DPA issuing data protection guidance earlier that month. However, many countries have yet to consider these technologies from a data protection or other legal perspective and may have to play catch-up in determining how they will regulate the sector.

What Happens Next?

In its Decision, the Garante gave OpenAI an opportunity to address and remedy the identified GDPR violations within 20 days. Among its options, OpenAI could have filed an application with the ordinary Civil Court of Rome to attempt to reverse the temporary ban. Instead, OpenAI moved to cooperate with the Garante and address its concerns. Representatives of OpenAI and the Garante met (virtually) on April 5. That meeting was followed by additional correspondence indicating OpenAI’s interest in cooperating with the Garante. OpenAI’s collaborative approach has enabled the Garante to seek a GDPR compliance pathway for ChatGPT.

On April 11, the Garante issued its second decision (the “Second Decision”) – also a temporary decision – with specific requirements that OpenAI must meet by April 30, 2023, as well as additional requirements, such as an information campaign, to be complied with in May 2023. The Second Decision provides that if OpenAI meets the requirements, the Garante will lift the injunction, although it acknowledged that future issues or failure to comply with the Second Decision’s requirements may warrant its reimposition.

The Garante will continue its investigation into ChatGPT’s compliance with GDPR. Nonetheless, the Garante’s provision of a pathway for lifting the injunction is a strong positive for ChatGPT and similar technologies facing these compliance issues.

The Garante will likely not be the only DPA or regulator that explores ChatGPT’s compliance with data protection law, whether in the EU or elsewhere. Spain just announced a probe into ChatGPT. We understand that OpenAI is addressing concerns raised by regulators in the UK, Canada, Hong Kong, New Zealand, and France.

What do the Garante’s Actions on ChatGPT Mean for Companies who use ChatGPT?

ChatGPT is likely to remain publicly available for the foreseeable future. The Second Decision gives companies some confidence that ChatGPT will soon be available (again) in Italy and may have a path to GDPR compliance.

We suspect, however, that other regulators (whether in the EU or elsewhere) may similarly restrict ChatGPT’s availability for the same reasons identified by the Garante or for other reasons, such as the aforementioned challenges in complying with data subject rights. If OpenAI is able to comply with the requirements of the Second Decision, that may sway many regulators. Regulators’ approaches may differ depending on their stance on data subject rights and their acceptance of innovation and technology. Some regulators may be aggressive; others are expected to take a more pragmatic approach.

The multijurisdictional and multi-purpose nature of ChatGPT is likely to mean that global regulation applying to its technologies is complex and fragmented, not least because use of ChatGPT may take many forms. For example, companies may use the available APIs to integrate the technology into their offerings. Or they may simply enable access to ChatGPT by employees through company networks. There may also be more sophisticated use cases, for example, the development of versions that, in addition to publicly available sources, operate from non-public sources and databases. As a result, these technologies may have to address and comply with piecemeal, particularized requirements of data protection regulators around the world.

For the moment, regulators appear focused primarily on ChatGPT and related technologies. While some companies have already indicated their intent to commercialize services and technologies that integrate ChatGPT (e.g., Microsoft’s Security Copilot), we are aware of only ChatGPT (the core technology, not its derivative uses) as having been mentioned or identified by regulators for review.

Many companies across the world are rapidly exploring potential use cases for artificial intelligence. Certainly, for the right company and the right use case, ChatGPT and similar technologies may represent a significant business opportunity. General counsel of companies exploring these issues should be a key part of the business discussion of how ChatGPT will be used. As part of their review, legal teams will want to take careful account of data privacy obligations but also be alert to sensitive areas, which may include the use of the technology for employee screening or hiring decisions.

These issues are hot topics for boards too. In terms of privacy compliance, companies should proactively update their privacy policies, perform impact assessments, update their records of processing activities, and take other required measures to meet generally applicable privacy requirements. If ChatGPT is prohibited or restricted in particular countries, companies will need to be prepared to comply and to adjust policies and operations, possibly on short notice. Accordingly, companies need to be agile and prepared to act quickly. For the moment, however, companies may likely proceed with many use cases while remaining alert to evolving legal requirements.


People see ChatGPT from different perspectives. Some view it as a corporate game changer, presenting new business opportunities. Others, however, worry about ChatGPT’s potential and have concerns that it could evolve at a pace that outstrips the ability of privacy and other regulators to control its use and development. For now, businesses can likely continue to explore the technology, while keeping an eye on legal developments.