
ChatGPT and GDPR: What Your Organization Actually Needs to Have in Place

Everyone is using it. Most organizations have not done the work to use it legally. Here is what actually needs to happen before your next prompt.

Let us start with the uncomfortable truth.

If you work at a large organization operating in Europe or processing data of EU residents, there is a reasonable chance that employees across your business are using ChatGPT right now. Some of them are pasting customer data into it. Some of them are summarizing contracts that contain personal information. Some of them are using it to draft responses to employee complaints. Almost none of them have thought about whether any of this is permitted under GDPR.

This is not a criticism of those employees. ChatGPT is genuinely useful, it is easy to access, and nobody told them not to use it. The gap is not malicious. It is a governance gap, and closing it is the privacy team's job.

This post is about how to close it properly. Not by banning ChatGPT, which rarely works and tends to push usage underground, but by assessing it correctly, deploying it lawfully, and governing what your people actually do with it.


The Version You Use Changes Everything

Before we get into the compliance requirements, there is a foundational point that many organizations miss entirely. "ChatGPT" is not one product from a compliance perspective. It is several products with materially different data handling characteristics, and the version your organization uses determines almost everything about your GDPR obligations.

ChatGPT free tier and ChatGPT Plus (consumer products): OpenAI uses conversation data to train and improve its models by default. Users can opt out, but unless they do, their conversations are used. For any organizational use involving personal data, this is a serious problem. You are sharing personal data with a provider that may use it for its own purposes, which means OpenAI is no longer acting purely as your processor and any DPA you execute may not cover that processing.

ChatGPT Team: Conversations are not used for training by default. This is an improvement, but it is still a consumer-grade product that was not designed for enterprise data governance.

ChatGPT Enterprise: Conversations are not used for training. Data is encrypted. You get admin controls, usage visibility, and a data processing agreement. This is the version that can realistically support a GDPR-compliant deployment, with appropriate safeguards.

OpenAI API via Azure OpenAI Service or directly: Data processing terms can be negotiated contractually. Microsoft's Azure OpenAI Service provides additional enterprise controls, data residency options, and a well-established DPA framework. For organizations with strict data residency requirements, this route provides more flexibility than the consumer products.
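
For teams evaluating the API route, here is a minimal sketch of what a call through Azure OpenAI looks like using the official openai Python SDK. The endpoint, key variable, and deployment name are placeholders, not real values, and the contractual and residency guarantees come from how the Azure resource is provisioned, not from the code itself.

import os

from openai import AzureOpenAI  # official OpenAI Python SDK, v1 or later

# The Azure OpenAI resource is provisioned in a specific region, which
# is how data residency is controlled; the endpoint and key below are
# placeholders for that resource.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://your-resource.openai.azure.com",
)

response = client.chat.completions.create(
    # In Azure, "model" is the name of your deployment, not a raw model ID.
    model="your-gpt-4o-deployment",
    messages=[{"role": "user", "content": "Draft a short internal FAQ about our travel policy."}],
)
print(response.choices[0].message.content)

Note that the example prompt contains no personal data, which is exactly the discipline the rest of this post is about.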

If your organization has not made a deliberate decision about which version employees are authorized to use, and communicated that decision clearly, then some portion of your workforce is almost certainly using a version that does not meet GDPR requirements for processing personal data.


What Counts as Personal Data in a ChatGPT Prompt

This is where privacy professionals have to push back against a very common misconception. The misconception is that ChatGPT is fine as long as people do not put obviously sensitive information into it. Names and email addresses, obviously bad. General questions, obviously fine.

The reality is more complicated.

Personal data under GDPR Article 4 is any information relating to an identified or identifiable natural person. The test is not whether the data is obviously sensitive. The test is whether the data, alone or in combination with other information, could be used to identify a specific individual.

Consider what actually gets pasted into ChatGPT in a typical organization:

A lawyer summarizing a contract dispute pastes in the factual background, which includes the names and roles of the individuals involved. Personal data.

An HR manager asks ChatGPT to help draft a performance improvement plan and includes context about why the employee is underperforming. Personal data, and potentially special category data if the context involves health or disability.

A customer service manager pastes in three support ticket examples to ask ChatGPT to identify patterns in customer complaints. Personal data of three customers, possibly including account details, contact information, and descriptions of their problems.

A financial analyst summarizes an earnings call transcript to ask ChatGPT for key takeaways. Lower risk, but not automatically out of scope: a transcript typically attributes statements to named executives, and information relating to identified individuals is personal data even when the sensitivity is low.

A recruiter pastes a CV into ChatGPT and asks it to identify the candidate's strongest qualifications. Personal data of a job applicant.

None of these people thought they were doing anything wrong. The issue is not intent. The issue is that GDPR obligations apply whenever personal data is processed, regardless of the purpose or the perceived sensitivity.
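
One practical safeguard is to screen prompts for direct identifiers before they leave your environment. Below is a minimal illustrative sketch in Python; the pattern set and function name are hypothetical, and production deployments typically use a dedicated PII detection service rather than two regexes. Note its limits: as the examples above show, identifiability is often contextual, which no pattern list will catch.

import re

# Illustrative patterns only. Regex catches direct identifiers such as
# email addresses and phone numbers; it cannot catch contextual
# identifiability ("the only engineer on the Berlin team"), which is
# the harder GDPR problem described above.
DIRECT_IDENTIFIER_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_direct_identifiers(prompt: str) -> list[str]:
    """Return the names of patterns found in the prompt. An empty list
    means no direct identifiers matched, not that the prompt is free
    of personal data."""
    return [name for name, rx in DIRECT_IDENTIFIER_PATTERNS.items() if rx.search(prompt)]

print(flag_direct_identifiers("Summarize the complaint from jane.doe@example.com"))
# -> ['email']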


The Four Compliance Questions for Any ChatGPT Deployment

Once you have confirmed which version of ChatGPT is in scope and identified the categories of personal data likely to flow through it, four compliance questions need answers.

Question 1: What is the lawful basis for processing?

GDPR Article 6 requires a lawful basis for every processing activity. For most organizational uses of ChatGPT, the relevant bases are legitimate interests under Article 6(1)(f) or, in some cases, performance of a contract under Article 6(1)(b).

Legitimate interests requires a three-part test: the interest must be legitimate, the processing must be necessary for that interest, and the interests and rights of the data subject must not override it given the context. For internal productivity uses where prompts contain no personal data at all, no lawful basis is needed, because GDPR does not apply. For low-risk uses involving incidental personal data, legitimate interests is generally available. For uses involving customer data, employee data, or special category data, the analysis becomes more complex.

Whatever basis you rely on, document it. The accountability principle under Article 5(2) requires you to be able to demonstrate compliance, which means the lawful basis analysis needs to be recorded somewhere.

Question 2: Is a Data Processing Agreement in place?

If you are using ChatGPT Enterprise or the OpenAI API, OpenAI is acting as a data processor on your behalf and Article 28 requires a binding DPA. OpenAI provides standard DPA terms. Review them. Confirm they cover the specific processing activities you intend to conduct. Confirm the sub-processor list and understand where processing occurs geographically.

If employees are using the free consumer version or ChatGPT Plus for work involving personal data, there is no compliant DPA framework for that processing. This is one of the primary reasons why a sanctioned enterprise deployment with proper controls is preferable to an ungoverned shadow AI situation.

Question 3: What are the transfer implications?

OpenAI is a US company. ChatGPT processing occurs on infrastructure that includes US-based data centers. For EU personal data, this means you need a valid transfer mechanism under GDPR Chapter V.

OpenAI's DPA includes Standard Contractual Clauses. For the Azure OpenAI Service route, Microsoft's comprehensive DPA and transfer framework applies. Verify which SCCs are in use, that the correct module is selected for your transfer scenario, and whether your use case requires a Transfer Impact Assessment given the volume and sensitivity of data involved.

For high-risk processing, a TIA assessing US government access risks under Section 702 of FISA and other surveillance frameworks is advisable. The Schrems II decision established that SCCs alone are not sufficient if the specific transfer presents risks that the clauses cannot address, in which case supplementary measures are required. The TIA is how you demonstrate you have done that analysis.

Question 4: Is a DPIA required?

Run the Article 35 threshold assessment against your specific ChatGPT use case. For most enterprise deployments, you will hit the "innovative technology" trigger at minimum, given that generative AI squarely fits the "innovative use or application of new technological solutions" criterion in the EDPB-endorsed DPIA guidelines. If your use case also involves employee data, customer data at scale, or any form of automated decision-making, additional triggers will apply.

Conduct the DPIA before the deployment goes live. Document it. Review it when the use case materially changes.


The Use Case Determines the Risk Level

Here is where the ChatGPT assessment gets genuinely interesting from a compliance perspective. Because ChatGPT is a general-purpose tool, it can be used for an enormous range of activities. And those activities carry dramatically different risk profiles.

Low-risk use cases that can generally be supported with a properly executed enterprise deployment:

Using ChatGPT to draft internal communications that do not reference specific individuals. Using it to summarize publicly available information. Using it for coding assistance that does not involve personal data. Using it to generate ideas, outlines, or first drafts that are reviewed and edited by humans before any personal data is added.

Moderate-risk use cases that require careful scoping, clear conditions, and documented lawful basis:

Using ChatGPT to assist with customer communications where prompts may reference customer accounts or situations. Using it to summarize meeting notes that contain the names and views of employees. Using it to analyze business documents that contain personal information of counterparties or third parties.

High-risk use cases that require the most rigorous assessment, and in some cases may not be appropriate regardless of controls:

Using ChatGPT to assist with HR decisions including performance management, disciplinary processes, or recruitment screening. Using it to process health data, financial data, or other special category information. Using it in any workflow where its outputs directly influence decisions about identified individuals without meaningful human review. Using it with data of children or other vulnerable populations.

The same ChatGPT Enterprise deployment can be a lawful, low-risk productivity tool and simultaneously a compliance exposure depending entirely on what your people are doing with it. This is why governance of the use case, not just governance of the tool, is the work.


What an Approved with Conditions Designation Looks Like for ChatGPT

For most organizations, the right outcome of a ChatGPT assessment is not blanket approval and not blanket denial. It is an Approved with Conditions designation that defines precisely what the tool may and may not be used for.

A well-constructed set of conditions for a ChatGPT Enterprise deployment might look like this:

Permitted use cases: Drafting and editing internal documents that do not contain personal data of identifiable individuals. Coding assistance and code review for applications not processing personal data. Research synthesis of publicly available information. Generating templates, frameworks, and first drafts for human review and completion.

Prohibited use cases: Processing personal data of customers, clients, or members of the public in any prompt or attached document. Processing special category data under GDPR Article 9, including health information and HR data revealing protected characteristics. Processing financial data subject to separate regulatory obligations. Any use case where ChatGPT output directly influences a decision about an identified individual without documented human review. Use by employees who have not completed the required AI literacy training.

Data restrictions: No personally identifiable information in prompts. No document upload containing personal data. No pasting of customer communications, HR records, legal correspondence involving named individuals, or financial data.

User restrictions: Authorized users only. Completion of AI literacy training required before access is granted. Usage subject to an acceptable use policy, with documented consequences for violations.

Vague conditions are not conditions. "Use responsibly" is not a governance decision. The specificity of the conditions is what makes them enforceable, auditable, and defensible.
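
To make the point concrete, conditions written with this level of specificity can be expressed as machine-checkable policy rather than prose. A minimal sketch follows, assuming hypothetical use-case labels and a training flag; nothing here is a standard schema, but default-deny logic is what turns a policy document into something enforceable.

# A minimal sketch of conditions encoded as checkable, default-deny
# policy. The use-case labels and the training flag are hypothetical
# illustrations, not a standard schema.
POLICY = {
    "permitted_use_cases": {
        "internal_drafting_no_personal_data",
        "code_assistance_no_personal_data",
        "public_research_synthesis",
        "templates_and_first_drafts",
    },
    "training_required": True,
}

def authorize(use_case: str, user_completed_training: bool) -> tuple[bool, str]:
    """Allow only explicitly permitted use cases by trained users;
    everything else is denied by default, which is what makes the
    conditions auditable."""
    if POLICY["training_required"] and not user_completed_training:
        return False, "AI literacy training not completed"
    if use_case not in POLICY["permitted_use_cases"]:
        return False, f"use case '{use_case}' is not on the permitted list"
    return True, "permitted"

print(authorize("customer_personal_data", user_completed_training=True))
# -> (False, "use case 'customer_personal_data' is not on the permitted list")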


The Shadow AI Problem

No discussion of ChatGPT and GDPR is complete without addressing what happens when you do not have an authorized deployment.

If your organization has not provided employees with a sanctioned, compliant AI tool for productivity tasks, they will find one themselves. This is not a prediction. It is what has already happened in virtually every organization of any size in the world. The tools are free, they are powerful, and the productivity benefits are real.

The compliance problem with shadow AI is not just that unapproved tools are being used. It is that unapproved tools are being used with no DPA, no transfer mechanism, no DPIA, no usage controls, and no visibility for the privacy team. Every one of those gaps is a potential GDPR violation, and none of them are documented, which means none of them can be defended.

Providing employees with a properly governed ChatGPT Enterprise deployment, with clear conditions on permitted use cases, is not a compliance compromise. It is a compliance improvement over the shadow AI reality that exists in the absence of a sanctioned option.

The question is not whether your organization will use ChatGPT. The question is whether it will use it in a way you can defend.


What Regulators Are Watching

It is worth knowing that ChatGPT specifically has already attracted significant regulatory attention in Europe. The Italian data protection authority, the Garante, temporarily blocked ChatGPT in Italy in 2023 over GDPR concerns including the lawful basis for data collection, accuracy of outputs, and age verification. Several other European DPAs have investigated or issued guidance on ChatGPT deployments.

The EDPB established a dedicated task force on ChatGPT and published findings in 2024. The consistent message from European regulators has been that GDPR applies to ChatGPT deployments, that organizations are responsible for ensuring compliance when they direct employees to use or permit the use of the tool, and that the absence of a clear governance framework is itself a compliance failure.

The regulatory environment around generative AI tools is not softening. If anything, as EU AI Act enforcement ramps up through 2026 and 2027, the expectations around documented assessment and governance of AI tools will increase.

Do the assessment. Document the decision. Govern the use case.

This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
