How to Assess an AI Tool for GDPR Compliance: A Step-by-Step Guide for DPOs
The same AI tool can be perfectly fine in one deployment and a regulatory nightmare in another. Here is how to tell the difference before your organization finds out the hard way.
There is a meeting that happens in organizations everywhere, roughly every two weeks now. Someone from marketing, or engineering, or HR, or sometimes the CEO's office, shows up with a new AI tool they want to use. They are excited. The tool is impressive. They have already signed up for the free trial.
And somewhere in the room, a DPO takes a quiet breath.
Because what that person with the free trial does not know is that "we want to use this AI tool" is not a complete sentence from a compliance perspective. The complete sentence is: "We want to use this AI tool, for this specific purpose, processing this specific data, in this specific context." That second sentence changes everything.
This guide is about how to assess AI tools for GDPR compliance properly. Not as a checkbox exercise, but as a genuine risk evaluation that protects your organization and, more importantly, the people whose data you are responsible for.
Why "Does This Tool Have a Privacy Policy?" Is the Wrong Question
Most organizations, when they assess an AI tool for GDPR compliance, start by looking at the vendor's privacy policy and data processing addendum. They check that a DPA exists. They confirm the vendor has standard contractual clauses (SCCs) or Data Privacy Framework (DPF) certification for transfers. They tick the boxes and call it done.
This approach has a fundamental flaw. It treats the tool as if it exists in a vacuum.
It does not.
The GDPR's obligations on your organization do not flow from the tool. They flow from what your organization does with the tool: the nature of the processing, the categories of data involved, the purposes for which data is used, and the potential impact on the individuals whose data passes through it.
Consider Microsoft Copilot. If your legal team uses it to draft internal memos about contract renewal dates, the GDPR risk profile is relatively modest. If your HR team uses it to summarize performance reviews and make recommendations about promotions, you are now in the territory of Article 22 automated decision-making, potential special category data under Article 9, and employment law obligations that vary by jurisdiction. Same tool. Radically different assessment.
This is the first and most important principle of AI tool GDPR assessment: the use case is the assessment.
Step 1: Define the Use Case Before You Touch the Vendor Documentation
Before you open the vendor's privacy policy, before you request a DPA, before you ask a single question about data residency, write down exactly what your organization intends to do with this tool.
Be specific. "Using AI for HR" is not a use case. "Using an AI writing assistant to draft job descriptions reviewed by a human recruiter" is a use case. "Using an AI tool to screen CVs and rank candidates for shortlisting" is a different use case entirely. That second scenario triggers Article 22, potential algorithmic bias liability under employment discrimination law, and, if you are processing the data of individuals in the EU, a DPIA requirement under Article 35.
Ask the requesting team these questions and get written answers:
- What specific task will the tool perform?
- What personal data will be input into the tool, directly or indirectly?
- Will the tool's outputs influence decisions about individuals?
- Who will use it, and what data will they bring to their prompts?
- Will any data be retained by the vendor or used to train their models?
That last question is the one that currently generates the most unpleasant surprises.
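If your intake process lives in tooling rather than a shared document, those written answers can become a structured record. Here is a minimal Python sketch; the field names are illustrative, not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass
class UseCaseIntake:
    """Written answers to the Step 1 intake questions, one record per request."""
    tool_name: str
    requesting_team: str
    specific_task: str                      # what exactly will the tool perform?
    personal_data_inputs: list[str]         # data entering the tool, directly or indirectly
    outputs_influence_decisions: bool       # do outputs feed decisions about individuals?
    users_and_prompt_data: str              # who uses it, and what goes into prompts
    vendor_retains_or_trains: bool | None = None  # None = no written vendor answer yet

    def ready_for_assessment(self) -> bool:
        # Do not start the assessment until the retention/training
        # question has a written answer.
        return self.vendor_retains_or_trains is not None
```

A record that cannot answer the training question is a record that is not ready for Step 2.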
Step 2: Classify the Personal Data Involved
GDPR distinguishes between ordinary personal data and special category data under Article 9. The difference matters enormously for your assessment.
Ordinary personal data such as names, email addresses, job titles, and general behavioral data requires a lawful basis, a DPA with the vendor, and appropriate security measures. Significant but manageable.
Special category data including health information, racial or ethnic origin, political opinions, religious beliefs, trade union membership, biometric data, genetic data, and data concerning sex life or sexual orientation requires not just a lawful basis but an explicit derogation under Article 9(2). It requires heightened security measures. It almost certainly requires a DPIA.
Here is where AI tools get complicated. Special category data does not have to be explicitly input to be implicitly present.
If your customer service team uses an AI tool to analyze support tickets, and some of those tickets mention a customer's disability or medical condition, special category data is now flowing through your AI tool. If your security team uses an AI tool to analyze endpoint telemetry from systems in an oncology department, the telemetry may carry signals adjacent to special category health data. If your HR AI tool analyzes writing patterns in employee communications, it may be inferring psychological characteristics that constitute health data.
This is not hypothetical. The EDPB has been clear that inferred data, meaning data derived or generated from other data, must be treated as special category data when the inference reveals special category information, even where the source data was ordinary.
Map the data flows before you map the vendor requirements.
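One way to make that mapping concrete is to classify each flow against the Article 9(1) categories, counting what the tool can infer alongside what is explicitly input. A minimal sketch, assuming you maintain your own category taxonomy:

```python
# Article 9(1) special categories. Inferred data that reveals any of
# these is treated the same as explicitly provided data.
SPECIAL_CATEGORIES = {
    "health", "racial_or_ethnic_origin", "political_opinions",
    "religious_or_philosophical_beliefs", "trade_union_membership",
    "biometric_for_identification", "genetic", "sex_life_or_orientation",
}

def classify_flow(explicit_inputs: set[str], inferable: set[str]) -> str:
    """Classify one data flow as 'special_category' or 'ordinary'.

    `inferable` covers what the tool may derive, not just what is typed
    in: health conditions mentioned in support tickets, psychological
    traits inferred from writing patterns, and so on.
    """
    if (explicit_inputs | inferable) & SPECIAL_CATEGORIES:
        return "special_category"
    return "ordinary"
```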
Step 3: Determine the GDPR Roles
Who is the data controller? Who is the data processor? Are there joint controllers?
In most standard AI tool deployments, your organization is the controller and the vendor is the processor. Article 28 requires a binding data processing agreement covering the scope of processing, security obligations, sub-processor controls, audit rights, and data return or deletion terms.
The role analysis is not always straightforward with AI tools. Some vendors use customer data to train or improve their models, which can constitute independent processing for the vendor's own purposes. That makes them a controller or joint controller for that processing activity. This changes your legal obligations and your negotiating position.
Ask the vendor directly: does any customer data, including prompts, inputs, outputs, or usage metadata, feed into model training, fine-tuning, or model improvement? Get the answer in writing. If the answer is yes, or "yes unless you opt out," understand what opting out actually prevents and get contractual commitments around it.
If the vendor cannot clearly answer this question, treat that as a finding.
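The role logic itself is simple enough to write down. A rough sketch of the analysis for the standard deployment described above, with illustrative flag names:

```python
def vendor_roles(trains_on_customer_data: bool, opt_out_honoured: bool) -> set[str]:
    """Rough role analysis where your organization is the controller.

    A vendor that feeds customer data into its own model improvement is
    processing for its own purposes, which makes it a controller for
    that activity, on top of its Article 28 processor role for the
    service itself.
    """
    roles = {"processor"}  # always: a binding Article 28 DPA is required
    if trains_on_customer_data and not opt_out_honoured:
        roles.add("controller_for_model_training")
    return roles
```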
Step 4: Apply the Transfer Mechanism Checklist
If your vendor processes data outside the EU/EEA, you need a valid transfer mechanism under GDPR Chapter V. The available options are:
- Adequacy decision. The European Commission has determined the destination country provides adequate protection. Current examples include the post-Brexit UK, Japan, Canada for commercial organizations, and the US for organizations certified under the EU-US Data Privacy Framework.
- Standard Contractual Clauses. The 2021 SCCs from the European Commission, correctly configured for the relevant transfer scenario such as controller-to-processor or controller-to-controller.
- Binding Corporate Rules. For intra-group transfers within multinationals.
- Derogations under Article 49. Narrow exceptions for specific situations, not a general-purpose fallback.
The EU-US Data Privacy Framework matters here. As of this writing it is in effect, but its predecessors did not survive judicial scrutiny: Schrems I invalidated Safe Harbor and Schrems II invalidated Privacy Shield. Prudent organizations do not rely on the DPF as their sole safeguard. They use DPF-certified vendors, maintain SCCs as a backstop, and conduct Transfer Impact Assessments for high-risk processing.
For AI tools specifically, the transfer question has a wrinkle. Where does the AI inference actually happen? A vendor may be headquartered in the EU but run inference on US-based cloud infrastructure. Where the processing occurs, not where the company is registered, determines transfer obligations.
Ask the vendor: where geographically does inference occur? Where are prompts and outputs stored, and for how long? Which cloud provider sub-processors are involved, and where are those data centers?
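If your records of processing are tooled, the Chapter V check reduces to a small predicate. A sketch with an intentionally incomplete adequacy list, which you would replace with the Commission's current one:

```python
# Illustrative subset only: always check the Commission's current
# adequacy list rather than hard-coding one.
ADEQUACY_DESTINATIONS = {"UK", "Japan", "Canada (commercial orgs)", "US (DPF-certified)"}

CHAPTER_V_SAFEGUARDS = {"SCCs_2021", "BCRs", "Article_49_derogation"}

def transfer_covered(destination: str, safeguards: set[str]) -> bool:
    """True if at least one valid Chapter V mechanism covers the transfer.

    For US transfers, prudent practice is DPF plus SCCs as a backstop,
    given what happened to Safe Harbor and Privacy Shield.
    """
    return destination in ADEQUACY_DESTINATIONS or bool(safeguards & CHAPTER_V_SAFEGUARDS)
```

Remember that the relevant destination is where inference and storage actually happen, including sub-processor data centers, not the vendor's registered address.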
Step 5: Conduct the DPIA Threshold Assessment
Article 35 requires a Data Protection Impact Assessment before commencing processing that is "likely to result in a high risk to the rights and freedoms of natural persons." The Article 29 Working Party, whose guidance the EDPB has endorsed, identified nine criteria that indicate high risk. If two or more apply, a DPIA is required.
Run through this list for your AI tool deployment:
| Criterion | Example in AI context |
|---|---|
| Evaluation or scoring | AI tool scoring job candidates, assessing creditworthiness, or flagging behavioral patterns |
| Automated decision-making with legal or significant effects | AI recommendations that feed into hiring, termination, lending, or access decisions |
| Systematic monitoring | AI tools analyzing employee productivity, communications, or location data at scale |
| Sensitive data or data of a highly personal nature | Health data, HR data, financial data flowing through the AI system |
| Data processed at large scale | Enterprise AI deployments processing data of thousands of employees or customers |
| Matching or combining datasets | AI tool that integrates with CRM, HR system, and communications platform simultaneously |
| Vulnerable data subjects | AI tools used in education, healthcare, or services for children |
| Innovative technology | Generative AI and large language models. The EDPB has specifically flagged novel AI as a DPIA trigger. |
| Processing that prevents data subjects from exercising rights | AI processing that makes it difficult to identify who made what decision and why |
For most enterprise AI deployments in 2026, you will hit at least two criteria. Conduct the DPIA. Document it. If the residual risk after mitigation measures remains high, consult your supervisory authority before proceeding.
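The threshold rule itself is mechanical: count the criteria that apply and compare against two. A minimal sketch using the nine criteria from the table above, with illustrative identifiers:

```python
# The nine WP29/EDPB high-risk criteria from the table above.
DPIA_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decisions_with_significant_effects",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "prevents_exercise_of_rights",
}

def dpia_required(criteria_met: set[str]) -> bool:
    """WP29 rule of thumb: two or more criteria met means a DPIA is required."""
    unknown = criteria_met - DPIA_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {sorted(unknown)}")
    return len(criteria_met) >= 2

# Example: generative AI screening CVs at enterprise scale hits three.
assert dpia_required({
    "evaluation_or_scoring",
    "innovative_technology",
    "large_scale_processing",
})
```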
Step 6: Assess the EU AI Act Classification
If you are deploying AI tools that affect individuals in the EU, the EU AI Act is now in force and increasingly relevant to your assessment process. As of August 2025, obligations for general-purpose AI (GPAI) models apply. High-risk AI system requirements are phasing in through 2026 and 2027.
As a deployer, which is what your organization is when you implement a third-party AI tool, your obligations depend on how the tool is classified in your specific use case.
Prohibited practices under Article 5. No compliant organization should be using AI tools that employ subliminal manipulation, exploit vulnerabilities, perform social scoring, predict individual criminal risk based solely on profiling, conduct untargeted scraping of facial images, recognize emotions in workplace or educational settings, or categorize individuals based on biometric data to infer sensitive characteristics. If the tool does any of these things, the conversation ends there.
High-risk systems under Article 6 and Annex III. If the AI tool falls into categories including employment and worker management, education and vocational training, access to essential services, law enforcement, or critical infrastructure management, significantly stricter obligations apply to both the provider and the deployer. As a deployer you must implement human oversight measures, ensure staff have AI literacy training under Article 4, and maintain logs of system operation.
Limited-risk systems under Article 50. Most enterprise AI tools fall here. Transparency obligations apply. Users must be informed they are interacting with AI, and AI-generated content must be labeled as such.
Minimal risk. Basic tools with no significant interaction with personal data or decision-making. Minimal obligations.
The classification is use-case dependent. A customer service chatbot that answers FAQs is limited-risk. The same chatbot configured to make automated decisions about refunds or account access may tip into high-risk territory.
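Because the tier follows the use case, the classification can be expressed as an ordered check, most severe first. A sketch with illustrative flag names, not a substitute for legal analysis:

```python
def ai_act_tier(use_case: dict) -> str:
    """Rough EU AI Act tier for one deployment, checked most severe first.

    The tier is a property of the use case, not the product: the same
    chatbot is limited-risk answering FAQs and may be high-risk when it
    decides refunds or account access.
    """
    if use_case.get("article_5_prohibited_practice"):
        return "prohibited"        # the conversation ends here
    if use_case.get("annex_iii_area"):  # e.g. employment, education, essential services
        return "high_risk"
    if use_case.get("interacts_with_individuals_or_generates_content"):
        return "limited_risk"      # Article 50 transparency obligations
    return "minimal_risk"
```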
Step 7: Document the Designation Decision
Every AI tool assessment should end with a documented decision. Approved, Approved with Conditions, or Denied.
Approved means the tool can be used as described, the regulatory requirements have been met or are in the process of being met, and the residual risk is acceptable. Document why. Set a review date because AI tools change their terms, features, and data practices. An approval today is not an approval forever.
Approved with Conditions means the tool can be used, but only within defined parameters. This is not a soft denial or a pending approval. It is a governance decision. The conditions are the governance. Be specific: "Approved for use by the legal team for internal document drafting only. Prohibited for processing personal data of customers or employees. Prohibited for any use case involving special category data. No personally identifiable information to be included in prompts."
If you are vague about the conditions, you have not made a governance decision. You have issued a disclaimer.
Denied means the tool does not meet the organization's requirements given the intended use case, and the risks cannot be mitigated to an acceptable level. Document the reasons. If the vendor addresses the gaps and comes back for reassessment, you have version control. You can show that the previous denial existed, what the issues were, and what changed.
The documentation of the designation decision is your audit defense. When a supervisory authority investigates a data breach involving an AI tool, the first question is going to be: what due diligence did you conduct before deploying this tool? A documented, reasoned designation is your answer.
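If designations are recorded in tooling, the decision structure is small. A minimal sketch; the fields mirror the requirements above and the names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignationDecision:
    """The documented outcome of one assessment; this is the audit record."""
    tool_name: str
    use_case: str
    decision: str                       # "approved" | "approved_with_conditions" | "denied"
    rationale: str                      # required for every outcome
    conditions: list[str] = field(default_factory=list)
    review_date: date | None = None     # approvals expire; denials do not need one

    def is_governance_decision(self) -> bool:
        # Vague or missing conditions turn an approval into a disclaimer.
        if self.decision == "approved_with_conditions" and not self.conditions:
            return False
        if self.decision != "denied" and self.review_date is None:
            return False
        return bool(self.rationale)
```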
The Use Case Is Everything
Let us come back to where we started.
The same AI tool, genuinely the exact same product from the exact same vendor, can be:
- A low-risk productivity enhancement when used by a communications team to draft internal announcements
- A moderate-risk processor requiring a full Article 28 DPA and SCCs when used to process customer communications
- A high-risk system triggering Article 22 safeguards, a DPIA, and potentially EU AI Act high-risk classification when used to screen job applicants or influence performance decisions
The vendor's compliance posture matters. The DPA matters. The transfer mechanisms matter. But none of that matters as much as understanding precisely what your organization is doing with the tool, with whose data, in what context, and for what decisions.
That analysis is the foundation of every compliant AI deployment. It determines which GDPR obligations apply, whether a DPIA is required, where the EU AI Act kicks in, and what conditions should govern the tool's use.
It is also, frankly, the analysis that most organizations skip. Because it requires the requesting team to be specific about what they actually want to do, and specificity has a way of revealing risks that enthusiasm tends to paper over.
The free trial is running. The meeting is waiting.
Do the use case analysis first.
This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
Related reading
- The AI Tool Assessment Checklist Every Privacy Team Needs in 2026
Print this out. Put it in your intake process. Stop discovering compliance problems after the tool is already in production.
- EU AI Act for Deployers: What Privacy Teams Need to Know Before August 2026
You did not build the AI tool. You did not train the model. You are still responsible for what happens when your organization uses it.
- Approved With Conditions: The AI Governance Decision Most Privacy Teams Are Getting Wrong
It is the most common designation in any mature AI governance program. It is also the one that creates the most compliance exposure when it is done poorly. Here is what getting it right actually looks like.