
The AI Tool Assessment Checklist Every Privacy Team Needs in 2026

Print this out. Put it in your intake process. Stop discovering compliance problems after the tool is already in production.


There is a particular kind of organizational pain that privacy professionals know well. It is the pain of finding out that a tool your organization has been using for six months processes data in a way that requires a DPIA you never conducted, with a vendor who has no signed DPA, transferring data to infrastructure in a country with no adequacy decision.

Nobody meant for this to happen. The tool seemed fine. Someone checked that it had a privacy policy. The requesting team was enthusiastic and the project deadline was real. And now you are six months in, the tool is embedded in three different workflows, and you have a compliance gap the size of a regulatory investigation.

This checklist exists to prevent that conversation.

It is organized around the single most important insight in AI tool compliance: your obligations are not determined by the tool. They are determined by what you do with the tool, whose data you process through it, and where in the world those people are located. A checklist that ignores use case, data type, and geography is not a compliance checklist. It is a false sense of security with bullet points.

Work through every section before a new AI tool goes into production. Not after.


Section 1: Use Case Definition

This is the section most organizations skip. It is also the section that determines everything else on this list.

  • The specific task the tool will perform is documented in writing
  • The business team requesting the tool has confirmed the use case in writing, not just verbally
  • You have distinguished between what the tool can do and what your organization will do with it
  • The use case does not involve automated decision-making with legal or similarly significant effects on individuals without human review (if it does, flag for Article 22 analysis)
  • The use case does not involve employee monitoring, performance scoring, or behavioral analysis at scale (if it does, flag for heightened DPIA and employment law review)
  • The use case does not involve processing data of children or other vulnerable populations (if it does, flag for COPPA, FERPA, or equivalent protections depending on jurisdiction)
  • The use case has been assessed against EU AI Act Article 5 prohibited practices and confirmed to involve none of them
  • The EU AI Act risk classification for this specific use case has been determined: prohibited, high-risk, limited-risk, or minimal-risk

Why use case comes first: The same AI tool can be minimal-risk in one deployment and high-risk in another. Microsoft Copilot summarizing internal meeting notes is a different compliance situation than Microsoft Copilot generating performance assessments that influence promotion decisions. Define the use case before you assess anything else.


Section 2: Personal Data Inventory

  • All categories of personal data that will flow through the tool have been identified
  • You have considered not just data that is explicitly input, but data that may be present in documents, transcripts, tickets, or other content processed by the tool
  • You have assessed whether any special category data under GDPR Article 9 is or could be present: health data, racial or ethnic origin, political opinions, religious beliefs, trade union membership, biometric data, genetic data, or data concerning a person's sex life or sexual orientation
  • You have assessed whether the tool may infer special category data from ordinary inputs, for example inferring health status from behavioral patterns or communication style
  • The volume of personal data involved has been estimated, including number of data subjects affected
  • Data subjects have been categorized: employees, customers, patients, students, members of the public, or other
  • You have confirmed whether any data subjects are located in jurisdictions with specific AI or data protection requirements beyond your primary regulatory framework

The inference problem: AI tools do not just process the data you give them. They derive new data from it. An HR tool that analyzes communication patterns may infer mental health indicators. A customer service tool that processes support tickets may encounter medical information. A security tool analyzing endpoint telemetry on hospital systems may process data adjacent to patient records. Inferred special category data carries the same obligations as explicitly collected special category data.


Section 3: Geographic Footprint

This section is where most organizations underestimate their compliance obligations. "We are a US company using a US vendor" does not end the analysis if your employees, customers, or users include people in other jurisdictions.

  • You have identified all jurisdictions where data subjects whose data will flow through the tool are located
  • For EU/EEA data subjects: GDPR obligations have been assessed
  • For UK data subjects: UK GDPR obligations have been assessed separately from EU GDPR
  • For California residents: CCPA/CPRA service provider obligations have been assessed
  • For other US state residents: applicable state privacy laws have been checked (Virginia CDPA, Colorado CPA, Connecticut CTDPA, Texas TDPSA, and others depending on your footprint)
  • For Canadian residents: PIPEDA or Quebec Law 25 obligations have been assessed depending on province
  • For Turkish data subjects: KVKK compliance has been assessed including data processing agreement requirements
  • For Swiss data subjects: Swiss revFADP compliance has been assessed
  • For Brazilian data subjects: LGPD obligations have been assessed
  • For healthcare data in the US: HIPAA applicability has been determined regardless of where the vendor is located
  • For financial data in the US: GLBA Safeguards Rule applicability has been determined
  • Sector-specific obligations in relevant jurisdictions have been identified beyond general data protection law

Why geography changes the assessment: A tool approved for use with US employee data may require entirely different contractual protections if the same deployment will process data of EU employees. Organizations with global workforces or customer bases cannot assess an AI tool as if it will only touch data in one jurisdiction. The geographic footprint of your data subjects determines the regulatory footprint of your deployment.
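The fan-out described above can be sketched as a simple lookup. This is an illustrative fragment, not a complete or authoritative mapping: the jurisdiction codes and framework names are examples chosen for this sketch, and a real intake process would cover every jurisdiction in your footprint.

```python
# Illustrative sketch: map data-subject locations to the regulatory
# frameworks a deployment must be assessed against. Codes and framework
# names are examples only, not a complete or official list.
FRAMEWORKS_BY_JURISDICTION = {
    "EU": ["GDPR"],
    "UK": ["UK GDPR"],
    "US-CA": ["CCPA/CPRA"],
    "US-VA": ["Virginia CDPA"],
    "CA-QC": ["Quebec Law 25"],
    "CH": ["revFADP"],
    "BR": ["LGPD"],
    "TR": ["KVKK"],
}

def applicable_frameworks(data_subject_locations):
    """Union of frameworks triggered by where data subjects are located."""
    frameworks = set()
    for loc in data_subject_locations:
        frameworks.update(FRAMEWORKS_BY_JURISDICTION.get(loc, []))
    return sorted(frameworks)

# A "US company, US vendor" deployment with EU employees still
# picks up GDPR obligations alongside CCPA/CPRA:
print(applicable_frameworks(["US-CA", "EU"]))  # → ['CCPA/CPRA', 'GDPR']
```

The point of the sketch is the union: obligations accumulate across every jurisdiction represented in your data subjects, rather than being set by the vendor's or your own headquarters.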


Section 4: Vendor and Contractual Controls

  • A signed Data Processing Agreement is in place with the vendor covering this specific deployment
  • The DPA explicitly covers AI-specific processing activities including prompt processing, output generation, and any model inference activities
  • The DPA addresses whether customer data including prompts, inputs, and outputs is used for model training or improvement, and if so under what conditions
  • The DPA includes appropriate sub-processor controls and a current sub-processor list
  • The DPA includes audit rights allowing your organization to verify compliance
  • The DPA specifies data return and deletion obligations at contract termination
  • For HIPAA-covered entities: a signed Business Associate Agreement is in place that explicitly covers AI feature processing
  • The BAA addresses whether AI-generated outputs containing potential PHI are retained and for how long
  • For EU/EEA data: valid transfer mechanisms under GDPR Chapter V have been confirmed if processing occurs outside the EU/EEA
  • Transfer mechanism type has been documented: adequacy decision, Standard Contractual Clauses, Binding Corporate Rules, or other
  • If relying on EU-US Data Privacy Framework: vendor DPF certification has been verified on the DPF list and a backup mechanism such as SCCs is in place
  • If SCCs are in use: correct module has been selected for the transfer scenario and a Transfer Impact Assessment has been conducted for high-risk processing
  • The vendor has confirmed where AI inference occurs geographically, not just where the company is headquartered
  • The vendor has confirmed data residency for stored prompts and outputs
  • The vendor's sub-processor list has been reviewed for processors in non-adequate countries

Section 5: Security Assessment

  • The vendor holds relevant security certifications appropriate to the sensitivity of data being processed, such as SOC 2 Type II, ISO 27001, or FedRAMP
  • You have confirmed whether the current SOC 2 or equivalent report explicitly covers AI features including the LLM inference pipeline
  • If AI features are not within the current audit scope, a bridge letter or timeline for inclusion has been requested
  • AI-specific security controls have been reviewed: prompt injection defenses, input validation, output filtering, and access controls on AI processing
  • Data encryption standards have been confirmed for data in transit and at rest throughout the AI processing pipeline
  • Access controls for the AI system have been reviewed including who can query the system and what data they can expose through prompts
  • Incident response procedures covering AI-specific breach scenarios have been confirmed with the vendor
  • For agentic AI tools that take autonomous actions: the scope of permissions granted to the tool has been minimized to only what is necessary for the defined use case

Section 6: DPIA Threshold Assessment

A DPIA is required under GDPR Article 35 when processing is likely to result in a high risk to the rights and freedoms of individuals. Check how many of the following apply to this deployment; if two or more do, a DPIA is mandatory.

  • The tool evaluates, scores, or ranks individuals
  • The tool supports or makes automated decisions that have legal or similarly significant effects on individuals
  • The tool monitors individuals systematically, including employee productivity monitoring, behavioral analysis, or location tracking
  • The tool processes special category data or data of a highly personal nature
  • The tool processes personal data at large scale, affecting thousands of employees or customers
  • The tool combines or matches datasets from multiple sources
  • The tool processes data of vulnerable individuals including children, patients, or individuals in dependent relationships
  • The tool uses novel or emerging technology, including generative AI and large language models
  • The tool processes data in ways that may prevent individuals from understanding or exercising their rights

If two or more boxes are checked:

  • A DPIA has been conducted and documented
  • The DPIA identifies the necessity and proportionality of the processing
  • The DPIA identifies and assesses risks to data subjects
  • The DPIA documents mitigation measures for identified risks
  • If residual risk remains high after mitigation: prior consultation with the supervisory authority has been completed or is scheduled
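The threshold rule above is mechanical enough to encode in an intake tool. A minimal sketch, assuming the checklist items are tracked as short tags (the tag names are shorthand invented for this example, not legal terms):

```python
# Illustrative sketch of the DPIA threshold rule: count which high-risk
# criteria apply to a deployment and require a DPIA at two or more.
# Tag names are shorthand for the checklist items, not legal terms.
DPIA_TRIGGERS = {
    "scores_or_ranks_individuals",
    "significant_automated_decisions",
    "systematic_monitoring",
    "special_category_data",
    "large_scale_processing",
    "dataset_matching",
    "vulnerable_data_subjects",
    "novel_technology",
    "rights_exercise_impeded",
}

def dpia_required(applicable: set) -> bool:
    """Mandatory DPIA when two or more listed triggers apply."""
    unknown = applicable - DPIA_TRIGGERS
    if unknown:
        raise ValueError(f"Unknown triggers: {unknown}")
    return len(applicable) >= 2

# A generative AI tool (novel technology) used for employee monitoring
# crosses the threshold on those two triggers alone:
print(dpia_required({"novel_technology", "systematic_monitoring"}))  # True
```

Note that generative AI tools start with one trigger already checked (novel technology), so any second trigger, such as monitoring or special category data, tips the deployment into mandatory DPIA territory.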

Section 7: EU AI Act Compliance

  • EU AI Act Article 5 prohibited practices analysis has been completed and no prohibited practices are present in this deployment
  • Risk classification has been determined based on the specific use case, not the tool in the abstract
  • For high-risk systems under Article 6 and Annex III: human oversight mechanisms have been implemented
  • For high-risk systems: logs of system operation are being maintained
  • For high-risk systems: a fundamental rights impact assessment has been conducted where required under Article 27
  • For limited-risk systems under Article 50: users are informed they are interacting with AI
  • For limited-risk systems: AI-generated content is labeled as such
  • AI literacy training under Article 4 has been provided or scheduled for all staff operating or relying on the AI system
  • For GPAI models used via API: provider-tier obligations have been assessed and deployer-tier obligations specific to your use case have been identified

Section 8: Designation Decision and Documentation

  • A formal designation decision has been recorded: Approved, Approved with Conditions, or Denied
  • For Approved: the rationale for approval is documented including which risks were assessed and how they were mitigated
  • For Approved with Conditions: the conditions are documented with specificity. Permitted use cases, prohibited use cases, data restrictions, and user restrictions are each addressed explicitly
  • For Approved with Conditions: the conditions have been communicated to the requesting team in writing
  • For Denied: the specific reasons for denial are documented so that they can be referenced if the vendor requests reassessment
  • A review date has been set for all approvals, because AI tool terms, features, and data practices change
  • The designation record is stored in a retrievable format that can be produced in response to a regulatory inquiry
  • Version history is maintained so that if an assessment is updated or a tool is reassessed, the original designation and its basis are preserved

A Note on What This Checklist Cannot Do

This checklist covers the core compliance dimensions for AI tool assessment across the major regulatory frameworks in effect as of 2026. It does not replace legal advice. It does not account for every sector-specific obligation in every jurisdiction. It does not cover every edge case in every framework.

What it does is give your privacy team a structured, repeatable process for assessing AI tools before they go into production. A process that starts with use case, accounts for geography, and ends with a documented decision that you can defend.

The organizations that are going to find themselves explaining AI tool deployments to supervisory authorities in the next few years are not the ones who had imperfect processes. They are the ones who had no process at all.

This is the process.

This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
