Approved With Conditions: The AI Governance Decision Most Privacy Teams Are Getting Wrong
It is the most common designation in any mature AI governance program. It is also the one that creates the most compliance exposure when it is done poorly. Here is what getting it right actually looks like.
Of the three possible outcomes of an AI tool assessment, Approved with Conditions is by far the most interesting.
Approved is straightforward. The tool meets the requirements for the intended use case. Document the decision, set a review date, move on.
Denied is also straightforward, if sometimes uncomfortable. The tool does not meet the requirements. Document why. The conversation with the requesting team is not fun, but the compliance position is clean.
Approved with Conditions is where things get complicated. It is the designation that acknowledges the tool has value, that the requesting team has a legitimate need, and that the compliance picture is not clean enough for an unqualified approval. It is the designation that says: you can use this, but not like that, and not for this, and not with those people, and not until this other thing happens.
Done well, Approved with Conditions is sophisticated governance. It reflects a nuanced understanding of how risk works, how use cases vary, and how an organization can deploy capable AI tools responsibly while managing real regulatory exposure.
Done poorly, it is a disclaimer with a green light attached. And a disclaimer with a green light attached is worse than either a clean approval or a denial, because it creates the appearance of governance without the substance of it.
The Fundamental Misunderstanding
Most organizations that issue Approved with Conditions designations think of them in one of two ways, both of which are wrong.
The first misunderstanding is that Approved with Conditions is a temporary status. The tool has some issues, those issues need to be resolved, and once they are resolved the designation will be upgraded to Approved. Under this mental model, the conditions are a to-do list. The approval is contingent on completing the list. Once the list is done, the conditions go away.
This is sometimes the right model, specifically when the conditions represent remediation requirements for identified compliance gaps. If a tool is approved on the condition that a business associate agreement (BAA) is executed within thirty days, the BAA execution is a genuine prerequisite for compliant use, and the status of that action item matters. But treating all conditions as temporary to-do items misses what conditions are actually for.
The second misunderstanding is that Approved with Conditions is a softened approval. The tool is basically fine, there are a few things to be aware of, but go ahead. The conditions are more like advisory notes than governance requirements. People can use the tool for pretty much anything as long as they keep the conditions in mind.
This misunderstanding is the more dangerous one, because it produces designations that look like governance and function like rubber stamps.
What Approved with Conditions Actually Means
Approved with Conditions is a scoped approval. It means the tool is approved for specific uses, in specific contexts, by specific people, subject to specific restrictions. The conditions are not a to-do list and they are not advisory notes. They are the definition of what is permitted.
The clearest way to understand this is through the lens of use case.
The same AI tool can be appropriate for some uses within your organization and inappropriate for others. A general-purpose AI writing assistant can be appropriate for drafting internal communications that contain no personal data and inappropriate for processing customer records. Approving the tool without conditions would create the impression that all uses are permitted. Denying the tool would prevent legitimate uses that carry no meaningful compliance risk. Approved with Conditions is the designation that allows the legitimate uses while clearly prohibiting the problematic ones.
The conditions are not limitations on an otherwise unrestricted approval. They are the definition of what the approval covers. Outside the conditions, the tool is not approved. Inside the conditions, it is.
This reframing has significant practical implications. It means that an employee who uses an AI tool in a way that violates the conditions is not bending a guideline. They are using a tool that has not been approved for that use. The compliance exposure that results from that use is not a grey area. It is a documented deviation from a governance decision.
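To make the reframing concrete, here is a minimal sketch of the logic, using a hypothetical designation record. The names and values are illustrative, not a reference to any particular compliance system. The point is the direction of the default: an approval with conditions is an allowlist, and anything outside it is unapproved.

```python
# Hypothetical designation record; field names and values are illustrative.
designation = {
    "tool": "general-purpose AI writing assistant",
    "permitted_uses": {
        "internal-comms-no-personal-data",
        "public-research-synthesis",
    },
}

def is_approved(use_case: str) -> bool:
    # Scoped approval: the approval is defined BY the conditions.
    # Anything outside the permitted scope is unapproved -- default deny.
    return use_case in designation["permitted_uses"]

print(is_approved("internal-comms-no-personal-data"))  # True
print(is_approved("customer-record-processing"))       # False: never approved
```

An employee who uses the tool for customer-record processing is not bending a guideline; in this model, there was simply never an approval covering that use.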
The Four Elements Every Approved with Conditions Designation Needs
A conditions designation that does its job contains four specific elements. The absence of any of them creates gaps that undermine the governance value of the designation.
Permitted use cases.
What is this tool actually approved to do? The answer should be specific enough that any authorized user reading it understands exactly what they can use the tool for. Not "internal productivity tasks" but "drafting and editing internal communications that do not contain personal data of identifiable individuals, coding assistance for applications not processing personal data, and research synthesis of publicly available information."
The level of specificity required depends on the risk profile of the tool. A lower-risk tool with a narrow intended use might need only a brief description of the permitted scope. A higher-risk tool with a broader potential range of applications needs a more detailed specification of exactly which applications are within scope.
Prohibited use cases.
What is this tool specifically not approved to do? This element is often omitted from conditions designations on the theory that anything not explicitly permitted is implicitly prohibited. In practice, people do not read governance documents that way. They read them looking for what they can do, and if what they want to do is not explicitly prohibited, they assume it is permitted.
Explicit prohibitions serve several purposes. They make the governance decision legible to the people it governs. They document that the privacy team considered and specifically rejected certain use cases. And they create a clear record of what was known to be out of scope at the time of designation, which matters if a future incident involves an out-of-scope use.
Data restrictions.
What data may and may not flow through this tool? This is frequently the most important element of the conditions for tools that have any meaningful contact with personal data.
Data restrictions should be specific about categories. Not "no sensitive data" but "no personally identifiable information in prompts, no uploads of documents containing personal data, no special category data under GDPR Article 9 including health information and HR data involving protected characteristics, no financial data subject to regulatory obligations."
Data restrictions should also be specific about the reason, at least internally. The reason matters because it determines whether an exception is appropriate and under what circumstances. A data restriction that exists because the vendor's BAA does not cover certain data types is different from a data restriction that exists because the tool processes data through infrastructure in a country without an adequacy decision. The first might be resolved by BAA amendment. The second requires a different analysis.
User restrictions.
Who is authorized to use this tool, and under what conditions? User restrictions can operate by role, by team, by function, or by completion of specific prerequisites. An AI tool approved for use by the legal team but not the HR team is a different governance position from an AI tool approved for any employee who has completed AI literacy training. Both are legitimate conditions. Neither is implied by the other.
User restrictions also serve a documentation function. They make explicit that the designation applies to a specific population of users and that the risk assessment was conducted with that population in mind. If the tool is subsequently used by people outside the designated user group, the governance record clearly shows that the assessment did not cover that use.
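Pulling the four elements together, a designation record can be captured as structured data. The sketch below is one possible shape, not a standard; every field name, category, and value is illustrative and would need to be adapted to an organization's own compliance system.

```python
from dataclasses import dataclass

@dataclass
class ConditionsDesignation:
    """One possible shape for an Approved with Conditions record.

    All field names and example values are illustrative.
    """
    tool_name: str
    permitted_use_cases: list[str]       # specific, not "internal productivity tasks"
    prohibited_use_cases: list[str]      # explicit, so the rejection is on record
    data_restrictions: dict[str, str]    # restriction -> documented reason
    authorized_users: dict[str, list[str]]  # e.g. roles, required prerequisites
    review_date: str                     # ISO date; conditions are not permanent

designation = ConditionsDesignation(
    tool_name="general-purpose AI writing assistant",
    permitted_use_cases=[
        "drafting internal communications containing no personal data",
        "coding assistance for applications not processing personal data",
        "research synthesis of publicly available information",
    ],
    prohibited_use_cases=[
        "processing customer records",
        "uploading documents containing personal data",
    ],
    data_restrictions={
        # Reasons below are invented examples of the kind of detail to record.
        "no special category data (GDPR Art. 9)": "vendor DPA does not cover it",
        "no PII in prompts": "vendor retention terms not yet verified",
    },
    authorized_users={
        "roles": ["legal"],
        "prerequisites": ["AI literacy training"],
    },
    review_date="2025-06-30",
)
```

One advantage of structuring the record this way is that every element becomes a required field: a designation missing its prohibited use cases or its review date fails to construct at all, which is exactly the gap-detection a prose-only designation cannot provide.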
The Conditions That Are Not Conditions
Having established what a proper set of conditions looks like, it is worth being equally explicit about what conditions are not.
Vague admonitions are not conditions.
"Users should exercise appropriate judgment." "Use responsibly." "Be mindful of data privacy." These phrases appear in AI governance documents with depressing frequency. They convey nothing actionable, they cannot be enforced, they cannot be audited, and they provide no meaningful protection to the organization or to the individuals whose data may flow through the tool. They are the compliance equivalent of saying "be careful." They make the person writing them feel like they have done something. They do not constitute governance.
General awareness requirements are not conditions.
"Users should be aware of the tool's limitations." "Employees are reminded that AI outputs may contain errors." These statements may be worth including in user guidance, but they are not conditions on the use of the tool. A condition creates a specific obligation or restriction. A reminder creates awareness. Awareness is not governance.
Pending items that have not been resolved are not conditions that make a tool safe to use.
If a tool is being approved on the condition that a data processing agreement (DPA) is executed, the DPA needs to be executed before the tool goes into production, not after. An Approved with Conditions designation that lists unresolved compliance gaps as conditions and then authorizes deployment in advance of resolution is not a governed approval. It is a deferred compliance failure with a positive designation attached.
The distinction between conditions that define the scope of a permission and conditions that are prerequisites for that permission to become effective needs to be explicit in the designation record. A tool can be designated as Approved with Conditions where the conditions include both ongoing use restrictions and prerequisite actions that must be completed before use begins. But those two categories of conditions need to be clearly distinguished, and the prerequisite conditions need to be tracked and verified.
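One way to keep the two categories distinct is to record them as different kinds of condition and gate deployment on the prerequisite kind. A minimal sketch, with hypothetical names, assuming conditions are tracked as simple records:

```python
from dataclasses import dataclass

@dataclass
class Condition:
    text: str
    prerequisite: bool = False   # must be satisfied BEFORE use begins
    resolved: bool = False       # tracked and verified, not assumed

def clear_for_deployment(conditions: list[Condition]) -> bool:
    """A tool is deployable only when every prerequisite is verified.

    Ongoing restrictions (prerequisite=False) govern use after deployment;
    they do not block it. Unresolved prerequisites always block it.
    """
    return all(c.resolved for c in conditions if c.prerequisite)

conditions = [
    Condition("execute DPA with vendor", prerequisite=True),  # not yet resolved
    Condition("no personal data in prompts"),                 # ongoing restriction
]

assert not clear_for_deployment(conditions)  # blocked: DPA not executed
conditions[0].resolved = True                # verified, not just promised
assert clear_for_deployment(conditions)      # now the approval takes effect
```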
Conditions as the Governance Program
There is a broader point embedded in this discussion that is worth making explicit.
The conditions attached to AI tool designations are one of the primary mechanisms through which an organization's AI governance program translates regulatory requirements into operational reality. Regulations tell you what the law requires in the abstract. Conditions tell the people in your organization what they specifically can and cannot do with specific tools.
A mature AI governance program is not just a collection of assessments and designations. It is a set of operational constraints, built from regulatory analysis, that govern how AI tools are actually used across the organization. Those constraints live in the conditions.
This means that the quality of an organization's AI governance program is substantially determined by the quality of its conditions. Vague conditions mean vague governance. Specific, enforceable, auditable conditions mean real governance.
It also means that conditions need to be communicated. A designation record in a compliance system that the requesting team never reads is not a governance mechanism. The conditions need to reach the people who are going to use the tool. They need to be understood. They need to be available for reference when questions arise about what is and is not permitted.
And they need to be maintained. AI tools change. Vendor terms change. Regulatory requirements change. Use cases evolve. A set of conditions that was adequate when it was written may become inadequate as circumstances change. The review date attached to every designation exists for exactly this reason. Conditions are not permanent. They are the governance position as of the assessment date, and they need to be revisited when the material facts change.
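The maintenance obligation can be made operational with something as simple as a recurring check against the review date. A sketch, again with illustrative field names:

```python
from datetime import date

def designations_due_for_review(designations: list[dict],
                                today: date | None = None) -> list[dict]:
    """Return designations whose review date has arrived or passed.

    Assumes each record carries an ISO-format 'review_date'; the field name
    is illustrative. Material changes (vendor terms, regulation, use cases)
    should trigger a review regardless of the scheduled date.
    """
    today = today or date.today()
    return [d for d in designations
            if date.fromisoformat(d["review_date"]) <= today]

records = [
    {"tool": "writing assistant", "review_date": "2025-06-30"},
    {"tool": "code assistant",    "review_date": "2026-01-15"},
]
print(designations_due_for_review(records, today=date(2025, 7, 1)))
# -> [{'tool': 'writing assistant', 'review_date': '2025-06-30'}]
```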
What a Regulator Sees When They Read Your Conditions
It is worth closing with a concrete image of what the governance value of well-constructed conditions looks like from the outside.
When a supervisory authority reviews an organization's AI governance program following an incident, they are looking for evidence of intentionality. Did this organization think seriously about the risks of deploying this tool? Did they make deliberate decisions about what was and was not permitted? Did they communicate those decisions to the people who needed to follow them? Did they create a record that can be verified?
A Final Designation Report with specific, detailed conditions answers all of those questions affirmatively. The conditions demonstrate that the organization considered the specific risks of the specific use case. They demonstrate that the organization made deliberate decisions to permit certain uses and prohibit others. They demonstrate that the organization's governance was not just a posture but an operational reality with defined parameters.
A designation record with vague conditions or no conditions answers those questions poorly. It suggests that the organization went through a process without reaching substantive conclusions. It raises the question of whether the assessment was real or performative. It leaves open whether anyone actually knew what they were and were not permitted to do with the tool.
The difference between these two pictures is not the quality of the underlying compliance analysis. It is how well the conclusions of that analysis were translated into clear, specific, enforceable governance decisions.
That translation is what Approved with Conditions is for. Done right, it is some of the most valuable work a privacy team does.
This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
Related reading
- How to Assess an AI Tool for GDPR Compliance: A Step-by-Step Guide for DPOs
The same AI tool can be perfectly fine in one deployment and a regulatory nightmare in another. Here is how to tell the difference before your organization finds out the hard way.
- What Is a Final Designation Report? How Leading Privacy Teams Document AI Tool Decisions
Most organizations make AI tool decisions informally and document them poorly. Here is what a governance-grade decision record actually looks like and why it matters when a regulator comes asking.
- Why Your AI Tool Intake Process Is Broken (And What Good Looks Like)
Most organizations are managing AI tool requests the same way they managed software procurement in 2015. The tools have changed. The risks have changed. The process has not.