
Why Your AI Tool Intake Process Is Broken (And What Good Looks Like)

Most organizations are managing AI tool requests the same way they managed software procurement in 2015. The tools have changed. The risks have changed. The process has not.


Here is a story that will feel familiar.

A business team submits a request to use a new AI tool. The request goes into a queue somewhere, possibly a Jira board, possibly a shared inbox, possibly a spreadsheet that one person updates when they remember to. Someone from IT or legal or the privacy team eventually picks it up, sends the vendor a questionnaire with fifteen questions that the vendor answers in ways that are technically responsive and completely uninformative. The answers come back. Nobody is quite sure what to do with them. A few weeks pass. Someone follows up. A few more weeks pass. Eventually a decision gets made, or the business team just starts using the tool anyway because the project could not wait.

Sound familiar?

This is not an edge case. It is standard operating procedure for AI tool governance at most organizations right now. And it is a process that was designed for a different era, a different risk profile, and a different volume of requests.

It is not working.


What the Old Process Was Designed For

The traditional software procurement review was built around a specific set of assumptions. Software was acquired infrequently. Vendors were large, established companies with legal teams and compliance programs. The questions that mattered were mostly about security certifications, data residency, and contractual terms. The review could take weeks because the stakes of any individual tool were high enough to justify the time, and the volume was low enough to manage.

Those assumptions do not hold for AI tools in 2026.

Business teams are not requesting one new tool per quarter. They are requesting several per month, sometimes several per week, across every function of the organization. The tools are not all large established vendors with mature compliance programs. Some are startups with terms of service that change quarterly and privacy policies that were clearly written in an afternoon. The risks are not just about security certifications. They are about model training data, inference infrastructure, automated decision-making, EU AI Act classification, DPIA obligations, and transfer mechanisms that vary by jurisdiction.

And the consequences of getting it wrong are not just contractual. They include regulatory investigations, supervisory authority findings, and the particular organizational embarrassment of discovering that a tool your employees have been using for eight months processes data in a way that should have triggered a DPIA that nobody conducted.

The old process cannot handle this. It was not designed to.


The Five Symptoms of a Broken AI Intake Process

You probably do not need a consultant to tell you whether your AI intake process is working. You can diagnose it yourself by looking for these five symptoms.

Symptom 1: Requests sit in the queue for weeks without meaningful progress.

Not because the reviewers are lazy, but because the process does not give them a clear path from intake to decision. They receive a request, they are not sure exactly what information they need, they are not sure which frameworks apply, and they spend more time figuring out what to look for than actually looking. A review that takes six weeks is not doing six weeks of analysis. It is doing two days of analysis across six weeks of process friction.

Symptom 2: The vendor questionnaire is doing most of the work.

Vendor questionnaires are useful for gathering information that is genuinely in the vendor's possession: sub-processor lists, security certifications, data residency details. They are not useful for assessing the compliance implications of your organization's specific use case, because the vendor does not know your use case, your data subjects, or your geographic footprint. A process that relies primarily on vendor responses is outsourcing the compliance analysis to the party with the least incentive to identify problems.

Symptom 3: The same tool gets assessed multiple times by different teams.

Marketing assessed Salesforce Einstein last year. Legal is now assessing it again for a different use case. Nobody knows the first assessment happened because there is no central record. This is not just inefficient. It is a governance failure. Different assessments of the same tool for different use cases may reach different conclusions, and without a central record, there is no way to ensure consistency or build institutional knowledge.

Symptom 4: "Approved with conditions" decisions are made, but the conditions are never enforced.

A tool gets approved on the condition that a BAA is executed and certain data types are excluded from processing. Six months later, nobody has checked whether the BAA was signed or whether the conditions are being followed. The approval record exists. The governance does not.

Symptom 5: Business teams work around the process.

This is the most revealing symptom because it tells you the process has lost legitimacy with the people it is supposed to serve. When business teams start using tools before approval is granted, or find workarounds to avoid triggering the review, it means the process is experienced as an obstacle rather than a service. The tools are being used either way. The only difference is whether the privacy team knows about it.


What the Assessment Process Is Actually Missing

The problems above are symptoms. The underlying cause is that most AI tool intake processes are missing three things that are fundamental to assessing AI tools correctly.

They are missing use case specificity.

The request form asks what tool is being requested. It does not ask with enough precision what the tool will be used for, whose data will flow through it, or what decisions its outputs will influence. Without that information, the assessment cannot determine which GDPR obligations apply, whether a DPIA is required, how the EU AI Act classifies the deployment, or what conditions should govern use. The assessment is working with incomplete inputs and producing outputs that reflect that incompleteness.

They are missing geographic context.

An organization with employees, customers, or users in multiple jurisdictions needs to assess AI tools against the regulatory frameworks that apply to each of those populations. A tool that is straightforward to approve for US-only data may require GDPR analysis, KVKK compliance, revFADP safeguards, and LGPD assessment for a global deployment. Most intake processes do not capture geographic footprint in a structured way, which means jurisdiction-specific obligations are identified inconsistently or not at all.
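
To make those first two gaps concrete, here is a minimal sketch of what structured intake inputs could look like. Everything in it, the field names, the enums, the example vendor, is an illustrative assumption rather than a prescribed schema. The point is that use case, data subjects, and geographic footprint are captured as structured data rather than free text.

```python
from dataclasses import dataclass
from enum import Enum


class DataSubjectType(Enum):
    """Illustrative categories; a real taxonomy will be richer."""
    EMPLOYEES = "employees"
    CUSTOMERS = "customers"
    PATIENTS = "patients"
    MINORS = "minors"


@dataclass
class IntakeRequest:
    """Structured intake record, captured before any assessment begins."""
    tool_name: str
    use_case: str                             # what the tool will actually do
    data_types: list[str]                     # empty if no personal data flows through
    data_subjects: list[DataSubjectType]      # whose data is processed
    jurisdictions: list[str]                  # where those people are (ISO country codes)
    influences_decisions_about_people: bool   # triggers automated-decision analysis


# A hypothetical request: the vendor and details are invented for illustration.
request = IntakeRequest(
    tool_name="Acme Transcriber",
    use_case="Transcribe and summarize customer support calls",
    data_types=["call audio", "customer names", "account details"],
    data_subjects=[DataSubjectType.CUSTOMERS],
    jurisdictions=["DE", "TR", "BR"],  # would flag GDPR, KVKK, and LGPD analysis
    influences_decisions_about_people=False,
)
```

Capturing jurisdictions as data is what lets the process flag KVKK or LGPD obligations automatically instead of relying on a reviewer to remember them.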

They are missing a structured findings framework.

When a reviewer identifies a compliance gap, where does it go? In most processes, it goes into an email, or a comment in a document, or a note in a ticket field that nobody looks at. There is no structured way to record findings, assign severity, link to regulatory citations, or track remediation. The analysis that gets done is not preserved in a way that can be referenced, audited, or built upon.
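
As a sketch of what "structured" could mean here, a finding can be a small record rather than a sentence in an email. The fields and severity levels below are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


class RemediationStatus(Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    RESOLVED = "resolved"


@dataclass
class Finding:
    """One compliance gap, preserved in a form that can be
    referenced, audited, and tracked to closure."""
    citation: str             # the specific provision, e.g. "GDPR Art. 28"
    description: str          # what the gap actually is
    severity: Severity
    recommended_action: str   # what must happen, before or after approval
    status: RemediationStatus = RemediationStatus.OPEN
```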


What Good Actually Looks Like

A functional AI tool intake process in 2026 has a small number of characteristics that separate it from the broken version.

It starts with use case, not vendor.

The first questions in the intake form are about what the organization intends to do with the tool, not about which tool it is. Use case, data types, data subjects, geographic footprint, and decision-making context are captured at intake and used to determine which assessment path the request follows. A low-risk internal productivity tool with no personal data follows a lighter process. A tool processing patient data in a clinical decision support context follows a comprehensive one. The process is calibrated to the risk, not uniform regardless of it.
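
Continuing the illustrative intake schema from earlier, that calibration can be expressed as a routing step. The rules and track names below are invented for the sketch; a real routing policy would be richer and maintained by the privacy team:

```python
def assessment_track(req: IntakeRequest) -> str:
    """Route a request to a proportionate assessment path.

    Uses the illustrative IntakeRequest sketched above; the routing
    rules and track names here are assumptions, not a standard.
    """
    sensitive_subjects = {DataSubjectType.PATIENTS, DataSubjectType.MINORS}
    if req.influences_decisions_about_people or sensitive_subjects & set(req.data_subjects):
        return "comprehensive"  # full DPIA screening, EU AI Act classification
    if req.data_types:          # personal data present, but lower stakes
        return "standard"
    return "light"              # e.g. an internal productivity tool, no personal data


print(assessment_track(request))  # -> "standard" for the hypothetical request above
```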

It produces findings tied to specific regulatory provisions.

When a compliance gap is identified, it is documented as a finding with a clear regulatory citation, a severity level, and a recommended action. Not "GDPR concerns" but "GDPR Article 28: no data processing agreement in place for this processing activity, required before processing begins." The specificity is what makes the finding actionable, what allows the requesting team to understand what needs to happen, and what creates the documentation that defends the organization if the deployment is ever reviewed by a regulator.
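
In the illustrative Finding structure sketched earlier, that Article 28 example would be recorded as:

```python
dpa_finding = Finding(
    citation="GDPR Art. 28",
    description="No data processing agreement in place for this processing activity",
    severity=Severity.HIGH,
    recommended_action="Execute a DPA with the vendor before processing begins",
)
```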

It results in a designated decision, not just a status.

The output of the process is one of three things: Approved, Approved with Conditions, or Denied. Not "pending further review" or "legal is looking at it." A real decision with documented rationale, specific conditions if applicable, and a clear record of who made it and when. That record is the organization's evidence of due diligence. It is what exists when the question is asked later.
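
A decision record, too, can be small and still carry everything the paragraph above asks for. Again, the fields are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Designation(Enum):
    APPROVED = "approved"
    APPROVED_WITH_CONDITIONS = "approved_with_conditions"
    DENIED = "denied"


@dataclass
class Decision:
    """The terminal record of an assessment: a real outcome,
    a rationale, a named decision-maker, and a date."""
    designation: Designation
    rationale: str
    conditions: list[str]   # empty unless APPROVED_WITH_CONDITIONS
    decided_by: str
    decided_on: date
```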

It is fast enough to be used.

A process that takes six weeks for a low-risk tool is not a governance process. It is a deterrent to using the governance process. The assessment should be proportionate to the risk and efficient enough that business teams experience it as helpful rather than obstructive. For straightforward deployments, a structured assessment should produce a designation in days, not weeks. For complex high-risk deployments, a more thorough process is justified and the business team requesting it should understand why.

It creates a central record.

Every assessment, every finding, every designation, and every condition is stored in a single place that any authorized member of the privacy team can access. When a regulator asks what AI tools your organization uses and what due diligence was conducted, the answer is not a search through email chains and shared drives. It is a record.
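
The central record does not need to be elaborate to be useful. Assuming the illustrative IntakeRequest, Finding, and Decision structures sketched above, even a flat registry can answer the regulator's question in one query:

```python
from dataclasses import dataclass


@dataclass
class AssessmentRecord:
    """One assessment: intake, findings, and decision, kept together."""
    request: IntakeRequest
    findings: list[Finding]
    decision: Decision


def assessments_for(registry: list[AssessmentRecord], tool_name: str) -> list[AssessmentRecord]:
    """Every assessment ever run against a given tool, including
    repeat assessments for different use cases (see Symptom 3)."""
    return [r for r in registry if r.request.tool_name == tool_name]
```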


The Cost of Not Fixing It

There is a version of this conversation where the broken intake process is a manageable problem. Where the volume of requests is low enough that the inefficiency is tolerable, the risk profile of the tools being assessed is modest enough that missing something occasionally is not catastrophic, and the regulatory environment is forgiving enough that imperfect documentation is not a serious exposure.

That version of the conversation was plausible in 2021. It is not plausible in 2026.

The volume of AI tool requests at most organizations has increased dramatically and is not going back down. The tools being requested are increasingly sophisticated, increasingly integrated into core business processes, and increasingly capable of affecting decisions about individuals in ways that trigger serious regulatory obligations. And the regulatory environment, with EU AI Act enforcement ramping up, EDPB guidance on generative AI becoming more prescriptive, and supervisory authorities explicitly identifying AI governance as a priority, is becoming less forgiving of organizations that cannot demonstrate they have done the work.

The organizations that are going to find themselves explaining AI tool deployments to supervisory authorities are not the ones whose intake process was imperfect. They are the ones whose intake process left no record at all.


Starting the Fix

Fixing an AI tool intake process does not require a six-month project or a new compliance platform on day one. It requires three things to start.

First, a structured intake form that captures use case, data types, data subjects, and geographic footprint before any assessment begins. If you do not have the inputs you need, you cannot produce the outputs that matter.

Second, a decision framework that produces one of three outcomes for every request: Approved, Approved with Conditions, or Denied. With documented rationale. With specific conditions where applicable. With a named decision-maker and a date.

Third, a central record. Wherever that lives, it needs to be searchable, accessible to the privacy team, and maintained. The record is the governance program. Without it, you have activities. With it, you have a program.

Everything else is refinement. But these three things are the difference between a process that exists and a process that works.

This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
