EU AI Act for Deployers: What Privacy Teams Need to Know Before August 2026
You did not build the AI tool. You did not train the model. You are still responsible for what happens when your organization uses it.
There is a widespread and genuinely dangerous misconception about the EU AI Act circulating in organizations right now. It goes something like this: the EU AI Act is a problem for AI companies. We just use AI tools. We buy them from vendors. The vendors are the ones who have to comply.
This misconception is going to cause real problems for real organizations in the near future. Because the EU AI Act does not confine its obligations to the organizations that build AI systems. It assigns distinct and substantial obligations to both providers and deployers. And if your organization uses AI tools within the EU, uses AI tools whose outputs affect people in the EU, or operates in any other context that brings you within the Act's scope, you are a deployer under the EU AI Act whether you have thought about it that way or not.
This post is about what deployer obligations actually mean in practice, what the enforcement timeline looks like, and what privacy teams need to have in place before August 2026.
Who Counts as a Deployer
The EU AI Act defines a deployer as any natural or legal person, public authority, agency, or other body that uses an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Read that definition carefully. It does not say organizations that build AI systems. It does not say organizations that sell AI systems. It says organizations that use AI systems. If your organization has deployed a third-party AI tool that affects individuals in the EU, you are a deployer. The obligations that flow from that designation are yours, not the vendor's, regardless of what the vendor's marketing materials say about their own compliance.
This matters enormously for how privacy teams need to think about AI tool assessment. The question is not only "is this vendor compliant with the EU AI Act?" The question is "what are our obligations as a deployer of this tool in this specific use case, and have we met them?"
The Enforcement Timeline You Actually Need to Know
The EU AI Act came into force in August 2024. Its requirements are being phased in over time, and understanding the timeline is essential for prioritizing your compliance work.
February 2025: Prohibited practices under Article 5 became enforceable.
This is the category of AI uses that the EU has decided should simply not exist: subliminal manipulation, exploitation of vulnerabilities, social scoring, predicting the risk of criminal offending based solely on profiling or personality traits, untargeted scraping of facial images to build facial recognition databases, emotion recognition in workplace and educational settings, and biometric categorization that infers sensitive characteristics. Any AI tool your organization uses that does any of these things has been illegal under EU law since February 2025. If you have not assessed your AI tool portfolio against Article 5, this is the place to start.
August 2025: GPAI model obligations became applicable.
General-purpose AI models, the large foundation models that underpin most of the AI tools organizations use today, became subject to their own tier of provider obligations. Those obligations fall primarily on the companies that build the models, and on vendors whose modifications are substantial enough to make them providers in their own right. But as a deployer using tools built on GPAI models, you need to understand that the GPAI classification affects your own position: a tool built on a GPAI model is assessed differently under the Act than a tool using a narrow, purpose-built model.
August 2026: High-risk AI system obligations apply to deployers.
This is the deadline that most organizations with any serious AI tool portfolio need to be focused on right now. High-risk AI systems under Article 6 and Annex III cover a significant range of enterprise use cases including employment and worker management, education and vocational training, access to essential private and public services, administration of justice, and management of critical infrastructure.
If your organization uses AI tools in any of these categories, deployer obligations under the high-risk framework apply from August 2026. That gives you months, not years, to get compliant.
August 2027: Obligations apply for high-risk AI systems that are safety components of products covered by existing EU product legislation, and GPAI models placed on the market before August 2025 must be brought into compliance.
The full framework is in effect by this point. But organizations waiting until 2027 to start thinking about EU AI Act compliance are already behind.
What Deployer Obligations Actually Require
The EU AI Act assigns deployers a specific set of obligations that exist independently of whatever the provider has done to comply with their own obligations. These are not optional enhancements. They are legal requirements.
Human oversight under Article 26.
Deployers of high-risk AI systems must implement appropriate human oversight measures. This means ensuring that natural persons to whom human oversight is assigned have the competency, authority, and resources to actually exercise that oversight. A checkbox that says "human reviewed" is not human oversight. A process in which the human review is perfunctory or in which the reviewer lacks the context to meaningfully evaluate the AI output is not human oversight.
For practical purposes, this means that for any high-risk AI tool your organization uses, you need a documented process describing who reviews AI outputs, what they are reviewing for, what authority they have to override the AI recommendation, and how overrides are recorded.
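To make that concrete, here is a rough sketch of what an oversight record could look like, expressed as a data structure. The field names are illustrative assumptions, not a format the Act prescribes; the point is that each review ties a named reviewer, their authority, and their decision to a specific deployment.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightRecord:
    """One human review of a high-risk AI output. Illustrative schema, not a legal template."""
    deployment_id: str            # the specific assessed deployment, not just the tool name
    ai_output_ref: str            # pointer to the stored AI output under review
    reviewer: str                 # the person assigned oversight
    reviewer_can_override: bool   # does this person have authority to reject the recommendation?
    decision: str                 # "accepted", "modified", or "overridden"
    rationale: str                # why the reviewer accepted or overrode the output
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```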
AI literacy training under Article 4.
Deployers must ensure that staff operating or relying on AI systems have sufficient AI literacy, taking into account the technical knowledge, experience, education, and training of those staff members, as well as the context in which the AI system will be used. Unlike the high-risk obligations, this requirement has applied since February 2025.
This is not a one-time training requirement. It is an ongoing obligation to ensure that the people using AI tools understand what those tools can and cannot do, where they are likely to fail, what the consequences of errors are, and when to escalate or override. For high-risk systems, the AI literacy requirement is particularly substantive.
Most organizations have not yet implemented AI literacy programs that meet this standard. Many have provided general awareness training about AI. That is not the same thing as ensuring that the specific people operating a specific AI tool in a specific context have the specific competencies the Act requires.
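If it helps to picture what "specific people, specific tool, specific context" means operationally, here is a minimal sketch of a per-person, per-deployment literacy record. The fields are assumptions for illustration, not a mandated schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LiteracyRecord:
    """Whether a named operator has the competencies needed for one specific deployment."""
    person: str
    deployment_id: str              # the specific tool and use case, not the tool in the abstract
    completed_modules: list[str]    # e.g. tool failure modes, escalation and override procedure
    assessed_on: date
    reassessment_due: date          # literacy is an ongoing obligation, not a one-off

def needs_refresh(record: LiteracyRecord, today: date) -> bool:
    """Flag operators whose deployment-specific training is overdue."""
    return today >= record.reassessment_due
```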
Monitoring and logging obligations.
Deployers must monitor the operation of high-risk AI systems and keep the logs those systems automatically generate, to the extent the logs are under their control, for a period appropriate to the system's purpose and at least six months unless other Union or national law provides otherwise. In practice, if you are using a high-risk AI tool, you need to be capturing and retaining records of how the system is being used, what outputs it is producing, and what decisions are being made on the basis of those outputs.
For organizations that have deployed AI tools without any logging or monitoring infrastructure, this is a significant gap to close before August 2026.
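What you capture will depend on the tool and what the vendor exposes, but as a hedged sketch, a deployer-side usage log might record at least the following for every material use. The function and field names are invented for illustration:

```python
import json
from datetime import datetime, timezone

def log_ai_use(deployment_id: str, user: str, input_summary: str,
               output_summary: str, decision_taken: str, log_path: str) -> None:
    """Append one usage record for a high-risk AI deployment. Illustrative, not a compliance guarantee."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deployment_id": deployment_id,    # ties the log to a specific assessed use case
        "user": user,
        "input_summary": input_summary,    # avoid logging raw personal data where possible
        "output_summary": output_summary,
        "decision_taken": decision_taken,  # what was actually done with the output
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```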
Transparency to affected individuals.
Where AI systems interact with natural persons or produce outputs that affect natural persons, deployers must inform those individuals that they are subject to AI-assisted processing. For limited-risk systems this is primarily a transparency disclosure requirement. For high-risk systems it is more substantive, including the obligation to provide meaningful information about the system's logic and the significance of its outputs.
Fundamental rights impact assessment for certain deployers.
Bodies governed by public law, private organizations providing public services, and deployers using high-risk AI systems for creditworthiness assessment or for risk assessment and pricing in life and health insurance must conduct a fundamental rights impact assessment before putting the system into use. This is separate from and in addition to a DPIA under GDPR.
The Use Case Classification Problem
Here is where deployer compliance under the EU AI Act becomes genuinely complex, and where most organizations are most exposed.
The EU AI Act does not classify tools. It classifies use cases. The same tool can be minimal-risk in one deployment and high-risk in another, depending entirely on what the deployer is using it for and in what context.
Consider a large language model used for text generation. If your marketing team uses it to draft social media posts, it is minimal-risk. If your HR team uses it to generate candidate assessments that feed into hiring decisions, it is high-risk under Annex III, which explicitly covers AI systems used for employment and worker management including recruitment, selection, and evaluation. If your financial services team uses it to generate credit risk summaries that inform lending decisions, it is high-risk under the provision covering access to essential private services including credit.
The implications of this for your AI tool assessment process are significant. You cannot assess a tool once and apply that assessment to all uses of the tool across your organization. You need to assess each material use case. A tool approved for marketing use is not automatically approved for HR use. The compliance obligations, and your responsibilities as a deployer, are different.
This is why the intake process needs to capture use case with specificity at the point of request, and why assessments need to be tied to specific deployments rather than to tools in the abstract.
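As an illustration of what that looks like when it reaches your intake tooling, here is a sketch of a deployment-level assessment record. The names and categories are assumptions, not an official taxonomy; what matters is that the risk classification hangs off the deployment, not the tool:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high"               # Article 6 / Annex III
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"

@dataclass
class DeploymentAssessment:
    """One assessed use case of one tool; the same tool may have several of these."""
    tool_name: str
    business_unit: str
    use_case: str               # captured with specificity at intake, e.g. "candidate screening"
    affects_individuals: bool
    risk_class: RiskClass
    approved: bool
```

Under this shape, "LLM approved for marketing copy" and "LLM used for candidate screening" are two different records with two different classifications and two different obligation sets.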
What the Article 5 Prohibited Practices Mean for Your Portfolio
Before worrying about the high-risk framework, every organization should have completed a review of their current AI tool portfolio against the Article 5 prohibited practices. This is not a future obligation. It became enforceable in February 2025.
The prohibited practices are worth understanding in detail because some of them are more expansive than they might initially appear.
The prohibition on AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort behavior in a way that causes or is reasonably likely to cause significant harm can reach AI tools used in persuasion, engagement optimization, or behavioral nudging contexts. If your organization uses AI to optimize content for maximum engagement in ways that exploit psychological vulnerabilities, this provision warrants careful analysis.
The prohibition on emotion recognition systems in workplace and educational settings is broad. It covers AI systems that infer or classify emotions, not just those explicitly marketed as emotion recognition tools. If any AI tool your organization uses in a workplace context analyzes facial expressions, voice patterns, or physiological signals to infer emotional states, it falls within this prohibition.
The prohibition on biometric categorization that infers sensitive characteristics including political opinions, religious beliefs, and sexual orientation from biometric data covers a range of AI tools that go beyond obvious face recognition applications.
If you have not reviewed your current AI tool portfolio against Article 5, that review should happen now.
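A first pass does not need to be elaborate. As a sketch, a short set of yes/no screening questions applied to every tool in the portfolio will surface the deployments that need immediate legal attention. The questions below paraphrase Article 5 and are illustrative, not exhaustive:

```python
# Illustrative first-pass screening questions, loosely paraphrasing Article 5.
ARTICLE_5_SCREEN = [
    "Does the tool use subliminal or manipulative techniques to materially distort behavior?",
    "Does it exploit vulnerabilities related to age, disability, or social or economic situation?",
    "Does it perform social scoring of individuals?",
    "Does it predict criminal risk based solely on profiling or personality traits?",
    "Does it scrape facial images in an untargeted way to build a recognition database?",
    "Does it infer emotions in a workplace or educational setting?",
    "Does it categorize people by biometric data to infer sensitive characteristics?",
]

def flag_for_review(answers: list[bool]) -> bool:
    """Any 'yes' means the deployment needs immediate legal review against Article 5."""
    return any(answers)
```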
Connecting EU AI Act Compliance to Your GDPR Assessment Process
The good news for privacy teams is that EU AI Act compliance for deployers does not require building an entirely new compliance program from scratch. It requires extending the compliance program you already have.
The DPIA process you use for GDPR high-risk processing is the natural foundation for the EU AI Act fundamental rights impact assessment. The data mapping you do for GDPR purposes gives you the data flow visibility you need to assess EU AI Act classification. The vendor assessment process you use for GDPR DPA compliance is the right process to extend for EU AI Act provider documentation review.
The most important extension is the use case classification step. Every AI tool assessment needs to include an EU AI Act classification determination based on the specific use case, not just the tool category. That classification then determines which deployer obligations apply and whether the August 2026 deadline is relevant to that specific deployment.
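One way to wire that determination into an existing workflow, sketched here with invented names and an abbreviated obligation list, is to let the classification result drive the checklist the deployment inherits:

```python
# Abbreviated, illustrative mapping from risk classification to deployer obligations.
DEPLOYER_OBLIGATIONS = {
    "high": [
        "human oversight process documented (Article 26)",
        "operator AI literacy verified for this deployment (Article 4)",
        "monitoring and log retention in place (Article 26)",
        "affected individuals informed",
        "fundamental rights impact assessment, where Article 27 applies",
    ],
    "limited": ["transparency disclosure to affected individuals"],
    "minimal": ["AI literacy for staff using the tool"],
}

def obligations_for(risk_class: str) -> list[str]:
    """Return the checklist items a deployment inherits from its classification."""
    return DEPLOYER_OBLIGATIONS.get(risk_class, [])
```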
Organizations that integrate EU AI Act classification into their existing AI tool assessment process, rather than treating it as a separate compliance workstream, will find this significantly more manageable than those approaching it as a standalone project.
The deadline is real. The obligations are yours. The assessment work is the path through.
This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
Related reading
- How to Assess an AI Tool for GDPR Compliance: A Step-by-Step Guide for DPOs
The same AI tool can be perfectly fine in one deployment and a regulatory nightmare in another. Here is how to tell the difference before your organization finds out the hard way.
- The AI Tool Assessment Checklist Every Privacy Team Needs in 2026
Print this out. Put it in your intake process. Stop discovering compliance problems after the tool is already in production.
- HIPAA and AI Tools: The BAA Questions Every Healthcare Privacy Officer Should Be Asking
Your vendor has a BAA. That BAA was written before their AI features existed. Here is why that matters more than most healthcare organizations realize.