HIPAA and AI Tools: The BAA Questions Every Healthcare Privacy Officer Should Be Asking
Your vendor has a BAA. That BAA was written before their AI features existed. Here is why that matters more than most healthcare organizations realize.
There is a document sitting in a contract management system somewhere in your organization that was signed two, three, maybe five years ago. It is a Business Associate Agreement with a major software vendor. At the time it was signed, the vendor's product did not have AI features. The BAA was written to cover the vendor's access to PHI through their core platform functions: data storage, data transmission, maybe some analytics.
Since then, the vendor has launched AI features. Those features process data that flows through the platform. Some of that data is PHI. The BAA has not been updated.
This scenario is not hypothetical. It is the current state of affairs at a significant number of healthcare organizations, and it represents one of the most widespread and least discussed HIPAA compliance gaps in the industry right now.
Why AI Features Are Not Automatically Covered by Existing BAAs
A Business Associate Agreement under HIPAA covers specific activities by a business associate involving PHI. The scope of a BAA is determined by what the agreement says, not by what the vendor happens to do with data that flows through their systems.
When a vendor adds AI features to a platform covered by an existing BAA, two questions become immediately relevant. First, do those AI features constitute a new use or disclosure of PHI that is not covered by the existing agreement? Second, does the vendor's AI processing meet the HIPAA safeguard requirements that the original BAA's provisions were written to satisfy?
On the first question, the answer depends on what the AI features actually do. If they process PHI in ways that go beyond the originally contemplated scope of the BAA, which frequently involves data storage, transmission, and defined service functions, the existing agreement may not cover that processing. An AI feature that analyzes clinical notes, summarizes patient records, generates recommendations based on patient data, or processes conversation transcripts involving patients is doing something materially different from storing and transmitting records. Whether the original BAA covers those activities is a legal question with real compliance consequences.
On the second question, AI processing introduces risks that traditional BAA provisions were not designed to address. Model training data, inference infrastructure, output retention, prompt logging, and sub-processor chains in AI systems are simply not contemplated in BAA templates that predate the generative AI era. A BAA that is perfectly adequate for a pre-AI SaaS platform may be entirely inadequate for the same platform after an AI feature launch.
The Questions Your BAA May Not Be Answering
When evaluating whether an existing BAA adequately covers an AI-enabled platform, or when negotiating a new BAA for a vendor with AI features, these are the questions that need explicit answers.
Does the BAA explicitly cover AI processing of PHI?
The BAA should identify the specific AI features and processing activities that are authorized. Not just "AI tools" as a category, but the specific functions. Clinical note summarization, ambient documentation, diagnostic support, patient communication, prior authorization assistance. Each of these involves different PHI, different processing activities, and potentially different risk profiles. Vague coverage of "software services" is not adequate for AI processing of clinical data.
Does the vendor use PHI to train or improve their AI models?
This is the question that generates the most significant compliance exposure and the most evasive vendor responses. Some vendors use customer data, including data flowing through healthcare platforms, to train or fine-tune their AI models. Unless the BAA expressly authorizes it, this is a use of PHI beyond the purposes for which the agreement permits processing. It may also involve a disclosure to the vendor's AI training infrastructure that the agreement does not cover.
Get a direct written answer to this question. Not "we take data privacy seriously" and not a reference to their general privacy policy. A specific, contractual commitment about whether PHI flowing through their platform is ever used for model training, fine-tuning, or model improvement purposes, and under what conditions.
Where does AI inference occur and who has access?
AI inference, the process by which the model generates outputs from inputs, occurs on infrastructure that may be separate from the vendor's core platform infrastructure. That inference infrastructure may involve third-party AI providers, cloud computing services, or specialized model hosting providers that are not identified in the BAA's sub-processor provisions.
Under HIPAA, a business associate that discloses PHI to a subcontractor must enter into a business associate agreement with that subcontractor covering the PHI. If your vendor's AI features involve PHI flowing through AI inference infrastructure operated by a third party, that third party is a subcontractor under HIPAA and must be covered by appropriate agreements. Verify that the BAA identifies all material sub-processors involved in AI processing and that appropriate downstream agreements are in place.
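One way to keep that verification current is to maintain a simple inventory of each vendor's AI-related sub-processors alongside the status of the downstream agreements. The sketch below is purely illustrative: the sub-processor names are hypothetical and the fields are one reasonable way to record the check, not a required format.

```python
from dataclasses import dataclass

@dataclass
class SubProcessor:
    """One downstream processor involved in a vendor's AI features."""
    name: str                        # hypothetical example names below
    role: str                        # e.g. "model hosting", "inference compute"
    handles_phi: bool
    downstream_baa_confirmed: bool

def missing_downstream_agreements(subs: list[SubProcessor]) -> list[str]:
    """Return sub-processors that touch PHI without a confirmed downstream BAA."""
    return [s.name for s in subs if s.handles_phi and not s.downstream_baa_confirmed]

# Hypothetical inventory for one AI-enabled vendor.
inventory = [
    SubProcessor("ExampleModelHost", "model hosting", handles_phi=True,
                 downstream_baa_confirmed=True),
    SubProcessor("ExampleInferenceCloud", "inference compute", handles_phi=True,
                 downstream_baa_confirmed=False),
]

print(missing_downstream_agreements(inventory))  # ['ExampleInferenceCloud']
```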
What is the retention period for AI inputs and outputs?
Prompts containing PHI, AI-generated summaries of patient data, and outputs from clinical AI tools are themselves PHI if they contain information that could identify a patient. The HIPAA Security Rule requires covered entities and business associates to implement policies and procedures governing the disposal of ePHI, and those obligations follow AI inputs and outputs for as long as they are retained.
Ask the vendor: how long are prompts retained? How long are AI outputs retained? Are conversation logs stored, and if so where and for how long? Are there mechanisms to delete AI interaction data upon contract termination? These are not edge case questions. They are foundational HIPAA compliance questions that most AI-enabled healthcare vendors are not proactively answering.
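Recording the vendor's answers in a structured form also makes it obvious which answers you never received. Here is a minimal sketch, assuming a hypothetical intake record where the fields mirror the questions above and None means the vendor has not answered.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class AIRetentionAnswers:
    """Vendor answers to AI data retention questions (None = unanswered)."""
    prompt_retention_days: Optional[int] = None
    output_retention_days: Optional[int] = None
    conversation_logs_stored: Optional[bool] = None
    log_storage_location: Optional[str] = None
    deletion_on_termination: Optional[bool] = None

def unanswered(answers: AIRetentionAnswers) -> list[str]:
    """List the retention questions the vendor has not answered yet."""
    return [f.name for f in fields(answers) if getattr(answers, f.name) is None]

vendor = AIRetentionAnswers(prompt_retention_days=30, conversation_logs_stored=True)
print(unanswered(vendor))
# ['output_retention_days', 'log_storage_location', 'deletion_on_termination']
```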
What safeguards apply to AI-generated outputs containing PHI?
AI systems can generate outputs that contain PHI even when the prompt did not explicitly include PHI, through inference from other data or through retrieval from data the model has processed. The HIPAA Security Rule requires appropriate access controls, audit controls, integrity controls, and transmission security for all ePHI. Confirm that these controls apply to AI-generated outputs and that there are technical safeguards preventing inappropriate access to or disclosure of PHI through AI outputs.
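Audit controls are a good concrete test of whether those safeguards actually extend to AI features. The sketch below shows one plausible shape for an audit record of an AI interaction that may involve PHI; it is an illustration of the kind of logging the audit-control expectation implies, not a compliant implementation, and every field name is an assumption.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical audit logger for AI interactions that may involve PHI.
audit_log = logging.getLogger("ai_phi_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_phi_audit.log"))

def log_ai_interaction(user_id: str, tool: str, purpose: str,
                       phi_present: bool, output_stored: bool) -> str:
    """Write a structured audit record for one AI interaction.

    The record describes the interaction; it should not itself contain PHI.
    """
    event_id = str(uuid.uuid4())
    record = {
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "purpose": purpose,
        "phi_present": phi_present,
        "output_stored": output_stored,
    }
    audit_log.info(json.dumps(record))
    return event_id

log_ai_interaction("clin-0042", "note-summarizer", "discharge summary draft",
                   phi_present=True, output_stored=False)
```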
The Use Case Shapes the Risk
Healthcare organizations use AI tools for an enormous range of purposes, and the HIPAA risk profile varies significantly depending on the use case. Understanding the specific use case is essential for conducting an adequate assessment.
Administrative AI tools such as prior authorization automation, claims processing assistance, scheduling optimization, and revenue cycle tools process PHI but typically do so in ways that are well within the contemplated scope of existing BAA frameworks. The questions above still apply, but the risk profile is generally lower than clinical applications.
Clinical documentation tools including ambient AI documentation, clinical note summarization, discharge summary generation, and medical coding assistance operate in close proximity to highly sensitive clinical PHI. The risk of PHI appearing in prompts, being processed by inference infrastructure, appearing in AI outputs, or being retained in ways that violate the minimum necessary standard is significant. These tools warrant the most rigorous BAA review and the most specific contractual commitments around AI processing.
Clinical decision support tools that analyze patient data to generate recommendations, flag risks, or support diagnostic decisions occupy the highest-risk category from both a HIPAA and a broader regulatory perspective. Under HIPAA, the question is whether PHI is being processed in ways that are covered by the BAA and that meet the Security Rule's technical safeguard requirements. Under the EU AI Act, if the tool affects EU residents, clinical decision support may qualify as a high-risk AI system. Under Joint Commission standards, AI tools that function as clinical decision support may be subject to leadership oversight requirements under LD.04.03.13. And under general negligence principles, if an AI tool generates a clinically incorrect recommendation that a clinician relies upon, the question of whether appropriate safeguards and human oversight were in place will be central to any liability analysis.
Patient-facing AI tools including patient communication chatbots, symptom checkers, appointment scheduling assistants, and AI-generated health information tools involve PHI flowing through systems that interact directly with patients. Transparency obligations, consent considerations, and the particular sensitivity of patient communications add additional compliance dimensions beyond the core BAA questions.
What an Adequate HIPAA Assessment for an AI Tool Actually Looks Like
A HIPAA assessment for an AI-enabled healthcare tool needs to go beyond confirming that a BAA exists. The questions below represent the minimum scope of an adequate assessment.
On the contractual side: is there a BAA in place? Does the BAA explicitly cover AI processing activities? Does the BAA address model training data usage? Does the BAA identify AI-specific sub-processors? Does the BAA include AI-specific provisions for data retention, deletion, and breach notification?
On the technical side: where does AI inference occur? What encryption standards apply to PHI in transit through AI processing pipelines? What access controls govern who can access AI inputs and outputs? What audit logging exists for AI interactions involving PHI? What is the vendor's documented approach to prompt injection and other AI-specific security risks?
On the operational side: has a risk analysis under the HIPAA Security Rule been conducted that accounts for the AI features? Does the organization's HIPAA training program address appropriate use of AI tools with PHI? Is there a process for reviewing and approving new AI features added to covered platforms? Is there an incident response process that covers AI-specific breach scenarios?
On the AI-specific side: has the EU AI Act classification been determined for the specific use case? Has a DPIA been conducted if EU residents' data is processed? Has a NIST AI RMF assessment been conducted for high-risk clinical AI tools?
None of these questions are unreasonable. All of them should have answers before an AI tool is deployed in a healthcare context.
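One practical way to make sure they do is to encode the assessment as a structured checklist that your intake process fills in and that flags open items before deployment. The sketch below is a minimal illustration: the four dimensions mirror the paragraphs above, and the abbreviated question wording is illustrative, not an authoritative form.

```python
# Hypothetical intake checklist; answers start as None (unanswered).
AI_HIPAA_CHECKLIST = {
    "contractual": [
        "BAA in place",
        "BAA explicitly covers AI processing",
        "BAA addresses model training data usage",
        "BAA identifies AI-specific sub-processors",
        "AI-specific retention, deletion, breach notification terms",
    ],
    "technical": [
        "Location of AI inference documented",
        "Encryption standards for PHI in AI pipelines",
        "Access controls on AI inputs and outputs",
        "Audit logging for AI interactions with PHI",
        "Documented approach to prompt injection",
    ],
    "operational": [
        "Security Rule risk analysis covers AI features",
        "HIPAA training addresses AI tool use",
        "Review process for new AI features",
        "Incident response covers AI-specific breaches",
    ],
    "ai_specific": [
        "EU AI Act classification determined",
        "DPIA conducted if EU residents' data processed",
        "NIST AI RMF assessment for high-risk clinical tools",
    ],
}

def new_assessment() -> dict:
    """Create a blank assessment with every item unanswered."""
    return {dim: {q: None for q in items} for dim, items in AI_HIPAA_CHECKLIST.items()}

def open_items(assessment: dict) -> list[tuple[str, str]]:
    """Return (dimension, question) pairs that are unanswered or failed."""
    return [(dim, q) for dim, answers in assessment.items()
            for q, ok in answers.items() if not ok]

assessment = new_assessment()
assessment["contractual"]["BAA in place"] = True
print(len(open_items(assessment)))  # everything else is still open
```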
The Hallucination Risk in Clinical Contexts
There is a HIPAA-adjacent issue with clinical AI tools that deserves its own discussion, because it represents a category of risk that most healthcare compliance programs are not yet systematically addressing.
AI systems can generate outputs that are factually incorrect. In clinical contexts, factually incorrect AI outputs about medications, dosages, diagnoses, treatment protocols, or patient information are not just a quality problem. They are a patient safety problem. And if a clinician relies on an AI-generated output that turns out to be incorrect because the system hallucinated, the question of organizational liability will include whether appropriate safeguards were in place.
Under the FTC Act's prohibition on deceptive practices, representing an AI tool as a reliable clinical resource when it is known to produce incorrect outputs at a material rate could constitute a deceptive practice. Under Joint Commission standards, clinical decision support tools require leadership oversight that encompasses accuracy monitoring. Under general negligence principles, deploying a clinical AI tool without documented accuracy testing, without appropriate disclaimers, and without human oversight processes that are capable of catching errors is a risk management failure.
The assessment of clinical AI tools needs to include a documented review of the vendor's approach to accuracy monitoring, hallucination mitigation, and error disclosure. If the vendor cannot provide this documentation, that is itself a finding.
A Note on the Minimum Necessary Standard
The HIPAA minimum necessary standard requires covered entities and business associates to make reasonable efforts to limit the use or disclosure of PHI to the minimum necessary to accomplish the intended purpose.
AI tools create a particular challenge for minimum necessary compliance because the nature of AI processing is that more data generally produces better outputs. A clinical summarization tool given access to the comprehensive patient record produces better summaries than one working from partial records. A diagnostic support tool with access to full patient history produces better recommendations than one with limited context.
The tension between the minimum necessary standard and the data appetite of AI systems is real and has not yet been fully resolved in HIPAA guidance. In practice, it means healthcare organizations deploying AI tools need to make documented decisions about what data the AI system has access to, why that level of access is necessary for the intended use case, and how access is limited to what is actually needed.
Giving an AI tool access to the entire patient record when the tool's function only requires specific data elements is not consistent with minimum necessary. Documenting the access decision and the reasoning behind it is.
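A documented, per-use-case allowlist of data elements is one way to turn that reasoning into something enforceable. The sketch below is illustrative only: the use cases, field names, and the assumption that patient data passes through a filter before reaching the AI tool all describe one possible integration, not a prescribed architecture.

```python
# Hypothetical minimum-necessary allowlists, one per approved AI use case.
MINIMUM_NECESSARY = {
    "discharge_summary": {"encounter_notes", "medications", "discharge_plan"},
    "scheduling_assistant": {"appointment_history", "preferred_contact_times"},
}

def restrict_to_minimum_necessary(record: dict, use_case: str) -> dict:
    """Return only the data elements approved for this use case.

    Raises if the use case has no documented allowlist, which forces the
    access decision to be made (and documented) before any PHI flows.
    """
    if use_case not in MINIMUM_NECESSARY:
        raise ValueError(f"No documented minimum-necessary decision for {use_case!r}")
    allowed = MINIMUM_NECESSARY[use_case]
    return {field: value for field, value in record.items() if field in allowed}

patient_record = {
    "encounter_notes": "...",
    "medications": "...",
    "discharge_plan": "...",
    "full_history": "...",       # not needed for this use case
    "insurance_details": "...",  # not needed for this use case
}

print(restrict_to_minimum_necessary(patient_record, "discharge_summary").keys())
```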
This article is for informational purposes only and does not constitute legal advice. AI governance and regulatory compliance requirements vary by organization, jurisdiction, and use case. Consult qualified legal counsel before making compliance determinations or relying on this content for any legal, regulatory, or business purpose.
Related reading
- How to Assess an AI Tool for GDPR Compliance: A Step-by-Step Guide for DPOs
The same AI tool can be perfectly fine in one deployment and a regulatory nightmare in another. Here is how to tell the difference before your organization finds out the hard way.
- The AI Tool Assessment Checklist Every Privacy Team Needs in 2026
Print this out. Put it in your intake process. Stop discovering compliance problems after the tool is already in production.
- EU AI Act for Deployers: What Privacy Teams Need to Know Before August 2026
You did not build the AI tool. You did not train the model. You are still responsible for what happens when your organization uses it.