HIPAA Compliance for AI Tools: Vendor Requirements
HIPAA compliance means nothing without proof. Your AI vendor must demonstrate BAAs, data handling policies, subprocessor transparency, and audit logging before you sign.
"HIPAA compliant" is healthcare's most overused marketing term. Your AI vendor claims compliance, but without specifics, such as a signed Business Associate Agreement, explicit data handling policies, subprocessor transparency, and audit logging, the claim means nothing. Practice managers must verify actual compliance mechanisms, not just accept vendor assurances. This guide walks you through the seven critical compliance areas every healthcare AI tool must demonstrate. For more on this topic, see our guide on EHR integration security.
Why "HIPAA Compliant" Claims Are Meaningless Without Proof
HIPAA is a regulation, not a product feature. It imposes obligations on Covered Entities (your practice) and Business Associates (your vendors). When an AI vendor says their tool is HIPAA compliant, they're sidestepping the real question: Have they structured their product, operations, and contracts to meet your legal obligations? According to the U.S. Department of Health and Human Services (HHS), over 700 healthcare data breaches affecting 10 million individuals have been reported since 2009. Many involved third-party vendors claiming compliance but lacking proper safeguards. The HIPAA Security Rule (45 CFR Parts 160 and 164) establishes baseline standards for electronic protected health information (ePHI). Your AI vendor must demonstrably meet these standards. Your practice remains liable if they fail.
The distinction is critical: HIPAA compliance is not about what the vendor says they do. It's about what you can verify they do through contracts, documentation, and auditable mechanisms.
The Business Associate Agreement: Non-Negotiable Foundation
A Business Associate Agreement (BAA) is your legal foundation. Under HIPAA regulations (45 CFR 164.502(e)), any vendor that accesses, processes, or stores protected health information must sign a BAA. Without one, your practice exposes itself to regulatory and financial liability.
What a BAA must address:
- Permitted uses and disclosures: the vendor may only use PHI for functions you specify, not for model training.
- Safeguards obligations: the vendor must implement the Administrative, Physical, and Technical Safeguards outlined in the HIPAA Security Rule.
- Subcontractor management: the vendor must impose equivalent BAA obligations on subprocessors.
- Breach notification: the vendor must notify you of a suspected breach without unreasonable delay, typically within 24-48 hours.
- Data return or destruction: upon contract termination, the vendor must return or securely destroy all PHI, with no retention.
- Audit rights: you must have the right to audit the vendor's compliance mechanisms and request documentation.
Red flag: If a vendor resists signing a BAA or tries to impose standard terms without negotiation, walk away. A compliant vendor expects and welcomes BAA requirements.
Data Handling: Where PHI Lives and Who Touches It
The HIPAA Security Rule's Technical Safeguards (45 CFR 164.312) establish specific requirements for ePHI handling. You must verify that your AI vendor meets each requirement.
Access Controls
Your vendor must implement role-based access controls (RBAC) so that staff members can only view PHI necessary for their specific functions. A billing analyst should not access clinical notes. A model trainer should not access patient identifiers.
Ask your vendor: How do you enforce role-based access to PHI? What documentation do you maintain showing which employees accessed which data? How frequently do you audit access logs for suspicious activity? Do you implement multi-factor authentication (MFA) for all access to systems containing PHI?
Good answer: We use role-based access controls integrated with single sign-on. All access to PHI is logged with timestamps and user IDs. We audit access logs daily for anomalies. MFA is mandatory for all production access. Access reviews occur quarterly.
Red flag: "Users have access based on job function," with no specifics on logging, auditing, or MFA.
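The expectations behind these questions can be sketched in a few lines. This is an illustrative minimum, not any vendor's actual implementation; the role names, PHI categories, and in-memory audit sink are all assumptions for the example:

```python
# Minimal RBAC sketch: each role maps to the PHI categories it may read,
# and every access attempt (allowed or denied) is logged.
from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "billing_analyst": {"claims", "invoices"},
    "clinician": {"claims", "clinical_notes", "patient_identifiers"},
    "model_trainer": set(),  # no PHI access at all
}

access_log = []  # stand-in for an append-only audit sink

def can_access(user_id: str, role: str, category: str) -> bool:
    """Return True only if the role permits the PHI category; log every attempt."""
    allowed = category in ROLE_PERMISSIONS.get(role, set())
    access_log.append({
        "user": user_id,
        "role": role,
        "category": category,
        "allowed": allowed,
        "ts": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

# A billing analyst is denied clinical notes, and the denial is still logged.
assert can_access("u42", "billing_analyst", "claims")
assert not can_access("u42", "billing_analyst", "clinical_notes")
```

The point of the sketch is the shape of a good answer: deny by default, scope by role, and record every attempt, including failures.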
Data Encryption
The Security Rule treats encryption as an "addressable" specification, which in practice means implement it or document an equally effective alternative; for ePHI, expect encryption both in transit and at rest. Encryption in transit (TLS 1.2 or higher for data moving between systems) is table stakes. Encryption at rest (AES-256 or equivalent for stored data) is equally non-negotiable.
Ask your vendor: What encryption standard do you use for data at rest? What is the minimum TLS version for data in transit? Who holds encryption keys, and how are they managed? Are encryption keys rotated and how frequently?
Good answer: We use AES-256 encryption for all data at rest and TLS 1.3 for data in transit. Keys are managed by AWS KMS with automatic rotation every 90 days. We enforce encryption on all databases and storage buckets.
Red flag: Vendor cannot specify encryption methods, or claims "encryption is handled by our cloud provider" without naming that provider or showing you its security documentation.
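On the transport side, the "TLS 1.2 minimum" requirement is a one-line policy in most stacks. A sketch using Python's standard ssl module, showing the setting itself rather than any vendor's configuration:

```python
# Client-side sketch: refuse any TLS connection below 1.2, matching the
# "TLS 1.2 minimum" requirement from the Security Rule discussion above.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1 outright

# Certificate validation and hostname checks stay on (the library defaults).
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

A vendor should be able to point to the equivalent setting in their own stack (load balancer policy, server config, or SDK option) just as concretely.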
The Subprocessor Chain: AI Models and Cloud Platforms
This is where most healthcare organizations trip up. Your AI vendor probably does not build or host everything in-house. They likely use:
- Cloud infrastructure (AWS, Google Cloud, Microsoft Azure) to store and process data.
- Third-party AI models (OpenAI's GPT, Anthropic's Claude, Google's Vertex AI).
- Analytics or logging services (Datadog, Sumo Logic) for monitoring and audit trails.

Each introduces a subprocessor. The critical question: does your vendor's BAA extend compliance obligations to these subprocessors? For more on this topic, see our guide on voice AI vendor evaluation.
The OpenAI Question
Many healthcare AI vendors integrate with OpenAI, and this creates immediate compliance risk. Under OpenAI's consumer-facing terms, customer content may be used to improve models unless the customer opts out, and a Business Associate Agreement is available only to qualifying API customers. For most healthcare practices, sending patient data to OpenAI without an executed BAA and explicit contractual exclusions on model training is a HIPAA violation: the practice has no mechanism to control how the data is used.
What you must verify: Does your vendor use any third-party AI platform? (List: OpenAI, Anthropic, Google, etc.) If yes, what is the data handling arrangement? Ask specifically: Are patient data or transcripts used to train or improve third-party models? Is there a BAA or Data Processing Addendum (DPA) between your vendor and the third-party AI platform? Where is the data processed? If data is sent to OpenAI's servers, even briefly, it falls outside your control.
Good answer: We use a proprietary in-house model, or Anthropic under a data processing agreement that covers PHI, for AI inference. All model inputs and outputs are processed within our secure, encrypted environment, and we have contractual guarantees that patient data is not used for model training.
Red flag: "We use OpenAI for processing" without an executed BAA, specified encryption mechanisms, or contractual exclusions on model training. OR "We anonymize data before sending it to third parties" (anonymization does not satisfy HIPAA when re-identification is possible, which is typical with medical data).
Audit Trails and Logging Requirements
The HIPAA Security Rule requires Audit Controls (45 CFR 164.312(b)): vendors must implement hardware, software, and procedural mechanisms to record and examine PHI access and activity. This means complete, searchable logs showing:
- What data was accessed (patient name, medical record number, specific records)
- Who accessed it (staff member ID, role, department)
- When it was accessed (timestamp to the second)
- What action was taken (read, create, modify, export, delete)
- From where (IP address, device ID)
- Why it was accessed (user cannot be allowed to simply browse PHI without documented justification)
Ask your vendor: Can you export complete audit logs for specific patients, date ranges, or users? How long are logs retained? Are logs immutable? Can they be altered retroactively? Can you alert us of anomalous access patterns in real time? Do you conduct regular log reviews? How do you detect unauthorized access?
Good answer: All access is logged with user ID, timestamp, action, and IP address. Logs are retained for 7 years in an immutable format. You have access to a log dashboard showing all access to your organization's data. We alert you of access from new IP addresses or outside normal business hours. Quarterly, we review logs for anomalies and report findings to you.
Red flag: Vendor cannot provide log exports. Logs are retained for fewer than 2 years. Vendor cannot demonstrate log immutability.
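Log immutability, one of the questions above, is commonly implemented by hash-chaining entries so that any retroactive edit is detectable. A toy sketch, with field names assumed to mirror the list in this section rather than any vendor's actual schema:

```python
# Tamper-evident audit log sketch: each record stores the hash of the
# previous record, so editing history breaks the chain.
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an audit record linked to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any mismatch means the log was altered."""
    prev_hash = "0" * 64
    for rec in log:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_entry(log, {"who": "u17", "what": "read", "record": "MRN-001",
                   "when": "2025-01-03T14:02:11Z", "where": "10.0.0.5",
                   "why": "scheduled follow-up review"})
append_entry(log, {"who": "u17", "what": "export", "record": "MRN-001",
                   "when": "2025-01-03T14:05:40Z", "where": "10.0.0.5",
                   "why": "patient-requested copy"})
assert verify_chain(log)

log[0]["entry"]["what"] = "delete"  # any retroactive edit is detected
assert not verify_chain(log)
```

Production systems typically anchor the chain in write-once storage, but the verification principle a vendor should be able to demonstrate is the same.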
Access Controls and Authentication
Beyond role-based access, your vendor must enforce strong authentication and authorization:
- Multi-factor authentication (MFA) for all access to systems containing PHI.
- Session timeouts for inactive sessions (e.g., 15 minutes for clinical data access).
- Privileged access management (PAM) for vendor staff performing administrative functions.
- User provisioning and deprovisioning (access revoked immediately, not after a 30-day delay).
Ask your vendor: Do you require MFA for all access to PHI? What MFA methods do you support? What is your session timeout policy? Do you have a formal process for deprovisioning users within 24 hours of termination? How do you prevent unauthorized privilege escalation?
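The session-timeout requirement reduces to a single comparison against last activity. A minimal sketch, assuming the 15-minute idle limit used as the example above:

```python
# Idle-session check: a session is only valid if its last activity falls
# within the idle limit. The 15-minute value is the illustrative limit
# for clinical data access mentioned in the text.
from datetime import datetime, timedelta, timezone

IDLE_LIMIT = timedelta(minutes=15)

def session_active(last_activity: datetime, now: datetime) -> bool:
    """True if the session has been active within the idle limit."""
    return now - last_activity <= IDLE_LIMIT

now = datetime(2025, 1, 3, 12, 0, tzinfo=timezone.utc)
assert session_active(now - timedelta(minutes=10), now)
assert not session_active(now - timedelta(minutes=16), now)
```

What matters in vendor diligence is that this check runs server-side on every request, not merely in the browser.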
Breach Notification and Incident Response
Under HIPAA's Breach Notification Rule (45 CFR 164.404-414), if an unsecured PHI breach occurs, your practice must notify affected individuals and HHS without unreasonable delay, and in no case later than 60 days after discovery. If your vendor causes the breach, the vendor must notify you without unreasonable delay so you can meet those deadlines.
Your vendor's BAA must specify:
- Notification timeline (within 24 hours or another defined timeframe).
- Information provided (nature of breach, data involved, individuals affected, preliminary containment steps).
- Cooperation (vendor must cooperate with your investigation).
- Remediation (vendor must explain what went wrong and how it will prevent recurrence).
- Cost responsibility (typically the vendor's, when the breach is caused by vendor negligence).
You should also ask: What is your incident response plan? Do you conduct annual penetration testing and vulnerability assessments? Do you have cyber liability insurance? Have you ever experienced a breach?
Good answer: We have a formal incident response plan tested quarterly. Any suspected breach triggers immediate notification to you within 2 hours. We conduct annual third-party penetration testing and share non-public results under NDA. We carry $10M in cyber liability insurance. We have not experienced a breach. For more on this topic, see our guide on AI operations business case.
Red flag: Vendor has experienced multiple breaches without explanation. Vendor cannot describe incident response procedures. No cyber insurance.
AI-Specific HIPAA Compliance Requirements
The HIPAA Security Rule applies to AI tools the same way it applies to any system handling ePHI. However, AI introduces unique compliance considerations.
Prohibited Use Cases
Your AI vendor must contractually guarantee they will NOT:
- Use patient data for model training without explicit, written consent from each affected individual.
- Use patient data to build competitive products or share insights with third parties.
- De-identify data and then re-identify it for secondary purposes.
- Sell or license patient data to pharmaceutical companies, insurers, or other entities.
- Share data with foreign affiliates without proper Data Processing Agreements and regulatory review.
Ask directly: Can you show me in your terms of service where you explicitly prohibit using our patient data for model training or any purpose other than delivering your service to us?
Model Transparency
You have the right to understand how the AI model processes your data. This does not mean you need the full neural network weights (which are proprietary), but you should understand: What data does the model ingest? What data does the model output? What training data was used to build the model? Has the model been validated in healthcare settings? Are there known limitations or failure modes?
Good answer: Our documentation specifies exactly what data inputs the model accepts, what outputs it generates, and what the model was trained on. We provide accuracy metrics by specialty. We document known limitations. We do not use your data to train or improve the model.
Red flag: Vendor refuses to disclose training data sources. Claims model accuracy without supporting evidence. Uses your data to improve the model without explicit consent.
Bias and Fairness
AI models can perpetuate healthcare disparities. If your AI tool makes clinical recommendations, diagnosis suggestions, or coding decisions, ask: Has the model been evaluated for bias across racial, ethnic, gender, and age groups? Are there documented performance gaps for underrepresented populations? What mitigation strategies has the vendor implemented? This is increasingly important: the FDA and CMS are scrutinizing AI tools for bias, and your practice could face liability if an AI tool produces discriminatory outcomes.
HIPAA Compliance Checklist for AI Vendors
| Requirement | What to Ask | What Good Looks Like | Red Flag |
|---|---|---|---|
| Business Associate Agreement | Can you provide a BAA template? | BAA exists, specifies permitted uses, covers all subprocessors, includes audit rights | Vendor resists signing or says BAA not required |
| Data Encryption in Transit | What TLS version do you use? | TLS 1.2 minimum (1.3 preferred) | Encryption is handled by cloud provider without specifics |
| Data Encryption at Rest | What encryption standard for stored data? | AES-256 or equivalent, keys managed with automatic rotation | We use standard encryption without specifying algorithm |
| Access Controls | How do you enforce role-based access? | RBAC implemented, MFA required, session timeouts, privilege separation | Access is based on job function without logging |
| Audit Logging | Can I export audit logs for specific patients? | Complete logs (user, timestamp, action, IP), 6+ years retention, immutable | Logs retained less than 2 years or cannot be exported |
| Third-Party AI Models | Do you use OpenAI, Anthropic, or other services? | Uses in-house model OR has contractual guarantee no data sent to third parties | We use OpenAI without BAA or data handling details |
| Breach Notification | How quickly will you notify us of a breach? | Written policy, notification within 24 hours, incident response plan | No formal breach notification procedure |
| Data Retention and Deletion | What happens to our data after contract ends? | Data destroyed or returned within 30 days, verified in writing | We may retain data for analytics or no deletion timeline |
| Model Training Policy | Can you use our data to improve your model? | Contractual guarantee: NO use for training, NO secondary purposes | We use aggregated/anonymized data for model improvement |
| Subprocessor Transparency | What cloud provider and third-party services do you use? | List provided, each subprocessor has equivalent BAA obligations | We use various cloud services without specifics |
| Incident Response | Can you describe your incident response plan? | Formal plan, annual testing, forensic capabilities, cyber insurance | No written plan, no cyber insurance |
| Penetration Testing | Have you been penetration tested? | Annual third-party testing, results shared under NDA, remediation tracking | No testing or vendor refuses to disclose |
Building Your AI Vendor Compliance Checklist
- Request the BAA template. Review with your legal counsel. Do not accept a vendor's standard BAA without negotiation.
- Request security documentation. Ask for a SOC 2 Type II report (completed within the last 12 months), which audits controls across security, availability, processing integrity, confidentiality, and privacy.
- Clarify data flows. Create a data flow diagram showing where your practice's PHI goes. Is it stored in the vendor's system? Sent to third-party cloud providers? Processed by third-party AI models?
- Verify subprocessor BAAs. If the vendor uses AWS, Google Cloud, or other infrastructure, the vendor (not your practice) must hold BAAs with those providers, and your BAA with the vendor should obligate it to maintain them.
- Test audit access. Request a test of audit log access. Can you actually pull logs for a specific patient or date range?
- Verify encryption keys. Understand who holds encryption keys. Ensure they cannot decrypt your data for any reason other than delivering the service.
- Confirm breach insurance and testing. Request proof of annual penetration testing and cyber liability insurance.
Common Compliance Pitfalls to Avoid
Pitfall 1: Assuming Anonymization Is HIPAA Compliant
Vendors often claim they anonymize data before sending it to third-party AI platforms. Under HIPAA, data escapes PHI status only if it is properly de-identified under the Safe Harbor method (removal of the 18 specified identifiers) or the Expert Determination method (45 CFR 164.514(b)). Even then, if re-identification is possible, which it almost always is with healthcare data, the data is not truly anonymized. Many vendors conflate pseudonymization with anonymization. Pseudonymized data is still PHI.
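The pseudonymization trap can be made concrete: hashing an identifier looks anonymous, but when the identifier space is small and structured, the mapping can be rebuilt. A toy demonstration with made-up medical record numbers:

```python
# Toy demonstration: SHA-256-hashing a medical record number is
# pseudonymization, not anonymization. Because MRNs come from a small,
# structured space, the hash-to-MRN mapping can be rebuilt by brute force,
# so the "anonymized" data is still re-identifiable PHI.
import hashlib

def pseudonymize(mrn: str) -> str:
    return hashlib.sha256(mrn.encode()).hexdigest()

token = pseudonymize("MRN-004217")  # what a vendor might send downstream

# Re-identification: enumerate the plausible MRN space and match the hash.
rebuilt = {pseudonymize(f"MRN-{i:06d}"): f"MRN-{i:06d}" for i in range(10_000)}
assert rebuilt[token] == "MRN-004217"
```

Salting or encrypting the identifier changes who can reverse it, not whether it is reversible; either way, the data remains PHI under HIPAA.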
Pitfall 2: Relying on Cloud Provider Security Alone
Your AI vendor might say "AWS is HIPAA compliant, so we're compliant." AWS offers HIPAA-eligible services, but your vendor must configure those services correctly. AWS compliance does not automatically make your vendor compliant: your vendor is responsible for implementing access controls, encryption key management, audit logging, and breach procedures.
Pitfall 3: Not Requiring Subprocessor Agreements
Your vendor might say "our subprocessors are bound by our standard terms." This is insufficient. You need explicit contractual language requiring the vendor to impose BAA obligations on all subprocessors.
Pitfall 4: No Audit Rights
Your vendor might say "we cannot let you audit our systems for security reasons." Compliance requires audit rights. The BAA must grant you the right to audit (or to designate an auditor) to verify the vendor's compliance.
Pitfall 5: Indefinite Data Retention
Your vendor might say "we keep data for 7 years for compliance purposes." HIPAA requires covered entities to retain compliance documentation for six years, but it does not require your vendor to retain your PHI beyond the scope of your relationship. Once the contract ends, the vendor must return or destroy your data. Extended retention is a liability.
Using AI Safely: Best Practices for Your Practice
- Limited scope. Use AI tools for specific, documented purposes. Do not use them for general data exploration or secondary analytics without explicit policy.
- Role-based access. Not every staff member needs access to every AI tool. Limit access to roles that directly use the tool.
- User training. Train staff on HIPAA obligations, AI tool functionality, and what to do if they suspect a breach.
- Regular audits. Review audit logs monthly for suspicious access patterns.
- Incident response plan. Document what your practice will do if the AI vendor suffers a breach or if you discover unauthorized access.
- Vendor oversight. Maintain ongoing communication with your AI vendor. Request periodic compliance attestations.
The Bottom Line
"HIPAA compliant" is not a claim; it's a commitment. Your AI vendor must prove compliance through executed BAAs, documented controls, subprocessor transparency, audit logging, and verifiable safeguards. Use the checklist in this guide to vet vendors. If a vendor resists transparency or cannot provide evidence of compliance, walk away. The regulatory and financial liability of a breach far outweighs the convenience of an AI tool that claims compliance but cannot prove it. Your practice is liable for your vendor's failures. Verify compliance before signing, and audit it continuously after.
Frequently Asked Questions
Are AI tools HIPAA compliant?
No product is inherently HIPAA compliant. AI tools become HIPAA-compliant through specific configurations, controls, and contracts. A vendor's claim of HIPAA compliance is meaningless without proof: an executed Business Associate Agreement, documented data handling procedures, encryption standards, audit logging, subprocessor transparency, and verification through SOC 2 audits. Ask your vendor to prove compliance through documentation, not just marketing claims.
What should a BAA with an AI vendor cover?
A BAA must specify permitted uses of PHI (only for delivering the service, not for model training), require the vendor to implement Security Rule safeguards, obligate subprocessor management, establish breach notification timelines, grant you audit rights, and define data return or destruction upon contract termination. Do not accept a vendor's standard BAA without negotiation.
Can AI vendors use patient data for model training?
Not without explicit, written consent from affected individuals, and most vendors do not obtain such consent. Your BAA must contractually prohibit model training with your data. Be cautious of vendors that use third-party AI platforms like OpenAI; unless they have a specific data processing agreement, sending your patient data to those platforms violates HIPAA.
What is the subprocessor chain for AI in healthcare?
Your AI vendor likely relies on cloud infrastructure (AWS, Google Cloud, Azure), third-party AI models (OpenAI, Anthropic, Google Vertex), and analytics services (Datadog, Sumo Logic). Each is a subprocessor. Your vendor's BAA must require that all subprocessors are bound by equivalent BAA obligations. If your vendor cannot provide a list of subprocessors or confirm they have BAAs with them, that is a compliance red flag.
What HIPAA Security Rule requirements apply to AI tools?
All of them. The Administrative, Physical, and Technical Safeguards in 45 CFR Parts 160 and 164 apply to AI tools the same way they apply to any system handling ePHI. Key requirements include access controls (role-based, MFA), encryption (TLS in transit, AES-256 at rest), audit logging (searchable, immutable logs retained 6+ years), breach notification procedures, and contractual prohibitions on secondary uses such as model training. Your vendor must demonstrate compliance with each requirement.