
Your EHR vendor just pitched you an AI-powered documentation assistant. It promises to reduce clinician burnout, improve coding accuracy, and free up hours of administrative time. The demo was impressive. The pricing seems reasonable. Your IT team is ready to sign.
But before you approve that contract, you need to understand what your vendor isn’t volunteering about 42 CFR Part 2 compliance, business associate agreement liabilities, and the regulatory transparency requirements that took effect in 2024. These gaps in vendor disclosure represent genuine compliance challenges that many AI vendors haven’t solved yet.
When Standard HIPAA Protection Isn’t Enough
The February 2024 alignment of 42 CFR Part 2 with HIPAA created new possibilities for AI in behavioral health, but it didn’t eliminate the special protections for substance use disorder records (HHS, 2024). The rule now permits a single written consent for treatment, payment, and healthcare operations disclosures, which theoretically allows AI systems to access patient data under standard business associate agreements (HHS, 2024).
Here’s what most vendors won’t emphasize during procurement: while the rule allows broader data access for TPO purposes, it maintains heightened protection for SUD counseling notes. These notes require separate, specific patient consent before an AI system can process them (HHS, 2024), an implementation challenge that many products simply weren’t designed to handle.
Your vendor’s AI might be accessing general medical records under a TPO consent, but if it’s also processing qualitative counseling notes from substance use disorder treatment without explicit authorization, you’re assuming liability for unauthorized disclosure. The technical architecture to prevent this requires sophisticated role-based access controls that distinguish between operational data and protected clinical narratives (HHS, 2024). When you’re evaluating vendors, ask them to demonstrate how their system enforces these distinctions at the data layer. If they can’t show you the access control matrix, they haven’t solved the problem.
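To make that concrete, here is a minimal sketch of what data-layer enforcement could look like. The record types, consent fields, and function below are hypothetical illustrations, not any vendor’s actual implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto


class RecordType(Enum):
    GENERAL_MEDICAL = auto()      # accessible under the single TPO consent
    SUD_COUNSELING_NOTE = auto()  # requires separate, specific consent


@dataclass
class ConsentStatus:
    tpo_consent: bool       # single written consent for treatment, payment, operations
    sud_note_consent: bool  # explicit authorization for SUD counseling notes


def ai_may_process(record_type: RecordType, consent: ConsentStatus) -> bool:
    """Enforce the consent distinction at the data layer.

    The general TPO consent never unlocks SUD counseling notes;
    those require their own affirmative authorization.
    """
    if record_type is RecordType.SUD_COUNSELING_NOTE:
        return consent.sud_note_consent
    return consent.tpo_consent


# A patient who signed the single TPO consent but not the separate
# counseling-note authorization:
consent = ConsentStatus(tpo_consent=True, sud_note_consent=False)
assert ai_may_process(RecordType.GENERAL_MEDICAL, consent)
assert not ai_may_process(RecordType.SUD_COUNSELING_NOTE, consent)
```

The specifics will vary by platform, but if a vendor can’t walk you through logic like this in their own system, the distinction probably lives in policy documents rather than in code.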
The BAA Language That Actually Protects Your Organization
Business associate agreements have become the primary instrument for managing AI-related legal risk (HHS, 2024). The Office for Civil Rights has established clear requirements for BAAs, specifying that business associates must implement appropriate safeguards to prevent misuse of protected health information and report any security incidents or breaches (HHS, 2024).
Most vendors will readily sign a BAA that covers their infrastructure security. What they resist is language that prohibits using your patient data to train their foundation models. This distinction matters. Organizations using de-identification methods must ensure they don’t have “actual knowledge” that remaining information could be used to identify individuals (HHS, 2012). The challenge with AI systems is that they can potentially combine de-identified information with other datasets to re-identify patients, which puts your organization at risk even if you followed Safe Harbor de-identification procedures (HHS, 2012).
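To see why field removal alone isn’t enough, consider a hypothetical sketch of Safe Harbor-style redaction. The record schema and field names are illustrative; the point is what survives the stripping:

```python
# A hypothetical record schema for illustration; Safe Harbor enumerates
# 18 identifier categories (HHS, 2012), a few of which appear below.
SAFE_HARBOR_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "full_face_photo",
}


def strip_safe_harbor_fields(record: dict) -> dict:
    """Remove enumerated identifier fields from a patient record.

    This satisfies the mechanical part of Safe Harbor, but not the
    'actual knowledge' condition: an AI pipeline that later joins the
    output with auxiliary datasets can still re-identify patients.
    """
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_IDENTIFIERS}


record = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-0042",
    "diagnosis_code": "F10.20",  # quasi-identifiers like this survive
    "zip3": "981",               # and can support linkage attacks
}
print(strip_safe_harbor_fields(record))
# {'diagnosis_code': 'F10.20', 'zip3': '981'}
```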
Your BAA should explicitly prohibit using identifiable patient data for model training, establish notification timelines for security events, and grant your organization the right to audit the vendor’s algorithmic performance (HHS, 2024). If your vendor balks at these provisions, that reluctance tells you something about their compliance posture. The vendors who have invested in proper governance frameworks understand these requirements are reasonable. The ones who haven’t built these safeguards yet will try to negotiate around them.
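One practical way to hold that line during procurement is to reduce those provisions to a reviewable checklist. The sketch below is illustrative only; the provision names and structure are ours, not OCR’s:

```python
from dataclasses import dataclass


@dataclass
class BAAReview:
    """Illustrative checklist for the AI-specific BAA provisions above."""
    prohibits_training_on_identifiable_data: bool
    breach_notification_hours: int | None  # None = no defined timeline
    grants_algorithm_audit_rights: bool

    def gaps(self) -> list[str]:
        issues = []
        if not self.prohibits_training_on_identifiable_data:
            issues.append("No prohibition on model training with patient data")
        if self.breach_notification_hours is None:
            issues.append("No defined security-incident notification timeline")
        if not self.grants_algorithm_audit_rights:
            issues.append("No right to audit algorithmic performance")
        return issues


# A typical vendor-drafted BAA that covers infrastructure security
# but none of the AI-specific provisions:
print(BAAReview(False, None, False).gaps())
```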
Building Governance Before You Need It
The American Medical Association released an eight-step governance framework specifically because technology adoption in healthcare has outpaced the establishment of safety protocols (AMA, 2024). For behavioral health organizations, where an AI error can lead to profound psychological harm or legal exposure, governance isn’t an operational nice-to-have. It’s a board-level fiduciary responsibility.
The AMA framework emphasizes that clinical experts must be the primary decision-makers for AI implementation and monitoring (AMA, 2024). This means your psychiatrists, psychologists, and social workers need to evaluate whether an AI application meets the quality standards of their discipline before deployment. The framework positions AI as technology that should inform clinical decision-making while enabling healthcare professionals to independently review the basis for its recommendations (FDA, 2022). The goal is that clinicians rely on their own judgment rather than deferring to AI output (FDA, 2022).
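One way to operationalize that independent-review expectation is to require that every AI recommendation arrive with its supporting basis attached. The payload shape below is a hypothetical illustration, not an FDA or AMA specification:

```python
from dataclasses import dataclass, field


@dataclass
class AIRecommendation:
    """Hypothetical payload shape supporting independent clinician review.

    The recommendation never travels without the inputs and rationale
    a clinician needs to evaluate it on their own.
    """
    suggestion: str
    rationale: str                                           # plain-language basis
    source_inputs: list[str] = field(default_factory=list)   # data relied on
    citations: list[str] = field(default_factory=list)       # guideline references


rec = AIRecommendation(
    suggestion="Flag for SUD follow-up within 7 days",
    rationale="PHQ-9 score trend plus two missed appointments",
    source_inputs=["PHQ-9 2024-03-01", "PHQ-9 2024-04-02", "visit history"],
    citations=["Organizational follow-up protocol v3"],
)
# A reviewing clinician sees the basis, not just the output.
```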
Harvard’s “Boundaries of Tolerance” framework provides a diagnostic tool for assessing your organization’s ethical maturity regarding AI (Harvard Ethics, 2025). Organizations operating at Level 1 or Level 2 are responding reactively to regulatory prompts or fulfilling only minimum legal requirements. Moving to Level 4 requires deep enterprise-wide integration of ethical principles, including board-level AI ethics committees and access to fractional AI ethicists who can translate technical risks into strategic guidance (Harvard Ethics, 2025).
The transparency requirements of the ONC HTI-1 rule make this governance work more concrete. The rule mandates that AI developers disclose detailed information about algorithm design, validation, and performance to enable healthcare organizations to evaluate whether predictive decision support interventions are fair, appropriate, valid, effective, and safe (ONC, 2024). These FAVES disclosures shift accountability to your organization. Your clinicians need the capacity to audit these profiles and assess trustworthiness before deployment (ONC, 2024).
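In practice, that audit can begin as a structured review of each vendor disclosure against the FAVES criteria. The scoring scheme below is a hypothetical sketch, not an ONC-prescribed format:

```python
# FAVES: fair, appropriate, valid, effective, and safe (ONC, 2024).
FAVES_CRITERIA = ("fair", "appropriate", "valid", "effective", "safe")


def review_disclosure(disclosure: dict) -> dict:
    """Map each FAVES criterion to the review committee's finding.

    Findings are free-text statuses such as 'supported' or
    'insufficient evidence'; anything the vendor never addressed
    defaults to 'not addressed'.
    """
    return {c: disclosure.get(c, "not addressed") for c in FAVES_CRITERIA}


# A vendor profile that documents validation and effectiveness but
# says nothing about fairness across patient subpopulations:
vendor_profile = {
    "valid": "supported",
    "effective": "supported",
    "safe": "insufficient evidence",
}
print(review_disclosure(vendor_profile))
# {'fair': 'not addressed', 'appropriate': 'not addressed',
#  'valid': 'supported', 'effective': 'supported',
#  'safe': 'insufficient evidence'}
```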
The regulatory landscape isn’t ambiguous anymore. The standards exist. The question is whether your organization is building the governance infrastructure to meet them before an incident forces the conversation.
Are you confident your current vendor agreements and internal protocols would protect your organization if an AI system accessed SUD counseling notes without proper consent, or if a model trained on your patient data was later found to produce biased risk assessments?
If those questions create uncertainty, Xpio Health can help you evaluate your AI readiness from both compliance and operational perspectives. We work with behavioral health organizations to assess vendor contracts, build governance frameworks, and ensure AI deployment serves your mission without creating regulatory exposure. Contact us to discuss what responsible AI implementation looks like for your organization.
#BehavioralHealth #PeopleFirst #XpioHealth #AICompliance #42CFRPart2 #HealthcareGovernance
References
- HHS. Fact Sheet 42 CFR Part 2 Final Rule. HHS.gov. 2024. https://www.hhs.gov/hipaa/for-professionals/regulatory-initiatives/fact-sheet-42-cfr-part-2-final-rule/index.html
- HHS. Business Associate Contracts: Sample Business Associate Agreement Provisions. HHS.gov. 2024. https://www.hhs.gov/hipaa/for-professionals/covered-entities/sample-business-associate-agreement-provisions/index.html
- HHS. Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the HIPAA Privacy Rule. HHS.gov. 2012. https://www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html
- AMA. Governance for Augmented Intelligence: Establish a Governance Framework to Implement, Manage, and Scale AI Solutions. AMA STEPS Forward. 2024. https://edhub.ama-assn.org/steps-forward/module/2833560
- FDA. Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff. FDA. 2022. https://www.fda.gov/media/109618/download
- ONC. Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule. HealthIT.gov. 2024. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program
- ONC. HTI-1 Decision Support Interventions (DSI) Fact Sheet. HealthIT.gov. 2024. https://www.healthit.gov/sites/default/files/page/2023-12/HTI-1_DSI_fact%20sheet_508.pdf
- Harvard Ethics. From Code to Conscience: An Ethical Framework for Healthcare AI. Edmond & Lily Safra Center for Ethics. 2025. https://www.ethics.harvard.edu/news/2025/11/code-conscience-ethical-framework-healthcare-ai-0

