The End of the Wild West: Healthcare AI Governance in 2025
- Penelope Solis
- 5 days ago
- 4 min read
The United States is still waiting for a unified federal standard for healthcare AI, but states are no longer hitting pause. They are pushing forward with their own distinct regulations, and from the standpoint of a multi-state health system, compliance has moved far beyond traditional checklists.
In 2025, governance is fragmenting into distinct operational burdens. Three states illustrate the pattern.

The Three Fractures of Governance
1. California: The Transparency Baseline
California has chosen to concentrate on transparency. Under AB 3030, effective January 1, 2025, healthcare providers must explicitly tell patients when generative AI is used in patient communications, unless a licensed provider reviews the output first.
The operational lift of this measure is significant. It requires a forensic inventory of your patient engagement platforms, and marketing teams and portal vendors that skip disclaimers are likely already non-compliant. The fix is to rebuild clinical workflows so that every automated message either carries a disclosure or is reviewed by a human in the loop.
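To make that workflow change concrete, here is a minimal sketch of a disclosure gate sitting in front of a patient-messaging pipeline. The message object, its flags, and the disclaimer wording are illustrative assumptions, not language from AB 3030; confirm the actual disclosure requirements with counsel.

```python
from dataclasses import dataclass

# Illustrative disclaimer text. AB 3030 prescribes where disclosures
# appear, not this exact wording; treat this string as a placeholder.
AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "Contact your care team with any questions."
)

@dataclass
class PatientMessage:
    body: str
    generated_by_ai: bool   # produced by a generative AI tool?
    human_reviewed: bool    # read and approved by a licensed provider?

def apply_disclosure_gate(msg: PatientMessage) -> PatientMessage:
    """Append a disclosure to AI-generated messages no human reviewed.

    Mirrors the AB 3030 pattern: disclosure is required unless a
    licensed provider reviews the output first.
    """
    if msg.generated_by_ai and not msg.human_reviewed:
        msg.body = f"{msg.body}\n\n{AI_DISCLAIMER}"
    return msg

# An unreviewed chatbot reply gets the disclaimer; a reviewed one does not.
reply = PatientMessage("Your lab results are ready.",
                       generated_by_ai=True, human_reviewed=False)
print(apply_disclosure_gate(reply).body)
```

The design point is that the gate runs on every outbound message, so a tool added to the platform later is covered by default rather than by a one-off audit.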
2. Colorado: The "Duty of Care" Standard
Colorado has moved beyond simple transparency into risk management. Through SB 24-205 (taking effect in 2026), the state introduces a duty of "reasonable care" for organizations deploying high-risk AI systems, reaching all the way into algorithm procurement.
This creates a new liability: algorithmic discrimination. If an AI system is a substantial factor in a consequential decision, such as a clinical diagnosis or a prior authorization denial, the health system is answerable if that algorithm discriminates against a protected group. The remedy? Annual impact assessments and a mechanism that lets patients challenge AI-driven decisions.
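What does an impact assessment actually check? SB 24-205 requires the assessment but does not prescribe a metric, so as one hedged illustration, here is a simple disparity check over prior-authorization denial rates. The 0.8 threshold borrows from the EEOC's four-fifths rule and is an assumption, not statutory language; the records are fabricated for the example.

```python
from collections import defaultdict

# Hypothetical denial records: (demographic_group, was_denied).
records = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

def denial_rate_ratios(records):
    """Return each group's ratio of best (lowest) denial rate to its own."""
    totals, denials = defaultdict(int), defaultdict(int)
    for group, denied in records:
        totals[group] += 1
        denials[group] += denied
    rates = {g: denials[g] / totals[g] for g in totals}
    best = min(rates.values())  # most favorable denial rate observed
    return {g: (best / r if r else 1.0) for g, r in rates.items()}

# Flag any group whose outcome ratio falls below the illustrative 0.8 bar.
for group, ratio in denial_rate_ratios(records).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
```

A real assessment would add confidence intervals, clinical confounders, and documentation, but even this skeleton makes the duty concrete: you cannot exercise "reasonable care" over numbers you never compute.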
3. Virginia: The Accountability Shift
In Virginia, the shift is toward structural accountability. The Joint Commission on Technology and Science (JCOTS) is recommending that health systems designate an "Accountable AI Officer" or committee.
This recommendation creates a direct line of liability, and experts are already raising alarms about personal exposure for whoever holds the role. If the governance process fails, the audit trail leads to a specific office.
The New "Must-Have" Role: The AI Governance Officer
This accountable officer will be named in governance documents, effectively designating them the "owner" of AI risk. If a critical sepsis-detection algorithm fails, the finger points straight at this officer.
To navigate this complex landscape, health systems need a new kind of leader. The AI Governance Officer, whether a single person or a specialized committee, must oversee three mission-critical systems:
The Registry: This means mapping every AI tool in the hospital, from the MRI suite to the billing office. You cannot govern "Shadow AI" until you have surfaced it, so the inventory comes first (a minimal sketch follows this list).
The Human-in-the-Loop Protocol: As Virginia’s JCOTS emphasized, governance must cover the decision-making process, not just the code. The Officer must define exactly when a human clinician intervenes and how that intervention is documented.
The Legislative Tracker: Think of this as your early warning system. It signals when a tool that was compliant yesterday—like a chatbot—requires a warning label today (as seen with California’s AB 3030).
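Pulling the three systems together, here is a minimal sketch of what a single registry entry might capture, with fields for the human-in-the-loop protocol and per-state obligations that the legislative tracker can scan. The schema, field names, and rules are illustrative assumptions, not a published standard.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class OversightMode(Enum):
    """When a human clinician intervenes (the human-in-the-loop protocol)."""
    AUTONOMOUS = "autonomous"          # no routine human review
    REVIEW_ON_EXCEPTION = "exception"  # human reviews flagged outputs only
    REVIEW_ALL = "review_all"          # human approves every output

@dataclass
class AIToolRecord:
    """One row in the AI registry; fields are illustrative, not a standard."""
    name: str
    owner_department: str              # e.g., radiology, revenue cycle
    clinical_facing: bool              # touches patients or their records?
    generative: bool                   # triggers AB 3030-style disclosure rules
    oversight: OversightMode
    states_deployed: list[str] = field(default_factory=list)
    last_impact_assessment: date | None = None  # SB 24-205 cadence: annual

def needs_attention(tool: AIToolRecord, today: date) -> list[str]:
    """Flag registry entries the legislative tracker should surface."""
    flags = []
    if (tool.generative and "CA" in tool.states_deployed
            and tool.oversight == OversightMode.AUTONOMOUS):
        flags.append("CA: unreviewed generative output needs a disclosure")
    if "CO" in tool.states_deployed and (
        tool.last_impact_assessment is None
        or (today - tool.last_impact_assessment).days > 365
    ):
        flags.append("CO: annual impact assessment overdue")
    return flags

chatbot = AIToolRecord(
    name="discharge-chatbot", owner_department="patient engagement",
    clinical_facing=True, generative=True,
    oversight=OversightMode.AUTONOMOUS, states_deployed=["CA", "CO"],
)
print(needs_attention(chatbot, date(2025, 12, 4)))
```

The practical payoff is that the tracker becomes a query over the registry: when a statute changes, you update one rule and every affected tool surfaces immediately.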
Where should this role sit? Place it in IT and it risks becoming a procurement checklist; place it in Legal and it becomes a bottleneck. The most successful organizations position the role under the Chief Strategy Officer or Chief Medical Officer, with a dotted line to IT. That keeps governance treated as a clinical safety net, not just a software update.
The "Federal Preemption" Trap
A common objection from executives is that there is no need to build this infrastructure now: the incoming federal administration, they argue, will simply preempt these state-level regulations. This is the "Federal Preemption" trap.
It is true that the incoming administration may use its power to block state-level fragmentation, potentially by withholding funding from states with restrictive AI laws.
However, waiting is not a strategy. Legal battles over Executive Orders can drag on for years. Meanwhile, the standard of care is shifting. Patient trust—especially regarding medical diagnoses—is local. Patients now expect to know if a robot is diagnosing them, regardless of what happens in Washington.
Conclusion: Build Your Governance Engine
The days of treating AI governance as an "IT problem" are over. It is now a core function of hospital operations.
My advice to health system leaders is to build a Governance Engine that is agnostic to state lines. If you build a system that can explain a decision to a patient in Denver, prove oversight to an auditor in Richmond, and disclose usage to a patient in Sacramento, you are future-proofed against whatever regulatory regime ultimately wins out.
The regulations are coming. The question is whether your governance structure will be a panic reaction or a competitive advantage.
References
Assembly Bill 3030. Health care services: artificial intelligence. California State Assembly. 2023-2024 Reg Sess (2024). Accessed December 4, 2025. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB3030
Senate Bill 24-205. Consumer Protections for Artificial Intelligence. Colorado General Assembly. 2024 Reg Sess (2024). Accessed December 4, 2025. https://leg.colorado.gov/bills/sb24-205
Kuhn J. AI in Healthcare: Policy Recommendations and Voting. Presentation at: Joint Commission on Technology and Science Meeting; November 5, 2025; Richmond, VA. Accessed December 4, 2025. https://studies.virginiageneralassembly.s3.amazonaws.com/meeting_docs/documents/000/003/073/original/AI_in_Healthcare_PRESENTATION.pdf
Bedayn J. What to know about Trump’s draft proposal to curtail state AI regulations. AP News. November 20, 2025. Accessed December 4, 2025. https://apnews.com/article/trump-executive-order-artificial-intelligence-ai-regulation-646de06404ba543dd7244d225fb27250
National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. January 2023. Accessed December 4, 2025. https://www.nist.gov/itl/ai-risk-management-framework
The Office of the National Coordinator for Health Information Technology. Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule. U.S. Department of Health and Human Services. January 9, 2024. Accessed December 4, 2025. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program-updates-hti-1
