The "Black Box" Dilemma: Why Healthcare AI Needs Clinical Governance
- Penelope Solis

- 6 days ago
- 2 min read

When we talk about Artificial Intelligence (AI) in healthcare, deploying these systems without a governance structure is a recipe for disaster. We are walking a tightrope.
The protests by nurses in New York just last week over AI and patient safety have been on my mind, and they pushed me to take a closer look at the "black box" issue at the heart of this problem.
Surveys like the one by National Nurses United show that 69% of nurses find that computer-generated acuity scores don't match their real-life assessment of a patient's condition. This has serious implications: if an algorithm artificially lowers a patient's acuity score, that score can be used to justify cutting back on staff, putting patient safety right up against operational efficiency.
One of the main issues is the clash between intellectual property and patient safety. AI systems can't always be "opened up": brand-new, modern algorithms are virtually indecipherable, and companies don't want to show the code. If a machine learning algorithm gets a patient's condition wrong because of the data it was trained on, a nurse who relies on it may send the patient down the wrong path. The question then arises as to who is liable in the case of a mistake: the company trying to protect its intellectual property, or the hospital that implemented the algorithm?
To close this dangerous gap, health systems should focus on leveraging Explainable AI. This doesn't mean showing clinicians the raw math; it means the system should offer a straightforward, human-readable explanation to justify its recommendation. For example, a black-box system provides "Patient Risk Score: 90/100," whereas a system equipped with Explainable AI makes the reasoning clear: "Patient Risk Score: 90/100 because blood pressure dropped by 10% and the patient has a medical history of sepsis." As a Health System Leader, if you don't see that level of transparency in an algorithm, you should consider that a potential red flag.
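To make the contrast concrete, here is a minimal sketch in Python of the difference between the two styles of output. The feature names, thresholds, and weights are hypothetical, invented purely for illustration; a real clinical model would be far more complex, but the point about surfacing the "why" is the same.

```python
# Minimal sketch: the same risk score, reported two ways.
# Feature names, thresholds, and weights are hypothetical, for illustration only.

def score_patient(vitals: dict) -> tuple[int, list[str]]:
    """Return a 0-100 risk score plus the human-readable factors behind it."""
    score, reasons = 0, []
    if vitals.get("bp_drop_pct", 0) >= 10:
        score += 45
        reasons.append(f"blood pressure dropped by {vitals['bp_drop_pct']}%")
    if vitals.get("history_of_sepsis", False):
        score += 45
        reasons.append("the patient has a medical history of sepsis")
    return min(score, 100), reasons

patient = {"bp_drop_pct": 10, "history_of_sepsis": True}
score, reasons = score_patient(patient)

# Black-box style output: a number with no justification.
print(f"Patient Risk Score: {score}/100")

# Explainable output: the same number, plus the reasons driving it.
print(f"Patient Risk Score: {score}/100 because " + " and ".join(reasons))
```

The explainable version costs almost nothing extra to produce, but it gives a nurse something to check against their own clinical judgment rather than a bare number to accept or reject.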
When hospitals or health systems fail to understand how an algorithm reaches its conclusions, and don't have a dedicated Algorithmic Stewardship Committee or Clinical AI Council, the financial implications can be severe. If your nurses can't see why an AI decision is correct, they won't want to use it, because they won't know what's going through its mind. And if patients notice that their nurses don't know exactly what the AI is doing, they're going to become uneasy, and you'll end up losing 100% of the money you invested in the technology.
I'm pushing for a diverse team of clinical staff to be part of the AI decision-making process, with the ability to see the reasoning behind a system's recommendations before any contract is signed. I am also advocating for establishing a governance framework built specifically for AI.



