
AI Ethics in Healthcare: A Consultant's Perspective

  • Writer: Penelope Solis
  • Dec 2
  • 4 min read

Updated: 2 days ago

When artificial intelligence (AI) is injected into the healthcare industry, it changes the way clinicians work and enables health systems to make decisions faster. But as we move from theoretical models to real-world use, the problem of AI's "black box" nature starts to reveal itself, and with it, very serious ethical and legal issues.


Coming from a background in healthcare quality and policy, I've seen how technology outpaces governance. This piece takes a closer look at the critical decisions we're facing around patient acuity, algorithmic bias, and the emerging regulatory landscape.



[Image: A modern hospital corridor showcasing digital health technology.]

The Acuity Problem


The most immediate challenge in the ethics of AI in healthcare is patient acuity. A disturbingly well-known example is the use of algorithms in hospitals to rank patients based on how "sick" they are, and therefore how many nurses to allocate to their care.

In recent protests, nurses described patient acuity scores changing in ways they couldn't understand. In fact, a survey by National Nurses United found that 69% of nurses reported that these computer-generated acuity measurements did not match their own real-world assessments. The danger here is that lower acuity scores are then used to justify reduced staffing levels.
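
To make that gap concrete, here is a minimal, hypothetical sketch in Python (the field names and numbers are invented for illustration, not any vendor's actual tool) of the kind of side-by-side check a hospital could run between the tool's acuity scores and nurses' bedside assessments before letting those scores drive staffing:

# Hypothetical sketch: flag shifts where an algorithmic acuity score
# disagrees with the bedside nurse's own assessment (all names invented).
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    unit: str
    algo_acuity: int   # 1 (low) to 5 (high), from the vendor tool
    nurse_acuity: int  # same scale, from the charge nurse

def mismatch_rate(records: list[ShiftRecord], tolerance: int = 0) -> float:
    """Share of shifts where the tool and the nurse disagree by more than `tolerance`."""
    if not records:
        return 0.0
    disagreements = sum(abs(r.algo_acuity - r.nurse_acuity) > tolerance for r in records)
    return disagreements / len(records)

shifts = [
    ShiftRecord("ICU", algo_acuity=2, nurse_acuity=4),
    ShiftRecord("ICU", algo_acuity=3, nurse_acuity=3),
    ShiftRecord("Med-Surg", algo_acuity=1, nurse_acuity=3),
]
print(f"Mismatch rate: {mismatch_rate(shifts):.0%}")  # a high rate means staffing decisions deserve a second look

If the mismatch rate looks anything like the 69% that nurses reported, the scores should not be driving staffing until someone can explain why.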


Algorithmic Bias


One algorithm that was meant to predict the health needs of millions of patients was shown, in a study published in Science, to be riddled with racial bias. The algorithm was built to predict which patients would have complicated health needs, and in a narrow sense it worked. But it got one thing badly wrong: it concluded that Black patients were healthier than they really were.


The model based this judgment on healthcare spending, a proxy that reflects access to medical services rather than actual need. And since the system has historically allocated less money to treating Black patients, the algorithm scores equally sick Black patients as lower risk, and they are flagged for less additional care.
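
A minimal synthetic sketch in Python (the variable names and distributions are my own invention; this is not the published model) shows how a cost proxy produces exactly this effect:

# Illustrative sketch with synthetic data (not the actual algorithm): when the
# training label is healthcare *cost* rather than health *need*, a group with less
# historical access to care ends up with lower risk scores at the same level of illness.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000
need = rng.poisson(2.0, n)                        # true burden of illness (unseen by the model)
group_b = rng.integers(0, 2, n)                   # 1 = historically under-served group
# Utilization and spending both track need, but are lower for group_b at equal need.
visits = rng.poisson(need * np.where(group_b == 1, 1.0, 2.0) + 0.1)
cost = 800.0 * visits + rng.normal(0, 300, n)

risk_model = LinearRegression().fit(visits.reshape(-1, 1), cost)
risk = risk_model.predict(visits.reshape(-1, 1))  # "risk score" = predicted future cost

for g in (0, 1):
    m = group_b == g
    print(f"group {g}: mean need {need[m].mean():.2f}, mean risk score {risk[m].mean():,.0f}")
# Both groups are equally sick on average, yet group 1's scores come out lower,
# so fewer of its members clear the threshold for extra care management.

The point of the sketch is the label choice: nothing in the code mentions race, yet the proxy quietly encodes the historical spending gap.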


The Regulatory Landscape


It raises a serious question: when an AI system fails to diagnose a patient's condition, who is responsible, the vendor who wants to protect their intellectual property, or the clinician who trusted the tool?


Regulatory updates like the HTI-1 Final Rule (Health Data, Technology, and Interoperability) now establish transparency requirements for predictive decision support interventions, requiring AI developers to disclose details of their training data and validation processes. As HealthIT.gov describes it, the shift is essentially from "trust us" to "show us."
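
As a rough illustration of what "show us" could look like in practice (the field names below are my own, loosely inspired by the rule's transparency themes, not the regulation's actual schema), a governance team might ask vendors to complete a structured disclosure like this:

# Hypothetical disclosure record a governance team might require from a vendor
# for a predictive decision support intervention. Field names are illustrative,
# not the HTI-1 rule's actual source-attribute schema.
from dataclasses import dataclass, field

@dataclass
class PredictiveDSIDisclosure:
    intervention_name: str
    intended_use: str
    training_data_description: str          # population, time period, sites
    excluded_populations: list[str]
    validation_summary: str                 # internal vs. external, key metrics
    known_risks_of_bias: list[str] = field(default_factory=list)

disclosure = PredictiveDSIDisclosure(
    intervention_name="Inpatient Deterioration Score (example)",
    intended_use="Early warning for general-ward deterioration",
    training_data_description="Three academic hospitals, 2018-2022 admissions",
    excluded_populations=["pediatrics", "obstetrics"],
    validation_summary="Externally validated at two community sites; AUROC 0.78",
    known_risks_of_bias=["under-representation of rural patients"],
)

A blank or evasive answer in any of these fields is itself useful information for the committee.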


Understanding AI in Healthcare


AI encompasses a range of technologies, including machine learning, natural language processing, and robotics, that can analyze vast amounts of data to provide insights and predictions. In healthcare, AI applications are diverse, including:


  • Diagnostic tools: AI algorithms can analyze medical images, such as X-rays and MRIs, to assist radiologists in identifying conditions like tumors or fractures.


  • Predictive analytics: By analyzing patient data, AI can predict disease outbreaks, patient deterioration, and treatment outcomes (a minimal sketch of this pattern follows the list).


  • Personalized medicine: AI can help tailor treatments based on individual patient profiles, improving efficacy and reducing side effects.
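
To ground the predictive analytics item above, here is a minimal sketch in Python using synthetic data (the vital-sign variables and coefficients are invented; no real patient data or specific product is implied):

# Minimal illustrative sketch: the "predictive analytics" pattern, a model trained
# on routine observations to flag patients at risk of deterioration (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2_000
heart_rate = rng.normal(85, 15, n)
systolic_bp = rng.normal(120, 18, n)
lactate = rng.gamma(2.0, 0.8, n)
# Synthetic outcome: deterioration more likely with tachycardia, hypotension, high lactate.
logit = 0.04 * (heart_rate - 85) - 0.05 * (systolic_bp - 120) + 0.9 * (lactate - 1.6)
deteriorated = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([heart_rate, systolic_bp, lactate])
X_train, X_test, y_train, y_test = train_test_split(X, deteriorated, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUROC on held-out data:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))

Every ethical question in this article starts here: the model is only as good as the data it was fit on, and a held-out score on one dataset says nothing about how the tool behaves on your patients.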


While these applications hold great promise, they also raise ethical questions regarding data privacy, bias, accountability, and the potential for dehumanization in patient care.


Case Studies in Ethical AI Implementation


Case Study 1: IBM Watson for Oncology


The "Cautionary Tale" of IBM Watson Health, which was initially touted as a miracle cure-all, has left a very sorry trail. Investigations by STAT News  showed that the system was dispensing treatments that were unsafe and completely wrong.


The IBM Watson case is not just a matter of technical error; it's a governance problem. The system was trained on hypothetical, fictional cases rather than real-world patients.


Case Study 2: The Epic Sepsis Model


The Epic Sepsis Model is another example of a proprietary system, deployed in over six hundred US hospitals, that didn't do its job. A study published in JAMA Internal Medicine found that the model failed to identify 67% of patients who developed sepsis, and frequently fired alerts only after clinicians were already treating the condition. This highlights the danger of widespread adoption without independent validation.
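
Independent validation does not have to be exotic. Here is a minimal sketch in Python (the function and example numbers are my own, not the study's code) of the kind of check a hospital could run against its own chart-confirmed sepsis cases before trusting vendor-reported performance:

# Illustrative sketch: compare a model's alerts against chart-confirmed sepsis
# in your own population (synthetic inputs, not the JAMA study's data).
def alert_performance(alerted: list[bool], had_sepsis: list[bool]) -> dict[str, float]:
    tp = sum(a and s for a, s in zip(alerted, had_sepsis))
    fn = sum((not a) and s for a, s in zip(alerted, had_sepsis))
    fp = sum(a and (not s) for a, s in zip(alerted, had_sepsis))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # share of sepsis cases the tool caught
    ppv = tp / (tp + fp) if (tp + fp) else 0.0           # share of alerts that were real
    return {"sensitivity": sensitivity, "ppv": ppv, "missed_cases": fn}

# Example: 1 caught case, 2 missed cases, 3 false alarms.
print(alert_performance([True, False, False, True, True, True],
                        [True, True, True, False, False, False]))

A sensitivity of 33%, which is what missing 67% of cases amounts to, is the kind of number that should surface before go-live, not after.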



Case Study 3: Google Health's AI for Breast Cancer Detection


Google Health developed an AI model to assist radiologists in detecting breast cancer in mammograms. The project emphasized ethical considerations by:


  • Conducting extensive testing: The team performed extensive validation on diverse datasets from both the UK and the US, ensuring the tool didn't simply memorize one hospital's data.

  • Engaging with patients: Google Health involved patients and doctors in the development process to understand their concerns and the workflow.


The result was a reduction in both false positives and false negatives, and it showed that meticulous, multi-site testing is one of the most reliable ways to ensure the safety of an AI system.


Practical Strategies for Healthcare Leaders


  1. Establish an "Algorithmic Stewardship" Committee. Do not leave AI vetting to IT procurement. Create a governance body that includes clinicians, ethicists, and risk managers. This group should have veto power over tools that cannot demonstrate clinical validity in your specific patient population.

  2. "Explainability" (XAI) When possible, reject black boxes. Governance contracts should require that AI tools offer "explainable" outputs—telling the clinician why a recommendation was made (e.g., "Sepsis Risk High because BP dropped 10%"). If your can't explain the why, patients and their caregivers may sound the alarm.

  3. Continuous "Auditing" over "Training." Ethics is not a one-time training session; it is an operational audit process. Implement quarterly reviews where AI predictions are audited against actual patient outcomes to detect "drift" or emerging bias (a minimal sketch of such an audit follows this list).
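
Here is a hypothetical sketch of that quarterly audit in Python (the column names, the subgroup field, and the 0.5 alert threshold are my own assumptions, not any vendor's schema):

# Hypothetical quarterly audit: compare stored AI risk predictions against actual
# outcomes, by quarter and by patient subgroup, to surface drift or emerging bias.
import pandas as pd
from sklearn.metrics import roc_auc_score

def quarterly_audit(df: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: 'quarter', 'subgroup', 'predicted_risk' (0-1), 'outcome' (0/1)."""
    rows = []
    for (quarter, subgroup), grp in df.groupby(["quarter", "subgroup"]):
        if grp["outcome"].nunique() < 2:
            continue  # AUROC is undefined unless both outcomes are present
        rows.append({
            "quarter": quarter,
            "subgroup": subgroup,
            "n": len(grp),
            "auroc": roc_auc_score(grp["outcome"], grp["predicted_risk"]),
            "alert_rate": (grp["predicted_risk"] > 0.5).mean(),  # assumed alert threshold
        })
    return pd.DataFrame(rows)

A sustained drop in AUROC, or a widening gap between subgroups, is exactly the "drift" the stewardship committee should chase down.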


Conclusion


The integration of AI into healthcare presents both exciting opportunities and significant ethical challenges. As we move forward, it is critical to shift from a posture of "adoption at all costs" to one of "adoption with governance." By prioritizing transparency, validating tools against real-world clinical data, and listening to clinicians who work with these systems daily, we can ensure technology serves to enhance, rather than undermine, the core values of patient care.


