Explainable and reproducible AI: culturally responsive AI for health equity in minoritized groups

Artificial intelligence (AI) is transforming healthcare by enabling advanced diagnostics, personalized treatments, and improved operational efficiency. By identifying complex patterns and correlations in data, AI can support clinical decision-making, enabling faster diagnoses and treatment decisions tailored to the needs of diverse communities. Realizing these benefits, however, requires that clinical AI models be consistent, reliable, and validated across diverse populations and clinical environments. Moreover, because the patterns and correlations these models exploit are often unexpected, AI models demand greater explainability than other medical technologies. This is especially true for complex models, whose internal decision processes are often opaque to both model developers and clinicians, which is why AI models are frequently described as “black boxes”. To address this fundamental challenge of interpretability, explainable AI (XAI) has emerged as a critical approach, providing insight (often post hoc) into why a model produced a given output. Studies have shown that most physicians prefer explainable models over non-explainable ones. This commentary therefore explores key considerations needed to ensure that AI promotes health equity in marginalized communities, building on similar shifts toward anticipatory health action that have been explored in humanitarian and climate AI contexts (8, 9). We argue that equity in AI depends on embedding explainability and reproducibility within culturally responsive frameworks that address historical and structural bias.
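To make the idea of post-hoc explanation concrete, the sketch below illustrates one common XAI technique, permutation feature importance: shuffle one input feature at a time and measure how much the model's predictive score degrades. Everything here is hypothetical for illustration — the "black box" model, the features (age, blood pressure, a noise column), and the tiny synthetic cohort are stand-ins, not any specific clinical model or dataset from this commentary.

```python
import random

def model_predict(row):
    # Hypothetical "black box" risk model: uses age and systolic BP,
    # and silently ignores the third (noise) feature.
    age, bp, noise = row
    return 0.03 * age + 0.02 * bp

def score(rows, labels):
    # Negated mean squared error, so higher = better fit.
    return -sum((model_predict(r) - y) ** 2 for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, n_repeats=10, seed=0):
    """Post-hoc importance: average score drop when a feature column is shuffled."""
    rng = random.Random(seed)
    baseline = score(rows, labels)
    importances = []
    for j in range(len(rows[0])):
        drops = []
        for _ in range(n_repeats):
            col = [r[j] for r in rows]
            rng.shuffle(col)
            permuted = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, col)]
            drops.append(baseline - score(permuted, labels))
        importances.append(sum(drops) / n_repeats)
    return importances

# Tiny synthetic cohort: (age, systolic BP, irrelevant noise feature).
rows = [(40, 120, 5), (55, 140, 9), (70, 160, 1), (30, 110, 7), (65, 150, 3)]
labels = [model_predict(r) for r in rows]  # labels the model fits exactly

imp = permutation_importance(rows, labels)
# Shuffling an informative feature (age, BP) degrades the score;
# shuffling the ignored noise feature leaves it unchanged.
```

A clinician-facing report built on such scores can state which inputs actually drove a risk prediction — including whether a proxy for a sensitive attribute is doing the work, which is exactly the kind of audit culturally responsive deployment requires.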
