Artificial intelligence (AI) is transforming healthcare by enabling advanced diagnostics, personalized treatments, and improved operational efficiency. By identifying complex patterns and correlations in data, AI can supplement clinical decision-making, enabling faster diagnoses and treatment decisions tailored to the needs of diverse communities. Realizing these benefits, however, requires that clinical AI models be consistent, reliable, and validated across diverse populations and clinical environments. Moreover, because the patterns and correlations these models exploit are often unexpected, clinical AI demands greater explainability than other medical technologies. This is especially true for complex models, whose internal decision processes are often opaque and uninterpretable to both model developers and medical professionals, which is why such models are frequently described as “black boxes”. To address this fundamental challenge of interpretability, explainable AI (XAI) has emerged as a critical approach, providing insight, often in a post-hoc manner, into why a model produced a given output. Studies have shown that most physicians prefer explainable AI to non-explainable AI. This commentary therefore explores key considerations for ensuring that AI promotes health equity in marginalized communities, building on similar shifts toward anticipatory health action that have been explored in humanitarian and climate AI contexts (8, 9). We argue that equity in AI depends on embedding explainability and reproducibility within culturally responsive frameworks that address historical and structural bias.
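As a brief illustration of what post-hoc explanation means in practice, the sketch below computes permutation feature importance for an otherwise opaque classifier: it measures how much held-out performance degrades when each input feature is shuffled, a proxy for that feature's influence on the model's output. This is a minimal sketch assuming a scikit-learn workflow; the public dataset, model choice, and parameters are illustrative stand-ins rather than anything described in this commentary.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black box") classifier on a public clinical dataset,
# used here purely as a stand-in for real clinical data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature on held-out data and record
# the drop in score; larger drops indicate more influential features.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features for clinician review.
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Model-agnostic methods of this kind explain predictions without requiring access to a model's internals, which is one reason they are attractive when the underlying model is too complex to interpret directly.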
Thematic landscapes and temporal trends of disability technology adoption: insights from Structural Topic Modelling
Introduction

In recent years, accessible and inclusive technologies have played an increasingly important role in supporting people with disabilities. However, prior studies on the adoption of such technologies remain



