Background
Artificial intelligence (AI) has the potential to revolutionize healthcare delivery in low- and middle-income countries (LMICs), yet its rapid adoption raises complex ethical, regulatory, and implementation challenges. This review investigates these barriers and identifies emerging strategies that support equitable and inclusive AI deployment in resource-limited settings.

Methods
Following the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guidelines, a systematic mapping of the literature was conducted using PubMed, Scopus, and the Cochrane Library (2000–2025), alongside global health policy reports. The search was framed using the Population, Concept, and Context (PCC) framework to identify studies addressing AI governance in LMICs. A total of 60 sources addressing ethical, regulatory, or implementation issues were analyzed across three domains derived from the WHO and OECD frameworks: governance, privacy, and AI applications.

Results
This review found that 7.4% of LMICs have adopted national AI strategies. Evidence indicates that over 60% of AI models in LMICs rely on non-representative datasets, increasing contextual bias. Of the 60 included studies, 25 focused on ethics, 17 on regulatory gaps, and 18 on implementation. Findings highlight workforce readiness gaps, with fewer than 10% of institutions offering structured AI training. Case studies from Brazil and India illustrate how these barriers are addressed through context-sensitive design.

Conclusion
Successful AI integration requires context-sensitive design, participatory governance, and capacity building. This scoping review identifies critical gaps in empirical research on operationalization and recommends a transition from digital dependency to local innovation ecosystems.
Real-world outcomes from 2,905 episodes of hospital at home care: a propensity-matched cohort study
Background
Hospital at home (HAH) services within the UK have expanded rapidly over the last 5 years, but there is comparatively little evidence demonstrating their clinical

