Patient Concerns Regarding Artificial Intelligence Applications in Health Care: Systematic Review and Meta-Synthesis Based on Social Ecological Theory

Background: The use of artificial intelligence (AI) in health care is expanding rapidly, yet research examining patient concerns from a multilevel perspective remains scarce. Existing reviews predominantly summarize patient attitudes descriptively and lack theoretical frameworks to explain the mechanisms underlying these concerns.

Objective: This systematic review and meta-synthesis aimed to identify and analyze patient concerns regarding health care AI applications, using social ecological theory to reveal the multilevel interactive mechanisms of concern at the individual, interpersonal, organizational, and societal levels.

Methods: Following the PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses literature search extension) guidelines, the PubMed, Embase, Web of Science, CINAHL, and Scopus databases were searched on March 1, 2026. Qualitative studies exploring patient perceptions of clinical AI applications were included; studies involving only healthy populations, technical performance, or nonclinical settings were excluded. Two researchers independently screened the literature and assessed methodological quality using the JBI-QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument) checklist. Confidence in the synthesized findings was assessed using the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative Research) approach.

Results: A total of 25 qualitative studies involving 528 participants from diverse patient groups across multiple countries were included. Six themes emerged: (1) microlevel concerns about privacy and data security, including data breaches and loss of control over personal health information; (2) concerns about the limits and reliability of the technology, especially AI diagnostic accuracy and "black box" decision-making; (3) mesolevel effects on physician-patient relationships, including reduced face-to-face interaction and empathy; (4) trust and accountability issues, including unclear attribution of responsibility and inadequate institutional oversight; (5) macrolevel ethical and equity issues, including algorithmic bias and inequalities in health care access; and (6) concerns about technology diffusion and the possible replacement of health care workers.

Conclusions: This review represents the first meta-synthesis to apply social ecological theory to patient concerns regarding medical AI. Unlike previous descriptive reviews, it reveals the interconnected "ecological imbalance" mechanisms that arise at the micro-, meso-, and macrolevels when AI is embedded in health care systems. The findings suggest that many patient concerns are grounded in real issues rather than mere misunderstandings, indicating that systemic rather than isolated interventions are needed. Practical implications include explainable algorithm design at the microlevel; improved physician-patient communication and institutional accountability at the mesolevel; and coordinated global ethical norms and equity-promoting policies at the macrolevel. Limitations include the predominance of studies from developed regions, significant heterogeneity in AI application scenarios, and constraints inherent to secondary research. Nevertheless, addressing these multilevel concerns remains crucial for balancing technological advancement with patient-centered care and enabling sustainable AI integration.

Trial Registration: PROSPERO CRD420251156502; https://www.crd.york.ac.uk/PROSPERO/view/CRD420251156502

