Background: Large language models (LLMs) such as ChatGPT are rapidly reshaping information management in health care by transforming how knowledge is accessed, communicated, and applied. However, their adoption in sensitive domains raises unresolved concerns regarding trust, privacy, and equity, especially in low- and middle-income countries with varying levels of digital readiness and institutional safeguards. Objective: This study aimed to examine the factors influencing intent to adopt LLMs among health care professionals (HCPs) and patients/caregivers (PCs) in China, with particular focus on trust, information behavior, and socio-technical readiness. Methods: We conducted a multicenter mixed-methods study across five tertiary hospitals, surveying 240 HCPs and 480 PCs and conducting semi-structured interviews with 30 participants. Quantitative analyses included logistic regression, random forest, and XGBoost models, supplemented with SHAP-based interpretability. Qualitative data were analyzed thematically to identify role-specific expectations and concerns. Results: Trust, perceived usefulness, and digital readiness emerged as the strongest facilitators of LLM adoption, while privacy concerns, limited literacy, and socioeconomic disadvantage were significant barriers. Predictive models achieved strong performance (AUC = 0.83–0.96), with trust consistently identified as the central predictor across user groups. Qualitative findings highlighted distinct perspectives: HCPs emphasized workflow integration and accountability, whereas PCs prioritized plain-language comprehensibility and emotional reassurance. Conclusions: LLM adoption in health care depends less on technical performance than on managing trust, information behaviors, and socio-technical contexts. These findings extend information management theory by positioning socio-technical readiness as a critical construct and highlight that trust and ethical concerns outweigh demographic factors. Practically, the study points to the need for trust-centered, role-sensitive system design, inclusive digital literacy strategies, and governance frameworks that promote accountability and equitable participation.
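As a rough illustration of the analysis pipeline the abstract names (logistic regression, random forest, XGBoost, SHAP-based interpretability), the sketch below fits the three classifiers on synthetic data and ranks predictors by mean absolute SHAP value. The feature names (trust, perceived_usefulness, digital_readiness, privacy_concern) and the data are illustrative assumptions, not the study's actual variables or results.

```python
# Hedged sketch of the abstract's quantitative pipeline on synthetic data.
# Feature names and the outcome-generating process are assumptions for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 720  # 240 HCPs + 480 PCs, matching the reported sample size
X = pd.DataFrame({
    "trust": rng.normal(size=n),
    "perceived_usefulness": rng.normal(size=n),
    "digital_readiness": rng.normal(size=n),
    "privacy_concern": rng.normal(size=n),
})
# Synthetic outcome: trust and usefulness raise adoption intent, privacy concern lowers it
logits = 1.5 * X["trust"] + 0.8 * X["perceived_usefulness"] - 0.6 * X["privacy_concern"]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "xgboost": xgb.XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")

# SHAP values for the gradient-boosted model, ranking predictors such as trust
explainer = shap.TreeExplainer(models["xgboost"])
shap_values = explainer.shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
for feature, importance in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{feature}: mean |SHAP| = {importance:.3f}")
```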
CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning
arXiv:2512.02551v2 Announce Type: replace-cross Abstract: In this paper, we propose CUDA-L2, a system that combines large language models (LLMs) and reinforcement learning (RL) to automatically optimize CUDA matrix multiplication kernels, surpassing cuBLAS performance.




