The increasing use of electronic health records (EHRs) for real-world evidence (RWE) studies is hindered by substantial heterogeneity in data collection practices and local coding schemes across healthcare providers. Data standardization—particularly the mapping of locally defined medical concepts to standardized vocabularies—is therefore a critical but labour-intensive step, traditionally relying on extensive manual review by clinical experts. While a range of machine-learning (ML) approaches have been proposed to support medical concept mapping, their integration into practical, end-to-end workflows and their performance under real-world conditions remain insufficiently studied. In this work, we present ArcMAP, an end-to-end application that integrates a state-of-the-art biomedical representation model (BioLORD) into a human-in-the-loop workflow designed to streamline and accelerate medical concept mapping. ArcMAP provides a graphical user interface that enables clinical experts to efficiently review, validate, and correct automated mapping suggestions. A core component of the system is a continuous learning pipeline, in which expert feedback is systematically captured and used to update the underlying model, allowing ArcMAP to adapt to evolving coding practices and newly onboarded data sources. We conduct a comprehensive evaluation of ArcMAP across multiple deployment scenarios, including the impact of continuous fine-tuning, the onboarding of a new hospital, and a longitudinal real-world evaluation conducted over a two-month period using medication and laboratory test data from five UK-based NHS hospitals. Our results demonstrate the importance of domain-specific fine-tuning, with top-1 accuracy for laboratory test names increasing from 37.0% to 91.6%. However, when simulating the onboarding of a new hospital, the system achieves a weighted average top-1 accuracy of only 73.5%, indicating substantial variability across NHS hospital systems. 
In real-world use, ArcMAP improved mapping efficiency compared with manual workflows, while also revealing considerable variation across individual data-mapping sessions.
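The automated mapping suggestions described above can be sketched as nearest-neighbour retrieval in an embedding space: each local concept label and each standard-vocabulary concept is encoded as a vector, and the top-k most similar standard concepts are offered for expert review. The function name, toy vectors, and concept labels below are purely illustrative; in practice the embeddings would come from a biomedical representation model such as BioLORD.

```python
import numpy as np

def top_k_candidates(query_vec, concept_vecs, concept_names, k=3):
    """Return the k standard concepts most similar to a local term.

    Similarity is cosine similarity between the local term's embedding
    and each standard concept's embedding (illustrative sketch only).
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    sims = m @ q
    order = np.argsort(-sims)[:k]  # indices of the k highest similarities
    return [(concept_names[i], float(sims[i])) for i in order]

# Toy illustration: hand-made 3-d vectors stand in for model embeddings.
names = ["Haemoglobin", "Serum sodium", "Platelet count"]
vecs = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.8, 0.2],
                 [0.0, 0.2, 0.9]])
query = np.array([0.85, 0.15, 0.05])  # e.g. a local label like "HB level"

suggestions = top_k_candidates(query, vecs, names, k=2)
```

In a human-in-the-loop workflow, the returned candidate list would be shown to a clinical expert; accepted or corrected pairs can then be collected as training data for the continuous fine-tuning loop.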
Co-designing animated videos to explain large language models and their use in healthcare and research
Introduction

The increasing development of large language models (LLMs) in healthcare research is taking place without patient and public involvement and engagement (PPIE). Part of the



