The exponential growth of biomedical data necessitates advanced tools for efficient information extraction (IE) to support clinical decision-making and research. Large language models (LLMs) have emerged as transformative solutions, yet their application in healthcare raises critical trade-offs between open-source (OSS) and proprietary models. This review evaluates IE workflows, including named entity recognition, relation extraction, and terminology normalization, along five axes: performance (including schema fidelity), reproducibility, cost, transparency and auditability, and patient-centric governance. Proprietary models excel in schema compliance and complex reasoning, whereas OSS models offer advantages in auditability, local control, and cost-effectiveness. The review emphasizes challenges in schema fidelity and reproducibility, as well as ethical considerations such as algorithmic fairness and data sovereignty. The analysis highlights that OSS models, though requiring domain-specific adaptation, enable greater transparency and customization for privacy-sensitive tasks, whereas proprietary systems face limitations in bias mitigation and regulatory alignment. By addressing technical, ethical, and operational challenges, this work underscores the importance of context-aware model selection to balance innovation with accountability in clinical AI deployment. The findings advocate hybrid approaches that combine OSS flexibility with proprietary capabilities to deliver equitable, reliable, and compliant healthcare solutions.
Advancing the adoption of oncology decision support tools in Europe: insights from CAN.HEAL
Effective cancer care increasingly depends on digital decision support tools (DSTs) to interpret complex clinical, molecular, and genomic data and to guide personalised treatment decisions. However,


