Artificial intelligence (AI) chatbots powered by large language models (LLMs) such as ChatGPT offer a promising approach for delivering scalable, personalized physical activity interventions. Despite growing interest in applying these tools to health behaviour change, concerns remain regarding accuracy, safety, hallucinations, privacy, and theoretical grounding. This mini-review summarizes current methods for creating customized ChatGPT-based chatbots for physical activity promotion and outlines approaches for evaluating their performance. A literature search was conducted across five databases, supplemented by white papers and OpenAI technical reports. Three primary customization strategies were identified: retrieval-augmented generation (RAG), system prompt engineering, and fine-tuning. RAG enhances accuracy by grounding responses in curated guidelines and behaviour-change frameworks. System prompts define the chatbot’s role, tone, and reasoning logic. Fine-tuning adapts the model’s communication style using expert-crafted prompt–response pairs. These methods can be implemented independently or in combination, depending on intervention goals. Evaluation of customized chatbots requires both intrinsic model-based testing and extrinsic human-centred assessment. Additional considerations include protecting user privacy by avoiding the collection of identifiable data, implementing data-minimization safeguards, and managing the token-based operational costs associated with ChatGPT systems. Customized ChatGPT chatbots offer substantial potential for advancing physical activity promotion; however, safe and effective deployment requires thoughtful design, rigorous evaluation, and careful attention to privacy and cost.
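Two of the customization strategies named above, RAG and system prompt engineering, can be combined in a short sketch. The guideline snippets, word-overlap retrieval, and prompt wording below are illustrative assumptions, not methods taken from the reviewed studies; a production system would use a vector store and an actual LLM API call in place of the final message list.

```python
# Minimal RAG + system-prompt sketch for a physical activity chatbot.
# The guideline texts and scoring function are illustrative assumptions.

GUIDELINES = [
    "Adults should accumulate at least 150 minutes of moderate-intensity "
    "aerobic activity per week.",
    "Muscle-strengthening activities should be performed on two or more "
    "days per week.",
    "Older adults should include balance training to reduce fall risk.",
]

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query
    (a stand-in for embedding-based similarity search)."""
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def build_messages(user_query: str) -> list[dict]:
    """Ground the chatbot's system prompt in the retrieved guideline,
    constraining the model's role, tone, and evidence base."""
    context = retrieve(user_query, GUIDELINES)
    system = (
        "You are a supportive physical activity coach. "
        f"Base your answer only on this guideline: {context} "
        "If the guideline does not cover the question, say so."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("How many minutes of aerobic exercise per week?")
```

The resulting `messages` list is the shape expected by chat-style LLM APIs; swapping the keyword overlap for embedding similarity changes only `retrieve`, which is one reason the two strategies compose cleanly.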
A review for navigating the trade-offs: evaluating open-source and proprietary large language models for clinical and biomedical information extraction
The exponential growth of biomedical data necessitates advanced tools for efficient information extraction (IE) to support clinical decision-making and research. Large language models (LLMs) have


