Background
Academic mentoring plays a critical role in monitoring student progress, maintaining academic integrity, identifying early signs of risk, and delivering personalized guidance to improve learning outcomes. Traditionally, this has relied on face-to-face interaction; however, advances in artificial intelligence (AI) have opened new opportunities for AI-assisted mentoring. While promising, many existing AI models for student monitoring and risk identification are complex and difficult to implement in real-world academic settings. To address this challenge, the present study validates a simplified AI co-mentor model designed to identify at-risk students efficiently and to support continuous, pedagogy-focused academic monitoring.

Methods
This study will employ a prospective mixed-methods pilot design to evaluate the feasibility, acceptability, and analytic agreement of an AI-assisted assessment framework in medical education. Participants will include approximately 40 undergraduate medical students and their faculty assessors. Primary outcomes focus on implementation feasibility and acceptability, assessed using structured student and faculty surveys, system-usage metrics, and qualitative feedback. Secondary outcomes evaluate the analytic agreement between AI-derived competency profiles and faculty assessments. The AI component uses unsupervised machine-learning clustering to group students according to multidimensional performance indicators, without prior labels. Agreement will be examined using confusion matrices, percentage agreement, and Cohen's Kappa, reported with confidence intervals to account for the exploratory sample size. Given the pilot nature of the study, resampling-based validation (repeated stratified k-fold cross-validation) will be used to assess stability rather than definitive diagnostic accuracy.
Ethical approval was obtained, and all data will be deidentified before analysis.

Discussion
This study will be conducted on a cohort of 40 students at a Health Sciences College in the UAE to evaluate the validity of the proposed AI co-mentoring model for monitoring academic performance throughout a semester. The AI models (supervised and segmentation engines) will be tested at two time points: the 5th and 10th weeks. At each time point, student performance data, categorized according to pedagogical parameters, will be uploaded to the AI platform and used to automatically generate a personalized text draft (via local pseudonymization and an institutional mail-merge workflow). To validate the AI output, the investigator will manually evaluate each student's risk status at both checkpoints and compare the results statistically. If successful, the system could alleviate the workload of human mentors, enable timely interventions for at-risk students, and improve overall student performance and retention.
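The agreement analysis planned in the Methods (percentage agreement and Cohen's Kappa with a confidence interval suited to a pilot-sized sample) can be sketched in plain Python. The student risk labels below are illustrative placeholders, not study data, and the percentile bootstrap is one reasonable choice for the interval among several.

```python
import random
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of students on whose risk status both raters agree."""
    return sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    if p_e == 1.0:  # both raters used a single identical label throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def bootstrap_kappa_ci(rater_a, rater_b, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap CI for Kappa, appropriate for small pilot samples."""
    rng = random.Random(seed)
    n = len(rater_a)
    stats = sorted(
        cohens_kappa([rater_a[i] for i in idx], [rater_b[i] for i in idx])
        for idx in ([rng.randrange(n) for _ in range(n)] for _ in range(n_boot))
    )
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]

# Illustrative risk labels for 10 students (placeholders, not study data):
ai_labels      = ["at-risk", "ok", "ok", "at-risk", "ok",
                  "ok", "at-risk", "ok", "ok", "ok"]
faculty_labels = ["at-risk", "ok", "ok", "ok", "ok",
                  "ok", "at-risk", "ok", "at-risk", "ok"]

agreement = percent_agreement(ai_labels, faculty_labels)
kappa = cohens_kappa(ai_labels, faculty_labels)
ci_low, ci_high = bootstrap_kappa_ci(ai_labels, faculty_labels)
```

With two checkpoints (weeks 5 and 10), the same functions would simply be applied once per time point, keeping the AI-derived and manually assigned risk labels in matched student order.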
Extraction and processing of intensive care chart data from a patient data management system
Background
Routine clinical data captured in Patient Data Management Systems (PDMS) in intensive care and perioperative settings are an invaluable resource for clinical research. However, the


