Background: Mental health problems among university students are a growing global concern, yet limited counseling resources and inadequate understanding of counseling procedures often delay timely help-seeking. Informed consent forms (ICFs) are essential for safeguarding autonomy and clarifying counseling procedures, but many universities' counseling ICFs are incomplete, ambiguous, or overly technical. Large language models (LLMs) may offer scalable assistance for improving their clarity and accessibility.

Objective: This study aimed to evaluate whether LLM-based rewriting could improve the structure, readability, content quality, and comprehensibility of university counseling ICFs, and to compare 2 advanced models (ChatGPT [GPT-5] and Grok-4).

Methods: We conducted a comparative evaluation of counseling ICFs collected from 33 Chinese universities (original texts) and generated 2 rewritten versions of each ICF using ChatGPT (GPT-5) and Grok-4. A multidimensional framework assessed (1) textual structure and readability, (2) expert-rated content quality from a counselor perspective, and (3) volunteer-rated reading comprehension from a client perspective. Comparisons between original and rewritten texts were performed using Wilcoxon signed rank tests, with linear mixed-effects models used to validate the results while accounting for rater variability.

Results: Compared with the originals, both LLM-rewritten ICFs showed significant improvements across all evaluated dimensions. The mean Lee-Yang Readability Index decreased from 28.68 (SD 5.69) to 22.39 (SD 2.13) with ChatGPT (GPT-5) and 24.37 (SD 2.32) with Grok-4 (both P<.001), and mean tone friendliness increased from 2.57 (SD 0.29) to 2.67 (SD 0.12) and 2.67 (SD 0.13), respectively. Mean expert-rated content quality improved from 45.33 (SD 8.74) to 52.54 (SD 7.92) and 55.49 (SD 7.81), respectively (P<.001), driven mainly by greater completeness and specificity of key information. Mean volunteer-rated reading comprehension scores increased from 19.02 (SD 1.32) to 22.33 (SD 0.81) and 22.05 (SD 0.90), respectively (P<.001), indicating improved clarity, readability, and acceptability. Across structural features, Grok-4 tended to produce longer rewritten forms than the originals, highlighting a potential trade-off between added informational content and document length.

Conclusions: In this comparative evaluation of 33 Chinese university counseling ICFs, LLM-based rewriting was associated with improved readability, expert-rated content quality, and volunteer-rated comprehension relative to the original forms. These findings suggest that LLMs can support the optimization of counseling documentation; however, implementation should consider practical constraints (eg, document length) and retain human oversight.
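The paired original-versus-rewritten comparison described in the Methods can be sketched as follows. The scores below are hypothetical placeholders, not the study's data, and the function is a plain-Python illustration of the Wilcoxon signed-rank statistic; in practice a statistics library (eg, SciPy's `scipy.stats.wilcoxon`) would be used to obtain the P value as well.

```python
# Wilcoxon signed-rank statistic for paired scores (pure-Python sketch).
def wilcoxon_signed_rank(before, after):
    """Return min(W+, W-) for paired samples; smaller values indicate
    a more systematic difference between the paired conditions."""
    diffs = [b - a for b, a in zip(before, after) if b != a]  # drop zero ties
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(value):
        # Average rank of `value` among the absolute differences (handles ties).
        first = abs_sorted.index(value) + 1
        count = abs_sorted.count(value)
        return first + (count - 1) / 2

    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus)

# Hypothetical readability scores (original vs rewritten ICFs), lower = easier.
original = [28.1, 30.4, 26.7, 31.2, 27.5]
rewritten = [22.0, 23.1, 21.8, 24.0, 22.5]
print(wilcoxon_signed_rank(original, rewritten))  # 0: every rewrite scored lower
```

A statistic of 0 here means all 5 hypothetical pairs moved in the same direction, the strongest possible one-sided pattern for this sample size.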
Assessing nurses’ attitudes toward artificial intelligence in Kazakhstan: psychometric validation of a nine-item scale
Background: Artificial intelligence (AI) is increasingly integrated into healthcare, yet the attitudes and knowledge of nurses, who are the key mediators of AI implementation, remain underexplored.


