Generative artificial intelligence (Gen AI) has gained immense significance in recent years, particularly in healthcare. Despite its role in streamlining healthcare-related tasks, concerns remain about the challenges of incorporating this technology into healthcare settings and its effect on diagnostic confidence. The purpose of this research is to address this gap by developing and validating a comprehensive scale that captures risks such as hallucinations and measures their impact on diagnostic confidence among healthcare practitioners. The scale was developed through a three-step process. Data were collected from healthcare professionals and analyzed using exploratory factor analysis and confirmatory factor analysis. In the third step, structural equation modeling using SmartPLS was applied to validate the hypothesized relationships. The results indicated a significant impact of awareness of extrinsic hallucinations on diagnostic confidence, whereas awareness of intrinsic hallucinations showed no significant impact. This research contributes to the existing literature on the risks associated with Gen AI by developing and validating a reliable scale to measure the challenges healthcare practitioners face when using Gen AI tools.
Co-creating a program theory and evaluability assessment for an Irish single-session, synchronous chat-based youth mental health intervention: implications for outcome evaluation
Introduction
Single-session online synchronous chat offers immediate, anonymous support for young people. However, the drop-in format attracts a diverse population with urgent and varied needs,

