Background: Generative artificial intelligence (GenAI) is increasingly used in mental health care, from client-facing chatbots to clinician-facing documentation aids. Psychotherapists' willingness to rely on, or withhold reliance from, these tools has significant implications for care quality, yet little is known about how practicing clinicians calibrate trust and distrust in GenAI across tasks and contexts. Given that the therapeutic relationship is central to psychotherapy outcomes, understanding how GenAI intersects with this relational foundation is essential for responsible integration.
Objective: This study aims to examine (1) psychotherapists' experiences with, perceptions of, and trust or distrust in GenAI in therapeutic contexts and (2) how they perceive the role of GenAI within the therapeutic relationship and how these perceptions shape their trust and distrust in GenAI.
Methods: We conducted a qualitative study using semistructured interviews with 18 actively practicing psychotherapists in the United States between January and May 2025. Participants were recruited through professional mailing lists, social media, and snowball sampling. Interviews (≈60 min each) were conducted via Zoom and explored psychotherapists' experiences with, perceptions of, and trust or distrust in GenAI in therapeutic contexts. Data were analyzed using the general inductive approach, with iterative coding and team-based interpretation to identify themes.
Results: Our findings show that psychotherapists' GenAI adoption was highly individualized and contingent on maintaining professional role integrity, not merely technical oversight. Trust was sustained when GenAI operated in clinician-supervised, supportive roles for low-stakes tasks (eg, documentation and brainstorming) but diminished when control shifted, when tasks involved high-stakes clinical judgment, or when GenAI threatened to encroach on the authentic human connection central to therapy. Participants articulated conditions for trust that went beyond "human-in-the-loop" monitoring to include preservation of interpretive authority, ethical responsibility, and relational primacy. Distrust also extended to the broader sociotechnical ecosystem, including concerns about commercial incentives, insurance pressures, and the absence of clear organizational guidelines.
Conclusions: Psychotherapists' perspectives offer critical insights into GenAI's current uses in professional practice and the conditions under which clinicians are willing to trust or distrust GenAI tools. Their experiences highlight the importance of maintaining clinician control, ensuring contextual appropriateness, and preserving the human connection central to psychotherapy. Future work should further examine how therapeutic orientation, professional experience, and client characteristics shape trust and distrust in GenAI. As GenAI becomes more embedded in mental health care, research is also needed to explore how specific GenAI system features can be responsibly designed to support clinical workflows and enhance therapeutic relationships. Organizational and policy frameworks will be essential to ensure responsible, ethically aligned, and human-centered GenAI deployment in psychotherapy.
Assessing nurses’ attitudes toward artificial intelligence in Kazakhstan: psychometric validation of a nine-item scale
Background: Artificial intelligence (AI) is increasingly integrated into healthcare, yet the attitudes and knowledge of nurses, who are the key mediators of AI implementation, remain underexplored.

