Technology-based interventions for Autism Spectrum Disorder (ASD) are frequently justified on the grounds that digital tools “increase engagement” and “enhance motivation.” However, across domains such as robot-assisted therapy, immersive environments (virtual and augmented reality), and ICT-based educational applications, outcomes labeled as engagement are often derived from observable indicators including gaze, time-on-task, interaction duration, task adherence, or reduced off-task behavior. While informative, these measures may primarily index sustained attention and, when considered in isolation, do not provide sufficient evidence to support inferences about intentional involvement or intrinsic motivation. In this Perspective paper, we argue that part of the literature implicitly equates increased on-task behavior with increased engagement, despite engagement and motivation being inferential constructs that require clearer operationalization. We first clarify conceptual distinctions between engagement, motivation, and sustained attention, highlighting how overlapping behavioral indicators can lead to interpretative ambiguity. We then summarize a recurring evidence pattern showing that technology-related outcomes are most consistently captured through markers of attentional stability during task performance. Finally, we propose an alternative interpretation: technology may function as a context that supports sustained attention in ASD by leveraging predictable structure, sensory coherence, repetition, and immediate feedback, and in some cases by aligning with restricted interests. The indicators most commonly reported, however, are insufficient to determine whether motivation or deeper forms of engagement have increased.
We conclude that improving conceptual precision and measurement practices is essential to interpret intervention outcomes accurately and to identify which technological components modulate attention, motivation, and active participation in autistic individuals.
Beyond the algorithm: embedding ethics for trustworthy AI in radiology and oncology
Background: Artificial intelligence (AI) in radiology and oncology promises improvements in diagnostic accuracy and efficiency yet introduces complex ethical and societal challenges. Governance efforts frequently rely

