arXiv:2604.04951v1 Announce Type: cross
Abstract: Imagine receiving a video call from your CFO, surrounded by colleagues, asking you to urgently authorize a confidential transfer. You comply. Every person on that call was fake, and you just lost $25 million. This is not a hypothetical. It happened in Hong Kong in January 2024, and it is becoming the template for a new generation of fraud. AI has not invented a new crime. It has industrialized an ancient one: the manufacture of trust.
This paper proposes Synthetic Trust Attacks (STAs) as a formal threat category and introduces STAM, the Synthetic Trust Attack Model, an eight-stage operational framework covering the full attack chain from adversary reconnaissance through post-compliance leverage. The core argument is this: existing defenses target synthetic media detection, but the real attack surface is the victim’s decision. When human deepfake detection accuracy sits at approximately 55.5%, barely above chance, and LLM scam agents achieve 46% compliance versus 18% for human operators while evading safety filters entirely, the perception layer has already failed. Defense must move to the decision layer. We present a five-category Trust-Cue Taxonomy, a reproducible 17-field Incident Coding Schema with a pilot-coded example, and four falsifiable hypotheses linking attack structure to compliance outcomes. The paper further operationalizes the author’s practitioner-developed Calm, Check, Confirm protocol as a research-grade decision-layer defense. Synthetic credibility, not synthetic media, is the true attack surface of the AI fraud era.
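To make the decision-layer idea concrete, the sketch below shows what a verification gate in the spirit of Calm, Check, Confirm might look like in code. This is a minimal illustration under stated assumptions, not the paper's specification: the field names, thresholds, and function signatures are all hypothetical, and the actual protocol and 17-field schema are defined in the paper itself.

```python
# Hypothetical sketch of a decision-layer gate for high-risk requests,
# loosely inspired by the Calm, Check, Confirm idea described above.
# All names, fields, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    requester: str         # claimed identity, e.g. "CFO"
    channel: str           # e.g. "video_call", "email"
    action: str            # e.g. "wire_transfer"
    urgency_claimed: bool  # does the request demand immediate action?
    amount_usd: float = 0.0

def calm(req: Request) -> bool:
    # Step 1 (Calm): claimed urgency on a high-value action is itself a
    # trust cue to slow down, not a reason to comply faster.
    return not (req.urgency_claimed and req.amount_usd > 10_000)

def check(req: Request, known_channels: set[str]) -> bool:
    # Step 2 (Check): did the request arrive through an expected,
    # pre-registered channel for this kind of action?
    return req.channel in known_channels

def confirm(req: Request, callback_directory: dict[str, str]) -> bool:
    # Step 3 (Confirm): verify out of band, using contact details held
    # *before* the request arrived (never via the requesting channel).
    # Placeholder: a real implementation would place the callback.
    return req.requester in callback_directory

def decision_layer_gate(req: Request,
                        known_channels: set[str],
                        callback_directory: dict[str, str]) -> str:
    if (calm(req) and check(req, known_channels)
            and confirm(req, callback_directory)):
        return "proceed"
    return "escalate"  # fail closed: route to human verification

if __name__ == "__main__":
    req = Request("CFO", "video_call", "wire_transfer",
                  urgency_claimed=True, amount_usd=25_000_000)
    print(decision_layer_gate(req, {"email"}, {"CFO": "+1-555-0100"}))
    # -> "escalate": the urgency cue trips the Calm step
```

The design choice worth noting is that the gate fails closed: no combination of convincing audio, video, or social cues can push a high-risk request past it, which is precisely what it means to move defense from the perception layer to the decision layer.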
Identifying needs in adult rehabilitation to support the clinical implementation of robotics and allied technologies: an Italian national survey
Introduction
Robotics and technological interventions are increasingly being explored as solutions to improve rehabilitation outcomes, but their implementation in clinical practice remains very limited. Understanding patient


