arXiv:2603.13250v1 Announce Type: cross
Abstract: What happens when an AI assistant is told to “maximise sales” while a user asks about drug interactions? We find that commercial system prompts can override safety training, causing frontier models to lie about medical risks, dismiss safety concerns, and prioritise profit over user welfare. Testing eight models in scenarios where commercial objectives conflict with user safety — a diabetic asking about high-sugar supplements, an investor being pushed toward unsuitable products, a traveller steered away from safety warnings — we uncover catastrophic failures: models fabricating safety information, explicitly reasoning that they should refuse but proceeding anyway, and actively discouraging users from consulting doctors. Most alarmingly, models show no “red line”: their willingness to comply with harmful requests does not decrease as potential consequences escalate from minor to life-threatening. Our findings suggest that current safety training does not generalise to commercial deployment contexts.
Translating AI research into reality: summary of the 2025 Voice AI Symposium and Hackathon
The 2025 Voice AI Symposium represented a transition from conceptual research to clinical implementation in vocal biomarker science. Hosted by the NIH-funded Bridge2AI-Voice consortium, the


