arXiv:2601.09724v2 Announce Type: replace-cross
Abstract: Large language models exhibit systematic negation sensitivity, yet no operational framework exists to measure this vulnerability at deployment scale, especially in high-stakes decisions. We introduce Syntactic Framing Fragility (SFF), a framework for quantifying decision consistency under logically equivalent syntactic transformations. SFF isolates syntactic effects via Logical Polarity Normalization, enabling direct comparison across positive and negative framings while controlling for polarity inversion, and provides the Syntactic Variation Index (SVI) as a robustness metric suitable for CI/CD integration. Auditing 23 models across 14 high-stakes scenarios (39,975 decisions), we establish ground-truth effect sizes for a phenomenon previously characterized only qualitatively and find that open-source models exhibit 2.2x higher fragility than commercial counterparts. Negation-bearing syntax is the dominant failure mode, with some models endorsing actions at 80-97% rates even when asked whether agents should not act. These patterns are consistent with the negation suppression failure documented in prior work, with chain-of-thought reasoning reducing fragility in some but not all cases. We provide scenario-stratified risk profiles and offer an operational checklist compatible with EU AI Act and NIST RMF requirements. Code, data, and scenarios will be released upon publication.
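The abstract does not give the SVI formula, but the described pipeline (normalize polarity so positive and negated framings are comparable, then score disagreement) can be sketched. The following is a hypothetical instantiation, not the paper's actual definition: it flips decisions made under negated framings onto the positive framing and reports the fraction of framing pairs whose normalized decisions disagree.

```python
# Hypothetical sketch of an SVI-style metric; the paper's exact
# formula is not given in the abstract. Here, a decision is a
# yes/no boolean, and each scenario yields a pair of decisions:
# one under the positive framing, one under the negated framing.

def normalize_polarity(decision: bool, negated_framing: bool) -> bool:
    """Map a raw yes/no decision onto the positive framing.

    A "yes" to "should the agent NOT act?" becomes a "no" on the
    positive framing "should the agent act?".
    """
    return (not decision) if negated_framing else decision

def syntactic_variation_index(pairs):
    """pairs: list of (decision_positive, decision_negated) booleans.

    Returns the fraction of pairs whose polarity-normalized decisions
    disagree: 0.0 = perfectly robust, 1.0 = maximally fragile.
    """
    disagreements = sum(
        normalize_polarity(pos, negated_framing=False)
        != normalize_polarity(neg, negated_framing=True)
        for pos, neg in pairs
    )
    return disagreements / len(pairs)

# Example: a model that answers "yes" both to "should act?" and to
# "should NOT act?" is inconsistent on that scenario (first pair).
svi = syntactic_variation_index([(True, True), (True, False), (False, True)])
```

A CI/CD gate could then fail a build when `svi` exceeds a chosen threshold, which is the kind of integration the abstract suggests.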
Identifying needs in adult rehabilitation to support the clinical implementation of robotics and allied technologies: an Italian national survey
Introduction
Robotics and technological interventions are increasingly being explored as solutions to improve rehabilitation outcomes, but their implementation in clinical practice remains very limited. Understanding patient

