arXiv:2509.13281v5 Announce Type: replace
Abstract: Current safety evaluations of language models rely on benchmark-based assessments that may miss localized vulnerabilities. We present RepIt, a simple and data-efficient framework for isolating concept-specific representations in LM activations. While existing steering methods already achieve high attack success rates through broad interventions, RepIt enables a more concerning capability: selective suppression of refusal on targeted concepts while preserving refusal elsewhere. Across five frontier LMs, RepIt produces evaluation-evading model organisms with semantic backdoors, answering questions related to weapons of mass destruction while still scoring as safe on standard benchmarks. We find that the edit to the steering vector localizes to just 100-200 residual dimensions, and that robust concept vectors can be extracted from as few as a dozen examples on a single RTX A6000, highlighting how targeted, hard-to-detect modifications can exploit evaluation blind spots with minimal resources. By demonstrating precise concept disentanglement, this work exposes vulnerabilities in current safety evaluation practices and underscores the need for more comprehensive, representation-aware assessments.
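The extraction the abstract describes — a concept direction computed from a handful of activation examples, with the edit confined to 100-200 residual dimensions — can be sketched as a difference-of-means vector followed by top-k sparsification. This is a minimal illustration, not RepIt's actual implementation: the function name, the masking step, and the array shapes are all assumptions.

```python
import numpy as np

def concept_vector(pos_acts, neg_acts, k=150):
    """Hypothetical sketch: difference-of-means concept direction,
    sparsified to its k largest-magnitude residual dimensions.

    pos_acts / neg_acts: (n_examples, d_model) residual-stream
    activations for concept-related vs. neutral prompts.
    """
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    # Zero all but the top-k dimensions, mirroring the abstract's
    # observation that the edit localizes to ~100-200 dimensions.
    mask = np.zeros_like(v)
    top = np.argsort(np.abs(v))[-k:]
    mask[top] = 1.0
    return v * mask

# Toy usage: a dozen examples in a 4096-dim residual stream,
# matching the data scale the abstract reports.
rng = np.random.default_rng(0)
pos = rng.normal(size=(12, 4096)) + 0.5  # concept-related activations
neg = rng.normal(size=(12, 4096))        # neutral activations
v = concept_vector(pos, neg, k=150)
```

In a real pipeline the masked vector would then be subtracted from (or projected out of) the residual stream at inference time to suppress refusal on the targeted concept only.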
Cognitive Alignment At No Cost: Inducing Human Attention Biases For Interpretable Vision Transformers
arXiv:2604.20027v1 Announce Type: cross Abstract: For state-of-the-art image understanding, Vision Transformers (ViTs) have become the standard architecture but their processing diverges substantially from human attentional