arXiv:2509.13281v5 Announce Type: replace
Abstract: Current safety evaluations of language models rely on benchmark-based assessments that may miss localized vulnerabilities. We present RepIt, a simple and data-efficient framework for isolating concept-specific representations in LM activations. While existing steering methods already achieve high attack success rates through broad interventions, RepIt enables a more concerning capability: selectively suppressing refusal on targeted concepts while preserving refusal everywhere else. Across five frontier LMs, RepIt produces evaluation-evading model organisms with semantic backdoors: they answer questions related to weapons of mass destruction while still scoring as safe on standard benchmarks. We find that the edit to the steering vector localizes to just 100-200 residual-stream dimensions, and that robust concept vectors can be extracted from as few as a dozen examples on a single RTX A6000, highlighting how targeted, hard-to-detect modifications can exploit evaluation blind spots with minimal resources. By demonstrating precise concept disentanglement, this work exposes vulnerabilities in current safety evaluation practices and underscores the need for more comprehensive, representation-aware assessments.
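To make the abstract's mechanism concrete, here is a minimal sketch of the general recipe it alludes to: extract a concept direction from a handful of paired activations (a difference-of-means estimate is one common choice), localize it to the ~100-200 largest-magnitude residual dimensions, and ablate that direction from activations. All names, dimensions, and data below are illustrative stand-ins, not the authors' RepIt implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # hypothetical residual-stream width

# Synthetic stand-ins for layer activations: a dozen prompts per class,
# mirroring the low-data setting described in the abstract.
concept_acts = rng.normal(size=(12, d)) + 3.0 * (np.arange(d) < 150)  # target-concept prompts
benign_acts = rng.normal(size=(12, d))                                # unrelated prompts

# Difference-of-means estimate of the concept direction.
v = concept_acts.mean(axis=0) - benign_acts.mean(axis=0)

# Localize the edit: keep only the k largest-magnitude dimensions,
# echoing the reported 100-200 dimension footprint.
k = 150
mask = np.zeros(d, dtype=bool)
mask[np.argsort(np.abs(v))[-k:]] = True
v_sparse = np.where(mask, v, 0.0)
v_sparse /= np.linalg.norm(v_sparse)

def ablate(act, direction):
    """Project the (unit-norm) concept direction out of an activation."""
    return act - (act @ direction) * direction

edited = ablate(concept_acts[0], v_sparse)
```

After ablation, the edited activation has no component along the sparse concept direction, while activations off the concept are largely untouched because the direction occupies few dimensions; this is what makes such edits hard for broad benchmarks to detect.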

