arXiv:2603.03359v2 Announce Type: replace-cross
Abstract: Automatic speech recognition (ASR) systems exhibit persistent performance disparities across accents, but whether these gaps reflect superficial biases or deep structural vulnerabilities remains unclear. We introduce ACES, a three-stage audit that extracts accent-discriminative subspaces from ASR representations, constrains adversarial attacks to them, and tests whether removing them improves fairness. On Wav2Vec2-base with seven accents, imperceptible perturbations (~60 dB SNR) along the accent subspace amplify the word error rate (WER) disparity gap by nearly 50% (21.3 → 31.8 pp), exceeding random-subspace controls; a permuted-label test confirms specificity to genuine accent structure. Partially removing the subspace worsens both WER and disparity, revealing that accent-discriminative and recognition-critical features are deeply entangled. ACES thus positions accent subspaces as powerful fairness-auditing tools, not simple erasure levers.
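The abstract does not specify how ACES extracts the accent-discriminative subspace or constrains perturbations to it. A minimal sketch of one plausible reading, assuming the subspace is taken as the top directions of the between-accent scatter of representation means and perturbations are confined by orthogonal projection; all function names here are illustrative, not the paper's API:

```python
import numpy as np

def accent_subspace(reps, labels, k):
    """Estimate a k-dim accent-discriminative subspace.

    reps: (n, d) pooled ASR representations (e.g., from Wav2Vec2-base)
    labels: (n,) accent ids
    Returns a (d, k) orthonormal basis spanning the top-k directions
    of the between-accent scatter of class means.
    """
    mu = reps.mean(axis=0)
    means = np.stack([reps[labels == c].mean(axis=0)
                      for c in np.unique(labels)])
    scatter = (means - mu).T @ (means - mu)      # between-class scatter (d, d)
    _, vecs = np.linalg.eigh(scatter)            # eigenvalues ascending
    return vecs[:, -k:]                          # top-k eigenvectors

def constrain_to_subspace(delta, basis):
    """Project an adversarial perturbation onto the accent subspace,
    so the attack only moves along accent-discriminative directions."""
    return basis @ (basis.T @ delta)
```

A random-subspace control (as in the abstract) would replace `basis` with a random orthonormal matrix of the same shape; the permuted-label test would shuffle `labels` before estimating the subspace.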
LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels
arXiv:2603.19312v1 Announce Type: cross
Abstract: Joint Embedding Predictive Architectures (JEPAs) offer a compelling framework for learning world models in compact latent spaces, yet existing methods