arXiv:2602.11675v3 Announce Type: replace
Abstract: Large language models may answer causal questions correctly for the wrong reasons, substituting associational shortcuts P(Y|X) for the interventional query P(Y|do(X)). Current RL methods reward what the model answers but not why it answers that way, reinforcing these shortcuts until distribution shift exposes them. We introduce Epistemic Regret Minimization (ERM), a framework that identifies causal reasoning flaws from reasoning traces, requiring no ground-truth labels. On CausalT5K (N=1,360, 6 frontier LLMs), models bifurcate: compliant models self-correct under outcome-only reprompting, while reasoning-heavy models (GPT-4 Turbo, GPT-5.2, Claude Sonnet 3.5) resist outcome-only correction yet respond significantly to ERM's targeted causal critique. An ablation on 4,054 scenarios confirms that causal content, not prompt structure alone, drives correction for stubborn models (p=0.006), and a scenario-blind judge argues against answer leakage. Cross-benchmark evaluation on CLadder confirms that Rung Collapse generalizes beyond CausalT5K. We extend ERM to cross-episode RL, where interventional evidence accumulates into a reward signal for open-domain problems lacking ground-truth verifiers. A separation theorem proves that outcome-only RL cannot distinguish correct from flawed causal models in confounded environments, and preliminary experiments across four LLMs show that epistemic reward carries signal where outcome reward does not. This establishes signal existence, not yet policy improvement.
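The gap the abstract exploits, that P(Y|X) and P(Y|do(X)) diverge under confounding, can be reproduced in a few lines. The sketch below is illustrative only and is not from the paper: it uses a hypothetical toy structural causal model in which a confounder U drives both X and Y, so the observational conditional overstates the effect of intervening on X. All coefficients are invented for the example.

```python
import random

random.seed(0)

def sample(intervene_x=None):
    # Hypothetical toy SCM (not from the paper): U -> X, U -> Y, X -> Y.
    u = random.random() < 0.5                     # confounder
    if intervene_x is None:
        x = random.random() < (0.8 if u else 0.2)  # X depends on U (observational)
    else:
        x = intervene_x                            # do(X=x): cut the U -> X edge
    y = random.random() < (0.3 + 0.4 * x + 0.3 * u)
    return u, x, y

N = 100_000
obs = [sample() for _ in range(N)]
# Associational quantity: P(Y=1 | X=1), estimated from observational data.
p_y_given_x1 = sum(y for _, x, y in obs if x) / sum(x for _, x, _ in obs)
# Interventional quantity: P(Y=1 | do(X=1)), estimated by forcing X=1.
do1 = [sample(intervene_x=True) for _ in range(N)]
p_y_do_x1 = sum(y for _, _, y in do1) / N

print(f"P(Y=1|X=1)     = {p_y_given_x1:.2f}")   # inflated by confounding
print(f"P(Y=1|do(X=1)) = {p_y_do_x1:.2f}")      # true interventional effect
```

Because U raises both X and Y, the observational estimate (about 0.94 here) exceeds the interventional one (about 0.85): a model scored only on observational outcomes is rewarded for the shortcut, which is the situation the separation theorem formalizes.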
Cognitive Alignment At No Cost: Inducing Human Attention Biases For Interpretable Vision Transformers
arXiv:2604.20027v1 Announce Type: cross
Abstract: For state-of-the-art image understanding, Vision Transformers (ViTs) have become the standard architecture, but their processing diverges substantially from human attentional

