arXiv:2604.15010v1 Announce Type: cross
Abstract: When do transformers commit to a decision, and what prevents them from correcting it? We introduce prolepsis: a transformer commits early, task-specific attention heads sustain the commitment, and no later layer corrects it. Replicating Lindsey et al.'s (2025) planning-site finding on open models (Gemma 2 2B, Llama 3.2 1B), we ask five questions. (Q1) Planning is invisible to six residual-stream methods; cross-layer transcoders (CLTs) are necessary. (Q2) The planning-site spike replicates with identical geometry. (Q3) Specific attention heads route the decision to the output, filling a gap previously flagged as invisible to attribution graphs. (Q4) Search requires $\leq 16$ layers; commitment requires more. (Q5) Factual recall shows the same motif at a different network depth, with zero overlap between the recurring planning heads and the factual top-10 heads. Prolepsis is architectural: the template is shared, the routing substrates differ. All experiments run on a single consumer GPU (16 GB VRAM).
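Because the replication targets open weights, a rough version of the per-layer probe is easy to sketch. The snippet below is a minimal illustration, not the paper's pipeline: it assumes the TransformerLens library, borrows a rhyming-couplet prompt in the spirit of the original planning finding, and uses the norm of per-layer residual-stream deltas at the final token as a stand-in "spike" metric; the inspected layer index and the head heuristic are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of probing for a per-layer
# "planning-site" signature on an open model, via TransformerLens.
# Assumptions: gemma-2-2b access via HuggingFace (gated; token required);
# the prompt, the delta-norm spike metric, and layer 10 are illustrative.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")

prompt = "A rhyming couplet:\nHe saw a carrot and had to grab it,"
_, cache = model.run_with_cache(prompt)

# Per-layer residual-stream deltas at the final token position:
# a sharp outlier layer is one crude signature of an early commitment.
for layer in range(model.cfg.n_layers):
    resid_in = cache["resid_pre", layer][0, -1]    # [d_model]
    resid_out = cache["resid_post", layer][0, -1]
    delta = (resid_out - resid_in).norm().item()
    print(f"layer {layer:2d}  |delta| = {delta:.2f}")

# Where each head attends from the final token: heads with sharply
# concentrated, task-specific patterns are candidate decision routers.
layer = 10                                          # placeholder layer
pattern = cache["pattern", layer][0]                # [n_heads, q, k]
print(pattern[:, -1].argmax(dim=-1))                # peak source per head
```

A delta-norm scan like this is deliberately cheap; the abstract's Q1 claim is precisely that such residual-stream views miss the planning signal, which is why the paper turns to CLTs for the spike itself and to head-level patterns for the routing.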