arXiv:2603.10123v1 Announce Type: cross
Abstract: The “Lost in the Middle” phenomenon — a U-shaped performance curve where LLMs retrieve well from the beginning and end of a context but fail in the middle — is widely attributed to learned Softmax artifacts or the distance-decay of positional encodings like RoPE. This paper makes a single, precise claim: emphthe U-shape is already present at initialization, before any training or positional encoding takes effect. It is an inherent geometric property of the causal decoder with residual connections.
We model multi-layer causal attention as iterated powers of the Cesàro matrix and derive the exact closed-form influence density in the continuous limit. Causal masking forces a logarithmic divergence of gradient influence at the start of the prompt (the Primacy Tail), while residual connections create an isolated $\mathcal{O}(1)$ anchor at the final token (the Recency Delta). Between these extremes lies a factorial dead zone of order $\mathcal{O}(1/(H-1)!)$, where $H$ is the network depth, making middle-context retrieval and training structurally hostile. We validate empirically that untrained Qwen2 and GPT-2 architectures exhibit this U-shape at Step 0, and that it is identical with or without RoPE. Comparing initialized and pretrained networks, we show that standard training does not overcome the topological valley, confirming that the U-shape persists as an architectural baseline under standard pretraining objectives.
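The iterated-powers model described above can be sketched numerically: take the Cesàro matrix $C_{ij} = 1/(i+1)$ for $j \le i$ (each row uniformly averages all causally visible tokens), mix it with the identity to mimic a residual connection (the 1/2 weights below are an illustrative choice, not the paper's), and read off the last row of the $H$-th power as the influence of each prompt position on the final token. This is a minimal sketch of the mechanism, not the paper's exact derivation.

```python
import numpy as np

n, H = 256, 4  # context length and depth (illustrative values)

# Cesàro averaging matrix: row i uniformly averages tokens 0..i (causal mask).
C = np.tril(np.ones((n, n))) / np.arange(1, n + 1)[:, None]

# One "layer" = residual mix of identity and causal averaging
# (equal 1/2 weights are an assumption for illustration).
A = 0.5 * np.eye(n) + 0.5 * C

# Influence of each prompt position on the final token after H layers.
row = np.linalg.matrix_power(A, H)[-1]

first, middle, last = row[0], row[n // 2], row[-1]
print(first, middle, last)
```

With these parameters the first and last entries dominate the middle one: the identity component of the residual path contributes an isolated mass at the final token (the Recency Delta), while the averaging component piles up logarithmic influence on the earliest tokens (the Primacy Tail), leaving a depressed middle.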
We do not claim that this bias is insurmountable, nor that interventions such as RoPE modifications are useless. We establish what the baseline is and where it comes from, so that future efforts to overcome it can be precisely targeted.
Trust and anxiety as primary drivers of digital health acceptance in multiple sclerosis: toward an extended disease-specific technology acceptance model
Background: Digital health applications and AI-supported wearables may benefit people with Multiple Sclerosis (MS), yet fluctuating cognitive and physical symptoms could shape adoption in ways not



