
Position as Probability: Self-Supervised Transformers that Think Past Their Training for Length Extrapolation

arXiv:2506.00920v2 Announce Type: replace-cross
Abstract: Deep sequence models typically degrade in accuracy when test sequences significantly exceed their training lengths, yet many critical tasks, such as algorithmic reasoning, multi-step arithmetic, and compositional generalization, require robust length extrapolation. We introduce PRISM (Probabilistic Relative-position Implicit Superposition Model), a novel positional encoding mechanism that enables Transformers to extrapolate accurately up to 10x beyond their training length. PRISM learns continuous relative positions through a differentiable histogram-filter update, preserving position uncertainty via a probabilistic superposition rather than conventional deterministic embeddings. Empirically, PRISM achieves state-of-the-art length extrapolation, successfully generalizing to previously intractable sequence lengths across algorithmic benchmarks, including arithmetic (addition, multiplication), SCAN compositionality tasks, and complex copy variants derived from DeepMind's recent datasets. Our analysis demonstrates that PRISM's stochastic positional encoding maintains sharp and interpretable internal states, providing a theoretical basis for reliable length generalization. These results advance the goal of neural sequence models that remain algorithmically robust at lengths far exceeding their training horizon.
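The abstract describes positions being tracked as a probability histogram that is updated by a differentiable filter step rather than stored as a single deterministic index. As a rough intuition only (the paper's actual update rule is not given here), such a step can be pictured as a Bayes-filter "predict" pass: the current belief over relative positions is pushed through a learned transition kernel and renormalized. The function name, kernel values, and histogram size below are all illustrative assumptions, not PRISM's real parameterization.

```python
def histogram_filter_step(pos_probs, shift_kernel):
    """One illustrative update of a positional probability histogram.

    pos_probs:    list of P probabilities over relative positions (sums to 1).
    shift_kernel: non-negative weights (summing to 1) modeling how much
                  position mass stays put or advances after one token.
    Returns the updated histogram, renormalized to sum to 1.
    """
    P = len(pos_probs)
    updated = [0.0] * P
    # Discrete convolution of the belief with the transition kernel,
    # clipped to the fixed histogram length.
    for i, p in enumerate(pos_probs):
        for k, w in enumerate(shift_kernel):
            if i + k < P:
                updated[i + k] += p * w
    total = sum(updated)
    return [u / total for u in updated]

# Start with all probability mass at position 0, then read 5 tokens with a
# kernel that mostly advances the position by one slot per token.
probs = [1.0] + [0.0] * 15
kernel = [0.1, 0.9]  # hypothetical: 10% stay, 90% advance
for _ in range(5):
    probs = histogram_filter_step(probs, kernel)
```

After five steps the mass concentrates near position 5 while retaining some spread, which is the "superposition" idea: the model keeps uncertainty about position instead of committing to one index, and every operation above is differentiable in the kernel weights.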


Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK; registration number 16808844.