arXiv:2603.20578v1 Announce Type: new
Abstract: The prevailing approach to improving large language model (LLM) reasoning has centered on expanding context windows, implicitly assuming that more tokens yield better performance. However, empirical evidence – including the “lost in the middle” effect and long-distance relational degradation – demonstrates that contextual space exhibits structural gradients, salience asymmetries, and entropy accumulation under transformer architectures.
We introduce Context Cartography, a formal framework for the deliberate governance of contextual space. We define a tripartite zonal model partitioning the informational universe into black fog (unobserved), gray fog (stored memory), and the visible field (active reasoning surface), and formalize seven cartographic operators – reconnaissance, selection, simplification, aggregation, projection, displacement, and layering – as transformations governing information transitions between and within zones. The operators are derived from a systematic coverage analysis of all non-trivial zone transformations and are organized by transformation type (what the operator does) and zone scope (where it applies).
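The tripartite zonal model and the seven operators can be sketched as a minimal Python enumeration. This is an illustrative assumption, not the paper's implementation: the abstract names the operators and the three zones, but the specific source/target zone assigned to each operator below is a plausible guess from the operator names, not something the abstract fully specifies.

```python
from enum import Enum, auto

class Zone(Enum):
    BLACK_FOG = auto()      # unobserved information
    GRAY_FOG = auto()       # stored memory
    VISIBLE_FIELD = auto()  # active reasoning surface

# Hypothetical (source, target) zone transition for each of the seven
# cartographic operators. Operator names come from the abstract; the
# exact transitions are illustrative assumptions.
OPERATORS: dict[str, tuple[Zone, Zone]] = {
    "reconnaissance": (Zone.BLACK_FOG, Zone.GRAY_FOG),          # observe the unobserved
    "selection":      (Zone.GRAY_FOG, Zone.VISIBLE_FIELD),      # recall into active context
    "simplification": (Zone.VISIBLE_FIELD, Zone.VISIBLE_FIELD), # within-zone compression
    "aggregation":    (Zone.VISIBLE_FIELD, Zone.VISIBLE_FIELD), # within-zone merging
    "projection":     (Zone.GRAY_FOG, Zone.VISIBLE_FIELD),      # surface a reduced view
    "displacement":   (Zone.VISIBLE_FIELD, Zone.GRAY_FOG),      # evict to stored memory
    "layering":       (Zone.VISIBLE_FIELD, Zone.VISIBLE_FIELD), # restructure the surface
}

def transition(operator: str) -> tuple[Zone, Zone]:
    """Return the (source, target) zones an operator moves information between."""
    return OPERATORS[operator]
```

The point of the sketch is the framework's shape: each operator is typed by where it reads and where it writes, so a coverage analysis over zone pairs is a finite enumeration.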
We ground the framework in the salience geometry of transformer attention, characterizing cartographic operators as necessary compensations for linear prefix memory, append-only state, and entropy accumulation under expanding context. An analysis of four contemporary systems (Claude Code, Letta, MemOS, and OpenViking) provides interpretive evidence that these operators are converging independently across the industry.
We derive testable predictions from the framework – including operator-specific ablation hypotheses – and propose a diagnostic benchmark for empirical validation.
Improving Fine-Grained Rice Leaf Disease Detection via Angular-Compactness Dual Loss Learning
arXiv:2603.25006v1 Announce Type: cross
Abstract: Early detection of rice leaf diseases is critical, as rice is a staple crop supporting a substantial share of the