arXiv:2512.10787v2 Announce Type: replace
Abstract: Retrieval-Augmented Generation (RAG) systems often fail on multi-hop queries when the initial retrieval misses a bridge fact. Prior corrective approaches, such as Self-RAG, CRAG, and Adaptive-$k$, typically address this by adding more context or pruning existing lists. However, simply expanding the context window often leads to context dilution, where distractors crowd out relevant information. We propose SEAL-RAG, a training-free controller that adopts a “replace, don’t expand” strategy to fight context dilution under a fixed retrieval depth $k$. SEAL executes a (Search $\rightarrow$ Extract $\rightarrow$ Assess $\rightarrow$ Loop) cycle: it performs on-the-fly, entity-anchored extraction to build a live gap specification (missing entities/relations), triggers targeted micro-queries, and uses entity-first ranking to actively swap out distractors for gap-closing evidence. We evaluate SEAL-RAG against faithful re-implementations of Basic RAG, CRAG, Self-RAG, and Adaptive-$k$ in a shared environment on HotpotQA and 2WikiMultiHopQA. On HotpotQA ($k=3$), SEAL improves answer correctness by +3–13 pp and evidence precision by +12–18 pp over Self-RAG. On 2WikiMultiHopQA ($k=5$), it outperforms Adaptive-$k$ by +8.0 pp in accuracy and maintains 96% evidence precision compared to 22% for CRAG. These gains are statistically significant ($p<0.001$). By enforcing fixed-$k$ replacement, SEAL yields a predictable cost profile while ensuring the top-$k$ slots are optimized for precision rather than mere breadth. We release our code and data at https://github.com/mosherino/SEAL-RAG.
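The abstract's Search → Extract → Assess → Loop cycle with fixed-$k$ replacement can be illustrated with a minimal controller sketch. The retriever, entity extractor, and every function and variable name below are illustrative stand-ins chosen for this example, not the released SEAL-RAG code; the real system would delegate extraction and gap assessment to an LLM and use a proper retriever.

```python
# Minimal sketch of a fixed-k "replace, don't expand" controller in the spirit of
# the Search -> Extract -> Assess -> Loop cycle. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    score: float  # retrieval score for the query that fetched this passage


def retrieve(query: str, corpus: list[str], k: int) -> list[Passage]:
    """Toy keyword-overlap retriever standing in for any dense/sparse retriever."""
    q_terms = set(query.lower().split())
    scored = [Passage(doc, len(q_terms & set(doc.lower().split()))) for doc in corpus]
    return sorted(scored, key=lambda p: p.score, reverse=True)[:k]


def extract_entities(text: str) -> set[str]:
    """Stand-in for entity-anchored extraction (an LLM or NER call in practice)."""
    return {tok.strip(".,?").lower() for tok in text.split() if tok[:1].isupper()}


def assess_gap(question: str, context: list[Passage]) -> set[str]:
    """Gap specification: question entities not covered by the current top-k context."""
    covered = set().union(*(extract_entities(p.text) for p in context)) if context else set()
    return extract_entities(question) - covered


def entity_first_rank(question: str, pool: list[Passage], k: int) -> list[Passage]:
    """Entity-first ranking: prefer passages covering more question entities."""
    q_ents = extract_entities(question)
    key = lambda p: (len(q_ents & extract_entities(p.text)), p.score)
    return sorted(pool, key=key, reverse=True)[:k]


def seal_loop(question: str, corpus: list[str], k: int = 3, max_iters: int = 3) -> list[Passage]:
    context = retrieve(question, corpus, k)                # Search
    for _ in range(max_iters):                             # Loop
        gap = assess_gap(question, context)                # Extract + Assess
        if not gap:
            break
        micro_query = " ".join(sorted(gap))                # targeted micro-query for missing entities
        candidates = retrieve(micro_query, corpus, k)
        # Replace, don't expand: merge, re-rank entity-first, keep exactly k slots.
        pool = list({p.text: p for p in context + candidates}.values())
        context = entity_first_rank(question, pool, k)
    return context
```

Because the context always stays at exactly $k$ passages, each iteration costs one extra retrieval call at most, which is the predictable cost profile the abstract refers to; a gap-closing passage can only enter the context by displacing a lower-ranked distractor.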
Learning Evolving Latent Strategies for Multi-Agent Language Systems without Model Fine-Tuning
arXiv:2512.20629v1 Announce Type: cross Abstract: This study proposes a multi-agent language framework that enables continual strategy evolution without fine-tuning the language model’s parameters. The core




