arXiv:2603.22352v1 Announce Type: cross
Abstract: Recent progress in reinforcement learning with verifiable rewards (RLVR) offers a practical path to self-improvement of language models, but existing methods face a key trade-off: endogenous self-play can drift over iterations, while corpus-grounded approaches rely on curated data environments. We present WIST, a Web-grounded Iterative Self-play Tree framework for domain-targeted reasoning improvement that learns directly from the open web without requiring any pre-arranged domain corpus. WIST incrementally expands a domain tree for exploration, then retrieves and cleans a path-consistent web corpus to construct a controllable training environment. It then performs Challenger–Solver self-play with verifiable rewards, and feeds learnability signals back to update node posteriors and guide subsequent exploration through an adaptive curriculum. Across four backbones, WIST consistently improves over the base models and typically outperforms both purely endogenous self-evolution and corpus-grounded self-play baselines, with Overall gains reaching +9.8 (Qwen3-4B-Base) and +9.7 (OctoThinker-8B). WIST is also domain-steerable, improving Qwen3-8B-Base by +14.79 in medicine and Qwen3-4B-Base by +5.28 on PhyBench. Ablations further confirm the importance of WIST's key components for stable open-web learning. Our code is available at https://github.com/lfy-123/WIST.
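The abstract's feedback loop (learnability signals updating node posteriors, which then steer tree exploration) can be sketched as a Thompson-sampling bandit over a domain tree. This is a minimal illustration, not the paper's implementation: the `Node`, Beta-posterior parameterization, and the self-play stub are all assumptions introduced here for clarity.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """A domain-tree node holding a Beta posterior over 'learnability'
    (hypothetical parameterization; the paper only says node posteriors
    are updated from learnability signals)."""
    topic: str
    alpha: float = 1.0  # pseudo-count of learnable self-play rounds
    beta: float = 1.0   # pseudo-count of uninformative rounds
    children: list = field(default_factory=list)

    def sample_score(self, rng):
        # Thompson sampling: draw a plausible learnability from Beta(alpha, beta)
        return rng.betavariate(self.alpha, self.beta)

def select_node(root, rng):
    """Walk down the tree, at each level favoring the child whose
    sampled learnability is highest; return the chosen leaf."""
    node = root
    while node.children:
        node = max(node.children, key=lambda c: c.sample_score(rng))
    return node

def update_posterior(node, learnable):
    """Feed the learnability signal back into the node's posterior."""
    if learnable:
        node.alpha += 1.0
    else:
        node.beta += 1.0

# Toy curriculum loop over a two-leaf medicine subtree.
rng = random.Random(0)
root = Node("medicine", children=[Node("cardiology"), Node("oncology")])
for _ in range(20):
    leaf = select_node(root, rng)
    # Stand-in for Challenger-Solver self-play with a verifiable reward:
    # treat a round as 'learnable' when the Solver is neither always
    # right nor always wrong (an intermediate solve rate).
    solve_rate = rng.random()
    update_posterior(leaf, 0.2 < solve_rate < 0.8)
```

Over iterations, leaves that keep producing informative (neither trivial nor impossible) self-play rounds accumulate posterior mass and are visited more often, which is the adaptive-curriculum behavior the abstract describes.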



