InfoFlow: Reinforcing Search Agent Via Reward Density Optimization

arXiv:2510.26575v1 Announce Type: cross
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) is a promising approach for enhancing agentic deep search. However, its application is often hindered by low Reward Density in deep search scenarios, where agents incur substantial exploration costs for infrequent, often null final rewards. In this paper, we formalize this challenge as the Reward Density Optimization problem, which aims to improve the reward obtained per unit of exploration cost. We introduce InfoFlow, a systematic framework that tackles this problem from three aspects. 1) Subproblem decomposition: breaking down long-range tasks into subproblems to assign process rewards, thereby providing denser learning signals. 2) Failure-guided hints: injecting corrective guidance into stalled trajectories to increase the probability of successful outcomes. 3) Dual-agent refinement: employing a dual-agent architecture to offload the cognitive burden of deep exploration. A refiner agent synthesizes the search history, which compresses the researcher agent's perceived trajectory, thereby reducing exploration cost and increasing the overall reward density. We evaluate InfoFlow on multiple agentic search benchmarks, where it significantly outperforms strong baselines, enabling lightweight LLMs to achieve performance comparable to that of advanced proprietary LLMs.
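To make the reward-density framing concrete, here is a minimal sketch, not the authors' implementation: all names (Trajectory, process_rewards, reward_density) and the reward weighting are hypothetical. It assumes reward density is measured as total reward earned per unit of exploration cost (e.g., per search or tool call), with subproblem decomposition supplying partial credit for solved subgoals on top of the sparse verifiable final reward.

```python
"""Hypothetical sketch of the Reward Density idea; names and weights are assumptions."""
from dataclasses import dataclass


@dataclass
class Trajectory:
    subgoal_hits: list[bool]   # which decomposed subproblems were solved
    final_success: bool        # verifiable final-outcome reward (0/1)
    exploration_cost: int      # e.g. number of search/tool calls issued


def process_rewards(traj: Trajectory, w: float = 0.2) -> float:
    """Dense signal: partial credit per solved subproblem plus the sparse final reward."""
    dense = w * sum(traj.subgoal_hits)
    return dense + (1.0 if traj.final_success else 0.0)


def reward_density(trajs: list[Trajectory]) -> float:
    """Reward obtained per unit of exploration cost, over a batch of rollouts."""
    total_reward = sum(process_rewards(t) for t in trajs)
    total_cost = sum(t.exploration_cost for t in trajs)
    return total_reward / max(total_cost, 1)


# Example: with decomposition, a failed rollout that still solved two of three
# subproblems yields a nonzero learning signal at the same exploration cost,
# raising the density relative to an outcome-only reward.
sparse_only = Trajectory(subgoal_hits=[False, False, False], final_success=False, exploration_cost=12)
decomposed = Trajectory(subgoal_hits=[True, True, False], final_success=False, exploration_cost=12)
print(reward_density([sparse_only]))  # 0.0
print(reward_density([decomposed]))   # 0.4 / 12 ≈ 0.033
```

Under this reading, failure-guided hints and dual-agent refinement attack the same ratio from the other two directions: hints raise the numerator by converting stalled rollouts into successes, while the refiner agent shrinks the denominator by compressing the researcher's perceived trajectory.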
