R^3: Replay, Reflection, and Ranking Rewards for LLM Reinforcement Learning

arXiv:2601.19620v2 Announce Type: replace-cross
Abstract: Large reasoning models (LRMs) aim to solve diverse and complex problems through structured reasoning. Recent advances in group-based policy optimization methods have shown promise in enabling stable advantage estimation without reliance on process-level annotations. However, these methods depend on advantage gaps induced by high-quality samples within the same batch, which makes training fragile and inefficient when intra-group advantages collapse on challenging tasks. To address these problems, we propose a reinforcement learning mechanism named R^3 that operates along three directions: (1) a cross-context Replay strategy that maintains the intra-group advantage by recalling valuable examples from historical trajectories of the same query; (2) an in-context self-Reflection mechanism that enables the model to refine its outputs by leveraging past failures; and (3) a structural entropy Ranking reward, which assigns relative rewards to truncated or failed samples by ranking responses based on token-level entropy patterns, capturing both local exploration and global stability. We implement our method on DeepSeek-R1-Distill-Qwen-1.5B and train it on DeepScaleR-40k in the math domain. Experiments demonstrate that our method achieves state-of-the-art performance on several math benchmarks, delivering significant improvements while using fewer reasoning tokens than the base model. Code and models will be released.
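Of the three components, the structural entropy Ranking reward is the most self-contained, so a rough sketch may help make it concrete. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the scoring function `structural_entropy_score`, the sliding-window aggregation, and the reward range `[low, high]` are all hypothetical choices; the paper's actual criterion over token-level entropy patterns may differ.

```python
import numpy as np

def structural_entropy_score(token_entropies, window=16):
    """Score one response from its per-token entropies.

    Combines a local-exploration term (mean entropy over sliding windows)
    with a global-stability term (inverse of the entropy spread).
    Hypothetical formulation, not the paper's exact definition.
    """
    ent = np.asarray(token_entropies, dtype=float)
    if ent.size == 0:
        return 0.0
    window = min(window, ent.size)
    # Local exploration: average entropy within each sliding window.
    local = np.mean([ent[i:i + window].mean()
                     for i in range(ent.size - window + 1)])
    # Global stability: penalise responses whose entropy fluctuates wildly.
    stability = 1.0 / (1.0 + ent.std())
    return float(local * stability)

def ranking_rewards(group_token_entropies, low=-1.0, high=-0.2):
    """Assign relative rewards in [low, high] to a group of truncated or
    failed responses by ranking their structural-entropy scores."""
    scores = [structural_entropy_score(e) for e in group_token_entropies]
    rewards = np.empty(len(scores))
    if len(scores) == 1:
        rewards[0] = high
        return rewards
    for rank, idx in enumerate(np.argsort(scores)):
        rewards[idx] = low + (high - low) * rank / (len(scores) - 1)
    return rewards

# Example: three failed responses; the one with the highest structural
# entropy score receives the least negative relative reward.
if __name__ == "__main__":
    group = [np.random.rand(64), np.random.rand(80), np.random.rand(48)]
    print(ranking_rewards(group))
```

In a group-based setup such as GRPO, ranked rewards of this kind would plausibly replace a flat failure reward for unsuccessful rollouts, restoring an advantage gap within groups where every sample failed.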
