
Fast LLM Post-training via Decoupled and Best-of-N Speculation

arXiv:2511.16193v2 Announce Type: replace-cross
Abstract: Rollout dominates the training time in large language model (LLM) post-training, where the trained model is used to generate tokens given a batch of prompts. SpecActor achieves fast rollout with speculative decoding: a fast path (e.g., a smaller model) accelerates the unparallelizable generation, while correctness is guaranteed by fast parallel verification of the outputs with the original model. SpecActor addresses two foundational challenges in speculative rollout with (1) a dynamic decoupled speculation execution method that maximizes GPU computational efficiency to realize speedup for large-batch execution — a configuration common in training but unfriendly to speculative execution — and (2) a dynamic Best-of-N speculation method that selects and combines different drafting methods according to the rollout progress. This substantially improves speculation accuracy even when the best drafting method is unknown a priori, without requiring extra computational resources. SpecActor is 1.7× faster than veRL in end-to-end training, and 1.3–1.5× faster than baselines with speculative decoding.
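The core mechanism the abstract describes — a cheap draft model proposing tokens that the original model then verifies in parallel, accepting the longest correct prefix — can be sketched as a toy example. Everything here is illustrative: `draft_next` and `target_next` are hypothetical stand-ins for real models, and the toy token rules exist only to make the accept/reject loop observable.

```python
def draft_next(tokens):
    # Hypothetical cheap draft model (greedy): predict "previous + 1 mod 10".
    return (tokens[-1] + 1) % 10

def target_next(tokens):
    # Hypothetical target model (greedy): agrees with the draft
    # except at every 4th position, where it diverges.
    nxt = (tokens[-1] + 1) % 10
    return nxt if len(tokens) % 4 else (nxt + 5) % 10

def speculative_step(tokens, k=4):
    """Propose k draft tokens, then verify them against the target model.

    Returns the sequence extended by all accepted draft tokens plus one
    corrected token from the target — the standard accept/correct rule
    for greedy speculative decoding."""
    draft = list(tokens)
    for _ in range(k):
        draft.append(draft_next(draft))
    # Verification: in a real system this is a single batched forward
    # pass of the target model over all k proposed positions.
    accepted = list(tokens)
    for i in range(len(tokens), len(draft)):
        expected = target_next(accepted)
        if draft[i] == expected:
            accepted.append(draft[i])      # draft token verified
        else:
            accepted.append(expected)      # correct and end this round
            break
    return accepted

seq = [0]
for _ in range(3):
    seq = speculative_step(seq, k=4)
print(seq)
```

Each round here advances the sequence by up to k+1 tokens for one (parallel) target-model pass, which is the source of the speedup; SpecActor's contribution lies in making this efficient at large batch sizes and in choosing the drafting method dynamically, neither of which this toy captures.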


Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK, registration number 16808844.