arXiv:2604.24957v1 Announce Type: cross
Abstract: Scaling test-time compute has emerged as a powerful mechanism for enhancing Large Language Model (LLM) performance. However, the standard post-training paradigms, Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), optimize the likelihood of individual samples under a base policy, creating a misalignment with test-time procedures that rely on aggregated or filtered outputs. In this work, we propose Compute Aligned Training, which aligns training objectives with test-time strategies. By conceptualizing inference strategies as operators on the base policy, we derive new loss functions that maximize performance when those strategies are applied. We instantiate such loss functions for SFT and RL across common test-time strategies. Finally, we provide empirical evidence that this training method substantially improves test-time scaling over standard training.
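The abstract does not spell out the derived losses, but the core idea of treating an inference strategy as an operator on the base policy can be illustrated for one common strategy, best-of-n sampling. Assuming each independent sample is correct with probability p, the quantity being optimized becomes the probability that at least one of n samples succeeds, and its gradient reweights training examples accordingly. The function names below are illustrative, not from the paper:

```python
def best_of_n_success(p: float, n: int) -> float:
    # Probability that at least one of n i.i.d. samples is correct,
    # given a single sample is correct with probability p.
    # This is the test-time objective best-of-n sampling induces.
    return 1.0 - (1.0 - p) ** n

def per_sample_weight(p: float, n: int) -> float:
    # Derivative of the best-of-n objective with respect to p.
    # A loss aligned with best-of-n would upweight examples where a
    # single sample rarely succeeds but n samples plausibly do,
    # rather than treating every sample's likelihood equally.
    return n * (1.0 - p) ** (n - 1)
```

For example, with p = 0.5 and n = 2, the best-of-n success rate rises to 0.75, and hard examples (small p) receive up to n times the gradient weight of the single-sample objective.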
On a Keller-Segel type equation to model Brain Microvascular Endothelial Cells growth’s patterns
arXiv:2604.25180v1 Announce Type: cross
Abstract: This article presents a partial differential equation (PDE) of Keller-Segel (KS) type that reproduces patterns commonly observed during the growth