arXiv:2604.18239v2 Announce Type: replace-cross
Abstract: Preference optimization is widely used to align large language models (LLMs) with human preferences. However, many margin-based objectives suppress the chosen response along with the rejected one, a phenomenon known as likelihood displacement, and no general mechanism currently prevents this across objectives.
We bridge this gap by presenting a unified incentive-score decomposition of preference optimization, revealing that diverse objectives share identical local update directions and differ only in their scalar weighting coefficients.
Building on this decomposition, we analyze the dynamics of the chosen and rejected likelihoods and identify the disentanglement band (DB): a simple, testable condition characterizing when training avoids likelihood displacement by realizing the preferred pathway of suppressing the rejected response while maintaining the chosen one, possibly after an initial transient.
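A first-order illustration of why a band-type condition arises from the decomposition above (single example under gradient flow; the paper's exact DB may be formulated differently): writing $g^\pm = \nabla_\theta \log \pi_\theta(y^\pm \mid x)$, the update $\dot\theta = w^+ g^+ - w^- g^-$ gives
\[
\frac{d}{dt}\log \pi_\theta(y^+ \mid x) = w^+ \|g^+\|^2 - w^- \langle g^+, g^- \rangle,
\qquad
\frac{d}{dt}\log \pi_\theta(y^- \mid x) = w^+ \langle g^+, g^- \rangle - w^- \|g^-\|^2,
\]
so the chosen likelihood is maintained while the rejected one falls exactly when the weight ratio lies in a band,
\[
\frac{\langle g^+, g^- \rangle}{\|g^+\|^2} \;\le\; \frac{w^+}{w^-} \;\le\; \frac{\|g^-\|^2}{\langle g^+, g^- \rangle}
\quad \text{(for } \langle g^+, g^- \rangle > 0\text{)},
\]
which is nonempty by Cauchy-Schwarz.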
Leveraging the DB, we propose a plug-and-play reward calibration (RC) that adaptively rebalances the chosen and rejected updates to satisfy the DB and mitigate likelihood displacement, without redesigning the base objective.
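A minimal sketch of what such a plug-and-play rebalancing could look like on top of DPO; the function name, the drift-based calibration rule, and the alpha_min/alpha_max bounds are illustrative assumptions, not the paper's RC:

import torch

def reward_calibrated_dpo_loss(logp_c, logp_r, ref_logp_c, ref_logp_r,
                               beta=0.1, alpha_min=1.0, alpha_max=4.0):
    # Implicit-reward margin relative to the frozen reference policy.
    margin = beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r))
    # Standard DPO puts one shared scalar weight on both update directions.
    w = torch.sigmoid(-margin).detach()
    # Hypothetical calibration rule: if the chosen log-likelihood has
    # drifted below its reference value, upweight the chosen-side term.
    drift = (ref_logp_c - logp_c).clamp(min=0.0).detach()
    alpha = (1.0 + drift).clamp(alpha_min, alpha_max)
    # Same local update directions as DPO, rebalanced scalar coefficients.
    return -(beta * w * (alpha * logp_c - logp_r)).mean()

Because the calibration only rescales the scalar coefficients from the decomposition, the same wrapper applies to other margin-based objectives by swapping in their weights for w.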
Empirical results show that RC steers training toward more disentangled dynamics and often improves downstream performance across a range of objectives. Our code is available at https://github.com/IceyWuu/DisentangledPreferenceOptimization.