arXiv:2602.22743v2 Announce Type: replace
Abstract: Recommendation model performance is intrinsically tied to the quality, volume, and relevance of the training data. To address common challenges such as data sparsity and cold start, recent research has leveraged data from multiple auxiliary domains to enrich information within the target domain. However, inherent domain gaps can degrade the quality of mixed-domain data, leading to negative transfer and diminished model performance. The prevailing model-centric paradigm, which relies on complex, customized architectures, struggles to capture the subtle, non-structural sequence dependencies across domains, leading to poor generalization and high computational demands. To address these shortcomings, we propose Taesar, a data-centric framework for target-aligned sequential regeneration. It employs a contrastive decoding mechanism to adaptively encode cross-domain context into target-domain sequences, enabling standard models to learn intricate dependencies without complex fusion architectures. Experiments show that Taesar outperforms model-centric solutions and generalizes across various sequential models. By generating enriched datasets, Taesar effectively combines the strengths of the data- and model-centric paradigms. The code accompanying this paper is available at https://github.com/USTC-StarTeam/Taesar.
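To make the abstract's central mechanism concrete, the following is a minimal sketch of a single step of generic contrastive decoding, in the spirit of what the abstract describes: among next items that a context-aware model finds plausible, prefer the one whose probability gains most from the cross-domain context relative to a target-only model. The function name, the `alpha` plausibility threshold, and the dictionary-based distributions are illustrative assumptions, not Taesar's actual implementation.

```python
import math

def contrastive_decode_step(p_with_context, p_target_only, alpha=0.1):
    """One hypothetical contrastive decoding step.

    p_with_context: dict mapping item -> probability under a model
        conditioned on cross-domain context.
    p_target_only: dict mapping item -> probability under a model
        conditioned only on the target-domain sequence.
    alpha: plausibility cutoff, as a fraction of the best
        context-aware probability.
    """
    p_max = max(p_with_context.values())
    # Plausibility constraint: only consider items the context-aware
    # model assigns at least alpha * p_max probability.
    candidates = [i for i, p in p_with_context.items() if p >= alpha * p_max]
    # Pick the candidate whose log-probability rises most when the
    # cross-domain context is taken into account.
    return max(candidates,
               key=lambda i: math.log(p_with_context[i])
                             - math.log(p_target_only[i]))
```

Under this toy scoring rule, an item that is merely popular in the target domain scores lower than one that the auxiliary-domain context specifically promotes, which is the intuition behind using contrast to filter out negative transfer.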
