arXiv:2605.10973v1 Announce Type: cross
Abstract: Supervised fine-tuning (SFT) improves in-domain performance but can degrade out-of-domain (OOD) generalization. Prior work suggests that this degradation is related to changes in dominant singular subspaces of pretrained weight matrices. However, directly identifying loss-sensitive directions with Hessian or Fisher information is computationally expensive at LLM scale. In this work, we propose preserving projected rotations in pretrained singular subspaces as an efficient proxy for Fisher-sensitive directions, which we call Rotation-Preserving Supervised Fine-Tuning (RPSFT). RPSFT penalizes changes in the projected top-$k$ singular-vector block of each pretrained weight matrix, limiting unnecessary rotation while preserving task adaptation. Across model families and sizes trained on math reasoning data, RPSFT improves the in-domain/OOD trade-off over standard SFT and strong SFT baselines, better preserves pretrained representations, and provides stronger initializations for downstream RL fine-tuning. Code is available at https://github.com/jinhangzhan/RPSFT.
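The abstract does not give the exact form of the penalty; a minimal NumPy sketch of one plausible reading, assuming the regularizer is a Frobenius-norm penalty on the change of the weight matrix's block projected onto the pretrained top-$k$ singular subspaces (the function name `rpsft_penalty` and the specific projection are illustrative assumptions, not the paper's verified implementation):

```python
import numpy as np

def rpsft_penalty(W, W0, k):
    """Hypothetical sketch: penalize change in the top-k projected block.

    W  : current fine-tuned weight matrix
    W0 : frozen pretrained weight matrix
    k  : number of dominant singular directions to protect
    """
    # SVD of the pretrained weights; in practice this would be
    # computed once before training and cached.
    U, _, Vt = np.linalg.svd(W0, full_matrices=False)
    Uk, Vk = U[:, :k], Vt[:k, :].T

    # Project both matrices into the pretrained top-k singular subspace.
    block0 = Uk.T @ W0 @ Vk
    block = Uk.T @ W @ Vk

    # Frobenius-norm penalty on the change of the projected block;
    # zero when the fine-tuned weights leave the block untouched.
    return np.sum((block - block0) ** 2)
```

Such a term would be added to the SFT loss with a tunable coefficient; it vanishes when fine-tuning leaves the dominant pretrained subspace unrotated, while updates orthogonal to that subspace remain unpenalized.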
Teleophthalmology adoption and perceived barriers among Colombian general practitioners: a cross-sectional study
Background: Telemedicine has improved access to healthcare, reduced costs, and minimized infection risks, particularly during the COVID-19 pandemic. Teleophthalmology may enhance access to eye care, but