arXiv:2604.14177v1 Announce Type: cross
Abstract: Grammatical error correction (GEC) and explanation (GEE) have made rapid progress, but real teaching scenarios also require *learner-friendly* pedagogical feedback that is actionable, level-appropriate, and encouraging. We introduce **SPFG** (**S**poken **P**edagogical **F**eedback **G**eneration), a dataset built on the Speak & Improve Challenge 2025 corpus, pairing fluency-oriented transcriptions with GEC targets and *human-verified* teacher-style feedback, including preferred/rejected feedback pairs for preference learning. We study a transcript-based Spoken Grammatical Error Correction (SGEC) setting and evaluate three instruction-tuned LLMs (Qwen2.5, Llama-3.1, and GLM-4), comparing supervised fine-tuning (SFT) with preference-based alignment (using DPO and KTO) for jointly generating corrections and feedback. Results show that SFT provides the most consistent improvements, while DPO/KTO yield smaller or mixed gains, and that correction quality and feedback quality are weakly coupled. Our implementation is available at https://github.com/Skywalker-Harrison/spfg.
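As background for the preference-based alignment the abstract compares against SFT, the standard DPO objective (not stated in the abstract itself) trains on exactly the kind of preferred/rejected pairs SPFG provides. Writing $y_w$ for the preferred feedback, $y_l$ for the rejected feedback, $\pi_\theta$ for the policy being tuned, and $\pi_{\mathrm{ref}}$ for the frozen reference model, the loss is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta)
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}
    \left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
```

Here $\sigma$ is the logistic function and $\beta$ controls how far the tuned policy may drift from the reference; KTO differs in that it scores each response individually as desirable or undesirable rather than requiring paired comparisons.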
SegMix: Shuffle-based Feedback Learning for Semantic Segmentation of Pathology Images
arXiv:2604.15777v1 Announce Type: cross Abstract: Segmentation is a critical task in computational pathology, as it identifies areas affected by disease or abnormal growth and is

