arXiv:2510.27055v1 Announce Type: cross
Abstract: We present Contamination Detection via Context (CoDeC), a practical and accurate method to detect and quantify training data contamination in large language models. CoDeC distinguishes between data memorized during training and data outside the training distribution by measuring how in-context learning affects model performance. We find that in-context examples typically boost confidence for unseen datasets but may reduce it when the dataset was part of training, due to disrupted memorization patterns. Experiments show that CoDeC produces interpretable contamination scores that clearly separate seen and unseen datasets, and reveals strong evidence of memorization in open-weight models with undisclosed training corpora. The method is simple, automated, and both model- and dataset-agnostic, making it easy to integrate with benchmark evaluations.
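The abstract's core signal can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's actual implementation: the function name, the use of a simple confidence-drop fraction, and the toy numbers are all assumptions for demonstration.

```python
def contamination_score(zero_shot_conf, in_context_conf):
    """Fraction of examples whose confidence DROPS once in-context
    examples are added. Per the abstract, in-context examples typically
    boost confidence on unseen data but can reduce it on memorized
    (contaminated) data, so a high fraction suggests contamination.
    (Hypothetical scoring rule, not CoDeC's published formula.)"""
    assert len(zero_shot_conf) == len(in_context_conf)
    drops = sum(1 for z, c in zip(zero_shot_conf, in_context_conf) if c < z)
    return drops / len(zero_shot_conf)

# Toy confidences: context helps on unseen data, disrupts memorized data.
unseen_score = contamination_score([0.4, 0.5, 0.6], [0.7, 0.8, 0.9])
memorized_score = contamination_score([0.9, 0.8, 0.95], [0.6, 0.7, 0.5])
print(unseen_score, memorized_score)  # 0.0 1.0
```

In practice the confidence values would come from per-example likelihoods of a language model evaluated with and without few-shot demonstrations prepended to the prompt.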
Generative Semantic Coding for Ultra-Low Bitrate Visual Communication and Analysis
arXiv:2510.27324v1 Announce Type: cross
Abstract: We consider the problem of ultra-low bit rate visual communication for remote vision analysis, human interactions, and control in challenging