arXiv:2510.12476v1 Announce Type: cross
Abstract: Large language models (LLMs) have grown more powerful at language generation, producing fluent text and even imitating personal style. This ability, however, also heightens the risk of identity impersonation. To the best of our knowledge, no prior work has examined personalized machine-generated text (MGT) detection. In this paper, we introduce a dataset, the first benchmark for evaluating detector robustness in personalized settings, built from literary and blog texts paired with their LLM-generated imitations. Our experimental results reveal large performance gaps across detectors in personalized settings: some state-of-the-art models suffer significant drops. We attribute this limitation to the feature-inversion trap, where features that are discriminative in general domains become inverted and misleading when applied to personalized text. Based on this finding, we propose a method: a simple and reliable way to predict detector performance changes in personalized settings. The method identifies latent directions corresponding to inverted features and constructs probe datasets that differ primarily along these features in order to evaluate how much a detector depends on them. Our experiments show that the method accurately predicts both the direction and the magnitude of post-transfer performance changes, achieving 85% correlation with the actual performance gaps. We hope this work will encourage further research on personalized text detection.
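The feature-inversion trap described above can be illustrated with a toy simulation. The sketch below is not the paper's method; it is a minimal NumPy mock-up under invented assumptions: texts are represented as 4-dimensional latent features, one feature separates human from machine text in the general domain but flips sign in the personalized domain (style imitation), and the "detector" is a simple linear probe fit on the general domain. Probing along the learned latent direction then reveals the detector's dependence on the inverted feature, mirroring the probe-dataset idea in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent features: dimension 0 is the feature that will invert.
# General domain: machine text scores HIGHER on dim 0.
human_gen = rng.normal(0.0, 1.0, size=(200, 4))
machine_gen = rng.normal(0.0, 1.0, size=(200, 4))
machine_gen[:, 0] += 2.0

# Personalized domain: the imitation flips the feature, so HUMAN
# text now scores higher on dim 0 while machine text does not.
human_per = rng.normal(0.0, 1.0, size=(200, 4))
human_per[:, 0] += 2.0
machine_per = rng.normal(0.0, 1.0, size=(200, 4))

# A "detector" trained on the general domain: a linear probe along
# the difference of class means (dominated by the inverted feature).
w = machine_gen.mean(axis=0) - human_gen.mean(axis=0)

def detect(x):
    """Higher score -> more likely machine-generated (toy detector)."""
    return x @ w

# Decision threshold halfway between the general-domain class means.
t = 0.5 * (detect(human_gen).mean() + detect(machine_gen).mean())

def accuracy(humans, machines):
    """Fraction of texts classified correctly at threshold t."""
    correct = np.concatenate([detect(humans) < t, detect(machines) > t])
    return float(correct.mean())

acc_general = accuracy(human_gen, machine_gen)    # high: feature helps
acc_personal = accuracy(human_per, machine_per)   # low: feature misleads

# Probe-style dependence check: shift inputs one unit along the latent
# direction and measure the induced change in the detector's score.
direction = w / np.linalg.norm(w)
probe = rng.normal(0.0, 1.0, size=(200, 4))
dependence = float(np.mean(detect(probe + direction) - detect(probe)))

print(f"general-domain accuracy:     {acc_general:.2f}")
print(f"personalized accuracy:       {acc_personal:.2f}")
print(f"dependence on inverted dim:  {dependence:.2f}")
```

A large measured dependence on a direction that is known (or suspected) to invert predicts a large accuracy drop after transfer, which is the qualitative behavior the abstract reports; the actual paper operates on real detector features rather than this synthetic setup.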

