arXiv:2603.25628v1 Announce Type: new Abstract: Short tandem repeats (STRs) are low-entropy regions in the genome, consisting of a short (1-6 bp) unit that is consecutively repeated multiple times. They are known for high mutational instability, due to so-called stutter mutations, in which the number of units in the run increases or decreases. In particular, STRs with […]
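The definition above (a 1-6 bp unit repeated consecutively) can be illustrated with a small sketch that scans a sequence for its longest tandem run; the function name and the example sequence are illustrative, not from the paper:

```python
def longest_str_run(seq: str, unit_len: int) -> tuple[str, int, int]:
    """Find the longest consecutive run of a repeated unit of length unit_len.

    Returns (unit, start_index, repeat_count) for the longest run found.
    A stutter mutation would change repeat_count by +/- 1 or more.
    """
    best = ("", 0, 0)
    for start in range(len(seq) - unit_len + 1):
        unit = seq[start:start + unit_len]
        count = 1
        pos = start + unit_len
        # Extend the run while the next unit_len bases repeat the unit exactly.
        while seq[pos:pos + unit_len] == unit:
            count += 1
            pos += unit_len
        if count > best[2]:
            best = (unit, start, count)
    return best

# The dinucleotide "CA" is repeated 4 times starting at index 3:
print(longest_str_run("GGTCACACACATT", 2))  # ('CA', 3, 4)
```

Real STR callers additionally handle imperfect repeats and stutter noise in sequencing reads; this brute-force scan only captures the basic definition.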
Probing the Lack of Stable Internal Beliefs in LLMs
arXiv:2603.25187v1 Announce Type: cross Abstract: Persona-driven large language models (LLMs) require consistent behavioral tendencies across interactions to simulate human-like personality traits, such as persistence or reliability. However, current LLMs often lack stable internal representations that anchor their responses over extended dialogues. This work explores whether LLMs can maintain “implicit consistency”, defined as persistent adherence to […]
A Public Theory of Distillation Resistance via Constraint-Coupled Reasoning Architectures
arXiv:2603.25022v1 Announce Type: new Abstract: Knowledge distillation, model extraction, and behavior transfer have become central concerns in frontier AI. The main risk is not merely copying, but the possibility that useful capability can be transferred more cheaply than the governance structure that originally accompanied it. This paper presents a public, trade-secret-safe theoretical framework for reducing […]
Evaluation format, not model capability, drives triage failure in the assessment of consumer health AI
arXiv:2603.11413v3 Announce Type: replace-cross Abstract: Ramaswamy et al. reported in Nature Medicine that ChatGPT Health under-triages 51.6% of emergencies, concluding that consumer-facing AI triage poses safety risks. However, their evaluation used an exam-style protocol — forced A/B/C/D output, knowledge suppression, and suppression of clarifying questions — that differs fundamentally from how consumers use health chatbots. […]
From Stateless to Situated: Building a Psychological World for LLM-Based Emotional Support
arXiv:2603.25031v1 Announce Type: new Abstract: In psychological support and emotional companionship scenarios, the core limitation of large language models (LLMs) lies not merely in response quality, but in their reliance on local next-token prediction, which prevents them from maintaining the temporal continuity, stage awareness, and user consent boundaries required for multi-turn intervention. This stateless characteristic […]
Train at Moving Edge: Online-Verified Prompt Selection for Efficient RL Training of Large Reasoning Model
arXiv:2603.25184v1 Announce Type: cross Abstract: Reinforcement learning (RL) has become essential for post-training large language models (LLMs) in reasoning tasks. While scaling rollouts can stabilize training and enhance performance, the computational overhead is a critical issue. In algorithms like GRPO, multiple rollouts per prompt incur prohibitive costs, as a large portion of prompts provide negligible […]
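The observation that many prompts "provide negligible" signal can be made concrete with the standard group-relative advantage used in GRPO: when all rollouts for a prompt earn the same reward, every advantage is zero and the prompt contributes no gradient. A minimal sketch (the paper's own prompt-selection criterion is not visible in the truncated abstract):

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: each rollout's reward minus the group mean,
    scaled by the group standard deviation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        # All rollouts scored identically: zero advantage, no learning signal.
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

# A prompt whose rollouts all succeed (or all fail) contributes nothing:
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))  # [0.0, 0.0, 0.0, 0.0]
# Mixed outcomes carry signal worth spending rollouts on:
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

Selecting prompts likely to produce mixed outcomes, rather than rolling out all prompts uniformly, is one natural way to cut the rollout cost the abstract describes.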
MP-MoE: Matrix Profile-Guided Mixture of Experts for Precipitation Forecasting
arXiv:2603.25046v1 Announce Type: new Abstract: Precipitation forecasting remains a persistent challenge in tropical regions like Vietnam, where complex topography and convective instability often limit the accuracy of Numerical Weather Prediction (NWP) models. While data-driven post-processing is widely used to mitigate these biases, most existing frameworks rely on point-wise objective functions, which suffer from the “double […]
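The Matrix Profile named in the title is a standard time-series tool: for every length-m subsequence, it records the z-normalized distance to that subsequence's nearest non-trivial match. How MP-MoE uses it to guide expert routing is not visible in the truncated abstract; a brute-force sketch of the profile computation itself (production code would use an optimized library such as stumpy):

```python
import numpy as np

def matrix_profile(ts: np.ndarray, m: int) -> np.ndarray:
    """Brute-force matrix profile: for each length-m subsequence, the
    z-normalized Euclidean distance to its nearest non-trivial neighbor."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)], dtype=float)
    # z-normalize each subsequence (assumes no constant windows).
    subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    profile = np.full(n, np.inf)
    excl = m // 2  # exclusion zone to skip trivial self-matches
    for i in range(n):
        for j in range(n):
            if abs(i - j) <= excl:
                continue
            d = float(np.linalg.norm(subs[i] - subs[j]))
            if d < profile[i]:
                profile[i] = d
    return profile

# Low profile values mark repeated motifs; high values mark anomalies.
ts = np.sin(np.linspace(0, 4 * np.pi, 64))
print(matrix_profile(ts, 8).shape)  # (57,)
```

Low-distance regions of the profile correspond to recurring precipitation patterns, which is plausibly what the expert-routing mechanism conditions on.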
Graph-of-Mark: Promote Spatial Reasoning in Multimodal Language Models with Graph-Based Visual Prompting
arXiv:2603.06663v2 Announce Type: replace-cross Abstract: Recent advances in training-free visual prompting, such as Set-of-Mark, have emerged as a promising direction for enhancing the grounding capabilities of multimodal language models (MLMs). These techniques operate by partitioning the input image into object regions and annotating them with marks, predominantly boxes with numeric identifiers, before feeding the augmented […]
ElephantBroker: A Knowledge-Grounded Cognitive Runtime for Trustworthy AI Agents
arXiv:2603.25097v1 Announce Type: new Abstract: Large Language Model based agents increasingly operate in high-stakes, multi-turn settings where factual grounding is critical, yet their memory systems typically rely on flat key-value stores or plain vector retrieval with no mechanism to track the provenance or trustworthiness of stored knowledge. We present ElephantBroker, an open […]
Knowledge-Guided Adversarial Training for Infrared Object Detection via Thermal Radiation Modeling
arXiv:2603.25170v1 Announce Type: cross Abstract: In complex environments, infrared object detection exhibits broad applicability and stability across diverse scenarios. However, infrared object detection is vulnerable to both common corruptions and adversarial examples, leading to potential security risks. To improve the robustness of infrared object detection, current methods mostly adopt a purely data-driven approach, which only superficially […]
RubricEval: A Rubric-Level Meta-Evaluation Benchmark for LLM Judges in Instruction Following
arXiv:2603.25133v1 Announce Type: new Abstract: Rubric-based evaluation has become a prevailing paradigm for evaluating instruction following in large language models (LLMs). Despite its widespread use, the reliability of these rubric-level evaluations remains unclear, calling for meta-evaluation. However, prior meta-evaluation efforts largely focus on the response level, failing to assess the fine-grained judgment accuracy that rubric-based […]
From Scale to Speed: Adaptive Test-Time Scaling for Image Editing
arXiv:2603.00141v3 Announce Type: replace-cross Abstract: Image Chain-of-Thought (Image-CoT) is a test-time scaling paradigm that improves image generation by extending inference time. Most Image-CoT methods focus on text-to-image (T2I) generation. Unlike T2I generation, image editing is goal-directed: the solution space is constrained by the source image and instruction. This mismatch causes three challenges when applying Image-CoT […]