arXiv:2604.03070v1 Announce Type: cross Abstract: Third-party skills extend LLM agents with powerful capabilities but often handle sensitive credentials in privileged environments, yet the resulting leakage risks are poorly understood. We present the first large-scale empirical study of this problem, analyzing 17,022 skills (sampled from 170,226 on SkillsMP) using static analysis, sandbox testing, and manual inspection. We identify 520 […]
Reliability-Gated Multi-Teacher Distillation for Low-Resource Abstractive Summarization
arXiv:2604.03192v1 Announce Type: cross Abstract: We study multi-teacher knowledge distillation for low-resource abstractive summarization from a reliability-aware perspective. We introduce EWAD (Entropy-Weighted Agreement-Aware Distillation), a token-level mechanism that routes supervision between teacher distillation and gold supervision based on inter-teacher agreement, and CPDP (Capacity-Proportional Divergence Preservation), a geometric constraint […]
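The token-level routing described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the agreement weight here is derived from the entropy of the averaged teacher distribution, and the function name and exponential weighting are assumptions.

```python
import numpy as np

def ewad_token_loss(teacher_probs, gold_onehot, student_logprobs):
    """Illustrative token-level routing between teacher distillation
    and gold supervision (the exact EWAD weighting is an assumption).

    teacher_probs:    (K, T, V) distributions from K teachers
    gold_onehot:      (T, V) one-hot gold labels
    student_logprobs: (T, V) student log-probabilities
    """
    mean_t = teacher_probs.mean(axis=0)                  # (T, V) averaged teachers
    # Inter-teacher agreement proxy: low entropy of the averaged
    # distribution -> teachers agree -> trust distillation more.
    entropy = -(mean_t * np.log(mean_t + 1e-9)).sum(-1)  # (T,)
    w = np.exp(-entropy)                                 # agreement weight in (0, 1]
    distill = -(mean_t * student_logprobs).sum(-1)       # per-token CE vs. teachers
    gold = -(gold_onehot * student_logprobs).sum(-1)     # per-token CE vs. gold
    return (w * distill + (1.0 - w) * gold).mean()
```

Tokens where teachers disagree fall back toward gold supervision; tokens where teachers agree lean on the distilled distribution.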
Evaluating Language Models for Harmful Manipulation
arXiv:2603.25326v3 Announce Type: replace Abstract: Interest in the concept of AI-driven harmful manipulation is growing, yet current approaches to evaluating it are limited. This paper introduces a framework for evaluating harmful AI manipulation via context-specific human-AI interaction studies. We illustrate the utility of this framework by assessing an AI model with 10,101 participants spanning interactions […]
SentinelAgent: Intent-Verified Delegation Chains for Securing Federal Multi-Agent AI Systems
arXiv:2604.02767v1 Announce Type: cross Abstract: When Agent A delegates to Agent B, which invokes Tool C on behalf of User X, no existing framework can answer: whose authorization chain led to this action, and where did it violate policy? This paper introduces SentinelAgent, a formal framework for verifiable delegation chains in federal multi-agent AI systems. […]
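The question the abstract poses (whose authorization chain led to this action, and where did it violate policy?) can be made concrete with a toy delegation-chain checker. SentinelAgent's actual formalism is not shown in this feed; the data model and scope semantics below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    principal: str     # who grants authority (user or agent)
    delegate: str      # who receives it
    scopes: frozenset  # actions the delegate may perform

def violating_link(chain, action):
    """Return the index of the first link that fails to authorize
    `action`, or None if the whole chain authorizes it. A link fails
    if the action is outside its scope, or if the chain is broken
    (the grantee of one link is not the grantor of the next)."""
    for i, link in enumerate(chain):
        if action not in link.scopes:
            return i
        if i + 1 < len(chain) and chain[i + 1].principal != link.delegate:
            return i + 1
    return None
```

For the abstract's scenario (User X → Agent A → Agent B → Tool C), the checker pinpoints the link where Agent B's granted scope no longer covers the tool invocation.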
FLEX: A Large-Scale Multimodal, Multi-View Dataset for Learning Structured Representations for Fitness Action Quality Assessment
arXiv:2506.03198v4 Announce Type: replace-cross Abstract: Action Quality Assessment (AQA) — the task of quantifying how well an action is performed — has great potential for detecting errors in gym weight training, where accurate feedback is critical to prevent injuries and maximize gains. Existing AQA datasets, however, are limited to single-view competitive sports and RGB video, […]
f-INE: A Hypothesis Testing Framework for Estimating Influence under Training Randomness
arXiv:2510.10510v2 Announce Type: replace-cross Abstract: Influence estimation methods promise to explain and debug machine learning by estimating the impact of individual samples on the final model. Yet, existing methods collapse under training randomness: the same example may appear critical in one run and irrelevant in the next. Such instability undermines their use in data curation […]
Steering Autoregressive Music Generation with Recursive Feature Machines
arXiv:2510.19127v2 Announce Type: replace-cross Abstract: Controllable music generation remains a significant challenge, with existing methods often requiring model retraining or introducing audible artifacts. We introduce MusicRFM, a framework that adapts Recursive Feature Machines (RFMs) to enable fine-grained, interpretable control over frozen, pre-trained music models by directly steering their internal activations. RFMs analyze a model’s internal […]
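Recursive Feature Machines learn task-relevant directions from the Average Gradient Outer Product (AGOP), which can then steer a frozen model's activations. A minimal sketch, assuming per-sample probe gradients are already collected; MusicRFM's exact extraction and injection points are summarized in the abstract, not shown here.

```python
import numpy as np

def agop_direction(grads):
    """Top eigenvector of the Average Gradient Outer Product (AGOP),
    the core statistic RFMs use to find feature directions.
    grads: (N, D) per-sample gradients of a probe w.r.t. activations."""
    M = grads.T @ grads / len(grads)   # (D, D) AGOP matrix
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]              # direction of largest eigenvalue

def steer(hidden, direction, alpha=1.0):
    """Nudge a frozen model's hidden state along the learned direction;
    alpha trades control strength against generation fidelity."""
    d = direction / (np.linalg.norm(direction) + 1e-9)
    return hidden + alpha * d
```

Because only activations are modified at inference time, the pre-trained model's weights stay frozen, avoiding retraining.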
Early-Warning Signals of Grokking via Loss-Landscape Geometry
arXiv:2602.16967v3 Announce Type: replace-cross Abstract: Grokking — the abrupt transition from memorization to generalization after prolonged training — has been linked to confinement on low-dimensional execution manifolds in modular arithmetic. Whether this mechanism extends beyond arithmetic remains open. We study two sequence-learning benchmarks: SCAN compositional generalization and Dyck-1 depth prediction. Across both tasks and a […]
ERPO: Token-Level Entropy-Regulated Policy Optimization for Large Reasoning Models
arXiv:2603.28204v2 Announce Type: replace-cross Abstract: Reinforcement learning from verifiable rewards has significantly advanced the reasoning capabilities of large language models. However, Group Relative Policy Optimization (GRPO) typically assigns a uniform, sequence-level advantage to all tokens, thereby overlooking the intrinsic information heterogeneity along reasoning chains. We show that this coarse-grained credit assignment leads to premature entropy […]
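The contrast the abstract draws (GRPO's uniform sequence-level advantage versus token-level credit that respects information heterogeneity) can be sketched with a mean-preserving, entropy-modulated reweighting. This is an illustrative stand-in for ERPO's mechanism, which the truncated abstract does not fully specify; the linear form and `beta` parameter are assumptions.

```python
import numpy as np

def token_advantages(seq_advantage, token_entropies, beta=0.1):
    """Redistribute a sequence-level advantage across tokens by entropy.

    GRPO would assign `seq_advantage` uniformly to every token. Here,
    high-entropy tokens (decision points in the reasoning chain) get
    more credit than near-deterministic ones, while the mean advantage
    over the sequence is preserved.
    """
    h = np.asarray(token_entropies, dtype=float)
    w = 1.0 + beta * (h - h.mean())   # mean of w is exactly 1
    return seq_advantage * w
```

The mean-preserving property keeps the expected policy-gradient signal of the sequence unchanged; only its allocation across tokens shifts.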
Corporations Constitute Intelligence
arXiv:2604.02912v1 Announce Type: cross Abstract: In January 2026, Anthropic published a 79-page “constitution” for its AI model Claude, the most comprehensive corporate AI governance document ever released. This Article offers the first legal and democratic-theoretic analysis of that document. Despite genuine philosophical sophistication, the constitution harbors two structural defects. First, it excludes the contexts where […]
Self-Optimizing Multi-Agent Systems for Deep Research
arXiv:2604.02988v1 Announce Type: cross Abstract: Given a user’s complex information need, a multi-agent Deep Research system iteratively plans, retrieves, and synthesizes evidence across hundreds of documents to produce a high-quality answer. In one possible architecture, an orchestrator agent coordinates the process, while parallel worker agents execute tasks. Current Deep Research systems, however, often rely on […]
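The orchestrator/worker architecture mentioned above can be sketched with a small fan-out pattern. All names and the synthesis step are hypothetical; real Deep Research workers would perform retrieval and summarization where the placeholder sleeps.

```python
import asyncio

async def worker(name, task):
    """Hypothetical worker: retrieve and synthesize evidence for one sub-task."""
    await asyncio.sleep(0)  # stand-in for retrieval + synthesis
    return f"{name}: evidence for {task!r}"

async def orchestrate(question, subtasks):
    """Orchestrator pattern from the abstract: plan sub-tasks, fan out
    to parallel workers, then gather their results for synthesis."""
    results = await asyncio.gather(
        *(worker(f"worker-{i}", t) for i, t in enumerate(subtasks))
    )
    return {"question": question, "evidence": list(results)}
```

In a self-optimizing variant, the orchestrator would additionally revise its plan between rounds based on the gathered evidence.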
JoyAI-LLM Flash: Advancing Mid-Scale LLMs with Token Efficiency
arXiv:2604.03044v1 Announce Type: cross Abstract: We introduce JoyAI-LLM Flash, an efficient Mixture-of-Experts (MoE) language model designed to redefine the trade-off between strong performance and token efficiency in the sub-50B parameter regime. JoyAI-LLM Flash is pretrained on a massive corpus of 20 trillion tokens and further optimized through a rigorous post-training pipeline, including supervised fine-tuning (SFT), […]