arXiv:1904.03236v5 Announce Type: replace Abstract: We report the emergence of log-normal superstatistics in the collective motion of ants confined to a quasi-2D arena and exposed to a panic-inducing stimulus. A data-driven superstatistical Langevin model accurately reproduces the transition from stationary behavior to an organized escape response, characterized by non-Gaussian velocity distributions and a stochastic diffusion […]
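The mechanism named in the abstract (a Langevin equation whose diffusion coefficient is itself log-normally distributed) can be sketched in a few lines; the parameters below are illustrative choices, not the values fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-normal superstatistics: the diffusion coefficient D fluctuates
# between mesoscopic time windows and is drawn from a log-normal
# distribution (illustrative parameters, not the paper's fit).
n_windows, steps_per_window, dt, gamma = 500, 200, 0.01, 1.0
D = rng.lognormal(mean=0.0, sigma=1.0, size=n_windows)

# Overdamped Langevin dynamics for one velocity component:
#   dv = -gamma * v * dt + sqrt(2 * D) * dW
v, velocities = 0.0, []
for d in D:
    for _ in range(steps_per_window):
        v += -gamma * v * dt + np.sqrt(2.0 * d * dt) * rng.normal()
        velocities.append(v)

v_arr = np.asarray(velocities)
# Mixing Gaussians over a fluctuating D produces a heavy-tailed,
# non-Gaussian velocity distribution: excess kurtosis above the
# Gaussian value of 0.
excess_kurt = ((v_arr - v_arr.mean()) ** 4).mean() / v_arr.var() ** 2 - 3.0
print(f"excess kurtosis: {excess_kurt:.2f}")
```

A constant-D run of the same loop would give excess kurtosis near zero, which is the contrast the superstatistical model exploits.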
AISAC: An Integrated multi-agent System for Transparent, Retrieval-Grounded Scientific Assistance
arXiv:2511.14043v3 Announce Type: replace Abstract: AI Scientific Assistant Core (AISAC) is a transparent, modular multi-agent runtime developed at Argonne National Laboratory to support long-horizon, evidence-grounded scientific reasoning. Rather than proposing new agent algorithms or claiming autonomous scientific discovery, AISAC contributes a governed execution substrate that operationalizes key requirements for deploying agentic AI in scientific practice, […]
Seed1.8 Model Card: Towards Generalized Real-World Agency
arXiv:2603.20633v2 Announce Type: replace Abstract: We present Seed1.8, a foundation model aimed at generalized real-world agency: going beyond single-turn prediction to multi-turn interaction, tool use, and multi-step execution. Seed1.8 keeps strong LLM and vision-language performance while supporting a unified agentic interface: search, code generation and execution, and GUI interaction. For deployment, it offers latency- and cost-aware […]
Explainable AI needs formalization
arXiv:2409.14590v5 Announce Type: replace-cross Abstract: The field of “explainable artificial intelligence” (XAI) seemingly addresses the desire that decisions of machine learning systems should be human-understandable. However, in its current state, XAI itself needs scrutiny. Popular methods cannot reliably answer relevant questions about ML models, their training data, or test inputs, because they systematically attribute importance […]
Measuring the (Un)Faithfulness of Concept-Based Explanations
arXiv:2504.10833v4 Announce Type: replace-cross Abstract: Deep vision models perform input-output computations that are hard to interpret. Concept-based explanation methods (CBEMs) increase interpretability by re-expressing parts of the model with human-understandable semantic units, or concepts. Checking whether the derived explanations are faithful, i.e., whether they represent the model's internal computation, requires a surrogate that […]
PENGUIN: Enhancing Transformer with Periodic-Nested Group Attention for Long-term Time Series Forecasting
arXiv:2508.13773v3 Announce Type: replace-cross Abstract: Despite advances in Transformer architectures, their effectiveness for long-term time series forecasting (LTSF) remains controversial. In this paper, we investigate the potential of integrating explicit periodicity modeling into the self-attention mechanism to enhance the performance of Transformer-based architectures for LTSF. Specifically, we propose PENGUIN, a simple yet effective periodic-nested […]
SceneAdapt: Scene-aware Adaptation of Human Motion Diffusion
arXiv:2510.13044v2 Announce Type: replace-cross Abstract: Human motion is inherently diverse and semantically rich, while also shaped by the surrounding scene. However, existing motion generation approaches fail to generate semantically diverse motion while simultaneously respecting geometric scene constraints, since constructing large-scale datasets with both rich text-motion coverage and precise scene interactions is extremely challenging. In this […]
Object-Centric World Models for Causality-Aware Reinforcement Learning
arXiv:2511.14262v3 Announce Type: replace-cross Abstract: World models have been developed to support sample-efficient deep reinforcement learning agents. However, it remains challenging for world models to accurately replicate environments that are high-dimensional, non-stationary, and composed of multiple objects with rich interactions, since most world models learn holistic representations of all environmental components. By contrast, humans perceive […]
Measuring all the noises of LLM Evals
arXiv:2512.21326v2 Announce Type: replace-cross Abstract: Separating signal from noise is central to experiments. Applying well-established statistical methods effectively to LLM evals requires consideration of their unique noise characteristics. We clearly define and measure three types of noise: prediction noise from generating different answers on a given question, data noise from sampling questions, and their combined […]
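The three-way split the abstract defines (prediction noise across repeated generations, data noise across sampled questions, and their combination) is an instance of the law of total variance, which can be illustrated on synthetic scores; the numbers below follow the quoted definitions, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy eval scores: rows = questions sampled from a benchmark,
# columns = repeated generations on the same question.
# (Synthetic data for illustration only.)
n_questions, n_generations = 200, 16
pass_rate = rng.uniform(0.2, 0.9, size=n_questions)  # per-question difficulty
scores = rng.binomial(1, pass_rate[:, None],
                      size=(n_questions, n_generations)).astype(float)

# Law of total variance:
#   Var(score) = E_q[Var(score | q)]  (prediction noise)
#              + Var_q(E[score | q])  (data noise)
prediction_noise = scores.var(axis=1).mean()  # spread across generations
data_noise = scores.mean(axis=1).var()        # spread across questions
total = scores.var()                          # combined noise

print(f"prediction: {prediction_noise:.4f}  data: {data_noise:.4f}  "
      f"total: {total:.4f}")
```

With population (ddof=0) variances and equal generation counts per question, the two components sum to the total exactly, which is what makes the decomposition useful for sizing confidence intervals on eval scores.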
Ecological systems in a modeling perspective
arXiv:2603.26860v1 Announce Type: new Abstract: May (1974, 1976) opened the debate on whether biological populations might exhibit nonlinear dynamics and chaos. However, it has generally been difficult to verify nonlinear dynamics in biological populations. There are many reports concerning problems with this issue and some of them can be traced back to Hassell, Lawton, and […]
Power Couple? AI Growth and Renewable Energy Investment
arXiv:2603.26678v1 Announce Type: cross Abstract: AI and renewable energy are increasingly framed as a “power couple” — the idea that surging AI electricity demand will accelerate clean-energy investment — yet concerns persist that AI will instead entrench fossil-fuel carbon lock-in. We reconcile these views by modeling the equilibrium interaction between AI growth and renewable investment. […]
Taming Score-Based Denoisers in ADMM: A Convergent Plug-and-Play Framework
arXiv:2603.10281v2 Announce Type: replace-cross Abstract: While score-based generative models have emerged as powerful priors for solving inverse problems, directly integrating them into optimization algorithms such as ADMM remains nontrivial. Two central challenges arise: i) the mismatch between the noisy data manifolds used to train the score functions and the geometry of ADMM iterates, especially due […]
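The integration the abstract refers to follows the standard plug-and-play ADMM pattern, where the proximal step for the prior is replaced by a denoiser; the skeleton below uses a simple moving-average smoother as a stand-in for a learned score-based denoiser and does not reproduce the paper's convergence machinery:

```python
import numpy as np

def pnp_admm(y, A, At, denoise, rho=1.0, n_iters=50):
    """Plug-and-play ADMM for min_x 0.5||A x - y||^2 + R(x), with the
    prox of R replaced by a plugged-in denoiser. Generic skeleton only."""
    x = At(y)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iters):
        # x-update: data-fidelity proximal step, solved here by a few
        # gradient steps (a closed-form solve applies when A^T A + rho I
        # is cheap to invert).
        for _ in range(10):
            grad = At(A(x) - y) + rho * (x - z + u)
            x = x - 0.1 * grad
        # z-update: prior prox, replaced by the denoiser (plug-and-play).
        z = denoise(x + u)
        # dual update.
        u = u + x - z
    return x

# Toy usage: denoising (identity forward operator) with a moving-average
# smoother standing in for a score-based denoiser.
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
y = clean + 0.3 * rng.normal(size=256)
smooth = lambda v: np.convolve(v, np.ones(9) / 9, mode="same")
x_hat = pnp_admm(y, A=lambda v: v, At=lambda v: v, denoise=smooth)
print(f"noisy MSE: {np.mean((y - clean) ** 2):.4f}, "
      f"recon MSE: {np.mean((x_hat - clean) ** 2):.4f}")
```

The difficulty the paper targets is visible even in this sketch: nothing in the loop guarantees that `x + u` lies on the noisy manifolds a score function was trained on, which is the mismatch described in challenge (i).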