arXiv:2601.21293v2 Announce Type: replace-cross Abstract: Reliability-centered prognostics for rotating machinery requires early-warning signals that remain accurate under nonstationary operating conditions, domain shifts across speed, load, sensors, and machines, and severe class imbalance, while keeping false-alarm rates small and predictable. We propose the Physics-Guided Tiny-Mamba Transformer (PG-TMT), a compact tri-branch encoder tailored for online condition monitoring. […]
Large language models eroding science understanding: an experimental study
arXiv:2604.25639v1 Announce Type: cross Abstract: This paper is under review in AI and Ethics. This study examines whether large language models (LLMs) can reliably answer scientific questions and demonstrates how easily they can be influenced by fringe scientific material. The authors modified custom LLMs to prioritise knowledge in selected fringe papers on the Fine Structure […]
Responsible Evaluation of AI for Mental Health
arXiv:2602.00065v2 Announce Type: replace-cross Abstract: Although artificial intelligence (AI) shows growing promise for mental health care, current approaches to evaluating AI tools in this domain remain fragmented and poorly aligned with clinical practice, social context, and first-hand user experience. This paper argues for a rethinking of responsible evaluation — what is measured, by whom, and […]
A Milestone in Formalization: The Sphere Packing Problem in Dimension 8
arXiv:2604.23468v2 Announce Type: replace-cross Abstract: In 2016, Viazovska famously solved the sphere packing problem in dimension $8$, using modular forms to construct a ‘magic’ function satisfying optimality conditions determined by Cohn and Elkies in 2003. In March 2024, Hariharan and Viazovska launched a project to formalize this solution and related mathematical facts in the Lean […]
JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models
arXiv:2604.16171v3 Announce Type: replace-cross Abstract: Adapter-based methods have become a cost-effective approach to continual learning (CL) for Large Language Models (LLMs), by sequentially learning a low-rank update matrix for each task. To mitigate catastrophic forgetting, state-of-the-art approaches impose constraints on new adapters with respect to the previous ones, by targeting either subspace or coordinate-wise interference. […]
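The adapter idea the abstract builds on — a frozen base weight plus a trainable low-rank update per task — can be sketched in a few lines. This is a generic LoRA-style illustration, not JumpLoRA itself; the sizes, names, and zero-initialization convention here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # hidden size and adapter rank (r << d)
W = rng.standard_normal((d, d))         # frozen base weight (never updated)
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-init

# Effective weight: only the rank-r product B @ A is learned per task,
# costing 2*d*r parameters instead of d*d for a full update.
W_eff = W + B @ A

x = rng.standard_normal(d)
# With B zero-initialized, the adapter is a no-op before any training.
assert np.allclose(W_eff @ x, W @ x)
```

Continual-learning variants like those the abstract surveys then constrain each new (A, B) pair relative to earlier tasks' adapters to limit interference.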
Health System Scale Semantic Search Across Unstructured Clinical Notes
arXiv:2604.25605v1 Announce Type: cross Abstract: Introduction: Semantic search, which retrieves documents based on conceptual similarity rather than keyword matching, offers substantial advantages for retrieval of clinical information. However, deploying semantic search across entire health systems, comprising hundreds of millions of clinical notes, presents formidable engineering, cost, and governance challenges that have prevented adoption. Methods: We […]
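The contrast the abstract draws — retrieval by conceptual similarity rather than keyword matching — reduces to nearest-neighbor search over embeddings. A minimal sketch, with hand-assigned toy vectors standing in for a real sentence encoder (the notes and numbers below are invented for illustration, not from the paper):

```python
import numpy as np

# Toy "embeddings": hand-assigned vectors; a deployed system would use a
# sentence-embedding model over each clinical note instead.
docs = {
    "pt reports chest pain radiating to left arm":  np.array([0.9, 0.1, 0.0]),
    "myocardial infarction ruled out by troponin":  np.array([0.8, 0.2, 0.1]),
    "ankle sprain after fall, no fracture on x-ray": np.array([0.0, 0.1, 0.9]),
}
query_vec = np.array([0.85, 0.15, 0.05])  # stands in for encoding "possible heart attack"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank notes by conceptual similarity: the cardiac notes rank first even
# though none of them contain the literal words "heart attack".
ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
```

At the scale the abstract describes (hundreds of millions of notes), the exhaustive sort here would be replaced by an approximate nearest-neighbor index.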
Cross-Lingual Jailbreak Detection via Semantic Codebooks
arXiv:2604.25716v1 Announce Type: cross Abstract: Safety mechanisms for large language models (LLMs) remain predominantly English-centric, creating systematic vulnerabilities in multilingual deployment. Prior work shows that translating malicious prompts into other languages can substantially increase jailbreak success rates, exposing a structural cross-lingual security gap. We investigate whether such attacks can be mitigated through language-agnostic semantic similarity […]
Hard to See, Hard to Label: Generative and Symbolic Acquisition for Subtle Visual Phenomena
arXiv:2604.22990v2 Announce Type: replace-cross Abstract: Subtle visual anomalies such as hairline cracks, sub-millimeter voids, and low-contrast inclusions are structurally atypical yet visually ambiguous, making them both difficult to annotate and easy to overlook during active learning. Standard acquisition heuristics based on discriminative uncertainty or feature diversity often overselect dominant patterns while underexploring sparse yet important […]
Luminol-AIDetect: Fast Zero-shot Machine-Generated Text Detection based on Perplexity under Text Shuffling
arXiv:2604.25860v1 Announce Type: cross Abstract: Machine-generated text (MGT) detection requires identifying structurally invariant signals across generation models, rather than relying on model-specific fingerprints. In this respect, we hypothesize that while large language models excel at local semantic consistency, their autoregressive nature results in a specific kind of structural fragility compared to human writing. We propose […]
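The fragility hypothesis above — that an autoregressive model's likelihood of a text degrades sharply once token order is broken — can be illustrated with a toy bigram language model in place of an LLM. This is a conceptual sketch, not the paper's detector; the corpus is invented, and a deterministic permutation (reversal) stands in for random shuffling.

```python
import math
from collections import Counter

def bigram_perplexity(tokens, bigram_counts, unigram_counts, vocab_size):
    """Add-one smoothed bigram perplexity of a token sequence."""
    logp = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigram_counts[(prev, cur)] + 1) / (unigram_counts[prev] + vocab_size)
        logp += math.log(p)
    return math.exp(-logp / (len(tokens) - 1))

tokens = "the cat sat on the mat and the dog sat on the rug".split()
bigram_counts = Counter(zip(tokens, tokens[1:]))
unigram_counts = Counter(tokens)
vocab = len(unigram_counts)

ppl_original = bigram_perplexity(tokens, bigram_counts, unigram_counts, vocab)
# Reversing the sequence breaks every left-to-right dependency the model
# was fit on, so every bigram becomes unseen and perplexity jumps.
ppl_permuted = bigram_perplexity(tokens[::-1], bigram_counts, unigram_counts, vocab)
```

A shuffle-based detector in this spirit would compare perplexity before and after permutation, on the hypothesis that the gap behaves differently for machine-generated and human text.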
Emotive Architectures: The Role of LLMs in Adjusting Work Environments
arXiv:2604.25601v1 Announce Type: cross Abstract: In remote and hybrid work contexts, the integration of physical and digital environments is revolutionizing spatial experiences, collaboration, and interpersonal interactions. This study examines three fundamental spatial conditions: the physical environment, characterized by material and sensory attributes; the virtual environment, influenced by immersive technologies; and their fusion into hybrid environments […]
ReCreate: Reasoning and Creating Domain Agents Driven by Experience
arXiv:2601.11100v2 Announce Type: replace Abstract: Large Language Model agents are reshaping the industrial landscape. However, most practical agents remain human-designed because tasks differ widely, making them labor-intensive to build. This situation poses a central question: can we automatically create and adapt domain agents in the wild? While several recent approaches have sought to automate agent […]
SketchVLM: Vision language models can annotate images to explain thoughts and guide users
arXiv:2604.22875v2 Announce Type: replace-cross Abstract: When answering questions about images, humans naturally point, label, and draw to explain their reasoning. In contrast, modern vision-language models (VLMs) such as Gemini-3-Pro and GPT-5 only respond with text, which can be difficult for users to verify. We present SketchVLM, a training-free, model-agnostic framework that enables VLMs to produce […]