SUBARU: A Practical Approach to Power Saving in Hearables Using SUB-Nyquist Audio Resolution Upsampling

arXiv:2506.22321v2 Announce Type: replace-cross Abstract: Hearables are wearable computers that are worn on the ear. Bone conduction microphones (BCMs) are used with air conduction microphones (ACMs) in hearables as a supporting modality for multimodal speech enhancement (SE) in noisy conditions. However, existing works don’t consider the following practical aspects for low-power implementations on hearables: (i) […]

Context Channel Capacity: An Information-Theoretic Framework for Understanding Catastrophic Forgetting

arXiv:2603.07415v1 Announce Type: cross Abstract: Catastrophic forgetting remains a central challenge in continual learning (CL), yet lacks a unified information-theoretic explanation for why some architectures forget catastrophically while others do not. We introduce Context Channel Capacity ($C_{\mathrm{ctx}}$), the mutual information between a CL architecture’s context signal and its generated parameters, and prove that zero forgetting […]
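
The abstract defines its central quantity as a mutual information. A minimal worked statement of that definition, assuming the context signal is a random variable $Z$ and the generated parameters are $\Theta$ (both symbols are ours, not the paper's):

```latex
% Hedged sketch: C_ctx as the mutual information between an assumed
% context variable Z and the parameters Theta it induces.
C_{\mathrm{ctx}} = I(Z;\Theta)
                 = H(\Theta) - H(\Theta \mid Z)
                 = \mathbb{E}_{p(z,\theta)}\!\left[\log \frac{p(z,\theta)}{p(z)\,p(\theta)}\right]
```

The chain of equalities is just the standard definition of mutual information; nothing from the paper beyond the stated definition is used here.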

Modeling Metabolic State Transitions in Obesity Using a Time-Varying Lambda-Omega Framework

arXiv:2603.06819v1 Announce Type: new Abstract: Obesity does not emerge abruptly; rather, it develops gradually over extended periods. The gradual progression often prevents early recognition of physiological changes until excess adiposity is established. A common belief is that weight reduction can be achieved simply by “eating less and moving more”. Although reductions in caloric intake and […]

Towards Lightweight Adaptation of Speech Enhancement Models in Real-World Environments

arXiv:2603.07471v1 Announce Type: cross Abstract: Recent studies have shown that post-deployment adaptation can improve the robustness of speech enhancement models in unseen noise conditions. However, existing methods often incur prohibitive computational and memory costs, limiting their suitability for on-device deployment. In this work, we investigate model adaptation in realistic settings with dynamic acoustic scene changes […]

Reinforcing Numerical Reasoning in LLMs for Tabular Prediction via Structural Priors

arXiv:2510.17385v3 Announce Type: replace-cross Abstract: Tabular prediction traditionally relies on gradient-boosted decision trees and deep learning models, which excel in specific tasks but lack interpretability and transferability. Reasoning large language models (LLMs) promise cross-task adaptability with transparent reasoning traces, yet their potential for tabular data remains unrealized. To bridge this gap, we propose a reasoning […]

A Unified View of Drifting and Score-Based Models

arXiv:2603.07514v1 Announce Type: cross Abstract: Drifting models train one-step generators by optimizing a mean-shift discrepancy induced by a kernel between the data and model distributions, with Laplace kernels used by default in practice. At each point, this discrepancy compares the kernel-weighted displacement toward nearby data samples with the corresponding displacement toward nearby model samples, yielding […]
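
The displacement comparison described above can be sketched in a few lines of NumPy. This is a rough illustration under assumptions (kernel bandwidth, normalization, and function names are ours), not the paper's implementation:

```python
import numpy as np

def laplace_kernel(x, samples, h=1.0):
    # Laplace kernel weights exp(-||x - s|| / h) for each sample s.
    dists = np.linalg.norm(samples - x, axis=1)
    return np.exp(-dists / h)

def mean_shift_displacement(x, samples, h=1.0):
    # Kernel-weighted average displacement from x toward nearby samples.
    w = laplace_kernel(x, samples, h)
    return (w[:, None] * (samples - x)).sum(axis=0) / (w.sum() + 1e-12)

def drift_discrepancy(x, data, model_samples, h=1.0):
    # Compare the displacement toward nearby data points against the
    # displacement toward nearby model samples at the same point x.
    return (mean_shift_displacement(x, data, h)
            - mean_shift_displacement(x, model_samples, h))
```

A one-step generator would then be trained to reduce this discrepancy; exactly how it enters the training objective is the paper's contribution and is not reproduced here.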

Symmetry-Constrained Language-Guided Program Synthesis for Discovering Governing Equations from Noisy and Partial Observations

arXiv:2603.06869v1 Announce Type: new Abstract: Discovering compact governing equations from experimental observations is one of the defining objectives of quantitative science, yet practical discovery pipelines routinely fail when measurements are noisy, relevant state variables are unobserved, or multiple symbolic structures explain the data equally well within statistical uncertainty. Here we introduce SymLang (Symmetry-constrained Language-guided equation […]

Nwāchā Munā: A Devanagari Speech Corpus and Proximal Transfer Benchmark for Nepal Bhasha ASR

arXiv:2603.07554v1 Announce Type: cross Abstract: Nepal Bhasha (Newari), an endangered language of the Kathmandu Valley, remains digitally marginalized due to the severe scarcity of annotated speech resources. In this work, we introduce Nwāchā Munā, a newly curated 5.39-hour manually transcribed Devanagari speech corpus for Nepal Bhasha, and establish the first benchmark using script-preserving acoustic modeling. […]

NC-Bench: An LLM Benchmark for Evaluating Conversational Competence

arXiv:2601.06426v2 Announce Type: replace-cross Abstract: The Natural Conversation Benchmark (NC-Bench) introduces a new approach to evaluating the general conversational competence of large language models (LLMs). Unlike prior benchmarks that focus on the content of model behavior, NC-Bench focuses on the form and structure of natural conversation. Grounded in the IBM Natural Conversation Framework (NCF), NC-Bench […]

AI-Driven Phase Identification from X-ray Hyperspectral Imaging of Cycled Na-ion Cathode Materials

arXiv:2603.07666v1 Announce Type: cross Abstract: Na-ion batteries have emerged as viable candidates for large-scale energy storage applications due to resource abundance and cost advantages. The constraints imposed on their performance and durability, for instance, by complex phase transformations in positive electrode materials during electrochemical cycling, can be addressed and are thus not detrimental to […]

LEAD: Breaking the No-Recovery Bottleneck in Long-Horizon Reasoning

arXiv:2603.06870v1 Announce Type: new Abstract: Long-horizon execution in Large Language Models (LLMs) remains unstable even when high-level strategies are provided. Evaluating on controlled algorithmic puzzles, we demonstrate that while decomposition is essential for stability, extreme decomposition creates a “no-recovery bottleneck”. We show that this bottleneck becomes critical due to highly non-uniform error distribution, where consistent […]

QuadAI at SemEval-2026 Task 3: Ensemble Learning of Hybrid RoBERTa and LLMs for Dimensional Aspect-Based Sentiment Analysis

arXiv:2603.07766v1 Announce Type: cross Abstract: We present our system for SemEval-2026 Task 3 on dimensional aspect-based sentiment regression. Our approach combines a hybrid RoBERTa encoder, which jointly predicts sentiment using regression and discretized classification heads, with large language models (LLMs) via prediction-level ensemble learning. The hybrid encoder improves prediction stability by combining continuous and discretized […]
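
A minimal sketch of the hybrid dual-head encoder and the prediction-level ensemble described above. The model name, number of bins, pooling choice, and ensemble weight are assumptions for illustration, not details taken from the system description:

```python
import torch.nn as nn
from transformers import AutoModel

class HybridSentimentModel(nn.Module):
    # RoBERTa-style encoder with a continuous regression head and a
    # discretized classification head, as the abstract describes.
    # "roberta-base", num_bins, and [CLS] pooling are illustrative choices.
    def __init__(self, model_name="roberta-base", num_bins=9):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.reg_head = nn.Linear(hidden, 1)         # continuous sentiment score
        self.cls_head = nn.Linear(hidden, num_bins)  # discretized score bins

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.reg_head(h).squeeze(-1), self.cls_head(h)

def ensemble(encoder_score, llm_score, w=0.5):
    # Prediction-level ensembling: combine the encoder's numeric prediction
    # with an LLM-derived prediction (the weighting scheme here is assumed).
    return w * encoder_score + (1.0 - w) * llm_score
```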
