arXiv:2605.00721v1 Announce Type: cross Abstract: The Room Acoustics and Speaker Distance Estimation (SDE) Challenge at ICASSP 2025 explores the effectiveness of augmented room impulse response (RIR) data for improving SDE model performance. The GenDARA challenge involves generating RIRs to supplement sparse datasets and fine-tuning SDE models with the augmented data. We employ the open-source […]
PORTool: Importance-Aware Policy Optimization with Rewarded Tree for Multi-Tool-Integrated Reasoning
arXiv:2510.26020v2 Announce Type: replace-cross Abstract: Multi-tool-integrated reasoning enables LLM-empowered tool-use agents to solve complex tasks by interleaving natural-language reasoning with calls to external tools. However, training such agents from outcome-only rewards suffers from credit-assignment ambiguity, obscuring which intermediate tool-use decisions drive success or failure. In this paper, we propose PORTool, an importance-aware policy-optimization algorithm that […]
Semantic Level of Detail for Knowledge Graphs: Discovering Abstraction Boundaries via Spectral Heat Diffusion
arXiv:2603.08965v2 Announce Type: replace-cross Abstract: Graph-structured knowledge systems — from knowledge graphs to GraphRAG pipelines — organize information into hierarchical communities, yet lack a principled mechanism for continuous resolution control: where do the qualitative boundaries between abstraction levels lie, and how should an agent navigate them? Current approaches rely on discrete community detection with manually […]
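The "spectral heat diffusion" idea in this abstract can be illustrated with a minimal sketch (the toy graph and all names below are illustrative, not taken from the paper): the heat kernel exp(-tL) of a graph Laplacian smooths a node signal, and the diffusion time t acts as a continuous resolution knob between fine-grained and coarse views.

```python
import numpy as np

# Toy sketch of spectral heat diffusion on a graph (assumption: this is only a
# generic illustration of the heat kernel, not the paper's actual method).
A = np.array([                      # adjacency: two triangles joined by a bridge
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian

# Spectral form of the heat kernel: H_t = U exp(-t Lambda) U^T
lam, U = np.linalg.eigh(L)

def heat_kernel(t):
    return U @ np.diag(np.exp(-t * lam)) @ U.T

x = np.zeros(6)
x[0] = 1.0                          # unit mass placed on node 0

for t in (0.1, 1.0, 10.0):
    h = heat_kernel(t) @ x
    # Small t keeps mass local (fine resolution); large t spreads it toward
    # uniform (coarse resolution), so t is a continuous abstraction scale.
    print(t, np.round(h, 3))
```

At t → 0 the kernel is the identity (every node distinct); as t grows, mass within a tightly connected community equalizes first, and only at larger t does it cross the bridge, which is the intuition behind using diffusion scales to locate abstraction boundaries.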
Bring Your Own Prompts: Use-Case-Specific Bias and Fairness Evaluation for LLMs
arXiv:2407.10853v5 Announce Type: replace-cross Abstract: Bias and fairness risks in Large Language Models (LLMs) vary substantially across deployment contexts, yet existing approaches lack systematic guidance for selecting appropriate evaluation metrics. We present a decision framework that maps LLM use cases, characterized by a model and population of prompts, to relevant bias and fairness metrics based […]
Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts
arXiv:2508.06361v4 Announce Type: replace-cross Abstract: Large Language Models (LLMs) are widely deployed in reasoning, planning, and decision-making tasks, making their trustworthiness critical. A significant and underexplored risk is intentional deception, where an LLM deliberately fabricates or conceals information to serve a hidden objective. Existing studies typically induce deception by explicitly setting a hidden objective through […]
Graph Rewiring in GNNs to Mitigate Over-Squashing and Over-Smoothing: A Survey
arXiv:2411.17429v2 Announce Type: replace-cross Abstract: Graph Neural Networks are powerful models for learning from graph-structured data, yet their effectiveness is often limited by two critical challenges: over-squashing, where information from distant nodes is excessively compressed, and over-smoothing, where repeated propagation makes node representations indistinguishable. Both phenomena stem from the interaction between message passing and the […]
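The over-smoothing phenomenon named in this abstract is easy to demonstrate in isolation (a generic sketch, not drawn from the survey): repeatedly applying a normalized propagation matrix, as deep message passing does, collapses node features toward a common value.

```python
import numpy as np

# Toy demonstration of over-smoothing (assumption: illustrative only, not the
# survey's formalism): repeated propagation with a row-normalized, self-looped
# adjacency drives node representations toward indistinguishability.
A = np.array([                      # 5-node cycle graph
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
A_hat = A + np.eye(5)               # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalized propagation

X = np.random.default_rng(0).normal(size=(5, 3))  # random node features
for layer in range(50):
    X = P @ X                       # one propagation step per "layer"

# After many layers the per-feature spread across nodes is near zero:
spread = X.std(axis=0)
print(np.round(spread, 6))
```

Rewiring methods surveyed in the paper attack the dual problem: they modify this propagation structure (adding or removing edges) so that long-range information survives without requiring the many layers that cause this collapse.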
Non-invasive load measurement in the human tibia via spectral analysis of flexural waves
arXiv:2511.06140v3 Announce Type: replace Abstract: Forces transmitted by bones are routinely studied in human biomechanics, but it is challenging to measure them non-invasively, especially outside of laboratory settings. We introduce a technique for non-invasive, in vivo measurement of tibial compressive force using flexural waves propagating in the tibia. Modelling the tibia as an axially compressed […]
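The modelling step mentioned above (tibia as an axially compressed beam) can be made concrete with the standard Euler–Bernoulli result; the symbols below are generic beam-theory quantities, not necessarily the paper's notation. For a beam with bending stiffness $EI$, mass per length $\rho A$, and compressive axial load $P$, transverse motion $w(x,t)$ obeys

```latex
EI\,\frac{\partial^4 w}{\partial x^4}
+ P\,\frac{\partial^2 w}{\partial x^2}
+ \rho A\,\frac{\partial^2 w}{\partial t^2} = 0 .
```

Substituting a plane wave $w = e^{i(kx - \omega t)}$ gives the dispersion relation

```latex
\rho A\,\omega^2 = EI\,k^4 - P\,k^2
\quad\Longrightarrow\quad
c(k) = \frac{\omega}{k} = \sqrt{\frac{EI\,k^2 - P}{\rho A}} ,
```

so a larger compressive load $P$ lowers the flexural phase velocity at each wavenumber. This is why spectral analysis of flexural waves can, in principle, reveal the axial force carried by the bone.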
Evolutionary BP+OSD Decoding for Low-Latency Quantum Error Correction
arXiv:2512.18273v2 Announce Type: replace-cross Abstract: Quantum error correction (QEC) for fault-tolerant quantum computing requires a balanced decoding solution that offers high performance, low complexity, and low latency. However, the de facto standard, belief propagation (BP) combined with ordered statistics decoding (OSD), suffers from excessive iterations in the BP stage and high complexity in the OSD […]
E-mem: Multi-agent based Episodic Context Reconstruction for LLM Agent Memory
arXiv:2601.21714v2 Announce Type: replace Abstract: The evolution of Large Language Model (LLM) agents towards System 2 reasoning, characterized by deliberative, high-precision problem-solving, requires maintaining rigorous logical integrity over extended horizons. However, prevalent memory preprocessing paradigms suffer from destructive de-contextualization. By compressing complex sequential dependencies into pre-defined structures (e.g., embeddings or graphs), these methods sever the contextual […]
LinkAnchor: An Autonomous LLM-Based Agent for Issue-to-Commit Link Recovery
arXiv:2508.12232v3 Announce Type: replace-cross Abstract: Issue-to-commit link recovery in software repositories is fundamental to software traceability and project management, yet it remains a challenging task. Prior studies show that only about 42.2% of issues on GitHub are correctly linked to their commits, highlighting the need for more effective solutions. Existing work has explored a range […]
Repetition over Diversity: High-Signal Data Filtering for Sample-Efficient German Language Modeling
arXiv:2604.28075v2 Announce Type: replace-cross Abstract: Recent research has shown that filtering massive English web corpora into high-quality subsets significantly improves training efficiency. However, for high-resource non-English languages like German, French, or Japanese, aggressive filtering creates a strategic dilemma: should practitioners prioritize diversity by training once on large amounts of lightly filtered web data, or prioritize […]
Degrees, Levels, and Profiles of Contextuality
arXiv:2603.26692v4 Announce Type: replace-cross Abstract: We introduce a new notion, that of a contextuality profile of a system of random variables. Rather than characterizing a system’s contextuality by a single number, its overall degree of contextuality, we show how it can be characterized by a curve relating degree of contextuality to level at which the […]