Instruction-Tuned LLMs for Parsing and Mining Unstructured Logs on Leadership HPC Systems

arXiv:2604.05168v1 Announce Type: new Abstract: Leadership-class HPC systems generate massive volumes of heterogeneous, largely unstructured system logs. Because these logs originate from diverse software, hardware, and runtime layers, they exhibit inconsistent formats, making structure extraction and pattern discovery extremely challenging. Robust log parsing and mining are therefore critical to transform this raw telemetry into actionable […]
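In this context, log parsing means recovering a structured template plus its variable fields from each raw line. The excerpt does not describe the paper's pipeline, so the sketch below is a generic, rule-based illustration of the task; the masking patterns and the `parse_line` helper are hypothetical, and an instruction-tuned LLM parser would infer such templates instead of relying on hand-written rules.

```python
import re

# Illustrative masking rules for common variable fields in system logs.
# These patterns are hypothetical examples, not the paper's method.
VARIABLE_PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),   # IPv4 address
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),           # hex address
    (re.compile(r"\b\d+\b"), "<NUM>"),                      # bare integer
]

def parse_line(line: str) -> tuple[str, list[str]]:
    """Return (template, extracted_parameters) for one raw log line."""
    params, template = [], line
    for pattern, placeholder in VARIABLE_PATTERNS:
        params.extend(pattern.findall(template))
        template = pattern.sub(placeholder, template)
    return template, params

template, params = parse_line("node c3-04: ECC error at 0x7f3a, count 12")
# template -> "node c3-<NUM>: ECC error at <HEX>, count <NUM>"
# params   -> ["0x7f3a", "04", "12"]
```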

Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

arXiv:2604.05083v1 Announce Type: cross Abstract: While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are often costly and highly sensitive to prompt design, language, and aggregation strategies, which severely limits reproducibility. To address these challenges, we propose OmniScore, a family of complementary, deterministic learned metrics developed using […]

AI Assistance Reduces Persistence and Hurts Independent Performance

arXiv:2604.04721v2 Announce Type: replace Abstract: People often optimize for long-term goals in collaboration: A mentor or companion doesn’t just answer questions, but also scaffolds learning, tracks progress, and prioritizes the other person’s growth over immediate results. In contrast, current AI systems are fundamentally short-sighted collaborators – optimized for providing instant and complete responses, without ever […]

CRAB: Codebook Rebalancing for Bias Mitigation in Generative Recommendation

arXiv:2604.05113v1 Announce Type: cross Abstract: Generative recommendation (GeneRec) has introduced a new paradigm that represents items as discrete semantic tokens and predicts items in a generative manner. Despite its strong performance across multiple recommendation tasks, existing GeneRec approaches still suffer from severe popularity bias and may even exacerbate it. In this work, we conduct a […]
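For context, generative recommendation typically replaces an atomic item ID with a short tuple of discrete semantic tokens drawn from learned codebooks (e.g., an RQ-VAE-style quantizer), and a sequence model then generates the next item token by token. The toy sketch below shows only that representation with made-up codes; CRAB's actual rebalancing procedure is not reproduced here.

```python
# Toy illustration of the GeneRec item representation: each item maps to
# a tuple of discrete codes, one per codebook level. The codes here are
# invented for illustration; real systems learn them from item content.
ITEM_TO_CODES = {
    "item_42": (7, 130, 5),   # (coarse code, mid code, fine code)
    "item_99": (7, 130, 9),   # shares coarse codes with item_42
    "item_07": (2, 14, 88),
}
CODES_TO_ITEM = {codes: item for item, codes in ITEM_TO_CODES.items()}

def decode_generated_codes(codes: tuple) -> str | None:
    """Map a generated code tuple back to a catalog item, if one exists."""
    return CODES_TO_ITEM.get(codes)

# A generative recommender emits these codes autoregressively; because
# popular items dominate training sequences, their code prefixes become
# over-predicted, which is the codebook-level bias CRAB targets.
print(decode_generated_codes((7, 130, 9)))  # -> "item_99"
```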

ClawsBench: Evaluating Capability and Safety of LLM Productivity Agents in Simulated Workspaces

arXiv:2604.05172v1 Announce Type: new Abstract: Large language model (LLM) agents are increasingly deployed to automate productivity tasks (e.g., email, scheduling, document management), but evaluating them on live services is risky due to potentially irreversible changes. Existing benchmarks rely on simplified environments and fail to capture realistic, stateful, multi-service workflows. We introduce ClawsBench, a benchmark for […]
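The excerpt motivates simulation by the risk of irreversible changes on live services. As a generic illustration (not ClawsBench's actual harness or tool API), a simulated service can expose realistic stateful operations while checkpointing state, so destructive agent actions stay replayable and reversible:

```python
import copy

class SimulatedEmailService:
    """Toy stateful mock of an email service for agent evaluation.

    Hypothetical sketch: ClawsBench's real environments and tool
    surface are not described in the excerpt above.
    """

    def __init__(self):
        self.inbox = [{"id": 1, "subject": "Q3 report", "deleted": False}]
        self._snapshots = []

    def snapshot(self):
        # Checkpoint state so a destructive action can be undone later.
        self._snapshots.append(copy.deepcopy(self.inbox))

    def rollback(self):
        self.inbox = self._snapshots.pop()

    def delete_email(self, email_id: int):
        # Irreversible on a live service; harmless in simulation.
        for msg in self.inbox:
            if msg["id"] == email_id:
                msg["deleted"] = True

svc = SimulatedEmailService()
svc.snapshot()
svc.delete_email(1)   # the agent takes a destructive action
svc.rollback()        # the harness restores the workspace for the next run
assert not svc.inbox[0]["deleted"]
```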

The Polar Express: Optimal Matrix Sign Methods and Their Application to the Muon Algorithm

arXiv:2505.16932v4 Announce Type: replace-cross Abstract: Computing the polar decomposition and the related matrix sign function has been a well-studied problem in numerical analysis for decades. Recently, it has emerged as an important subroutine within the Muon optimizer for training deep neural networks. However, the requirements of this application differ sharply from classical settings: deep learning […]
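For background, Muon needs an approximate polar factor of a gradient matrix: for G = UΣVᵀ, the target is the semi-orthogonal UVᵀ, which matrix-sign/polar iterations can compute using only matrix multiplies. The sketch below shows the classical cubic Newton–Schulz iteration in NumPy; the optimized polynomial schedules studied in the paper are not reproduced here.

```python
import numpy as np

def newton_schulz_polar(G: np.ndarray, iters: int = 10) -> np.ndarray:
    """Approximate the polar factor U V^T of G = U S V^T.

    Classical cubic Newton-Schulz: X <- 1.5 X - 0.5 X X^T X maps each
    singular value s to 1.5 s - 0.5 s^3, with fixed point 1. Scaling by
    the Frobenius norm puts every singular value into the iteration's
    basin of convergence.
    """
    X = G / np.linalg.norm(G)  # Frobenius norm bounds the spectral norm
    for _ in range(iters):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

G = np.random.randn(64, 32)
U = newton_schulz_polar(G)
print(np.linalg.norm(U.T @ U - np.eye(32)))  # near 0: columns ~orthonormal
```

Only matrix multiplies appear in the loop, which is what makes such iterations attractive on GPUs compared to an explicit SVD.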

What Makes a Good Response? An Empirical Analysis of Quality in Qualitative Interviews

arXiv:2604.05163v1 Announce Type: cross Abstract: Qualitative interviews provide essential insights into human experiences when they elicit high-quality responses. While qualitative and NLP researchers have proposed various measures of interview quality, these measures lack validation that high-scoring responses actually contribute to the study’s goals. In this work, we identify, implement, and evaluate 10 proposed measures of […]

Attribution Bias in Large Language Models

arXiv:2604.05224v1 Announce Type: new Abstract: As Large Language Models (LLMs) are increasingly used to support search and information retrieval, it is critical that they accurately attribute content to its original authors. In this work, we introduce AttriBench, the first quote attribution benchmark dataset balanced for author fame and demographics. By explicitly balancing author fame and demographics, AttriBench enables […]

LSRM: High-Fidelity Object-Centric Reconstruction via Scaled Context Windows

arXiv:2604.05182v1 Announce Type: cross Abstract: We introduce the Large Sparse Reconstruction Model to study how scaling transformer context windows impacts feed-forward 3D reconstruction. Although recent object-centric feed-forward methods deliver robust, high-quality reconstruction, they still lag behind dense-view optimization in recovering fine-grained texture and appearance. We show that expanding the context window — by substantially increasing […]

Unlocking the Potential of Diffusion Language Models through Template Infilling

arXiv:2510.13870v2 Announce Type: replace-cross Abstract: Diffusion Language Models (DLMs) have emerged as a promising alternative to Autoregressive Language Models, yet their inference strategies remain limited to prefix-based prompting inherited from the autoregressive paradigm. In this paper, we propose Template Infilling (TI), a tailored conditioning methodology for DLMs. Unlike conventional prefix prompting, TI flexibly aligns structural […]
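The excerpt contrasts inherited prefix prompting with conditioning on structure. As a generic illustration of the infilling idea (not the paper's TI method), a masked-diffusion decoder can pin template tokens in place and denoise only the designated slots:

```python
import random

MASK = "<mask>"

def fake_denoise_step(tokens, masked_positions):
    # Stand-in for one reverse-diffusion step: a real DLM would predict
    # all masked positions in parallel and commit the most confident ones.
    pos = random.choice(sorted(masked_positions))
    return pos, "word"

def template_infill(template):
    """Fill only the <mask> slots; template tokens stay fixed throughout."""
    tokens = list(template)
    masked = {i for i, t in enumerate(tokens) if t == MASK}
    while masked:
        pos, word = fake_denoise_step(tokens, masked)
        tokens[pos] = word
        masked.remove(pos)  # unlike a prefix, the structure never moves
    return tokens

# Structural template: output format is fixed, content slots are generated.
print(template_infill(["Answer:", MASK, MASK, "Reason:", MASK]))
```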

RoboPlayground: Democratizing Robotic Evaluation through Structured Physical Domains

arXiv:2604.05226v1 Announce Type: cross Abstract: Evaluation of robotic manipulation systems has largely relied on fixed benchmarks authored by a small number of experts, where task instances, constraints, and success criteria are predefined and difficult to extend. This paradigm limits who can shape evaluation and obscures how policies respond to user-authored variations in task intent, constraints, […]
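As a sketch of what user-authored evaluation could look like (a hypothetical schema, not RoboPlayground's actual format), a task instance can be expressed as plain data with an intent, constraints, and a success predicate, so users can vary it without editing benchmark code:

```python
# Hypothetical task-instance schema for user-authored manipulation
# evaluation; RoboPlayground's real format is not given in the excerpt.
task = {
    "intent": "place the mug on the shelf",
    "constraints": {"max_gripper_force_n": 20.0, "keep_upright": True},
    "success": lambda state: state["mug_on_shelf"] and state["mug_upright"],
}

def evaluate(final_state: dict, task: dict) -> bool:
    """Score one policy rollout against a user-authored success criterion."""
    return bool(task["success"](final_state))

# A user authors a variation by editing data, not benchmark code:
print(evaluate({"mug_on_shelf": True, "mug_upright": False}, task))  # False
```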
