Health-care AI is here. We don’t know if it actually helps patients.

I don’t need to tell you that AI is everywhere. Or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with note-taking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also being used to interpret medical exam results and X-rays. A […]

A Systematic Review and Taxonomy of Reinforcement Learning-Model Predictive Control Integration for Linear Systems

arXiv:2604.21030v1 Announce Type: cross Abstract: The integration of Model Predictive Control (MPC) and Reinforcement Learning (RL) has emerged as a promising paradigm for constrained decision-making and adaptive control. MPC offers structured optimization, explicit constraint handling, and established stability tools, whereas RL provides data-driven adaptation and performance improvement in the presence of uncertainty and model mismatch. […]
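To make the MPC half of this pairing concrete, here is a minimal sketch of receding-horizon control for a linear system. It is not the paper's method: it is plain unconstrained finite-horizon MPC solved as a ridge least-squares problem, with all names (`mpc_step`, the double-integrator example) invented for illustration. The RL side would adapt quantities such as the cost weights from data.

```python
import numpy as np

def mpc_step(A, B, x0, horizon=10, r=0.1):
    """One receding-horizon step for x_{k+1} = A x_k + B u_k.

    Stacks the predicted trajectory over `horizon` steps, solves the
    resulting ridge least-squares problem for the control sequence,
    and returns only the first input (the receding-horizon principle).
    """
    n, m = B.shape
    # Phi maps x0 to the stacked predicted states; Gamma maps the
    # control sequence U = [u_0, ..., u_{N-1}] to the same states.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    Gamma = np.zeros((n * horizon, m * horizon))
    for i in range(horizon):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # min_U ||Gamma U + Phi x0||^2 + r ||U||^2
    H = Gamma.T @ Gamma + r * np.eye(m * horizon)
    g = Gamma.T @ (Phi @ x0)
    U = np.linalg.solve(H, -g)
    return U[:m]  # apply only the first input

# Double integrator (position, velocity) regulated toward the origin
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
x = np.array([5.0, 0.0])
for _ in range(30):
    u = mpc_step(A, B, x)
    x = A @ x + B @ u
```

A real MPC controller would add explicit state and input constraints (hence a QP solver); this unconstrained form only illustrates the structured-optimization skeleton the abstract refers to.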

Structural Quality Gaps in Practitioner AI Governance Prompts: An Empirical Study Using a Five-Principle Evaluation Framework

arXiv:2604.21090v1 Announce Type: cross Abstract: AI governance programmes increasingly rely on natural language prompts to constrain and direct AI agent behaviour. These prompts function as executable specifications: they define the agent’s mandate, scope, and quality criteria. Despite this role, no systematic framework exists for evaluating whether a governance prompt is structurally complete. We introduce a […]

The Path Not Taken: Duality in Reasoning about Program Execution

arXiv:2604.20917v1 Announce Type: cross Abstract: Large language models (LLMs) have shown remarkable capabilities across diverse coding tasks. However, their adoption requires a true understanding of program execution rather than relying on surface-level patterns. Existing benchmarks primarily focus on predicting program properties tied to specific inputs (e.g., code coverage, program outputs). As a result, they provide […]

Watts-per-Intelligence Part II: Algorithmic Catalysis

arXiv:2604.20897v1 Announce Type: cross Abstract: We develop a thermodynamic theory of algorithmic catalysis within the watts-per-intelligence framework, identifying reusable computational structures that reduce irreversible operations for a task class while satisfying bounded restoration and structural selectivity constraints. We prove that any class-specific speed-up is upper-bounded by the algorithmic mutual information between the substrate and the […]

Planning Beyond Text: Graph-based Reasoning for Complex Narrative Generation

arXiv:2604.21253v1 Announce Type: cross Abstract: While LLMs demonstrate remarkable fluency in narrative generation, existing methods struggle to maintain global narrative coherence, contextual logical consistency, and smooth character development, often producing monotonous scripts with structural fractures. To this end, we introduce PLOTTER, a framework that performs narrative planning on structural graph representations instead of the direct […]
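The core idea of planning on a graph rather than raw text can be sketched in a few lines. This is only an illustrative toy, not PLOTTER's actual representation: plot events become nodes, "must happen before" constraints become edges, and any topological order of the graph yields a script ordering that respects every dependency.

```python
from graphlib import TopologicalSorter

# Hypothetical plot events mapped to the events that must precede them;
# the real PLOTTER graph and its node/edge semantics are richer than this.
depends_on = {
    "meeting": set(),
    "alliance": {"meeting"},
    "betrayal": {"alliance"},       # a betrayal needs an alliance first
    "reconciliation": {"betrayal"},
}

# Any static order respects every narrative dependency, avoiding the
# "structural fractures" the abstract describes in free-form generation.
order = list(TopologicalSorter(depends_on).static_order())
```

A generator could then realize each event in sequence, conditioning the LLM on the graph neighborhood rather than on the entire preceding text.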

Navigating the Clutter: Waypoint-Based Bi-Level Planning for Multi-Robot Systems

arXiv:2604.21138v1 Announce Type: cross Abstract: Multi-robot control in cluttered environments is a challenging problem that involves complex physical constraints, including robot-robot collisions, robot-obstacle collisions, and unreachable motions. Successful planning in such settings requires joint optimization over high-level task planning and low-level motion planning, as violations of physical constraints may arise from failures at either level. […]
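The bi-level split can be illustrated with a deliberately simplified sketch (not the paper's algorithm): a high-level planner searches a coarse grid for obstacle-free waypoints, and a low-level controller would then track those waypoints under the robot's actual dynamics. The grid, BFS planner, and function name below are all illustrative assumptions.

```python
from collections import deque

def high_level_waypoints(grid, start, goal):
    """BFS over a grid with obstacle cells (value 1).

    Returns the shortest waypoint sequence from start to goal, or None
    if the goal is unreachable. A low-level motion planner would be
    responsible for feasibly tracking these waypoints.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable at the high level

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour
        [0, 0, 0]]
path = high_level_waypoints(grid, (0, 0), (2, 0))
```

The point of the bi-level structure is visible even here: if the low level later reports that a waypoint segment is infeasible (e.g. a robot-robot collision), the high level can be re-queried with that cell marked blocked.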

On Reasoning Behind Next Occupation Recommendation

arXiv:2604.21204v1 Announce Type: cross Abstract: In this work, we develop a novel reasoning approach to enhance the performance of large language models (LLMs) in future occupation prediction. In this approach, a reason generator first derives a “reason” for a user using his/her past education and career history. The reason summarizes the user’s preference and is […]

IRIS: Interpolative Rényi Iterative Self-play for Large Language Model Fine-Tuning

arXiv:2604.20933v1 Announce Type: cross Abstract: Self-play fine-tuning enables large language models to improve beyond supervised fine-tuning without additional human annotations by contrasting annotated responses with self-generated ones. Many existing methods rely on a fixed divergence regime. SPIN is closely related to a KL-based regime, SPACE to a Jensen-Shannon-style objective via noise contrastive estimation, and SPIF […]
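The "interpolative" part refers to the Rényi family, which spans a continuum of divergences indexed by a single parameter. Here is a small sketch of that family for discrete distributions — standard textbook math, not IRIS's training objective: as $\alpha \to 1$ the Rényi divergence recovers KL, so varying $\alpha$ interpolates between divergence regimes rather than fixing one.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) for discrete distributions.

    D_alpha(P||Q) = log(sum_i p_i^alpha * q_i^(1-alpha)) / (alpha - 1).
    As alpha -> 1 this converges to the KL divergence, and it is
    monotone non-decreasing in alpha.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    if abs(alpha - 1.0) < 1e-8:  # KL limit of the family
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.4, 0.4, 0.2])
```

A self-play objective built on this family could, in principle, move $\alpha$ during training instead of committing to the KL-like or JS-like endpoint up front.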

SGD at the Edge of Stability: The Stochastic Sharpness Gap

arXiv:2604.21016v1 Announce Type: cross Abstract: When training neural networks with full-batch gradient descent (GD) and step size $\eta$, the largest eigenvalue of the Hessian — the sharpness $S(\boldsymbol{\theta})$ — rises to $2/\eta$ and hovers there, a phenomenon termed the Edge of Stability (EoS). Damian et al. (2023) showed that this behavior is explained by a self-stabilization mechanism driven […]
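The $2/\eta$ threshold itself is easy to see on a toy quadratic — a standard illustration, not this paper's experiment. For $f(x) = s x^2 / 2$ the Hessian is the scalar sharpness $s$, and the GD update $x \leftarrow (1 - \eta s)\,x$ contracts exactly when $s < 2/\eta$:

```python
def gd_on_quadratic(sharpness, eta, steps=50, x0=1.0):
    """Gradient descent on f(x) = sharpness * x^2 / 2.

    The update is x <- (1 - eta * sharpness) * x, so |x| shrinks iff
    sharpness < 2/eta — the stability threshold the abstract refers to.
    """
    x = x0
    for _ in range(steps):
        x -= eta * sharpness * x
    return abs(x)

eta = 0.1                            # threshold: 2/eta = 20
below = gd_on_quadratic(19.0, eta)   # just below 2/eta: iterates shrink
above = gd_on_quadratic(21.0, eta)   # just above: iterates oscillate and grow
```

On real networks the sharpness is not fixed, which is what makes the hovering-at-$2/\eta$ behavior (and its stochastic counterpart for SGD) interesting.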


Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK; registration number 16808844.