Toward Magnetic-Field-Free Quantum Computing and Quantum Reservoir Computing in Engineered Organic Materials: A Unified Framework from the 3-Layer Quantum Brain Hypothesis

arXiv:2605.00026v1 Announce Type: new Abstract: We extend the spin-vortex-induced loop-current (SVILC) qubit [Wakaura2017] and the 3-Layer Quantum Brain Hypothesis to engineered organic materials operated without any applied magnetic field. Four paths are proposed: (P1) a flavin–nitroxide radical-pair reservoir, (P2) a perchlorotriphenylmethyl (PTM) radical array in a covalent organic framework, (P3) the SVILC analogue on $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Br […]

Reinforcement Learning with LLM-Guided Action Spaces for Synthesizable Lead Optimization

arXiv:2604.07669v2 Announce Type: replace-cross Abstract: Lead optimization in drug discovery requires improving therapeutic properties while ensuring that molecular modifications correspond to feasible synthetic routes. Existing approaches either prioritize property scores without enforcing synthesizability, or rely on expensive enumeration over large reaction networks, while direct application of Large Language Models (LLMs) to molecular generation frequently produces […]

MoDAl: Self-Supervised Neural Modality Discovery via Decorrelation for Speech Neuroprosthesis

arXiv:2605.00025v1 Announce Type: new Abstract: Speech neuroprosthesis systems decode intended speech from neural activity in the absence of audible output, offering a path to restoring communication for individuals with speech-impairing conditions. Current approaches decode predominantly from motor cortical areas, discarding others — such as area 44, part of Broca’s area — that may encode complementary […]

AdaMeZO: Adam-style Zeroth-Order Optimizer for LLM Fine-tuning Without Maintaining the Moments

arXiv:2605.00650v1 Announce Type: cross Abstract: Fine-tuning LLMs is necessary for various dedicated downstream tasks, but classic backpropagation-based fine-tuning methods require substantial GPU memory. To this end, a recent work, MeZO, which relies solely on forward passes to fine-tune LLMs, significantly reduces GPU requirements at the cost of slower convergence due to its indifference to loss […]
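For context on the MeZO approach this paper builds on: the idea is to estimate a directional derivative from two forward passes along a shared random perturbation (an SPSA estimator) and update with that scalar alone, so no backward pass or optimizer moments are ever stored. A minimal NumPy sketch of one such step, on a toy quadratic rather than an LLM (the step function and hyperparameters here are illustrative, not the paper's AdaMeZO update):

```python
import numpy as np

def mezo_step(params, loss_fn, eps=1e-3, lr=1e-2, seed=0):
    """One MeZO-style zeroth-order step: two forward passes along a shared
    random direction z give a directional-derivative estimate (SPSA); the
    update is plain SGD along z. No backward pass, no stored moments."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)             # shared perturbation
    loss_plus = loss_fn(params + eps * z)             # forward pass 1
    loss_minus = loss_fn(params - eps * z)            # forward pass 2
    proj_grad = (loss_plus - loss_minus) / (2 * eps)  # scalar estimate of z . grad
    return params - lr * proj_grad * z                # SGD step along z

# toy quadratic objective with minimum at w = 3 in every coordinate
w = np.zeros(4)
for t in range(2000):
    w = mezo_step(w, lambda p: np.sum((p - 3.0) ** 2), seed=t)
```

Because only the scalar `proj_grad` and the seed for `z` need to be kept, memory cost stays at inference level; the abstract's complaint is that this estimator ignores loss curvature, which is what an Adam-style variant would address.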

When RAG Chatbots Expose Their Backend: An Anonymized Case Study of Privacy and Security Risks in Patient-Facing Medical AI

arXiv:2605.00796v1 Announce Type: cross Abstract: Background: Patient-facing medical chatbots based on retrieval-augmented generation (RAG) are increasingly promoted to deliver accessible, grounded health information. AI-assisted development lowers the barrier to building them, but they still demand rigorous security, privacy, and governance controls. Objective: To report an anonymized, non-destructive security assessment of a publicly accessible patient-facing medical […]

Meritocratic Fairness in Budgeted Combinatorial Multi-armed Bandits via Shapley Values

arXiv:2605.00762v1 Announce Type: cross Abstract: We propose a new framework for meritocratic fairness in budgeted combinatorial multi-armed bandits with full-bandit feedback (BCMAB-FBF). Unlike in the semi-bandit setting, individual arm contributions are not observed under full-bandit feedback, making the setting significantly more challenging. To compute arm contributions in BCMAB-FBF, we first extend the Shapley value, a […]
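The Shapley value the abstract refers to attributes a coalition's reward to its members as each member's average marginal contribution over all orderings. A small illustrative sketch by exact subset enumeration (feasible only for a handful of arms; the paper's bandit-specific extension is not shown here, and `value_fn` stands in for whatever super-arm reward estimate the learner maintains):

```python
from itertools import combinations
from math import factorial

def shapley_values(arms, value_fn):
    """Exact Shapley values by subset enumeration. value_fn(S) is the
    reward of playing the super-arm S; each arm's Shapley value is its
    marginal contribution averaged over all join orders."""
    n = len(arms)
    phi = {a: 0.0 for a in arms}
    for a in arms:
        others = [b for b in arms if b != a]
        for k in range(n):
            for S in combinations(others, k):
                # fraction of orderings in which exactly S precedes a
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[a] += weight * (value_fn(frozenset(S) | {a}) - value_fn(frozenset(S)))
    return phi

# toy additive reward: the super-arm reward is the sum of arm rewards,
# so each arm's Shapley value equals its own reward
rewards = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = shapley_values(list(rewards), lambda S: sum(rewards[x] for x in S))
```

The additive case is a useful sanity check: Shapley values recover each arm's individual reward exactly, which is the "meritocratic" attribution the fairness notion builds on.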

Tumor containment as an anti-percolation process

arXiv:2605.00085v1 Announce Type: new Abstract: Percolation theory from statistical physics has been applied to several aspects of tumor progression. Tumor growth on percolation clusters has been used to model spatial expansion, vascular percolation to describe nutrient supply, and transport-related percolation to investigate drug and gene delivery. At the molecular level, mutational percolation has been […]
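The percolation picture behind this framing can be made concrete with standard site percolation on a lattice: below a critical occupation density, occupied sites form only finite clusters; above it, a spanning cluster appears. A Monte-Carlo sketch of that transition (an illustration of textbook site percolation, not the paper's containment model):

```python
import random

def spans(grid):
    """Depth-first search from occupied sites in the top row; True if any
    occupied cluster reaches the bottom row (a spanning cluster)."""
    n = len(grid)
    seen, stack = set(), [(0, c) for c in range(n) if grid[0][c]]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen or not grid[r][c]:
            continue
        seen.add((r, c))
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < n:
                stack.append((r + dr, c + dc))
    return False

def spanning_probability(p, n=20, trials=200, seed=1):
    """Estimate the probability that random site occupation at density p
    yields a top-to-bottom spanning cluster on an n-by-n grid."""
    rng = random.Random(seed)
    hits = sum(
        spans([[rng.random() < p for _ in range(n)] for _ in range(n)])
        for _ in range(trials)
    )
    return hits / trials
```

On the 2D square lattice the site-percolation threshold is about 0.593, so `spanning_probability(0.3)` is near 0 and `spanning_probability(0.8)` near 1; an "anti-percolation" containment strategy would aim to hold the tumor's effective density on the subcritical side of such a threshold.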

Persistent Visual Memory: Sustaining Perception for Deep Generation in LVLMs

arXiv:2605.00814v1 Announce Type: cross Abstract: While autoregressive Large Vision-Language Models (LVLMs) demonstrate remarkable proficiency in multimodal tasks, they face a “Visual Signal Dilution” phenomenon, where the accumulation of textual history expands the attention partition function, causing visual attention to decay inversely with generated sequence length. To counteract this, we propose Persistent Visual Memory (PVM), a […]

Characterizing control between interacting subsystems with deep Jacobian estimation

arXiv:2507.01946v2 Announce Type: replace Abstract: Biological function arises through the dynamical interactions of multiple subsystems, including those between brain areas, within gene regulatory networks, and more. A common approach to understanding these systems is to model the dynamics of each subsystem and characterize communication between them. An alternative approach is through the lens of control […]

G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge

arXiv:2509.24276v4 Announce Type: replace Abstract: Large language models (LLMs) excel at complex reasoning but remain limited by static and incomplete parametric knowledge. Retrieval-augmented generation (RAG) mitigates this by incorporating external knowledge, yet existing RAGs struggle with knowledge-intensive tasks due to fragmented information and weak modeling of knowledge structure. Graphs offer a natural way to model […]

Minimal, Local, Causal Explanations for Jailbreak Success in Large Language Models

arXiv:2605.00123v1 Announce Type: new Abstract: Safety-trained large language models (LLMs) can often be induced to answer harmful requests through jailbreak prompts. Because we lack a robust understanding of why LLMs are susceptible to jailbreaks, future frontier models operating more autonomously in higher-stakes settings may similarly be vulnerable to such attacks. Prior work has studied […]

HyMem: Hybrid Memory Architecture with Dynamic Retrieval Scheduling

arXiv:2602.13933v2 Announce Type: replace Abstract: Large language model (LLM) agents demonstrate strong performance in short-text contexts but often underperform in extended dialogues due to inefficient memory management. Existing approaches face a fundamental trade-off between efficiency and effectiveness: memory compression risks losing critical details required for complex reasoning, while retaining raw text introduces unnecessary computational overhead […]


Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK; registration number 16808844.