AGNES: Adaptive Graph Neural Network and Dynamic Programming Hybrid Framework for Real-Time Nanopore Seed Chaining

arXiv:2510.16013v3 Announce Type: replace Abstract: Nanopore sequencing enables real-time long-read DNA sequencing with reads exceeding 10 kilobases, but inherent error rates of 12-15 percent present significant computational challenges for read alignment. The critical seed chaining step must connect exact k-mer matches between reads and reference genomes while filtering spurious matches, yet state-of-the-art methods rely on […]
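
For orientation, the chaining step described above is classically solved with an O(n^2) dynamic program over colinear anchors. The sketch below shows only that baseline, not AGNES's GNN/DP hybrid; the (read_pos, ref_pos, match_len) anchor layout and the gap penalty are illustrative assumptions.

    # Baseline O(n^2) dynamic-programming seed chaining (the step the paper
    # targets), not AGNES itself. Anchor layout and scoring are assumptions.
    def chain_anchors(anchors, gap_penalty=0.01):
        """anchors: list of (read_pos, ref_pos, match_len) exact k-mer matches."""
        if not anchors:
            return []
        anchors = sorted(anchors, key=lambda a: (a[1], a[0]))  # by reference, then read
        n = len(anchors)
        score = [a[2] for a in anchors]        # best chain score ending at each anchor
        prev = [-1] * n                        # back-pointers for traceback
        for i in range(n):
            q_i, r_i, len_i = anchors[i]
            for j in range(i):
                q_j, r_j, _ = anchors[j]
                if r_j < r_i and q_j < q_i:                    # colinear predecessor
                    gap = abs((r_i - r_j) - (q_i - q_j))       # drift off the diagonal
                    cand = score[j] + len_i - gap_penalty * gap
                    if cand > score[i]:
                        score[i], prev[i] = cand, j
        best = max(range(n), key=score.__getitem__)            # end of best-scoring chain
        chain = []
        while best != -1:
            chain.append(anchors[best])
            best = prev[best]
        return chain[::-1]

    # Three colinear anchors are kept; the spurious off-diagonal match is dropped.
    print(chain_anchors([(5, 100, 15), (40, 135, 15), (60, 400, 15), (80, 175, 15)]))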

Natural Building Blocks for Structured World Models: Theory, Evidence, and Scaling

arXiv:2511.02091v1 Announce Type: cross Abstract: The field of world modeling is fragmented, with researchers developing bespoke architectures that rarely build upon each other. We propose a framework that specifies the natural building blocks for structured world models based on the fundamental stochastic processes that any world model must capture: discrete processes (logic, symbols) and continuous […]

InsurAgent: A Large Language Model-Empowered Agent for Simulating Individual Behavior in Purchasing Flood Insurance

arXiv:2511.02119v1 Announce Type: new Abstract: Flood insurance is an effective strategy for individuals to mitigate disaster-related losses. However, participation rates among at-risk populations in the United States remain strikingly low. This gap underscores the need to understand and model the behavioral mechanisms underlying insurance decisions. Large language models (LLMs) have recently exhibited human-like intelligence across […]

Matrix Sensing with Kernel Optimal Loss: Robustness and Optimization Landscape

arXiv:2511.02122v1 Announce Type: cross Abstract: In this paper we study how the choice of loss function in non-convex optimization problems affects their robustness and optimization landscape, using noisy matrix sensing as a case study. In traditional regression tasks, mean squared error (MSE) loss is a common choice, but it can be unreliable for non-Gaussian or heavy-tailed […]
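
For context, noisy matrix sensing fits a low-rank factor U so that the measurements <A_i, U U^T> match observations y_i. The sketch below contrasts the MSE loss named in the abstract with a bounded Welsch loss as an illustrative robust stand-in; the paper's kernel optimal loss is not specified in this excerpt, and the toy data are made up.

    # Toy symmetric matrix sensing (X = U U^T) with two residual losses. The
    # Welsch loss is an illustrative robust stand-in, not the paper's loss.
    import numpy as np

    def residuals(U, sensing_mats, y):
        """r_i = <A_i, U U^T> - y_i for each sensing matrix A_i."""
        X = U @ U.T
        return np.array([np.sum(A * X) for A in sensing_mats]) - y

    def mse_loss(r):
        return 0.5 * np.mean(r ** 2)            # outliers enter quadratically

    def welsch_loss(r, sigma=1.0):
        return np.mean(1.0 - np.exp(-r ** 2 / (2 * sigma ** 2)))  # each term capped at 1

    rng = np.random.default_rng(0)
    d, rank, m = 20, 2, 200
    U_true = rng.normal(size=(d, rank))
    sensing_mats = [rng.normal(size=(d, d)) for _ in range(m)]
    y = np.array([np.sum(A * (U_true @ U_true.T)) for A in sensing_mats])
    y[: m // 20] += 50.0                        # a few gross, heavy-tailed corruptions
    r0 = residuals(rng.normal(size=(d, rank)), sensing_mats, y)
    print(mse_loss(r0), welsch_loss(r0))        # Welsch terms are bounded, so outliers cannot dominate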

ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs

arXiv:2409.09586v3 Announce Type: replace-cross Abstract: As AI systems become more advanced, ensuring their alignment with a diverse range of individuals and societal values becomes increasingly critical. But how can we capture fundamental human values and assess the degree to which AI systems align with them? We introduce ValueCompass, a framework of fundamental values, grounded in […]

ScenicProver: A Framework for Compositional Probabilistic Verification of Learning-Enabled Systems

arXiv:2511.02164v1 Announce Type: cross Abstract: Full verification of learning-enabled cyber-physical systems (CPS) has long been intractable due to challenges including black-box components and complex real-world environments. Existing tools either provide formal guarantees for limited types of systems or test the system as a monolith, but no general framework exists for compositional analysis of learning-enabled CPS […]

DL4Proteins Jupyter Notebooks Teach how to use Artificial Intelligence for Biomolecular Structure Prediction and Design

arXiv:2511.02128v1 Announce Type: new Abstract: Computational methods for predicting and designing biomolecular structures are increasingly powerful. While previous approaches relied on physics-based modeling, modern tools, such as AlphaFold2 in CASP14, leverage artificial intelligence (AI) to achieve significantly improved performance. The growing impact of AI-based tools in protein science necessitates enhanced educational materials that improve AI […]

Open the Oyster: Empirical Evaluation and Improvement of Code Reasoning Confidence in LLMs

arXiv:2511.02197v1 Announce Type: cross Abstract: With the widespread application of large language models (LLMs) in the field of code intelligence, increasing attention has been paid to the reliability and controllability of their outputs in code reasoning tasks. Confidence estimation serves as an effective and convenient approach for evaluating these aspects. This paper proposes a confidence […]
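
As background, a common baseline for the confidence estimation mentioned here scores an answer by its mean token log-probability. The sketch below shows that generic baseline only, not the estimator proposed in the paper, and the example log-probabilities are invented.

    import math

    def sequence_confidence(token_logprobs):
        """Map per-token log-probabilities of a generated answer to a [0, 1] score."""
        if not token_logprobs:
            return 0.0
        mean_logprob = sum(token_logprobs) / len(token_logprobs)
        return math.exp(mean_logprob)            # geometric-mean token probability

    # e.g. log-probs of the tokens in a model's answer to a code-reasoning question
    print(round(sequence_confidence([-0.05, -0.30, -0.02, -1.20]), 2))   # 0.68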

Dense Backpropagation Improves Training for Sparse Mixture-of-Experts

arXiv:2504.12463v3 Announce Type: replace-cross Abstract: Mixture of Experts (MoE) pretraining is more scalable than dense Transformer pretraining, because MoEs learn to route inputs to a sparse set of their feedforward parameters. However, this means that MoEs only receive a sparse backward update, leading to training instability and suboptimal performance. We present a lightweight approximation method […]
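
For context, the sparse backward update arises because a top-k router sends each token through only k experts, so the other experts' parameters never touch the loss. The sketch below shows standard top-k routing under generic gating assumptions; the paper's lightweight dense-backward approximation is not shown.

    # Top-k routing in a sparse MoE feed-forward layer, to show why only the
    # selected experts would receive gradients under autodiff. Gating details
    # here are generic assumptions, not the paper's method.
    import numpy as np

    def moe_layer(x, W_gate, experts, k=2):
        """x: (d,) token; W_gate: (n_experts, d); experts: list of (W, b) ReLU FFNs."""
        logits = W_gate @ x
        chosen = np.argsort(logits)[-k:]            # indices of the k selected experts
        gates = np.exp(logits[chosen] - logits[chosen].max())
        gates /= gates.sum()                        # renormalise over the chosen experts
        out = np.zeros(experts[0][0].shape[0])
        for g, idx in zip(gates, chosen):
            W, b = experts[idx]
            out += g * np.maximum(W @ x + b, 0.0)   # only these k experts contribute,
        return out                                  # so only they would get a backward update

    rng = np.random.default_rng(0)
    d, d_ff, n_experts = 16, 64, 8
    W_gate = rng.normal(size=(n_experts, d))
    experts = [(rng.normal(size=(d_ff, d)), np.zeros(d_ff)) for _ in range(n_experts)]
    print(moe_layer(rng.normal(size=d), W_gate, experts).shape)   # (64,)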

Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK; registration number 16808844.