arXiv:2605.00782v1 Announce Type: cross Abstract: Reliable spatial analysis in GIScience requires preserving coordinate semantics, topology, units, and geographic plausibility. Current LLM-based GIS systems generate fluent scripts but rarely enforce these geographic rules at scale. We present GeoContra, a verification and repair framework for LLM-driven Python GIS workflows. It represents each task as an executable geospatial […]
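The geographic rules the abstract mentions (coordinate semantics, units, plausibility) can be made concrete with a minimal check of the kind such a verifier might run. This is an illustrative sketch only, not GeoContra's actual contract language; the `check_coordinates` helper and its rules are hypothetical.

```python
def check_coordinates(points):
    """Flag (lon, lat) pairs outside valid WGS84 ranges.

    A minimal geographic-plausibility rule (hypothetical, for illustration):
    a swapped (lat, lon) pair typically shows up as an out-of-range latitude.
    """
    issues = []
    for i, (lon, lat) in enumerate(points):
        if not (-180.0 <= lon <= 180.0):
            issues.append((i, "longitude out of range"))
        if not (-90.0 <= lat <= 90.0):
            issues.append((i, "latitude out of range"))
    return issues

# The second point has its axes swapped, so its "latitude" is implausible.
issues = check_coordinates([(4.9, 52.4), (52.4, 95.0)])
# → [(1, "latitude out of range")]
```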
A hybrid solution approach for the Integrated Healthcare Timetabling Competition 2024
arXiv:2511.04685v2 Announce Type: replace Abstract: In this work, we present the solution approach for the Integrated Healthcare Timetabling Competition 2024 submitted by Team Twente, which ultimately ranked third among the finalists. Our approach combines mixed-integer programming, constraint programming, and simulated annealing in a three-phase method based on decomposition into subproblems. In addition to describing […]
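The simulated-annealing component mentioned in the abstract can be sketched as a generic local-search loop. This is a textbook skeleton under standard assumptions (geometric cooling, Boltzmann acceptance), not the competition solver's actual implementation; the toy objective is purely illustrative.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: accept improving moves always,
    worsening moves with probability exp((c - cy) / t), cooling t geometrically."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best, best_c

# Toy usage: minimize (x - 3)^2 with Gaussian neighborhood moves.
best, best_c = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.gauss(0.0, 0.5),
    x0=0.0,
)
```

In a decomposition scheme like the one described, a loop of this shape would typically refine one subproblem's solution while the MIP and CP phases handle the others.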
The Topology of Multimodal Fusion: Why Current Architectures Fail at Creative Cognition
arXiv:2604.04465v2 Announce Type: replace Abstract: This paper identifies a structural limitation in current multimodal AI architectures that is topological rather than parametric. Contrastive alignment (CLIP), cross-attention fusion (GPT-4V/Gemini), and diffusion-based generation share a common geometric prior — modal separability — which we term contact topology. The argument rests on three pillars with philosophy as the […]
CollaFuse: Collaborative Diffusion Models
arXiv:2406.14429v3 Announce Type: replace-cross Abstract: In the landscape of generative artificial intelligence, diffusion-based models have emerged as a promising method for generating synthetic images. However, the application of diffusion models poses numerous challenges, particularly concerning data availability, computational requirements, and privacy. Traditional approaches to address these shortcomings, like federated learning, often impose significant computational burdens […]
Comparing Exploration-Exploitation Strategies of LLMs and Humans: Insights from Standard Multi-armed Bandit Experiments
arXiv:2505.09901v3 Announce Type: replace-cross Abstract: Large language models (LLMs) are increasingly used to simulate or automate human behavior in complex sequential decision-making settings. A natural question is then whether LLMs exhibit similar decision-making behavior to humans, and can achieve comparable (or superior) performance. In this work, we focus on the exploration-exploitation (E&E) tradeoff, a fundamental […]
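The exploration-exploitation tradeoff the abstract studies is usually introduced via the multi-armed bandit. A minimal epsilon-greedy baseline, the standard point of comparison in such experiments, can be sketched as follows; the arm probabilities and parameters are illustrative, not the paper's experimental setup.

```python
import random

def epsilon_greedy_bandit(arm_means, epsilon=0.1, horizon=1000, seed=0):
    """Epsilon-greedy on a Bernoulli bandit: with probability epsilon pick a
    random arm (explore), otherwise pick the arm with the best running
    mean reward (exploit)."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms     # pulls per arm
    values = [0.0] * n_arms   # running mean reward per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                      # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total, counts

total, counts = epsilon_greedy_bandit([0.3, 0.5, 0.7])
```

Comparing how an LLM's arm choices deviate from a schedule like this is one natural way to quantify its E&E behavior against human data.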
Quantum Optimal Control for Coherent Spin Dynamics of Radical Pairs via Pontryagin Maximum Principle
arXiv:2508.01806v2 Announce Type: replace-cross Abstract: This paper aims to devise the shape of the external electromagnetic field that drives the spin dynamics of radical pairs to a quantum coherent state through maximization of the triplet-born singlet yield in biochemical reactions. The model is a Schrödinger system with spin Hamiltonians given by the sum of Zeeman […]
Unlocking Zero-Shot Geospatial Reasoning via Indirect Rewards
arXiv:2510.00072v2 Announce Type: replace-cross Abstract: Training robust reasoning vision-language models (VLMs) in rare domains (such as geospatial) is fundamentally constrained by supervision scarcity. While raw geospatial imagery is abundant, the amount of task-direct supervision falls far behind that of common domains. In this work, we validate an important conclusion: indirect verifiable rewards, derived from seemingly […]
LLM-Based Agentic Negotiation for 6G: Addressing Uncertainty Neglect and Tail-Event Risk
arXiv:2511.19175v2 Announce Type: replace-cross Abstract: A critical barrier to the trustworthiness of sixth-generation (6G) agentic autonomous networks is the uncertainty neglect bias: a cognitive tendency for large language model (LLM)-powered agents to make high-stakes decisions based on simple averages while ignoring the tail risk of extreme events. This paper proposes an unbiased, risk-aware framework for […]
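The gap between "simple averages" and tail risk can be made concrete with an empirical Conditional Value-at-Risk (CVaR) estimate, a standard risk-aware statistic. This is a generic sketch, not the paper's framework; the latency figures are hypothetical.

```python
def cvar(losses, alpha=0.95):
    """Empirical CVaR: the mean of the worst (1 - alpha) fraction of losses."""
    ordered = sorted(losses)
    k = max(1, int(round(len(ordered) * (1 - alpha))))
    tail = ordered[-k:]               # the k worst outcomes
    return sum(tail) / len(tail)

# Hypothetical latency losses (ms): mostly small, with rare extreme events.
losses = [1.0] * 98 + [50.0, 100.0]
mean_loss = sum(losses) / len(losses)   # 2.48 ms — looks benign
tail_loss = cvar(losses, alpha=0.98)    # mean of worst 2% = 75.0 ms
```

An agent negotiating on `mean_loss` alone would accept service levels whose worst-case behavior, captured by `tail_loss`, is thirty times worse; that contrast is the uncertainty neglect bias in miniature.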
Language Models Struggle to Use Representations Learned In-Context
arXiv:2602.04212v2 Announce Type: replace-cross Abstract: Though large language models (LLMs) have enabled great success across a wide variety of tasks, they still appear to fall short of one of the loftier goals of artificial intelligence research: creating an artificial system that can adapt its behavior to radically new contexts upon deployment. One important step towards […]
GenRecEdit: Adapting Model Editing for Generative Recommendation with Cold-Start Items
arXiv:2603.14259v2 Announce Type: replace-cross Abstract: Generative recommendation (GR) has shown strong potential for sequential recommendation in an end-to-end generation paradigm. However, existing GR models suffer from severe cold-start collapse: their recommendation accuracy on cold-start items can drop to near zero. Current solutions typically rely on retraining with cold-start interactions, which is hindered by sparse feedback, […]
Bridging the Experimental Last Mile: Digitizing Laboratory Know-How for Safe AI-Assisted Support
arXiv:2604.16345v2 Announce Type: replace-cross Abstract: While advances in materials informatics have accelerated the development of Self-Driving Laboratories (SDLs), human-led experiments remain standard in many educational and exploratory research laboratories. In specific lab settings, formal documentation alone is often insufficient for safe and reliable operation. We refer to the gap between formal documentation and reliable execution […]
Learning Rate Transfer in Normalized Transformers
arXiv:2604.27077v2 Announce Type: replace-cross Abstract: The Normalized Transformer, or nGPT (arXiv:2410.01131), achieves impressive training speedups and does not require weight decay or learning rate warmup. However, despite having hyperparameters that explicitly scale with model size, we observe that nGPT does not exhibit learning rate transfer across model dimension and token horizon. To rectify this, we […]
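"Learning rate transfer across model dimension" usually means a rule like muP's: scale hidden-layer learning rates inversely with width so a value tuned at a small proxy width remains near-optimal at larger widths. The rule below is the generic 1/width form for illustration; the paper's exact parameterization for nGPT may differ.

```python
def scaled_lr(base_lr, base_width, width):
    """muP-style transfer rule (illustrative): keep base_lr fixed across
    widths by scaling the effective learning rate as base_width / width."""
    return base_lr * base_width / width

# Tune once at width 256, then reuse the same base_lr at larger widths.
schedule = {w: scaled_lr(3e-3, 256, w) for w in (256, 1024, 4096)}
```

Under such a rule the optimal `base_lr` found at width 256 transfers, so only the small model ever needs a learning rate sweep.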