arXiv:2604.01235v1 Announce Type: new
Abstract: Structured LLM routing is often treated as a prompt-engineering problem. We argue that it is, more fundamentally, a systems-level burden-allocation problem. As large language models (LLMs) become core control components in agentic AI systems, reliable structured routing must balance correctness, latency, and implementation cost under real deployment constraints. We show that this balance is shaped not only by prompts or schemas, but also by how structural work is allocated across the generation stack: whether output structure is emitted directly by the model, compressed during transport, or reconstructed locally after generation.
We evaluate this formulation through a full-factorial benchmark covering 48 deployment configurations and 15,552 requests across OpenAI, Gemini, and Llama backends. Our central finding is that there is no universal best routing mode: backend-specific interaction effects dominate performance. Modes that remain highly reliable on Gemini and OpenAI can suffer substantial correctness degradation on Llama, while the efficiency gains of compressed realization are strongly backend-dependent.
Rather than presenting another isolated model comparison, this work contributes a deployable framework for reasoning about structured routing under heterogeneous backend conditions. We provide a cross-backend evaluation methodology and practical deployment guidance for navigating the correctness-cost-latency frontier in production-grade agentic expert systems.
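The three realization modes named in the abstract (direct emission, compressed transport, local reconstruction) can be illustrated with a minimal sketch. All function names and the payload schema below are hypothetical, chosen only to make the distinction concrete; they are not the paper's API.

```python
import json
import zlib

def direct_mode(model_output: str) -> dict:
    # Mode 1: the model emits the structured route directly,
    # and the caller parses it as-is.
    return json.loads(model_output)

def compressed_transport(model_output: str) -> bytes:
    # Mode 2: structure is compressed during transport to cut payload size.
    return zlib.compress(model_output.encode("utf-8"))

def local_reconstruction(payload: bytes) -> dict:
    # Mode 3: structure is reconstructed locally after generation,
    # shifting the structural work off the model and onto the client.
    return json.loads(zlib.decompress(payload).decode("utf-8"))

# Illustrative routing payload (hypothetical schema).
raw = '{"route": "expert_a", "confidence": 0.92}'
assert direct_mode(raw) == local_reconstruction(compressed_transport(raw))
```

The round-trip assertion shows why the modes are interchangeable in principle; the paper's point is that their correctness and latency in practice diverge by backend.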
Assessing nurses’ attitudes toward artificial intelligence in Kazakhstan: psychometric validation of a nine-item scale
Background: Artificial intelligence (AI) is increasingly integrated into healthcare, yet the attitudes and knowledge of nurses, who are the key mediators of AI implementation, remain underexplored.


