arXiv:2605.11807v1 Announce Type: new Abstract: Generative point-of-interest (POI) recommendation models based on large language models (LLMs) have shown promising results by formulating next POI prediction as a sequence generation task. However, the knowledge encoded in these models remains fixed after training, making them unable to perceive evolving real-world conditions that shape user mobility decisions, such […]
AI Native Asset Intelligence
arXiv:2605.09115v2 Announce Type: replace-cross Abstract: Modern security environments generate fragmented signals across cloud resources, identities, configurations, and third-party security tools. Although AI-native security assistants improve access to this data, they remain largely reactive: users must ask the right questions and interpret disconnected findings. This does not scale in enterprise environments, where signal importance depends on […]
Rethinking Positional Encoding for Neural Vehicle Routing
arXiv:2605.11910v1 Announce Type: new Abstract: Transformer-based models have become the dominant paradigm for neural combinatorial optimization (NCO) of vehicle routing problems (VRPs), yet the role of positional encoding (PE) in these architectures remains largely unexplored. Unlike natural language, where tokens are uniformly spaced on a line, routing solutions exhibit several properties that render standard NLP […]
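For context on the contrast the abstract draws: standard NLP positional encoding (the sinusoidal scheme from the original Transformer) assumes tokens sit at uniformly spaced integer positions on a line, which is exactly the assumption the abstract says routing solutions violate. A minimal sketch of that baseline encoding, for illustration only (not the paper's proposal):

```python
import math

def sinusoidal_pe(position: int, d_model: int) -> list[float]:
    """Standard Transformer positional encoding. Note the built-in
    assumption: positions are uniformly spaced integers 0, 1, 2, ...
    on a line, which routing tours generally do not satisfy."""
    pe = []
    for i in range(0, d_model, 2):
        angle = position / (10000 ** (i / d_model))
        pe.append(math.sin(angle))       # even dimensions: sine
        if i + 1 < d_model:
            pe.append(math.cos(angle))   # odd dimensions: cosine
    return pe
```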
Cochise: A Reference Harness for Autonomous Penetration Testing
arXiv:2605.11671v1 Announce Type: cross Abstract: Recent work on LLM-driven autonomous penetration testing reports promising results, but existing systems often combine many architectural, prompting, and tool-integration choices, making it difficult to tell what is gained over a simple agent scaffold. We present cochise, a 597 LOC Python reference harness for autonomous penetration-testing experiments. Cochise connects an […]
BadSKP: Backdoor Attacks on Knowledge Graph-Enhanced LLMs with Soft Prompts
arXiv:2605.11996v1 Announce Type: new Abstract: Recent knowledge graph (KG)-enhanced large language models (LLMs) move beyond purely textual knowledge augmentation by encoding retrieved subgraphs into continuous soft prompts via graph neural networks, introducing a graph-conditioned channel that operates alongside the standard text interface. However, existing backdoor attacks are largely designed for the textual channel, and their […]
SDG-MoE: Signed Debate Graph Mixture-of-Experts
arXiv:2605.08322v2 Announce Type: replace-cross Abstract: Sparse MoE models achieve a good balance between capacity and compute by routing each token to a small subset of experts. However, in most MoE architectures, once a token is routed, the selected experts process it independently and their outputs are combined via a weighted sum. This leaves open whether […]
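The routing-then-weighted-sum pattern the abstract describes is the standard sparse MoE forward pass: score all experts, keep the top-k, and combine only those experts' outputs by their normalized router scores. A minimal dependency-free sketch of that baseline (experts as plain callables; all names illustrative, not this paper's architecture):

```python
import math

def top_k_route(token, experts, router, k=2):
    """Sparse MoE routing sketch: the router scores every expert,
    only the top-k process the token, and their outputs are mixed
    by a softmax over the selected scores (a weighted sum)."""
    # router: one weight row per expert; score = dot(token, row)
    scores = [sum(t * w for t, w in zip(token, row)) for row in router]
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # softmax gate over the selected experts only
    exps = [math.exp(scores[i]) for i in top]
    gates = [e / sum(exps) for e in exps]
    outputs = [experts[i](token) for i in top]  # experts run independently
    dim = len(outputs[0])
    return [sum(g * out[j] for g, out in zip(gates, outputs)) for j in range(dim)]
```

Note that the selected experts never see each other's activations, which is the independence the abstract calls into question.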
Adaptive Multi-Round Allocation with Stochastic Arrivals
arXiv:2605.12111v1 Announce Type: new Abstract: We study a sequential resource allocation problem motivated by adaptive network recruitment, in which a limited budget of identical resources must be allocated over multiple rounds to individuals with stochastic referral capacity. Successful referrals endogenously generate future decision opportunities, while allocating additional resources to an individual exhibits diminishing returns. We […]

Evolutionary Task Discovery: Advancing Reasoning Frontiers via Skill Composition and Complexity Scaling
arXiv:2605.11666v1 Announce Type: cross Abstract: The reasoning frontier of Large Language Models (LLMs) has advanced significantly through modern post-training paradigms (e.g., Reinforcement Learning from Verifiable Rewards (RLVR)). However, the efficacy of these methods remains fundamentally constrained by the diversity and complexity of the training data. One practical solution is data synthesis; yet, prevalent methods relying […]
Goal-Oriented Reasoning for RAG-based Memory in Conversational Agentic LLM Systems
arXiv:2605.12213v1 Announce Type: new Abstract: LLM-based conversational AI agents struggle to maintain coherent behavior over long horizons due to limited context. While RAG-based approaches are increasingly adopted to overcome this limitation by storing interactions in external memory modules and performing retrieval from them, their effectiveness in answering challenging questions (e.g., multi-hop, commonsense) ultimately depends on […]
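The external-memory retrieval step the abstract refers to is typically dense retrieval: stored interactions are embedded once, and at answer time the query embedding is ranked against them by cosine similarity. A minimal sketch of that baseline retrieval (embeddings assumed precomputed; names illustrative, not the paper's system):

```python
import math

def retrieve_memory(query_vec, memory, k=3):
    """RAG-style memory lookup sketch: rank stored (embedding, text)
    pairs by cosine similarity to the query and return the top-k texts."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    ranked = sorted(memory, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]
```

Single-shot similarity lookups like this are exactly where multi-hop questions strain: each hop needs evidence that only becomes retrievable after reasoning over the previous hop.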
VLADriver-RAG: Retrieval-Augmented Vision-Language-Action Models for Autonomous Driving
arXiv:2605.08133v2 Announce Type: replace-cross Abstract: Vision-Language-Action (VLA) models have emerged as a promising paradigm for end-to-end autonomous driving, yet their reliance on implicit parametric knowledge limits generalization in long-tail scenarios. While Retrieval-Augmented Generation (RAG) offers a solution by accessing external expert priors, standard visual retrieval suffers from high latency and semantic ambiguity. To address these […]
LISA: Cognitive Arbitration for Signal-Free Autonomous Intersection Management
arXiv:2605.12321v1 Announce Type: new Abstract: Large language models (LLMs) show strong potential for Intelligent Transportation Systems (ITS), particularly in tasks requiring situational reasoning and multi-agent coordination. These capabilities make them well suited for cooperative driving, where rule-based approaches struggle in complex and dynamic traffic environments. Intersection management remains especially challenging due to conflicting right-of-way demands, […]
Reviving In-domain Fine-tuning Methods for Source-Free Cross-domain Few-shot Learning
arXiv:2605.11659v1 Announce Type: cross Abstract: Cross-Domain Few-Shot Learning (CDFSL) aims to adapt large-scale pretrained models to specialized target domains with limited samples, yet the few-shot fine-tuning of vision-language models like CLIP remains underexplored. By establishing multiple fine-tuning baselines of CLIP for CDFSL, we find adapter-based methods (e.g., LoRA) consistently outperform prompt-based ones (e.g., MaPLe), contrary […]
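For readers unfamiliar with the adapter-based methods the abstract finds stronger: LoRA freezes a pretrained weight W and learns only a low-rank update, so the effective weight becomes W + (alpha/r)·BA. A minimal pure-Python sketch of one such layer, for illustration (real adapters wrap framework layers inside CLIP, not hand-rolled matrices):

```python
import random

class LoRALinear:
    """LoRA sketch: the dense weight `w` stays frozen; only the small
    factors A (r x d_in) and B (d_out x r) would be trained."""
    def __init__(self, w, r=4, alpha=8.0):
        self.w = w  # frozen d_out x d_in weight matrix
        d_out, d_in = len(w), len(w[0])
        # A gets small random init, B starts at zero, so the adapter
        # initially leaves the pretrained mapping unchanged
        self.a = [[random.gauss(0, 0.01) for _ in range(d_in)] for _ in range(r)]
        self.b = [[0.0] * r for _ in range(d_out)]
        self.scale = alpha / r

    def forward(self, x):
        base = [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]
        ax = [sum(ai * xi for ai, xi in zip(row, x)) for row in self.a]
        delta = [sum(bi * h for bi, h in zip(row, ax)) for row in self.b]
        return [b_ + self.scale * d for b_, d in zip(base, delta)]
```

The few-shot appeal is the parameter count: only r·(d_in + d_out) values per adapted layer are trainable, versus d_in·d_out in full fine-tuning.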