arXiv:2603.00171v2 Announce Type: replace-cross
Abstract: Multimodal Large Language Models (MLLMs) often struggle to accurately perceive fine-grained visual details, especially when targets are tiny or visually subtle. This challenge can be addressed through semantic-visual information fusion, which integrates global image context with fine-grained local evidence for multi-scale visual understanding. Recently, a paradigm termed “Thinking with Images” has emerged, enabling models to acquire high-resolution visual evidence by zooming into or cropping image regions and fusing these local details with global context during reasoning. Although training-based approaches demonstrate the effectiveness of this capability, they require extensive computational resources and large-scale task-specific data. Consequently, lightweight training-free methods have been proposed as a practical alternative for incorporating local visual evidence during inference. However, existing training-free approaches still suffer from two key limitations. First, they indiscriminately extract and fuse local visual regions for all inputs regardless of necessity, introducing computational redundancy and perceptual noise. Second, they exhibit drift between semantic intent and visual attention, preventing accurate localization of user-focused regions. To address these challenges, we propose SvfEye, a training-free framework for adaptive visual-semantic fusion. SvfEye follows a two-stage pipeline with a confidence-based decision module that determines whether additional local visual information is needed, and a semantic-attention fusion module that identifies informative local regions. Experiments show that SvfEye achieves substantial performance gains while delivering an approximately 4.0x inference speedup over the state-of-the-art method ZoomEye.
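The two-stage pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names (`answer_confidence`, `select_region`, `svfeye_pipeline`), the use of max-softmax probability as the confidence signal, the elementwise attention-relevance scoring, and the `threshold` value are all hypothetical stand-ins, not the paper's actual modules.

```python
import math

def answer_confidence(logits):
    """Max softmax probability as a crude proxy for the model's
    confidence in its answer from the global image alone (assumption)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return max(exps) / total

def select_region(attention_map, query_relevance):
    """Pick the image patch whose visual attention best aligns with the
    semantic relevance of the user's query (hypothetical scoring)."""
    scores = [a * q for a, q in zip(attention_map, query_relevance)]
    return scores.index(max(scores))

def svfeye_pipeline(logits, attention_map, query_relevance, threshold=0.8):
    """Hypothetical two-stage flow: decide, then locate."""
    # Stage 1: confidence-based decision -- skip zooming entirely when
    # the global view already yields a confident answer, avoiding the
    # redundancy of indiscriminate region extraction.
    if answer_confidence(logits) >= threshold:
        return {"zoom": False, "region": None}
    # Stage 2: semantic-attention fusion -- align attention with the
    # query to locate the informative patch to crop and re-examine at
    # high resolution.
    region = select_region(attention_map, query_relevance)
    return {"zoom": True, "region": region}
```

For example, a sharply peaked logit vector would skip stage 2, while a flat one would trigger region selection; the real system operates on MLLM outputs and attention maps rather than toy lists.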