arXiv:2509.26360v3 Announce Type: replace-cross
Abstract: Identifying key temporal intervals within long videos, known as temporal grounding (TG), is important for video understanding and reasoning tasks. In this paper, we introduce a new form of the temporal grounding problem, Task-oriented Temporal Grounding (ToTG), which is driven by the requirements of downstream tasks rather than explicit time-interval descriptions. For example, a ToTG input may be “explain why the man in the video is sent to the hospital,” whereas traditional TG would take an explicit temporal description such as “the moments when the man is tripped by a stone and falls to the ground.” This new ToTG formulation presents significant challenges for existing TG methods, as it requires jointly performing deep task comprehension and fine-grained temporal localization within long videos. To address these challenges, we conduct a systematic set of studies. First, we construct a new benchmark, ToTG-Bench, which comprehensively evaluates ToTG performance across diverse settings. Second, we introduce a new temporal grounding method, TimeScope, which performs coarse-to-fine localization through a progressive reasoning process. Leveraging extensive supervised fine-tuning with carefully curated chain-of-thought (CoT) data from a variety of scenarios, TimeScope generalizes effectively across tasks and domains. Our evaluation demonstrates TimeScope’s empirical advantages over existing baselines from three perspectives: (1) substantial improvements in grounding precision, (2) significant benefits to downstream tasks, and (3) strong generalizability across different scenarios. All models, datasets, and source code will be fully open-sourced to support future research in this area.
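
As a rough illustration of the coarse-to-fine localization idea described in this abstract, the Python sketch below shows one way such a progressive search could be structured. The function name, window counts, and scoring callback are illustrative assumptions, not the actual TimeScope implementation.

    from typing import Callable, Tuple

    def coarse_to_fine_grounding(
        num_frames: int,
        score_window: Callable[[int, int], float],  # relevance of frames [start, end) to the task query (hypothetical)
        coarse_windows: int = 8,
        refine_steps: int = 3,
    ) -> Tuple[int, int]:
        """Progressively narrow a long video down to a task-relevant interval (illustrative sketch)."""
        start, end = 0, num_frames
        for _ in range(refine_steps):
            width = max((end - start) // coarse_windows, 1)
            # Score evenly spaced candidate windows inside the current span.
            candidates = [(s, min(s + width, end)) for s in range(start, end, width)]
            best = max(candidates, key=lambda w: score_window(*w))
            # Zoom into the best window (plus a small margin) and repeat at a finer scale.
            margin = width // 2
            start, end = max(best[0] - margin, 0), min(best[1] + margin, num_frames)
            if end - start <= 1:
                break
        return start, end

In such a scheme, score_window would be backed by a model that judges how relevant a clip is to the task query (e.g., "explain why the man is sent to the hospital"), and each pass halves the search span rather than scanning every frame of a long video.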
CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning
arXiv:2512.02551v2 Announce Type: replace-cross
Abstract: In this paper, we propose CUDA-L2, a system that combines large language models (LLMs) and reinforcement learning (RL) to automatically optimize CUDA matrix multiplication kernels, surpassing cuBLAS performance.
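
As a rough illustration of the LLM-plus-RL loop this abstract describes, the sketch below rewards candidate kernels by their measured speedup over a cuBLAS baseline. The names generate_kernel and compile_and_time, and the loop structure itself, are illustrative assumptions, not the actual CUDA-L2 system.

    from typing import Callable, List, Tuple

    def search_kernels(
        generate_kernel: Callable[[List[Tuple[str, float]]], str],  # LLM proposes CUDA source given (kernel, reward) history (hypothetical)
        compile_and_time: Callable[[str], float],                   # returns runtime in ms, or float("inf") on failure (hypothetical)
        cublas_baseline_ms: float,
        iterations: int = 100,
    ) -> Tuple[str, float]:
        """Return the best candidate kernel and its speedup over the cuBLAS baseline (illustrative sketch)."""
        history: List[Tuple[str, float]] = []
        best_src, best_speedup = "", 0.0
        for _ in range(iterations):
            src = generate_kernel(history)
            runtime_ms = compile_and_time(src)
            # Reward = speedup over cuBLAS; kernels that fail to compile or run get zero reward.
            reward = cublas_baseline_ms / runtime_ms if runtime_ms > 0 else 0.0
            history.append((src, reward))  # feedback that conditions the next proposal
            if reward > best_speedup:
                best_src, best_speedup = src, reward
        return best_src, best_speedup

The key design choice in this kind of setup is that the reward signal comes from hardware measurement rather than a learned critic, so any reward above 1.0 corresponds directly to a kernel that beats the cuBLAS baseline on the benchmarked problem size.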




