CORE – A Cell-Level Coarse-to-Fine Image Registration Engine for Multi-stain Image Alignment

arXiv:2511.03826v1 Announce Type: new Abstract: Accurate and efficient registration of whole slide images (WSIs) is essential for high-resolution, nuclei-level analysis of multi-stained tissue slides. We propose CORE, a novel coarse-to-fine framework for accurate nuclei-level registration across diverse multimodal WSI datasets. The coarse registration stage leverages prompt-based tissue mask extraction to effectively filter out […]
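
The truncated abstract names the pipeline stages but not their implementation. As a point of reference, here is a minimal, hypothetical sketch of a generic coarse-to-fine registration loop (not CORE itself): a global translation estimated from tissue masks, then local refinement around nuclei centroids. The function names and the phase-correlation choice are assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def coarse_align(fixed_mask: np.ndarray, moving_mask: np.ndarray) -> np.ndarray:
    """Global translation estimated from low-resolution tissue masks."""
    offset, _, _ = phase_cross_correlation(fixed_mask.astype(float),
                                           moving_mask.astype(float))
    return offset  # (dy, dx)

def fine_align(fixed, moving, centers, win: int = 64) -> np.ndarray:
    """Refine locally around nuclei centroids (assumed away from borders)."""
    local_shifts = []
    for y, x in centers:
        fy, fx = slice(y - win, y + win), slice(x - win, x + win)
        s, _, _ = phase_cross_correlation(fixed[fy, fx], moving[fy, fx],
                                          upsample_factor=10)
        local_shifts.append(s)
    return np.median(local_shifts, axis=0)  # robust consensus correction

def register(fixed, moving, fixed_mask, moving_mask, centers) -> np.ndarray:
    global_shift = coarse_align(fixed_mask, moving_mask)
    moved = nd_shift(moving, global_shift, order=1)
    return global_shift + fine_align(fixed, moved, centers)
```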

Alternative Fairness and Accuracy Optimization in Criminal Justice

arXiv:2511.04505v1 Announce Type: cross Abstract: Algorithmic fairness has grown rapidly as a research area, yet key concepts remain unsettled, especially in criminal justice. We review group, individual, and process fairness and map the conditions under which they conflict. We then develop a simple modification to standard group fairness. Rather than exact parity across protected groups, […]
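
The abstract's "rather than exact parity" points to a tolerance-based relaxation of group fairness. A minimal sketch of that general idea, with the metric choice and the eps threshold as assumptions (the paper's actual formulation is truncated):

```python
import numpy as np

def parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rates across protected groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def satisfies_relaxed_parity(y_pred, group, eps: float = 0.05) -> bool:
    """Accept any classifier whose rate gap is within eps, leaving room
    to optimize accuracy inside the relaxed feasible set."""
    return parity_gap(y_pred, group) <= eps
```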

Cross-modal Causal Intervention for Alzheimer’s Disease Prediction

arXiv:2507.13956v2 Announce Type: replace Abstract: Mild Cognitive Impairment (MCI) serves as a prodromal stage of Alzheimer’s Disease (AD), where early identification and intervention can effectively slow the progression to dementia. However, diagnosing AD remains a significant challenge in neurology due to the confounders caused mainly by the selection bias of multi-modal data and the complex […]
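
The excerpt cuts off before the method, but the title's "causal intervention" is conventionally implemented via backdoor adjustment over confounders. A toy illustration of that textbook formula, with entirely synthetic probabilities:

```python
import numpy as np

p_z = np.array([0.3, 0.7])                     # confounder marginal P(z)
p_y_given_xz = np.array(
    [[[0.9, 0.1], [0.6, 0.4]],                 # x=0: rows index z, cols index y
     [[0.5, 0.5], [0.2, 0.8]]])                # x=1

def p_y_do_x(x: int) -> np.ndarray:
    """Backdoor adjustment: P(y | do(x)) = sum_z P(y | x, z) P(z)."""
    return (p_y_given_xz[x] * p_z[:, None]).sum(axis=0)

print(p_y_do_x(0), p_y_do_x(1))                # intervention distributions
```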

Benchmarking LLM Faithfulness in RAG with Evolving Leaderboards

arXiv:2505.04847v2 Announce Type: replace-cross Abstract: Retrieval-augmented generation (RAG) aims to reduce hallucinations by grounding responses in external context, yet large language models (LLMs) still frequently introduce unsupported information or contradictions even when provided with relevant context. This paper presents two complementary efforts at Vectara to measure and benchmark LLM faithfulness in RAG. First, we describe […]
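
One common way to operationalize faithfulness scoring in RAG is to treat the retrieved context as a premise and the response as a hypothesis, using an NLI model's entailment probability as the signal. The sketch below uses an off-the-shelf model as a stand-in judge; it is an illustrative assumption, not Vectara's actual pipeline (the abstract is truncated before those details).

```python
from transformers import pipeline

# Off-the-shelf NLI model as a stand-in faithfulness judge (an assumption).
nli = pipeline("text-classification", model="roberta-large-mnli")

def faithfulness(context: str, response: str) -> float:
    """Entailment probability of the response given the retrieved context."""
    scores = nli({"text": context, "text_pair": response}, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

# A response unsupported by the context should score low:
print(faithfulness("The Eiffel Tower is in Paris.",
                   "The Eiffel Tower is in Berlin."))
```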

LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users

arXiv:2406.17737v2 Announce Type: replace-cross Abstract: While state-of-the-art large language models (LLMs) have shown impressive performance on many tasks, there has been extensive research on undesirable model behavior such as hallucinations and bias. In this work, we investigate how the quality of LLM responses changes in terms of information accuracy, truthfulness, and refusals depending on three […]
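
The study implies a stratified evaluation: score responses per user group and compare. A minimal sketch of that bookkeeping; the trait labels are placeholders, since the paper's three conditions are truncated in the excerpt.

```python
import pandas as pd

# One row per scored response; "trait" is a placeholder user attribute.
df = pd.DataFrame({
    "trait":   ["A", "A", "B", "B", "B"],
    "correct": [1,   1,   0,   1,   0],        # 1 = accurate, truthful answer
})

per_group = df.groupby("trait")["correct"].mean()
gap = per_group.max() - per_group.min()        # disparity across user groups
print(per_group.to_dict(), f"gap={gap:.2f}")
```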

Training Large Language Models To Reason In Parallel With Global Forking Tokens

arXiv:2510.05132v2 Announce Type: replace-cross Abstract: Although LLMs have demonstrated improved performance by scaling parallel test-time compute, doing so relies on generating reasoning paths that are both diverse and accurate. For challenging problems, the forking tokens that trigger diverse yet correct reasoning modes are typically deep in the sampling tree. Consequently, common strategies to encourage diversity, […]
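
The paper trains models around global forking tokens; as a rough inference-time illustration of the forking idea only, the sketch below forces a different candidate token at the fork position in each branch and lets each branch continue sampling independently. The base model, temperature, and branch count are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def fork_and_continue(prompt: str, n_branches: int = 4, max_new_tokens: int = 64):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]              # next-token scores at the fork
    forks = torch.topk(logits, n_branches).indices  # n distinct forking tokens
    branches = []
    for t in forks:
        seed = torch.cat([ids, t.view(1, 1)], dim=1)  # force a distinct first token
        out = lm.generate(seed, do_sample=True, temperature=0.8,
                          max_new_tokens=max_new_tokens,
                          pad_token_id=tok.eos_token_id)
        branches.append(tok.decode(out[0], skip_special_tokens=True))
    return branches
```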

Laugh, Relate, Engage: Stylized Comment Generation for Short Videos

arXiv:2511.03757v1 Announce Type: cross Abstract: Short-video platforms have become a central medium in the modern Internet landscape, where efficient information delivery and strong interactivity are reshaping user engagement and cultural dissemination. Among the various forms of user interaction, comments play a vital role in fostering community participation and enabling content re-creation. However, generating comments that […]

To See or To Read: User Behavior Reasoning in Multimodal LLMs

arXiv:2511.03845v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) are reshaping how modern agentic systems reason over sequential user-behavior data. However, whether textual or image representations of user behavior data are more effective for maximizing MLLM performance remains underexplored. We present BehaviorLens, a systematic benchmarking framework for assessing modality trade-offs in user-behavior reasoning across […]
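
The modality trade-off being benchmarked can be made concrete: the same behavior sequence rendered once as text and once as an image. The sketch below shows only the two renderings; the MLLM call and BehaviorLens's actual harness are abstracted away, and all names here are assumptions.

```python
import matplotlib.pyplot as plt

events = [("view", 3), ("click", 1), ("purchase", 1), ("view", 5)]

def as_text(seq) -> str:
    """Textual rendering: a compact event log the MLLM reads as tokens."""
    return "; ".join(f"{name} x{count}" for name, count in seq)

def as_image(seq, path: str = "behavior.png") -> str:
    """Image rendering: the same sequence as a bar chart."""
    names, counts = zip(*seq)
    plt.bar(range(len(seq)), counts, tick_label=list(names))
    plt.ylabel("count")
    plt.savefig(path)
    plt.close()
    return path

# Feed as_text(events) or the saved chart to the MLLM and compare accuracy.
```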

Optimizing Reasoning Efficiency through Prompt Difficulty Prediction

arXiv:2511.03808v1 Announce Type: cross Abstract: Reasoning language models perform well on complex tasks but are costly to deploy due to their size and long reasoning traces. We propose a routing approach that assigns each problem to the smallest model likely to solve it, reducing compute without sacrificing accuracy. Using intermediate representations from s1.1-32B, we train […]
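
A hedged sketch of the routing idea: a lightweight probe predicts, from a prompt embedding, whether the small model will solve the problem, and escalates otherwise. The paper trains on intermediate representations from s1.1-32B; the probe type and threshold below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

probe = LogisticRegression(max_iter=1000)

def train_probe(embeddings: np.ndarray, solved_by_small: np.ndarray) -> None:
    """Fit on held-out (embedding, solved-by-small-model?) pairs."""
    probe.fit(embeddings, solved_by_small)

def route(embedding: np.ndarray, threshold: float = 0.7) -> str:
    """Send predicted-easy prompts to the small model, the rest to the large one."""
    p_easy = probe.predict_proba(embedding.reshape(1, -1))[0, 1]
    return "small" if p_easy >= threshold else "large"
```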

Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK; registration number 16808844.