Cutting-Edge News, Analysis, and Thought Leadership at the Intersection of Life Sciences and Digital Transformation
WER is Unaware: Assessing How ASR Errors Distort Clinical Understanding in Patient Facing Dialogue
arXiv:2511.16544v2 Announce Type: replace-cross Abstract: As Automatic Speech Recognition (ASR) is increasingly deployed in clinical dialogue, standard evaluations still rely heavily on Word Error Rate
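The abstract refers to Word Error Rate, the standard ASR metric: word-level edit distance (substitutions, insertions, deletions) divided by reference length. A minimal sketch of the conventional computation (the function name `wer` and this dynamic-programming implementation are illustrative, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between ref[:i] and hyp[:j] (single rolling row).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,          # deletion of a reference word
                      d[j - 1] + 1,      # insertion of a hypothesis word
                      prev + (r != h))   # substitution (or free match)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)
```

For example, `wer("the cat sat on the mat", "the cat sat on mat")` scores one deletion over six reference words. Because every error type is weighted equally regardless of meaning, WER treats a clinically critical substitution the same as a harmless one, which is the gap the paper's title points at.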
Sex and age estimation from cardiac signals captured via radar using data augmentation and deep learning: a privacy concern
Introduction: Electrocardiograms (ECGs) have long served as the standard method for cardiac monitoring. While ECGs are highly accurate and widely validated, they require direct skin contact,
Reassessing prediction in the brain: Pre-onset neural encoding during natural listening does not reflect pre-activation
arXiv:2412.19622v2 Announce Type: replace Abstract: Predictive processing theories propose that the brain continuously anticipates upcoming input. However, direct neural evidence for predictive pre-activation during natural
CharCom: Composable Identity Control for Multi-Character Story Illustration
arXiv:2510.10135v2 Announce Type: replace Abstract: Ensuring character identity consistency across varying prompts remains a fundamental limitation in diffusion-based text-to-image generation. We propose CharCom, a modular
ConCISE: A Reference-Free Conciseness Evaluation Metric for LLM-Generated Answers
arXiv:2511.16846v1 Announce Type: cross Abstract: Large language models (LLMs) frequently generate responses that are lengthy and verbose, filled with redundant or unnecessary details. This diminishes
CATCODER: Repository-Level Code Generation with Relevant Code and Type Context
arXiv:2406.03283v2 Announce Type: replace-cross Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, repository-level code generation presents unique challenges, particularly
Comprehensive Evaluation of Prototype Neural Networks
arXiv:2507.06819v3 Announce Type: replace-cross Abstract: Prototype models are an important method for explainable artificial intelligence (XAI) and interpretable machine learning. In this paper, we perform
Is Phase Really Needed for Weakly-Supervised Dereverberation?
arXiv:2511.17346v1 Announce Type: cross Abstract: In unsupervised or weakly-supervised approaches for speech dereverberation, the target clean (dry) signals are considered to be unknown during training.
CleverDistiller: Simple and Spatially Consistent Cross-modal Distillation
arXiv:2503.09878v4 Announce Type: replace-cross Abstract: Vision foundation models (VFMs) such as DINO have led to a paradigm shift in 2D camera-based perception towards extracting generalized
SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought
arXiv:2511.07772v2 Announce Type: replace-cross Abstract: As Large Language Models (LLMs) evolve into personal assistants with access to sensitive user data, they face a critical privacy