Cutting-Edge News, Analysis, and Thought Leadership at the Intersection of Life Sciences and Digital Transformation
WER is Unaware: Assessing How ASR Errors Distort Clinical Understanding in Patient Facing Dialogue
arXiv:2511.16544v2 Announce Type: replace-cross Abstract: As Automatic Speech Recognition (ASR) is increasingly deployed in clinical dialogue, standard evaluations still rely heavily on Word Error Rate …
Sex and age estimation from cardiac signals captured via radar using data augmentation and deep learning: a privacy concern
Introduction: Electrocardiograms (ECGs) have long served as the standard method for cardiac monitoring. While ECGs are highly accurate and widely validated, they require direct skin contact, …
Reassessing prediction in the brain: Pre-onset neural encoding during natural listening does not reflect pre-activation
arXiv:2412.19622v2 Announce Type: replace Abstract: Predictive processing theories propose that the brain continuously anticipates upcoming input. However, direct neural evidence for predictive pre-activation during natural …
CharCom: Composable Identity Control for Multi-Character Story Illustration
arXiv:2510.10135v2 Announce Type: replace Abstract: Ensuring character identity consistency across varying prompts remains a fundamental limitation in diffusion-based text-to-image generation. We propose CharCom, a modular …
ConCISE: A Reference-Free Conciseness Evaluation Metric for LLM-Generated Answers
arXiv:2511.16846v1 Announce Type: cross Abstract: Large language models (LLMs) frequently generate responses that are lengthy and verbose, filled with redundant or unnecessary details. This diminishes …
CATCODER: Repository-Level Code Generation with Relevant Code and Type Context
arXiv:2406.03283v2 Announce Type: replace-cross Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, repository-level code generation presents unique challenges, particularly …
Sometimes Painful but Certainly Promising: Feasibility and Trade-offs of Language Model Inference at the Edge
arXiv:2503.09114v2 Announce Type: replace-cross Abstract: The rapid rise of Language Models (LMs) has expanded the capabilities of natural language processing, powering applications from text generation …
Genomic Next-Token Predictors are In-Context Learners
arXiv:2511.12797v2 Announce Type: replace-cross Abstract: In-context learning (ICL) — the capacity of a model to infer and apply abstract patterns from examples provided within its …
Comprehensive Evaluation of Prototype Neural Networks
arXiv:2507.06819v3 Announce Type: replace-cross Abstract: Prototype models are an important method for explainable artificial intelligence (XAI) and interpretable machine learning. In this paper, we perform …
Is Phase Really Needed for Weakly-Supervised Dereverberation?
arXiv:2511.17346v1 Announce Type: cross Abstract: In unsupervised or weakly-supervised approaches for speech dereverberation, the target clean (dry) signals are considered to be unknown during training.