VLA-Pruner: Temporal-Aware Dual-Level Visual Token Pruning for Efficient Vision-Language-Action Inference
arXiv:2511.16449v2 Announce Type: replace-cross Abstract: Vision-Language-Action (VLA) models have shown great promise for embodied AI, yet the heavy computational cost of processing continuous visual streams…
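To make the core idea concrete, here is a minimal sketch of generic visual token pruning: score each token and keep only the top fraction. The scores, keep ratio, and shapes are illustrative assumptions; this is not VLA-Pruner's temporal-aware dual-level method.

```python
import torch

def prune_visual_tokens(tokens: torch.Tensor, scores: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the visual tokens with the highest importance scores.

    tokens: (batch, num_tokens, dim) visual token embeddings
    scores: (batch, num_tokens) per-token importance (e.g. attention-derived)
    """
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices                       # (batch, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])  # (batch, k, dim)
    return tokens.gather(1, idx)

tokens = torch.randn(2, 196, 768)  # e.g. 14x14 ViT patch tokens per frame
scores = torch.rand(2, 196)        # stand-in importance scores
print(prune_visual_tokens(tokens, scores).shape)  # torch.Size([2, 49, 768])
```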
Sex and age estimation from cardiac signals captured via radar using data augmentation and deep learning: a privacy concern
Introduction: Electrocardiograms (ECGs) have long served as the standard method for cardiac monitoring. While ECGs are highly accurate and widely validated, they require direct skin contact…
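As a rough illustration of the kind of data augmentation such 1-D signal pipelines rely on, the sketch below applies jitter, amplitude scaling, and a time shift. The parameters are placeholder assumptions, not the paper's actual pipeline.

```python
import numpy as np

def augment_signal(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple 1-D augmentations: additive noise, scaling, circular shift."""
    x = x + rng.normal(0.0, 0.01, size=x.shape)        # jitter
    x = x * rng.uniform(0.9, 1.1)                      # random amplitude scaling
    shift = rng.integers(-len(x) // 10, len(x) // 10)  # up to +/-10% time shift
    return np.roll(x, shift)

rng = np.random.default_rng(0)
beat = np.sin(np.linspace(0, 4 * np.pi, 512))  # stand-in for a cardiac trace
print(augment_signal(beat, rng).shape)         # (512,)
```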
Reassessing prediction in the brain: Pre-onset neural encoding during natural listening does not reflect pre-activation
arXiv:2412.19622v2 Announce Type: replace Abstract: Predictive processing theories propose that the brain continuously anticipates upcoming input. However, direct neural evidence for predictive pre-activation during natural…
CharCom: Composable Identity Control for Multi-Character Story Illustration
arXiv:2510.10135v2 Announce Type: replace Abstract: Ensuring character identity consistency across varying prompts remains a fundamental limitation in diffusion-based text-to-image generation. We propose CharCom, a modular…
ConCISE: A Reference-Free Conciseness Evaluation Metric for LLM-Generated Answers
arXiv:2511.16846v1 Announce Type: cross Abstract: Large language models (LLMs) frequently generate responses that are lengthy and verbose, filled with redundant or unnecessary details. This diminishes…
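To clarify what "reference-free" means here, the sketch below scores an answer using only the answer itself, with no gold reference. It is a deliberately naive proxy (lexical density with a length penalty), not the ConCISE metric.

```python
def naive_conciseness(answer: str) -> float:
    """Crude reference-free proxy: lexical density discounted by length.

    Depends only on the answer, not on a reference; illustrative only.
    """
    tokens = answer.lower().split()
    if not tokens:
        return 0.0
    density = len(set(tokens)) / len(tokens)    # penalize repeated words
    brevity = 1.0 / (1.0 + len(tokens) / 50.0)  # penalize sheer length
    return density * brevity

print(naive_conciseness("Paris is the capital of France."))
print(naive_conciseness("Well, to answer that, the capital, which is the "
                        "capital city of France, is, in fact, Paris."))
```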
CATCODER: Repository-Level Code Generation with Relevant Code and Type Context
arXiv:2406.03283v2 Announce Type: replace-cross Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, repository-level code generation presents unique challenges, particularly…
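As a rough illustration of the retrieve-then-prompt pattern the abstract points at, here is a hypothetical sketch that pairs a task with relevant code snippets and type signatures; the function and prompt format are invented for illustration and are not CATCODER's actual pipeline.

```python
def build_repo_prompt(task: str, snippets: list[str], type_sigs: list[str]) -> str:
    """Assemble a code-generation prompt from retrieved repository context.

    Hypothetical sketch: give the LLM repository-level context (relevant
    code plus type signatures) alongside the task description.
    """
    parts = ["### Relevant repository code:"]
    parts += snippets
    parts.append("### Type context:")
    parts += type_sigs
    parts.append(f"### Task:\n{task}")
    return "\n\n".join(parts)

print(build_repo_prompt(
    "Implement save_user()",
    ["def load_user(uid: int) -> User: ..."],
    ["class User: id: int; name: str"],
))
```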
Quantum Masked Autoencoders for Vision Learning
arXiv:2511.17372v1 Announce Type: cross Abstract: Classical autoencoders are widely used to learn features of input data. To improve the feature learning, classical masked autoencoders extend…
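For context, the classical masked-autoencoder step the abstract refers to hides most patch tokens and trains a decoder to reconstruct them. The sketch below shows only the masking step, with illustrative shapes.

```python
import torch

def mask_patches(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly hide a fraction of patch tokens, as in masked autoencoders.

    patches: (batch, num_patches, dim). Returns the visible patches and the
    indices of the masked ones, whose reconstruction the decoder learns.
    """
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    perm = torch.rand(b, n).argsort(dim=1)  # random permutation per sample
    keep, masked = perm[:, :n_keep], perm[:, n_keep:]
    visible = patches.gather(1, keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, masked

x = torch.randn(4, 196, 768)
visible, masked_idx = mask_patches(x)
print(visible.shape, masked_idx.shape)  # (4, 49, 768) and (4, 147)
```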
Sometimes Painful but Certainly Promising: Feasibility and Trade-offs of Language Model Inference at the Edge
arXiv:2503.09114v2 Announce Type: replace-cross Abstract: The rapid rise of Language Models (LMs) has expanded the capabilities of natural language processing, powering applications from text generation…
Genomic Next-Token Predictors are In-Context Learners
arXiv:2511.12797v2 Announce Type: replace-cross Abstract: In-context learning (ICL), the capacity of a model to infer and apply abstract patterns from examples provided within its…
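Schematically, in-context learning means the model infers a mapping from examples in its prompt, with no weight updates. The toy genomic prompt below (a base-complement mapping) is a hypothetical illustration, not the paper's benchmark.

```python
# In-context learning, schematically: the model sees input-output pairs in its
# context and must infer the rule for a new query, without any fine-tuning.
prompt = (
    "ACGT -> TGCA\n"   # A<->T, C<->G complement, shown as examples
    "GGAA -> CCTT\n"
    "ATCG -> "         # an in-context learner should continue with "TAGC"
)
print(prompt)
```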
Comprehensive Evaluation of Prototype Neural Networks
arXiv:2507.06819v3 Announce Type: replace-cross Abstract: Prototype models are an important method for explainable artificial intelligence (XAI) and interpretable machine learning. In this paper, we perform…
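For readers new to prototype models, the sketch below shows the mechanism that makes them interpretable: classification by distance to learned class prototypes, so each decision points to a concrete prototype. Shapes and values are illustrative.

```python
import torch

def prototype_predict(x: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Classify each embedding by its nearest learned class prototype.

    x:          (batch, dim) input embeddings
    prototypes: (num_classes, dim) one learned prototype per class
    """
    dists = torch.cdist(x, prototypes)  # (batch, num_classes)
    return dists.argmin(dim=1)

protos = torch.tensor([[0.0, 0.0], [5.0, 5.0]])
x = torch.tensor([[0.5, -0.2], [4.0, 6.0]])
print(prototype_predict(x, protos))  # tensor([0, 1])
```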