Cutting-Edge News, Analysis, and Thought Leadership at the Intersection of Life Sciences and Digital Transformation
Sparse Mixture-of-Experts for Multi-Channel Imaging: Are All Channel Interactions Required?
arXiv:2511.17400v1 Announce Type: cross Abstract: Vision Transformers (ViTs) have become the backbone of vision foundation models, yet their optimization for multi-channel domains – such as
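For readers unfamiliar with the sparse Mixture-of-Experts idea the title refers to, the core mechanism is a learned gate that routes each token to only its top-k experts instead of all of them. The sketch below is a minimal, generic illustration of top-k expert routing using NumPy; all names and shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sparse_moe_forward(x, gate_w, expert_ws, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:         (n_tokens, d) input features
    gate_w:    (d, n_experts) gating weights
    expert_ws: list of n_experts (d, d) expert weight matrices
    """
    logits = x @ gate_w                        # (n_tokens, n_experts) gate scores
    topk = np.argsort(logits, axis=1)[:, -k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, topk[t]]
        probs = np.exp(sel - sel.max())
        probs /= probs.sum()                   # softmax over the selected experts only
        for e, p in zip(topk[t], probs):
            out[t] += p * (x[t] @ expert_ws[e])  # weighted mix of expert outputs
    return out
```

Because only k of the experts run per token, compute scales with k rather than with the total expert count, which is what makes the "are all interactions required?" question meaningful.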
Sex and age estimation from cardiac signals captured via radar using data augmentation and deep learning: a privacy concern
Introduction: Electrocardiograms (ECGs) have long served as the standard method for cardiac monitoring. While ECGs are highly accurate and widely validated, they require direct skin contact,
Reassessing prediction in the brain: Pre-onset neural encoding during natural listening does not reflect pre-activation
arXiv:2412.19622v2 Announce Type: replace Abstract: Predictive processing theories propose that the brain continuously anticipates upcoming input. However, direct neural evidence for predictive pre-activation during natural
CharCom: Composable Identity Control for Multi-Character Story Illustration
arXiv:2510.10135v2 Announce Type: replace Abstract: Ensuring character identity consistency across varying prompts remains a fundamental limitation in diffusion-based text-to-image generation. We propose CharCom, a modular
The Finer the Better: Towards Granular-aware Open-set Domain Generalization
arXiv:2511.16979v1 Announce Type: cross Abstract: Open-Set Domain Generalization (OSDG) tackles the realistic scenario where deployed models encounter both domain shifts and novel object categories. Despite
Wideband RF Radiance Field Modeling Using Frequency-embedded 3D Gaussian Splatting
arXiv:2505.20714v2 Announce Type: replace-cross Abstract: Indoor environments typically contain diverse RF signals distributed across multiple frequency bands, including NB-IoT, Wi-Fi, and millimeter-wave. Consequently, wideband RF
You Only Forward Once: An Efficient Compositional Judging Paradigm
arXiv:2511.16600v2 Announce Type: replace Abstract: Multimodal large language models (MLLMs) show strong potential as judges. However, existing approaches face a fundamental trade-off: adapting MLLMs to
Quantum Masked Autoencoders for Vision Learning
arXiv:2511.17372v1 Announce Type: cross Abstract: Classical autoencoders are widely used to learn features of input data. To improve the feature learning, classical masked autoencoders extend
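The classical masked-autoencoder extension mentioned here works by hiding a large random fraction of input patches and training the model to reconstruct them from the visible remainder. Below is a minimal, hedged sketch of that random patch-masking step in NumPy; it illustrates the classical mechanism only, not the quantum variant proposed in the paper.

```python
import numpy as np

def random_mask(patches, mask_ratio=0.75, seed=0):
    """Randomly hide a fraction of patches, as in classical masked autoencoders.

    patches: (n_patches, d) array of flattened input patches.
    Returns (visible_patches, visible_idx, masked_idx).
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = max(1, int(round(n * (1 - mask_ratio))))  # patches the encoder sees
    perm = rng.permutation(n)
    keep, masked = perm[:n_keep], perm[n_keep:]
    # The encoder processes only `visible_patches`; the decoder is later asked
    # to reconstruct the patches at `masked_idx`.
    return patches[keep], np.sort(keep), np.sort(masked)
```

A high mask ratio (commonly around 75%) forces the model to learn global structure rather than interpolating from neighbors.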
Sometimes Painful but Certainly Promising: Feasibility and Trade-offs of Language Model Inference at the Edge
arXiv:2503.09114v2 Announce Type: replace-cross Abstract: The rapid rise of Language Models (LMs) has expanded the capabilities of natural language processing, powering applications from text generation
Genomic Next-Token Predictors are In-Context Learners
arXiv:2511.12797v2 Announce Type: replace-cross Abstract: In-context learning (ICL) — the capacity of a model to infer and apply abstract patterns from examples provided within its








