arXiv:2511.17400v1 Announce Type: cross
Abstract: Vision Transformers (ViTs) have become the backbone of vision foundation models, yet their optimization for multi-channel domains – such as cell painting or satellite imagery – remains underexplored. A key challenge in these domains is capturing interactions between channels, as each channel carries different information. While existing works have shown efficacy by treating each channel independently during tokenization, this approach introduces a major computational bottleneck in the attention block – channel-wise comparisons lead to quadratic growth in attention, resulting in excessive FLOPs and high training cost. In this work, we shift focus from efficacy to the overlooked efficiency challenge in cross-channel attention and ask: “Is it necessary to model all channel interactions?”. Inspired by the philosophy of Sparse Mixture-of-Experts (MoE), we propose MoE-ViT, a Mixture-of-Experts architecture for multi-channel images in ViTs, which treats each channel as an expert and employs a lightweight router to select only the most relevant experts per patch for attention. Proof-of-concept experiments on real-world datasets – JUMP-CP and So2Sat – demonstrate that MoE-ViT achieves substantial efficiency gains without sacrificing performance, and in some cases even enhancing it, making it a practical and attractive backbone for multi-channel imaging.
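The abstract describes a lightweight router that picks the most relevant channel experts per patch before cross-channel attention. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: class names (ChannelExpertRouter, SparseCrossChannelAttention), the hyperparameter top_k, and the hard top-k gating are illustrative assumptions; the actual MoE-ViT routing and attention layout may differ.

import torch
import torch.nn as nn


class ChannelExpertRouter(nn.Module):
    """Scores each channel token and keeps only the top-k channels per patch.

    Hypothetical sketch: the real method may instead weight tokens by the
    router scores so the gate receives gradients end to end.
    """

    def __init__(self, dim: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, 1)  # lightweight per-token relevance score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, patches, dim) tokens from channel-wise tokenization
        scores = self.gate(x).squeeze(-1)                # (B, C, P)
        topk = scores.topk(self.top_k, dim=1).indices    # (B, k, P) selected channels
        idx = topk.unsqueeze(-1).expand(-1, -1, -1, x.size(-1))
        return torch.gather(x, 1, idx)                   # (B, k, P, dim)


class SparseCrossChannelAttention(nn.Module):
    """Attends only over the routed channels, so cost scales with k rather than C."""

    def __init__(self, dim: int, num_heads: int, top_k: int):
        super().__init__()
        self.router = ChannelExpertRouter(dim, top_k)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        routed = self.router(x)                          # (B, k, P, D)
        b, k, p, d = routed.shape
        # One short sequence of k channel tokens per patch
        tokens = routed.permute(0, 2, 1, 3).reshape(b * p, k, d)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.reshape(b, p, k, d).permute(0, 2, 1, 3)


if __name__ == "__main__":
    x = torch.randn(2, 8, 196, 64)  # 8-channel image, 196 patches, 64-dim tokens
    block = SparseCrossChannelAttention(dim=64, num_heads=4, top_k=3)
    print(block(x).shape)           # torch.Size([2, 3, 196, 64])

With full cross-channel attention each patch attends over all C channel tokens (quadratic in C); routing to k << C channels shrinks the attention sequence per patch, which is the efficiency gain the abstract targets.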
Sex and age estimation from cardiac signals captured via radar using data augmentation and deep learning: a privacy concern
Introduction: Electrocardiograms (ECGs) have long served as the standard method for cardiac monitoring. While ECGs are highly accurate and widely validated, they require direct skin contact,




