
Sparse Mixture-of-Experts for Multi-Channel Imaging: Are All Channel Interactions Required?

arXiv:2511.17400v1 Announce Type: cross
Abstract: Vision Transformers (ViTs) have become the backbone of vision foundation models, yet their optimization for multi-channel domains – such as cell painting or satellite imagery – remains underexplored. A key challenge in these domains is capturing interactions between channels, as each channel carries different information. While existing works have shown efficacy by treating each channel independently during tokenization, this approach naturally introduces a major computational bottleneck in the attention block: channel-wise comparisons lead to quadratic growth in attention, resulting in excessive FLOPs and high training cost. In this work, we shift focus from efficacy to the overlooked efficiency challenge in cross-channel attention and ask: “Is it necessary to model all channel interactions?” Inspired by the philosophy of Sparse Mixture-of-Experts (MoE), we propose MoE-ViT, a Mixture-of-Experts architecture for multi-channel images in ViTs, which treats each channel as an expert and employs a lightweight router to select only the most relevant experts per patch for attention. Proof-of-concept experiments on real-world datasets – JUMP-CP and So2Sat – demonstrate that MoE-ViT achieves substantial efficiency gains without sacrificing performance, and in some cases even enhances it, making it a practical and attractive backbone for multi-channel imaging.
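To illustrate the core idea of routing channels as experts, here is a minimal PyTorch sketch of a per-patch router that scores each channel token and keeps only the top-k for attention. The class name, tensor layout, and gating scheme are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelRouter(nn.Module):
    """Hypothetical sketch: score each channel 'expert' per patch and keep top-k."""
    def __init__(self, dim: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(dim, 1)  # one relevance score per channel token
        self.top_k = top_k

    def forward(self, x):
        # x: (batch, patches, channels, dim) -- one token per (patch, channel) pair
        scores = self.gate(x).squeeze(-1)              # (B, P, C)
        topk_val, topk_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_val, dim=-1)          # renormalize over selected experts
        # gather the selected channel tokens for each patch
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1))
        selected = x.gather(2, idx)                    # (B, P, top_k, dim)
        return selected * weights.unsqueeze(-1), topk_idx

# Usage: only top_k of C channel tokens per patch enter the attention block,
# so cross-channel attention cost scales with top_k rather than C.
B, P, C, D = 2, 16, 8, 64
tokens = torch.randn(B, P, C, D)
router = ChannelRouter(dim=D, top_k=2)
selected, which = router(tokens)
print(selected.shape)  # torch.Size([2, 16, 2, 64])
```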

