arXiv:2604.10857v1 Announce Type: cross
Abstract: Diffusion models generate samples by iteratively querying learned score estimates. A rapidly growing literature focuses on accelerating sampling by minimizing the number of score evaluations, yet the information-theoretic limits of such acceleration remain unclear.
In this work, we establish the first score query lower bounds for diffusion sampling. We prove that for $d$-dimensional distributions, given access to score estimates with polynomial accuracy $\varepsilon = d^{-O(1)}$ (in any $L^p$ sense), any sampling algorithm requires $\widetilde{\Omega}(\sqrt{d})$ adaptive score queries. In particular, our proof shows that any sampler must search over $\widetilde{\Omega}(\sqrt{d})$ distinct noise levels, providing a formal explanation for why multiscale noise schedules are necessary in practice.
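To make the query model concrete, here is a minimal sketch of the kind of sampler the bound applies to: an annealed Langevin sampler that iteratively queries a score oracle across a geometric schedule of $\sqrt{d}$-many noise levels. The schedule, step sizes, and the toy Gaussian target (whose smoothed score is known in closed form) are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

def sample_via_score_queries(score, d, n_levels, steps_per_level=10,
                             step_size=0.05, seed=0):
    """Annealed Langevin sampler that queries `score(x, sigma)` at
    `n_levels` noise scales. Illustrative sketch only: the schedule and
    step sizes are assumptions, not the paper's lower-bound construction."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    sigmas = np.geomspace(1.0, 0.01, n_levels)  # multiscale noise schedule
    for sigma in sigmas:
        eta = step_size * sigma**2  # step size shrinks with the noise level
        for _ in range(steps_per_level):
            # each Langevin step issues one score query at this noise level
            x = x + eta * score(x, sigma) + np.sqrt(2 * eta) * rng.standard_normal(d)
    return x

# Toy target: standard Gaussian; its sigma-smoothed score is -x / (1 + sigma^2).
d = 16
gauss_score = lambda x, sigma: -x / (1.0 + sigma**2)
x = sample_via_score_queries(gauss_score, d, n_levels=int(np.sqrt(d)))  # ~sqrt(d) levels
```

The total query count here is `n_levels * steps_per_level`, so the theorem's $\widetilde{\Omega}(\sqrt{d})$ bound is a statement about how small `n_levels` can be made, even with adaptive choices of where to query.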
Disclosure in the era of generative artificial intelligence
Generative artificial intelligence (AI) has rapidly become embedded in academic writing, assisting with tasks ranging from language editing to drafting text and producing evidence. Despite


