arXiv:2511.01734v1 Announce Type: cross
Abstract: We provide the first proof of learning rate transfer with width in a linear multi-layer perceptron (MLP) parametrized with $\mu$P, a neural network parametrization designed to “maximize” feature learning in the infinite-width limit. We show that under $\mu$P, the optimal learning rate converges to a \emph{non-zero} constant as width goes to infinity, providing a theoretical explanation for learning rate transfer. In contrast, we show that this property fails to hold under alternative parametrizations such as Standard Parametrization (SP) and Neural Tangent Parametrization (NTP). We provide intuitive proofs and support the theoretical findings with extensive empirical results.
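A minimal sketch of the setting described above, not code from the paper: a linear MLP whose layers are initialized and assigned per-layer SGD learning rates following the width-scaling rules commonly attributed to $\mu$P (init variance 1/fan_in for input and hidden weights and 1/fan_in^2 for the readout; SGD learning rate multiplied by fan_out for the input layer, left width-independent for hidden layers, and divided by fan_in for the readout). The abstract does not specify the optimizer or these constants, so treat them, along with the function names, depth, and base_lr, as illustrative assumptions. The point of the usage loop is that the same base_lr is reused across widths, which is what learning rate transfer refers to.

import torch
import torch.nn as nn


def make_linear_mlp(d_in: int, width: int, d_out: int, depth: int) -> nn.Sequential:
    # Linear MLP (no activations), matching the linear-MLP setting of the abstract.
    dims = [d_in] + [width] * depth + [d_out]
    layers = [nn.Linear(dims[i], dims[i + 1], bias=False) for i in range(len(dims) - 1)]
    return nn.Sequential(*layers)


def mup_sgd(model: nn.Sequential, base_lr: float) -> torch.optim.SGD:
    # Apply the assumed muP init and per-layer SGD learning rates.
    layers = list(model)
    groups = []
    for i, layer in enumerate(layers):
        fan_in, fan_out = layer.in_features, layer.out_features
        if i == len(layers) - 1:
            # Readout: smaller init, learning rate shrinks with width.
            nn.init.normal_(layer.weight, std=1.0 / fan_in)
            groups.append({"params": layer.parameters(), "lr": base_lr / fan_in})
        elif i == 0:
            # Input layer: learning rate grows with width (fan_out).
            nn.init.normal_(layer.weight, std=fan_in ** -0.5)
            groups.append({"params": layer.parameters(), "lr": base_lr * fan_out})
        else:
            # Hidden layers: width-independent learning rate.
            nn.init.normal_(layer.weight, std=fan_in ** -0.5)
            groups.append({"params": layer.parameters(), "lr": base_lr})
    return torch.optim.SGD(groups, lr=base_lr)


# Usage: sweep widths while keeping the same base_lr; under muP the optimal
# base_lr is claimed to approach a non-zero constant as width grows.
for width in (128, 512, 2048):
    model = make_linear_mlp(d_in=32, width=width, d_out=1, depth=3)
    opt = mup_sgd(model, base_lr=0.1)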
The Hidden Power of Normalization: Exponential Capacity Control in Deep Neural Networks
arXiv:2511.00958v1 Announce Type: cross
Abstract: Normalization methods are fundamental components of modern deep neural networks (DNNs). Empirically, they are known to stabilize optimization dynamics and
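As a generic illustration of the kind of normalization the abstract refers to (the abstract is truncated above), here is a minimal layer-normalization sketch; the function name and epsilon are illustrative and not taken from the paper.

import torch


def layer_norm(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Normalize each sample's feature vector to zero mean and unit variance.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)


# Example: arbitrarily scaled features are mapped to a fixed per-sample scale,
# the kind of rescaling behind the stabilization the abstract alludes to.
x = torch.randn(8, 64) * 10.0
print(layer_norm(x).std(dim=-1))  # approximately 1 for every sample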


