arXiv:2604.26830v1 Announce Type: cross
Abstract: I propose the \emph{Random Cloud} method, a training-free approach to neural architecture search that discovers minimal feedforward network topologies through stochastic exploration and progressive structural reduction. Unlike post-training pruning methods that require a full train-prune-retrain cycle, this method evaluates randomly initialized networks without backpropagation, progressively reduces their topology, and trains only the best minimal candidate at the end. I evaluate on 7 classification benchmarks against magnitude pruning and random pruning baselines. Random Cloud matches or outperforms both baselines on 6 of 7 datasets, achieving a statistically significant improvement on Sonar ($+4.9$pp accuracy, $p=0.017$ vs magnitude pruning) with 87% parameter reduction. Crucially, the method is faster than both pruning baselines on 4 of 5 datasets (0.67–0.94$\times$ the cost of full training), since it avoids training the full-size network entirely.
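The search loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy data, the accuracy-of-a-random-network fitness score, the sample count of 20, and the shrink rule (drop one unit from the widest hidden layer) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, weights):
    # Forward pass through a ReLU MLP; no backpropagation anywhere.
    h = X
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)
    return h @ weights[-1]

def random_net(sizes):
    # Randomly initialized weights for a given feedforward topology.
    return [rng.normal(0.0, np.sqrt(2.0 / a), (a, b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def score(X, y, weights):
    # Training-free fitness: accuracy of the untrained network's forward pass.
    preds = forward(X, weights).argmax(axis=1)
    return (preds == y).mean()

def shrink(sizes):
    # Progressive structural reduction: remove one unit from the widest hidden layer.
    s = list(sizes)
    i = 1 + int(np.argmax(s[1:-1]))
    s[i] -= 1
    return s

# Toy binary-classification data (hypothetical stand-in for the benchmarks).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

best_sizes, best_score = [8, 16, 2], 0.0
sizes = best_sizes
while min(sizes[1:-1]) > 1:
    # Stochastic exploration: sample several random initializations per topology.
    s = max(score(X, y, random_net(sizes)) for _ in range(20))
    if s >= best_score:
        best_sizes, best_score = sizes, s
    sizes = shrink(sizes)

print(best_sizes, round(best_score, 2))
```

Only after this loop terminates would the single surviving topology `best_sizes` be trained with backpropagation, which is why the total cost can stay below that of training the full-size network.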
Disclosure in the era of generative artificial intelligence
Generative artificial intelligence (AI) has rapidly become embedded in academic writing, assisting with tasks ranging from language editing to drafting text and producing evidence.


