Anticipating biodiversity change is critical in rapidly warming regions, yet challenging because these areas often coincide with poor sampling. Data gaps are widely understood to interfere with species distribution models (SDMs), but that interference is difficult to detect when only biased data are available. We test SDM bias-correction methods with a new occurrence-checklist-range (OCR) validation approach and evaluate prediction discrepancy for ~700 Canadian terrestrial vertebrate species. We found that: 1) bias correction improved model performance against independent (checklist and range) data, but not under typical occurrence cross-validation; 2) predicted richness differed among methods (up to 2.7-fold), especially in the north; and 3) counterintuitively, future projections varied less (by 28%) because well-sampled climate space will shift north. Our findings suggest potentially widespread overconfidence in SDM predictions for the unevenly sampled world, with implications for the growing reliance on biodiversity estimates in planning and policy. OCR validation and measurements of methodological discrepancy offer relatively easy ways to address this.
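The abstract quantifies discrepancy among bias-correction methods as a fold difference in predicted richness (up to 2.7-fold). A minimal Python sketch of one way such a metric could be computed is shown below; this is an illustration only, not the authors' code, and the function name, inputs, and toy numbers are assumptions for the example.

```python
# Hypothetical sketch (not the paper's implementation): per-cell fold
# discrepancy between species-richness maps produced by different SDM
# bias-correction methods.
import numpy as np

def richness_fold_discrepancy(richness_maps):
    """Per-cell fold difference (max / min) across method-specific richness maps.

    richness_maps: array-like of shape (n_methods, n_cells) holding predicted
    species richness per grid cell for each bias-correction method.
    """
    stacked = np.asarray(richness_maps, dtype=float)
    per_cell_max = stacked.max(axis=0)
    per_cell_min = stacked.min(axis=0)
    # Guard against division by zero where a method predicts zero richness.
    fold = np.full(stacked.shape[1], np.nan)
    valid = per_cell_min > 0
    fold[valid] = per_cell_max[valid] / per_cell_min[valid]
    return fold

# Toy example: three hypothetical bias-correction variants over five grid cells.
maps = [
    [10, 12, 30, 5, 8],   # e.g. no bias correction
    [11, 14, 55, 9, 8],   # e.g. spatial thinning
    [12, 15, 81, 11, 9],  # e.g. target-group background
]
print(richness_fold_discrepancy(maps))  # third cell shows a 2.7-fold spread
```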
Surrogate Neural Architecture Codesign Package (SNAC-Pack)
arXiv:2512.15998v1 Announce Type: cross Abstract: Neural Architecture Search is a powerful approach for automating model design, but existing methods struggle to accurately optimize for real

