arXiv:2604.07387v2 Announce Type: replace-cross
Abstract: We present a design automation framework for analog circuit sizing that produces calibrated, topology-specific analytical equations from raw circuit netlists. A large language model (LLM) derives a complete Python sizing function in which each device dimension is traceable to a specific design rationale, a form of interpretable output absent from existing optimization-based and LLM-based sizing methods. A deterministic calibration loop extracts process-dependent parameters from a single DC operating-point simulation, while a prediction-error feedback mechanism compensates for analytical inaccuracies. We validate the framework on circuits ranging from 8 to 30 transistors (spanning two-stage Miller-compensated, current-mirror, folded-cascode, nested Miller-compensated, and complementary class-AB output topologies) across three process nodes (40 nm, 90 nm, 180 nm). On matched-specification benchmarks, including the class-AB opamp case, the framework converges in 2-7 simulations. Despite large initial prediction errors, convergence depends on the measurement-feedback architecture rather than on prediction accuracy. The one-shot calibration automatically captures process-dependent variation, enabling cross-node portability without modification, retraining, or per-process characterization.
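
The prediction-error feedback mechanism described above can be sketched as a simple loop: an analytical model proposes device sizes, a simulation measures the actual performance, and the observed error is folded back into the target until the measured value meets the specification. This is a minimal illustrative sketch, not the paper's implementation; the `predict` and `simulate` stand-ins, the target-shifting update, and all names are assumptions.

```python
# Hypothetical sketch of measurement-feedback sizing: the analytical
# model is deliberately inaccurate (it overestimates gain), yet the
# loop still converges because each iteration corrects the target by
# the measured prediction error.

def size_with_feedback(spec, predict, simulate, max_iters=10, tol=0.01):
    """Iteratively correct an analytical sizing model with measured error."""
    target = spec
    for i in range(1, max_iters + 1):
        sizes = predict(target)        # analytical sizing equations
        measured = simulate(sizes)     # one simulator evaluation
        error = spec - measured        # shortfall vs. the specification
        if abs(error) <= tol * abs(spec):
            return sizes, i            # converged within tolerance
        target += error                # feed the error back into the target
    raise RuntimeError("did not converge")

# Toy stand-ins: the "model" assumes 50x gain per unit W/L, but the
# "simulator" delivers only 80% of the predicted gain.
predict = lambda t: {"W_over_L": t / 50.0}
simulate = lambda s: 0.8 * (50.0 * s["W_over_L"])

sizes, iters = size_with_feedback(100.0, predict, simulate)
```

With these stand-ins the loop reaches the 1% tolerance in a handful of iterations despite the 20% model error, which is the qualitative behavior the abstract reports: convergence rests on the feedback architecture, not on the initial prediction accuracy.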
Disclosure in the era of generative artificial intelligence
Generative artificial intelligence (AI) has rapidly become embedded in academic writing, assisting with tasks ranging from language editing to drafting text and producing evidence. Despite


