arXiv:2512.14722v1 Announce Type: cross
Abstract: At NeurIPS 2024, Kera et al. introduced the use of transformers for computing Groebner bases, central objects in computer algebra with numerous practical applications. In this paper, we improve on this approach by applying Hierarchical Attention Transformers (HATs) to solve systems of multivariate polynomial equations via Groebner basis computation. The HAT architecture incorporates a tree-structured inductive bias that models the hierarchical relationships present in the data and thereby achieves significant computational savings compared to conventional flat attention models. We generalize the architecture to arbitrary depths and provide a detailed computational cost analysis. Combined with curriculum learning, our method solves instances substantially larger than those handled in Kera et al. (2024, Learning to compute Groebner bases).
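To make the computational-savings claim concrete, the following is a minimal, generic sketch of two-level hierarchical attention: tokens attend within fixed-size chunks, chunk summaries then attend globally, so the per-layer cost drops from O(n^2) for flat attention to roughly O(n*c + (n/c)^2) for chunk size c. This is an illustration of the general technique only, not the paper's HAT architecture; the class name, chunk size, and pooling choice are assumptions made for the example.

```python
import torch
import torch.nn as nn


class TwoLevelHierarchicalAttention(nn.Module):
    """Illustrative two-level hierarchical attention (not the paper's HAT):
    local attention within chunks, global attention over chunk summaries."""

    def __init__(self, d_model: int, n_heads: int, chunk_size: int):
        super().__init__()
        self.chunk_size = chunk_size
        self.local_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); seq_len assumed divisible by chunk_size.
        b, n, d = x.shape
        c = self.chunk_size
        # Local level: full attention restricted to each chunk of c tokens.
        chunks = x.reshape(b * n // c, c, d)
        local, _ = self.local_attn(chunks, chunks, chunks)
        # Summarize each chunk by mean pooling (one vector per chunk).
        summaries = local.mean(dim=1).reshape(b, n // c, d)
        # Global level: chunk summaries attend to one another.
        global_out, _ = self.global_attn(summaries, summaries, summaries)
        # Broadcast global context back to token positions and combine.
        global_per_token = global_out.repeat_interleave(c, dim=1)
        return local.reshape(b, n, d) + global_per_token
```

Stacking such levels (chunks of chunks) gives the arbitrary-depth generalization the abstract refers to; each added level trades a further reduction in attention cost for an extra summarization step.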
Magnetoencephalography reveals adaptive neural reorganization maintaining lexical-semantic proficiency in healthy aging
Although semantic cognition remains behaviorally stable with age, neuroimaging studies report age-related alterations in response to semantic context. We aimed to reconcile these inconsistent findings.




