arXiv:2603.19427v1 Announce Type: cross
Abstract: Why do some languages like Czech permit free word order, while others like English do not? We address this question by pretraining transformer language models on a spectrum of synthetic word-order variants of natural languages. We observe that greater word-order irregularity consistently raises model surprisal, indicating reduced learnability. Sentence reversal, however, affects learnability only weakly. A coarse distinction between free-word-order languages (e.g., Czech and Finnish) and fixed-word-order languages (e.g., English and French) does not explain the cross-lingual variation. Instead, the structure of the word and subword vocabulary strongly predicts model surprisal. Overall, vocabulary structure emerges as a key driver of computational word-order learnability across languages.
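A minimal sketch of the surprisal measure this abstract relies on, assuming a HuggingFace causal language model; the model name ("gpt2"), the helper names, and the whole-sentence reversal transform are illustrative stand-ins, not the paper's actual setup, since the paper pretrains its own models on synthetic variants:

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model: the paper pretrains transformers on synthetic
# word-order variants; an off-the-shelf LM is used here only to
# illustrate the surprisal computation itself.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_surprisal_bits(sentence: str) -> float:
    """Average per-token surprisal, -log2 p(token | prefix), under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean
        # cross-entropy over predicted tokens, in nats.
        loss = model(ids, labels=ids).loss
    return loss.item() / math.log(2)  # nats -> bits

def reverse_words(sentence: str) -> str:
    """One synthetic word-order variant: whole-sentence reversal."""
    return " ".join(reversed(sentence.split()))

original = "the cat sat on the mat"
print(mean_surprisal_bits(original))                 # baseline word order
print(mean_surprisal_bits(reverse_words(original)))  # reversed variant
```

Higher mean surprisal on a variant indicates the model finds that word order harder to predict, which is the learnability signal the abstract describes.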
How Open Must Language Models be to Enable Reliable Scientific Inference?
arXiv:2603.26539v1 Announce Type: cross
Abstract: How does the extent to which a model is open or closed impact the scientific inferences that can be drawn