arXiv:2511.17161v1 Announce Type: cross
Abstract: This paper describes the instruction dataset used to fine-tune a set of transformer-based large language models (LLMs) developed in the PLLuM (Polish Large Language Model) project. We present a functional typology of the organic, converted, and synthetic instructions used in PLLuM and share observations on the implications of using human-authored versus synthetic instruction datasets in the linguistic adaptation of base LLMs. Additionally, we release the first representative subset of the PLLuM instruction corpus (PLLuMIC), which we believe will be useful for guiding and planning the development of similar datasets for other LLMs.
Mucin-type O-glycans regulate proteoglycan stability and chondrocyte maturation
O-glycosylation is a ubiquitous post-translational modification essential for protein stability, cell signaling, and tissue organization, yet how distinct O-glycan subclasses coordinate tissue development remains unclear.
