Generalizing from previous experience in value-based tasks requires discovering and exploiting hidden structure to draw inferences beyond direct observations, and thereby to update the values of states. We combined behavior, functional magnetic resonance imaging, and computational modeling to test how latent cause inference (LCI) guides neural replay, enabling fast generalization. Over two days, fifty-two participants performed a sequential decision task in which multiple visual sequences either shared a latent reward source or were associated with independent rewards. On day 1, participants learned this correlation structure and exploited it to achieve one-shot value generalization after reversals. Multivariate decoding and sequentiality analyses revealed backward replay in visual cortex during rest intervals that was selective to unobserved reward-linked sequences, consistent with non-local value updating. In parallel, the representation of the abstract reward structure in the medial temporal lobe (MTL) increased from early to late blocks on day 1. On day 2, we covertly changed which sequences shared rewards. Participants flexibly reorganized their generalization, and replay patterns adapted in parallel with a reorganization of MTL representations. An LCI model captured both the initial learning trajectory and the adaptation to changes in latent structure. It also captured individual differences in structure learning, and its value updating of unobserved states predicted trial-wise fluctuations in replay strength. These results provide a mechanistic account in which latent structure discovery and replay interact to propagate value and enable rapid, flexible generalization.
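The core computational claim, that a reward observed for one sequence updates the values of unobserved sequences inferred to share its latent cause, can be illustrated with a minimal sketch. This is a hypothetical toy implementation for exposition only: the function name, the fixed cause assignments, and the delta-rule update with a shared learning rate are all assumptions, not the authors' actual model.

```python
def update_values(values, cause_of, observed_seq, reward, lr=1.0):
    """Propagate a reward observed for one sequence to every sequence
    sharing its inferred latent cause (a non-local value update).

    values: dict mapping sequence -> current value estimate
    cause_of: dict mapping sequence -> inferred latent cause label
    """
    cause = cause_of[observed_seq]
    updated = dict(values)
    for seq, c in cause_of.items():
        if c == cause:
            # Sequences tied to the same latent reward source inherit
            # the update, yielding one-shot generalization after reversal.
            updated[seq] += lr * (reward - updated[seq])
    return updated

# Toy example: sequences A and B share latent cause 0; C has its own cause 1.
cause_of = {"A": 0, "B": 0, "C": 1}
values = {"A": 1.0, "B": 1.0, "C": 0.5}

# A reversal is observed only on sequence A (reward drops to 0) ...
values = update_values(values, cause_of, "A", reward=0.0)
# ... yet B's value is updated as well, while C is untouched.
```

With a learning rate of 1.0 the update is a full one-shot reversal: both A and B drop to 0.0 after the single observation on A, while C keeps its value, mirroring the selective, non-local updating the replay analyses are argued to implement.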




