The initial stage of learning motor skills involves exploring vast action spaces, making it impractical to learn the value of every possible action independently. This poses a challenge for standard reinforcement learning approaches, which excel in constrained domains but struggle when the space of possible actions is high-dimensional. Recent work in machine learning has sought to mitigate this problem by combining deep reinforcement learning with a supervised learning system that reduces the complexity of the control-policy space by learning low-dimensional embeddings of the action space. Here, we propose that in the mammalian brain, the cortico-cerebellar network learns these low-dimensional action embeddings in a supervised way, while the basal ganglia learn values and policies in this action-embedding space using reinforcement learning. We trained this model on reaching tasks and show that, contrary to traditional models of the basal ganglia, it recapitulates features of neural activity whereby similar reaching movements are associated with similar neural activity patterns in the basal ganglia. We also demonstrate a link between learning these low-dimensional action embeddings and both generalisation and the limits of multi-task adaptation in human behavioural studies. Through this framework, we propose a novel computational view of how key motor regions of the brain interact to efficiently learn a new skill.
Magnetoencephalography reveals adaptive neural reorganization maintaining lexical-semantic proficiency in healthy aging
Although semantic cognition remains behaviorally stable with age, neuroimaging studies report age-related alterations in response to semantic context. We aimed to reconcile these inconsistent findings.