Why motor learning involves multiple systems: an algorithmic perspective

The initial stage of learning motor skills involves exploring vast action spaces, making it impractical to learn the value of every possible action independently. This poses a challenge for standard reinforcement learning approaches, which excel in constrained domains but struggle when the space of possible actions is high-dimensional. Recent work in machine learning has sought to mitigate this problem by combining deep reinforcement learning with a supervised learning system that reduces the complexity of the control policy space by learning low-dimensional embeddings of the action space. Here, we propose that in the mammalian brain, the cortico-cerebellar network learns these low-dimensional action embeddings in a supervised way, while the basal ganglia learn values and policies in this action-embedding space using reinforcement learning. We trained this model on reaching tasks and show that, contrary to traditional models of the basal ganglia, it recapitulates features of neural activity whereby similar reaching movements are associated with similar neural activity patterns in the basal ganglia. We also demonstrate a link between learning these low-dimensional action embeddings and both generalisation and the limits of multi-task adaptation in human behavioural studies. Through this framework, we propose a novel computational view of how key motor regions of the brain interact to efficiently learn a new skill.
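The division of labour described above can be sketched in a toy numerical example. The code below is a minimal illustration, not the paper's actual model: a "cortico-cerebellar" stage learns a low-dimensional embedding of high-dimensional reaching actions from demonstrations (here via PCA, as a simple proxy for supervised embedding learning), and a "basal ganglia" stage then searches for a rewarding action in that 2-D embedding space rather than in the full 20-D action space. The dimensions, data, and hill-climbing search are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised stage (stand-in for the cortico-cerebellar network) ---
# Hypothetical setup: 20-D "muscle" actions that actually lie on a
# 2-D subspace, as reaching movements might.
ACTION_DIM, EMBED_DIM = 20, 2
true_basis = rng.standard_normal((ACTION_DIM, EMBED_DIM))
demos = rng.standard_normal((200, EMBED_DIM)) @ true_basis.T  # demo actions

# Learn the low-dimensional action embedding from demonstrations
# (PCA via SVD; a proxy for the supervised learning system).
centered = demos - demos.mean(0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
encoder = Vt[:EMBED_DIM]       # action (20-D) -> embedding (2-D)
decoder = Vt[:EMBED_DIM].T     # embedding (2-D) -> action (20-D)

# --- Reinforcement stage (stand-in for the basal ganglia) ---
# Reward is the negative distance between the decoded action and a
# target reach; search happens only in the 2-D embedding space.
target = true_basis @ np.array([1.0, -0.5])

def reward(z):
    return -np.linalg.norm(decoder @ z - target)

z = np.zeros(EMBED_DIM)
for _ in range(500):
    candidate = z + 0.1 * rng.standard_normal(EMBED_DIM)
    if reward(candidate) > reward(z):  # keep perturbations that improve reward
        z = candidate
# After the search, decoder @ z approximates the 20-D target reach,
# even though the learner only ever explored 2 dimensions.
```

The point of the sketch is the efficiency argument from the abstract: random search in 2 dimensions converges quickly, whereas the same naive search in the raw 20-D action space would be far slower, which is why factoring control into embedding learning plus reinforcement learning helps.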

Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK. Registration number 16808844.