arXiv:2601.20831v1 Announce Type: new
Abstract: Foundation models rely on in-context learning for personalized decision making. The limited size of the context window necessitates memory compression and retrieval systems such as RAG. These systems, however, often treat memory as a large offline store, which is ill-suited to embodied agents that must operate online under strict memory and compute constraints. In this work, we propose MemCtrl, a novel framework that uses Multimodal Large Language Models (MLLMs) to prune memory online. MemCtrl augments MLLMs with a trainable memory head mu that acts as a gate, determining which observations or reflections to retain, update, or discard during exploration. We evaluate two ways of training mu, 1) via an offline expert, and 2) via online RL, and observe significant improvement in the overall embodied task completion ability of mu-augmented MLLMs. In particular, after augmenting two low-performing MLLMs with MemCtrl on multiple subsets of the EmbodiedBench benchmark, we observe that mu-augmented MLLMs improve by around 16% on average, and by over 20% on specific instruction subsets. Finally, we present a qualitative analysis of the memory fragments collected by mu, noting the superior performance of mu-augmented MLLMs on long and complex instruction types.
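The gating behavior described in the abstract lends itself to a short illustration. The sketch below is not the paper's implementation: the embedding dimension, the mean-pooled memory summary, the nearest-neighbor merge for the update action, and all class and variable names (MemoryHead, BoundedMemory, capacity) are assumptions; the abstract only specifies a trainable head mu that chooses among retain, update, and discard for each fragment under a bounded online memory budget.

```python
# Minimal sketch of a trainable memory gate in the spirit of MemCtrl's "mu".
# All names, dimensions, and design choices here are assumptions drawn only
# from the abstract, not from the paper's actual code.
import torch
import torch.nn as nn

RETAIN, UPDATE, DISCARD = 0, 1, 2

class MemoryHead(nn.Module):
    """Scores an incoming fragment against a summary of the current memory
    and emits logits over {retain, update, discard} (hypothetical design)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, 3),  # one logit per action
        )

    def forward(self, fragment: torch.Tensor, memory_summary: torch.Tensor) -> torch.Tensor:
        return self.scorer(torch.cat([fragment, memory_summary], dim=-1))

class BoundedMemory:
    """Fixed-capacity online memory; the head decides each fragment's fate."""
    def __init__(self, head: MemoryHead, capacity: int = 32, dim: int = 512):
        self.head, self.capacity, self.dim = head, capacity, dim
        self.buffer: list[torch.Tensor] = []

    def _summary(self) -> torch.Tensor:
        # Mean-pool the buffer as a cheap stand-in for a learned summary.
        if not self.buffer:
            return torch.zeros(self.dim)
        return torch.stack(self.buffer).mean(dim=0)

    @torch.no_grad()
    def step(self, fragment: torch.Tensor) -> int:
        action = int(self.head(fragment, self._summary()).argmax())
        if action == RETAIN:
            self.buffer.append(fragment)
            if len(self.buffer) > self.capacity:
                self.buffer.pop(0)  # evict the oldest entry when full
        elif action == UPDATE and self.buffer:
            # Merge the fragment into its nearest neighbor in memory.
            sims = torch.stack([f @ fragment for f in self.buffer])
            i = int(sims.argmax())
            self.buffer[i] = 0.5 * (self.buffer[i] + fragment)
        # DISCARD: drop the fragment outright.
        return action

# Usage: gate a stream of observation embeddings during exploration.
memory = BoundedMemory(MemoryHead())
for _ in range(100):
    memory.step(torch.randn(512))
print(len(memory.buffer))  # never exceeds the capacity of 32
```

A head of this shape could plausibly be trained in either of the two regimes the abstract evaluates: by imitating an offline expert's retain/update/discard labels (cross-entropy on the logits), or as an online RL policy with downstream task success as the reward.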


