arXiv:2604.21229v1 Announce Type: cross
Abstract: Large language model assistants are increasingly expected to retain and reason over information accumulated across many sessions. We introduce EngramaBench, a benchmark for long-term conversational memory built around five personas, one hundred multi-session conversations, and one hundred fifty queries spanning factual recall, cross-space integration, temporal reasoning, adversarial abstention, and emergent synthesis. We evaluate Engrama, a graph-structured memory system, against GPT-4o full-context prompting and Mem0, an open-source vector-retrieval memory system. All three use the same answering model (GPT-4o), isolating the effect of memory architecture. GPT-4o full-context achieves the highest composite score (0.6186), while Engrama scores 0.5367 globally but is the only system to score higher than full-context prompting on cross-space reasoning (0.6532 vs. 0.6291, n=30). Mem0 is cheapest but substantially weaker (0.4809). Ablations reveal that the components driving Engrama’s cross-space advantage trade off against global composite score, exposing a systems-level tension between structured memory specialization and aggregate optimization.