TheSequence

Edge 323: Types of Memory-Augmentation in Foundation Models
Not all LLM memories are created equal.

Sep 05, 2023

Created Using Midjourney

In this Issue:

  1. Different types of memory in foundation models.

  2. Google’s breakthrough paper that demonstrates that memory-augmented LLMs are computationally universal.

  3. A review of the Chroma vector database.

💡 ML Concept of the Day: Types of Memory-Augmentation in Foundation Models

Memory is a key component of modern LLM architectures and one that is rapidly being incorporated into LLM frameworks. Memory allows LLMs to expand their contextual knowledge of a conversation by retrieving concepts from prior interactions. Conceptually, LLM memory can be seen as a set of context-target pairs that are aggregated to obtain next-token probabilities. There are many ways to incorporate memory into LLMs, each with different effects. Most forms of LLM memory fall into three main categories:
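The "context-target pairs" view above can be made concrete with a minimal sketch in the style of retrieval-augmented language modeling (as in kNN-LM): stored context vectors are matched against the current query, and the targets of the nearest neighbors are aggregated into a memory distribution that is interpolated with the model's own next-token probabilities. All names and parameters here (`memory_keys`, `lam`, etc.) are illustrative, not from any specific framework.

```python
import numpy as np

def memory_augmented_next_token(query, memory_keys, memory_targets,
                                base_probs, vocab_size, k=2, lam=0.5):
    """Blend a retrieved-memory distribution with the model's own distribution.

    query          -- vector representing the current context
    memory_keys    -- (N, d) array of stored context vectors
    memory_targets -- list of N token ids (the target of each stored context)
    base_probs     -- the LLM's own next-token distribution (length vocab_size)
    lam            -- interpolation weight given to the memory distribution
    """
    # Distance from the current context to every stored context.
    dists = np.linalg.norm(memory_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Softmax over negative distances: closer contexts get more weight.
    weights = np.exp(-dists[nearest])
    weights /= weights.sum()

    # Aggregate neighbor weights into a distribution over the vocabulary,
    # placing each neighbor's weight on its stored target token.
    mem_probs = np.zeros(vocab_size)
    for w, idx in zip(weights, nearest):
        mem_probs[memory_targets[idx]] += w

    # Interpolate the memory distribution with the model's own distribution.
    return lam * mem_probs + (1 - lam) * base_probs

# Toy usage: three stored context-target pairs, a vocabulary of 4 tokens.
memory_keys = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
memory_targets = [2, 2, 0]
base_probs = np.ones(4) / 4  # uniform stand-in for the model's distribution
probs = memory_augmented_next_token(np.array([0.1, 0.1]), memory_keys,
                                    memory_targets, base_probs, vocab_size=4)
```

Here the two nearest stored contexts both point to token 2, so the blended distribution shifts probability mass toward it; `lam` controls how strongly memory overrides the base model.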

This post is for paid subscribers

© 2025 Jesus Rodriguez