Edge 323: Types of Memory-Augmentation in Foundation Models
Not all LLM memories are created equal.
In this Issue:
Different types of memory in foundation models.
Google’s breakthrough paper demonstrating that memory-augmented LLMs are computationally universal.
A review of the Chroma vector database.
💡 ML Concept of the Day: Types of Memory-Augmentation in Foundation Models
Memory is a key component of modern LLM architectures and one that is rapidly being incorporated into LLM frameworks. Memory allows LLMs to expand their contextual knowledge of a conversation by retrieving concepts from past interactions. Conceptually, LLM memory can be seen as a set of context-target pairs that can be aggregated to obtain next-token probabilities. There are many ways to incorporate memory into LLMs, and they can have very different effects. Most forms of LLM memory fall into three main categories:
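The context-target-pair view can be made concrete with a small sketch: store (context embedding, next token) pairs, retrieve the nearest stored contexts for a query, and aggregate their targets into next-token probabilities. This is a toy illustration under assumed toy embeddings; the `MemoryStore` class and its method names are hypothetical, not any real library's API.

```python
import math

class MemoryStore:
    """Toy LLM memory: a set of (context vector, target token) pairs."""

    def __init__(self):
        self.entries = []  # list of (context_vector, target_token) pairs

    def add(self, context_vec, target_token):
        self.entries.append((context_vec, target_token))

    def next_token_probs(self, query_vec, k=3):
        # Rank stored contexts by squared distance to the query, keep the
        # k nearest, then aggregate their targets into next-token
        # probabilities using softmax-style distance weights.
        def sq_dist(vec):
            return sum((a - b) ** 2 for a, b in zip(vec, query_vec))

        nearest = sorted(self.entries, key=lambda e: sq_dist(e[0]))[:k]
        weights = [math.exp(-sq_dist(vec)) for vec, _ in nearest]
        total = sum(weights)
        probs = {}
        for (_, tok), w in zip(nearest, weights):
            probs[tok] = probs.get(tok, 0.0) + w / total
        return probs

# Usage: two stored contexts point to "cat", one to "dog".
mem = MemoryStore()
mem.add([1.0, 0.0], "cat")
mem.add([0.9, 0.1], "cat")
mem.add([0.0, 1.0], "dog")
probs = mem.next_token_probs([1.0, 0.0], k=2)
print(max(probs, key=probs.get))  # → cat
```

Real systems replace the toy lists with learned embeddings and an approximate nearest-neighbor index, and interpolate the retrieved distribution with the base model's own next-token distribution, but the aggregation idea is the same.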