TheSequence

Edge 305: In-Context Retrieval-Augmented Language Models
Can we augment the knowledge of LLMs with external information without modifying their architecture?

Jul 04, 2023
Created Using Midjourney

In this Issue:

  1. The concept of in-context retrieval-augmented language models (RALM).

  2. AI21's original paper on in-context RALM.

  3. The Humanloop platform.

💡 ML Concept of the Day: In-Context Retrieval-Augmented Language Models

Retrieval-augmented language models (RALMs) have been the subject of the last few issues of this series. Conceptually, RALMs aim to augment the knowledge of an LLM with external information. Most common RALM architectures rely on modifying an existing LLM to incorporate the external data, an approach that is not always practical and that requires significant computational resources. In-context RALM takes a simpler path: instead of changing the model, it retrieves relevant documents and prepends them to the input prompt, so the LLM can condition on the external knowledge at inference time.
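The core mechanic can be sketched in a few lines. This is a minimal illustration, not AI21's implementation: the toy corpus, the word-overlap `score` function, and the prompt template are all assumptions made for the example. The key point is that the language model itself is never modified; only the prompt changes.

```python
def score(query: str, doc: str) -> int:
    # Toy relevance score: count of overlapping words between
    # query and document (a stand-in for a real retriever).
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d)

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Return the k highest-scoring documents for the query.
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str], k: int = 1) -> str:
    # In-context RALM: prepend the retrieved passages to the query
    # and hand the combined text to an unmodified LLM.
    context = "\n".join(retrieve(query, corpus, k))
    return f"{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical two-document corpus for illustration.
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language.",
]
prompt = build_prompt("Where is the Eiffel Tower located?", corpus)
print(prompt)
```

In a real deployment, `score`/`retrieve` would be a dense or sparse retriever over a large document index, and `prompt` would be passed to any off-the-shelf LLM, which is exactly what makes the approach attractive: no retraining or architectural changes are needed.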

© 2025 Jesus Rodriguez