TheSequence

Edge 301: Retrieval-Augmented Language Models Methods

The ideas for decoupling model knowledge from language generation.

Jun 20, 2023
∙ Paid

Created Using Midjourney

In this Issue:

  1. Retrieval-augmented language models.

  2. Google’s REALM paper.

  3. Ray’s new support for foundation models.

💡 ML Concept of the Day: Retrieval-Augmented Language Models

Augmenting the knowledge of large language models (LLMs) is one of the most active areas of research in the foundation model space. In previous editions of this series, we discussed tool-augmentation methods that integrate LLMs with consumer and business applications. A complementary family of methods, known as retrieval-augmented LLMs, uses an external knowledge source, such as a document corpus or knowledge graph, to augment the capabilities of an LLM. The best-known examples of retrieval-augmented LLMs are integrations with search engines such as Bing or Google.
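To make the idea concrete, here is a minimal, self-contained sketch of the retrieval-augmented pattern: fetch the most relevant document from an external knowledge source, then prepend it to the prompt so generation can condition on it. The word-overlap retriever and the final prompt format are illustrative assumptions (production systems typically use dense embeddings and a vector index, and the prompt would be sent to a real LLM).

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words representation of a text."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k documents with the highest word overlap with the query.
    Stand-in for a dense-embedding retriever over an external corpus."""
    scored = sorted(
        corpus,
        key=lambda doc: len(tokenize(doc) & tokenize(query)),
        reverse=True,
    )
    return scored[:k]

def augmented_prompt(query: str, corpus: list[str]) -> str:
    """Decouple knowledge from generation: retrieve context first,
    then build the prompt an LLM would actually answer."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Tiny illustrative corpus (hypothetical knowledge source).
corpus = [
    "REALM augments language model pretraining with a latent knowledge retriever.",
    "Ray is a framework for scaling Python and machine learning workloads.",
]
prompt = augmented_prompt("What does REALM add to pretraining?", corpus)
print(prompt)
```

Because the knowledge lives in the corpus rather than in model weights, it can be updated or swapped without retraining the model, which is the decoupling this issue is about.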

© 2025 Jesus Rodriguez