Edge 305: In-Context Retrieval-Augmented Language Models
Can we augment the knowledge of LLMs with external information without modifying their architecture?
In this Issue:
The concept of in-context retrieval-augmented language models (RALM).
AI21's original paper on in-context RALM.
The Humanloop platform.
💡 ML Concept of the Day: In-Context Retrieval-Augmented Language Models
Retrieval-augmented language models (RALMs) have been the subject of the last few issues of this series. Conceptually, RALMs aim to augment the knowledge of an LLM with external information. Most common RALM architectures rely on modifying an existing LLM to incorporate the external data. This approach is not always practical and requires significant computational resources.
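The in-context alternative sidesteps architectural changes entirely: retrieved documents are simply prepended to the model's input prompt. The following is a minimal sketch of that idea, with a naive word-overlap retriever standing in for a real dense or BM25 retriever; the function names (`retrieve`, `build_prompt`) and the toy corpus are illustrative assumptions, not part of the paper.

```python
# In-context retrieval augmentation, sketched: the LLM is untouched;
# retrieved passages are prepended to the prompt as plain text.

def retrieve(query, corpus, k=2):
    """Rank passages by naive word overlap with the query.

    A stand-in for a real retriever (e.g. BM25 or dense embeddings)."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus, k=2):
    """Prepend the top-k retrieved passages to the query.

    The augmented string is what gets fed to the (unmodified) LLM."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"{context}\n\nQuestion: {query}\nAnswer:"

# Toy corpus for illustration only.
corpus = [
    "Retrieval augmentation grounds LLM outputs in external documents.",
    "Transformers process sequences with self-attention.",
    "In-context RALM prepends retrieved text to the model input.",
]
prompt = build_prompt("How does in-context retrieval augmentation work?", corpus)
```

The resulting `prompt` string would then be sent to any off-the-shelf LLM, which is exactly the appeal: no fine-tuning, no architectural surgery, just a different input.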