Understanding ReAct (Reason + Act) in LLMs


Language models that can both reason and make decisions.

Jul 25, 2023

Created Using Midjourney

In this Issue:

  1. Understanding ReAct.

  2. Google’s original ReAct paper.

  3. The Haystack framework for LLM-based search.

💡 ML Concept of the Day: What is ReAct

In the last few editions of this series on modern techniques in transformer models, we have explored the concepts of reasoning and acting. Advances in language models (LMs) have extended their range of applications to downstream tasks. When prompted with a chain-of-thought approach, these models exhibit emergent multi-step reasoning capabilities: they derive answers from questions and show proficiency in diverse areas such as arithmetic, commonsense, and symbolic reasoning. Conversely, other recent research uses pre-trained language models to plan and execute actions in interactive environments spanning text games, web navigation, embodied tasks, and robotics. This line of work focuses on mapping text contexts to text actions by leveraging the internal knowledge of the language model. However, these action-only approaches fail to reason abstractly about high-level goals or to maintain a working memory that supports decision-making over extended horizons.
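
ReAct bridges these two lines of work by interleaving free-form reasoning traces ("Thoughts") with environment "Actions," feeding the resulting "Observations" back into the model's context. The snippet below is a minimal sketch of such a loop, assuming a hypothetical `call_llm` function and a toy `search` tool; the prompt format and tool names are illustrative, not the exact protocol from the original paper.

```python
# Minimal ReAct-style loop sketch. `call_llm` and the `search` tool are
# hypothetical stand-ins; a real implementation would call an actual LLM
# API and real tools (search engines, calculators, environments, etc.).

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; returns canned ReAct-style output."""
    if "Observation:" in prompt:
        return "Thought: I have enough information.\nFinal Answer: Reason + Act."
    return "Thought: I should look this up.\nAction: search[ReAct]"

TOOLS = {
    # Toy tool: a real agent would query a search API here.
    "search": lambda query: f"(stub) top result for '{query}'",
}

def react_loop(question: str, max_steps: int = 3) -> str:
    # The transcript interleaves Thoughts, Actions, and Observations,
    # which is the core idea behind ReAct.
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)
        transcript += step + "\n"
        if "Action:" not in step:
            break  # the model produced a final answer instead of an action
        # Parse "Action: tool[argument]" from the model output.
        action = step.split("Action:")[-1].strip()
        tool_name, _, arg = action.partition("[")
        arg = arg.rstrip("]")
        tool = TOOLS.get(tool_name.strip())
        observation = tool(arg) if tool else f"Unknown tool: {tool_name}"
        # Feed the observation back so the next Thought can condition on it.
        transcript += f"Observation: {observation}\n"
    return transcript

if __name__ == "__main__":
    print(react_loop("What does ReAct stand for?"))
```

Running the sketch prints a short trace of Question → Thought → Action → Observation → Thought → Final Answer, which is the interaction pattern the technique is named for.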
