Understanding ReAct (Reason + Act) in LLMs
Language models that can both reason and make decisions.
In this Issue:
Understanding ReAct.
Google’s original ReAct paper.
The Haystack framework for LLM-based search.
💡 ML Concept of the Day: What is ReAct?
In the last few editions of this series on modern techniques in transformer models, we have explored both reasoning and acting. Advances in language models (LMs) have extended their range of applications to many downstream tasks. When prompted with a chain-of-thought approach, these models exhibit emergent capabilities for self-conditioned reasoning: they derive answers step by step from questions, showing proficiency in areas such as arithmetic, commonsense, and symbolic reasoning. In parallel, another line of research uses pre-trained language models to plan and execute actions in interactive environments such as text games, web navigation, embodied tasks, and robotics. This line of work focuses on mapping text contexts to text actions by leveraging the model's internal knowledge. However, these action-only approaches neither reason abstractly about high-level goals nor maintain a working memory that supports decision-making over extended horizons.
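ReAct closes this gap by interleaving reasoning traces with actions and feeding the resulting observations back into the model's context. To make that Thought → Action → Observation pattern concrete, below is a minimal Python sketch of a ReAct-style loop. The `llm` callable, the `calculate`/`finish` action names, and the prompt template are illustrative assumptions for this sketch, not anything prescribed by the original paper or a specific framework.

```python
import re

# Assumptions for this sketch (not from the article):
# - `llm(prompt)` stands in for whatever text-completion call you already use.
# - `TOOLS` maps action names to plain Python functions the agent may invoke.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculate": calculator}

REACT_PROMPT = """Answer the question by interleaving Thought, Action and Observation steps.
Available actions: calculate[<expression>], finish[<answer>].

Question: {question}
{scratchpad}"""

def react_loop(llm, question: str, max_steps: int = 5) -> str:
    """Minimal ReAct loop: the model reasons (Thought), picks an Action,
    and the resulting Observation is appended to its working context."""
    scratchpad = ""
    for _ in range(max_steps):
        completion = llm(REACT_PROMPT.format(question=question, scratchpad=scratchpad))
        # Expect something like: "Thought: ...\nAction: calculate[12 * 7]"
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", completion)
        if not match:
            break
        action, argument = match.group(1), match.group(2)
        if action == "finish":
            return argument  # the model decided it has the final answer
        observation = TOOLS[action](argument) if action in TOOLS else "unknown action"
        scratchpad += f"{completion}\nObservation: {observation}\n"
    return "no answer within step budget"
```

The scratchpad is what gives the agent the working memory that action-only approaches lack: every prior thought, action, and observation remains visible to the model when it decides on the next step.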