Edge 380: Inside SELF-Discover: Google DeepMind's LLM Reasoning Method for Solving Complex Tasks
The technique addresses some of the limitations of prompting methods such as chain-of-thought.
Reasoning continues to evolve as one of the most fascinating areas in generative AI, with research papers pushing the boundaries of our imagination. Chain-of-thought (CoT), tree-of-thought (ToT), and System 2 approaches are among the recent LLM reasoning techniques exploring the ability of LLMs to break down complex problems. Recently, researchers from Google DeepMind published a paper outlining SELF-DISCOVER, a novel take on LLM reasoning.
As mentioned before, there is no lack of reasoning methods in the LLM space, but DeepMind's seems to have been inspired by the way humans tackle reasoning problems. The researchers looked at methods like few-shot and zero-shot chain-of-thought prompting, which mimic the human approach of solving problems step by step. Another method, decomposition-based prompting, draws on the human ability to break a complex problem into smaller, manageable parts and then address each part individually. Additionally, they explored step-back prompting, which reflects on the nature of the task to derive general principles for solving it.
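The prompting styles above can be illustrated with simple templates. This is a minimal sketch: the wording of each template is an assumption for illustration, not the exact prompts used in the SELF-DISCOVER paper or its baselines.

```python
# Illustrative prompt templates for the reasoning styles mentioned above.
# The template wording is hypothetical; real systems would send these
# strings to an LLM API and parse the model's response.

TASK = "A store sells apples at 3 for $2. How much do 9 apples cost?"

def zero_shot_cot(task: str) -> str:
    """Zero-shot chain-of-thought: append a step-by-step trigger phrase."""
    return f"{task}\nLet's think step by step."

def decomposition(task: str) -> str:
    """Decomposition-based prompting: ask for sub-problems first."""
    return (
        "Break the following problem into smaller sub-problems, "
        "solve each one, then combine the answers.\n"
        f"Problem: {task}"
    )

def step_back(task: str) -> str:
    """Step-back prompting: derive a general principle before solving."""
    return (
        "First, state the general principle behind this problem. "
        "Then apply it to solve the problem.\n"
        f"Problem: {task}"
    )

for build in (zero_shot_cot, decomposition, step_back):
    print(build(TASK), end="\n\n")
```

Each function only assembles the prompt text; the contrast between them shows how the three styles steer the model differently before any reasoning happens.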