TheSequence

Edge 380: Inside SELF-Discover: Google DeepMind's LLM Reasoning Method for Solving Complex Tasks
The technique addresses some of the limitations of prompting methods such as chain-of-thought.

Mar 21, 2024
Created Using Ideogram

Reasoning continues to evolve as one of the most fascinating areas in generative AI, with research papers pushing the boundaries of our imagination. Chain-of-thought (CoT), tree-of-thought (ToT), and System 2 are among the recent LLM reasoning techniques exploring the ability of LLMs to break down complex problems. Recently, researchers from Google DeepMind published a paper outlining SELF-DISCOVER, a novel take on LLM reasoning.

As mentioned before, there is no lack of reasoning methods in the LLM space, but DeepMind's seems to have been inspired by the way humans tackle reasoning problems. The researchers looked at methods like few-shot and zero-shot chain-of-thought prompting, which mimic the human approach of solving problems step by step. Another method, decomposition-based prompting, draws on the human ability to break a complex problem into smaller, manageable parts and then address each part individually. Additionally, they explored step-back prompting, which reflects on the nature of the task to derive general principles for solving it.
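The three prompting families above can be sketched as simple prompt builders. This is an illustrative sketch, not DeepMind's code; the template wording and function names are hypothetical examples, not the exact phrasing from any paper.

```python
# Hypothetical prompt builders for the three prompting styles described above.
# The wording of each template is an illustrative assumption.

def zero_shot_cot(task: str) -> str:
    # Zero-shot chain-of-thought: append a step-by-step cue to the task.
    return f"{task}\nLet's think step by step."

def decomposition(task: str, subquestions: list[str]) -> str:
    # Decomposition-based prompting: break the task into sub-questions,
    # then ask the model to answer them in order.
    steps = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(subquestions))
    return f"To solve: {task}\nFirst answer these sub-questions:\n{steps}"

def step_back(task: str) -> str:
    # Step-back prompting: first elicit the general principle behind the
    # task, then apply it to the specific instance.
    return (
        "Before solving, state the general principle this task relies on.\n"
        f"Then use that principle to solve: {task}"
    )

if __name__ == "__main__":
    print(zero_shot_cot("What is 17 * 24?"))
```

Each builder returns a plain string that would be sent to an LLM; SELF-DISCOVER, as described later, goes a step further by composing such reasoning modules into a task-specific structure rather than picking one fixed template.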

© 2025 Jesus Rodriguez