Edge 371: Two-Step LLM Reasoning with Skeleton of Thoughts

Created by Microsoft Research, the technique models aspects of human cognitive reasoning in LLMs.

Feb 20, 2024
Created Using DALL-E

In this Issue:

  1. An overview of Skeleton-of-Thoughts (SoT) for LLM reasoning.

  2. Microsoft’s original SoT paper.

  3. Dify framework for building LLM apps.

💡 ML Concept of the Day: Understanding Skeleton-of-Thoughts

The Skeleton-of-Thoughts (SoT) technique, a recent innovation in the field of Large Language Models (LLMs), represents a significant shift in how these models process and generate information. SoT was originally aimed at reducing end-to-end inference latency in LLMs, but its results have had a profound impact on the reasoning space. SoT is grounded in the observation that human thought and response patterns are often non-linear. Unlike traditional LLMs, which generate responses sequentially, SoT introduces a two-stage process for answer generation. First, the LLM formulates a basic outline, or 'skeleton,' of the response that captures the key points of the answer. The model then elaborates on each point in the skeleton in parallel, rather than one after the other.
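
To make the two stages concrete, here is a minimal Python sketch of the SoT flow. It is an illustration under stated assumptions, not the paper's implementation: `llm_call` is a hypothetical placeholder for a single completion request to whatever LLM API you use, and the two prompt templates only paraphrase the spirit of the skeleton and point-expansion prompts described in the paper.

```python
import concurrent.futures


def llm_call(prompt: str) -> str:
    """Hypothetical placeholder for one completion request to an LLM API."""
    raise NotImplementedError


# Stage-1 prompt: ask only for a short, numbered outline of the answer.
SKELETON_PROMPT = (
    "Provide only a skeleton of the answer to the question below: "
    "3-8 short numbered points, one per line, a few words each.\n"
    "Question: {question}\nSkeleton:"
)

# Stage-2 prompt: expand a single skeleton point, nothing else.
EXPAND_PROMPT = (
    "Given the question and the skeleton of its answer below, expand ONLY "
    "point {index} into 1-2 sentences. Do not discuss other points.\n"
    "Question: {question}\nSkeleton:\n{skeleton}\nPoint {index}:"
)


def skeleton_of_thought(question: str) -> str:
    # Stage 1: sequentially generate the concise skeleton of the answer.
    skeleton = llm_call(SKELETON_PROMPT.format(question=question))
    points = [line for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand every skeleton point concurrently. The expansion
    # requests are independent of each other, so they can run in parallel.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        expansions = list(
            pool.map(
                lambda i: llm_call(
                    EXPAND_PROMPT.format(
                        question=question, skeleton=skeleton, index=i + 1
                    )
                ),
                range(len(points)),
            )
        )

    # Assemble the final answer by pairing each point with its expansion.
    return "\n".join(f"{p} {e}" for p, e in zip(points, expansions))
```

Because each point-expansion call depends only on the question and the skeleton, stage two can be dispatched concurrently (here with a simple thread pool), which is where SoT's end-to-end latency savings come from.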
