Edge 371: Two-Step LLM Reasoning with Skeleton of Thoughts
Created by Microsoft Research, the technique models aspects of human cognitive reasoning in LLMs.
In this Issue:
An overview of Skeleton of Thoughts (SoT) for LLM reasoning.
Microsoft’s original SoT paper.
Dify framework for building LLM apps.
💡 ML Concept of the Day: Understanding Skeleton-of-Thoughts
The Skeleton-of-Thoughts (SoT) technique, a recent innovation in the field of Large Language Models (LLMs), represents a significant shift in how these models process and generate information. SoT was originally aimed at reducing end-to-end inference latency in LLMs, but its results have had a profound impact on the reasoning space. SoT is grounded in the observation that human thought and response patterns are often non-linear. Unlike traditional LLMs, which generate responses sequentially, SoT introduces a two-stage process for answer generation. First, the LLM formulates a basic outline, or 'skeleton', of the response that encompasses the key points of the answer. The model then elaborates on each point of the skeleton in parallel, rather than one after the other.
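The two-stage process can be sketched in a few lines of Python. This is a minimal illustration, not Microsoft's implementation: the `call_llm` function is a hypothetical stand-in for a real LLM API call (stubbed here with canned replies so the sketch runs as-is), and the prompts are assumptions. The key idea it shows is stage 1 producing a skeleton, then stage 2 expanding every point concurrently with a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call, stubbed with canned replies so the
    # sketch is runnable without an API key. Replace with a real
    # model call in practice.
    if prompt.startswith("Outline"):
        return "1. Definition\n2. Two-stage process\n3. Benefits"
    return "Expanded: " + prompt

def skeleton_of_thoughts(question: str) -> str:
    # Stage 1: ask the model for a short skeleton of the answer.
    skeleton = call_llm(
        f"Outline the answer to the following question as "
        f"3-5 numbered points, a few words each: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand every skeleton point in parallel rather than
    # generating the answer sequentially; this is where the latency
    # reduction comes from.
    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(
            lambda point: call_llm(f"Expand this point about "
                                   f"'{question}': {point}"),
            points,
        ))
    return "\n\n".join(expansions)

answer = skeleton_of_thoughts("What is Skeleton-of-Thoughts?")
```

With a real LLM behind `call_llm`, the stage-2 expansions are independent requests, so wall-clock latency is roughly that of the slowest single expansion plus the skeleton call, instead of the sum of all of them.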