TheSequence

Inside Orca 2: Microsoft's Small Language Model that Outperforms Models 10x Larger in Reasoning Capabilities

A model that innovates in training procedures to improve reasoning abilities in small language models.

Dec 28, 2023
Created Using Midjourney

Earlier this year, Microsoft Research unveiled Orca, a 13-billion-parameter model that can emulate the intricate reasoning processes exhibited by larger LLMs. Specifically, Orca learns from GPT-4 signals, including explanation traces, meticulous step-by-step thinking, and a myriad of complex instructions. A few weeks ago, Microsoft expanded on that line of work with the release of Orca 2, an extension of the original research that delves even deeper into the domain of Small Language Models (SLMs). This new release challenges conventional approaches to reasoning in SLMs, pushing the boundaries of what's possible in the field.
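To make "learning from rich signals" concrete, here is a minimal sketch of what an explanation-trace training example might look like. The field names, prompt text, and formatting tokens are hypothetical illustrations, not Microsoft's actual data format; the point is only that the supervision target includes the teacher's step-by-step reasoning rather than the final answer alone.

```python
# Hypothetical sketch of an "explanation trace" training example of the kind
# Orca-style models are tuned on: the target is not just the answer, but the
# teacher model's step-by-step reasoning. All field names and special tokens
# below are illustrative assumptions, not the actual Orca data schema.

example = {
    "system_prompt": (
        "You are a helpful assistant. Think step by step and "
        "justify your answer before stating it."
    ),
    "user_prompt": "If a train travels 120 km in 1.5 hours, what is its average speed?",
    # The teacher's (e.g., GPT-4's) full reasoning trace serves as the target,
    # so the student model imitates the *process*, not only the final answer.
    "teacher_response": (
        "Average speed is distance divided by time. "
        "The distance is 120 km and the time is 1.5 hours. "
        "120 / 1.5 = 80. The average speed is 80 km/h."
    ),
}


def to_training_text(ex: dict) -> str:
    """Flatten one example into a single supervised fine-tuning string."""
    return (
        f"<|system|>{ex['system_prompt']}\n"
        f"<|user|>{ex['user_prompt']}\n"
        f"<|assistant|>{ex['teacher_response']}"
    )


print(to_training_text(example))
```

Because the step-by-step justification is part of the target text, a small student model trained on many such examples is pushed to reproduce the reasoning behavior itself, which is the core idea behind this family of training procedures.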
