TheSequence

The Sequence Chat: The Transition that Changes Everything. From Pretraining to Post-Training in Foundation Models
One of the most impactful transitions in the generative AI space

Dec 04, 2024
[Illustration: an AI entity transitioning from a chaotic "Pretraining" data stream to a focused, organized "Post-Training" environment. Created Using DALL-E]

The release of GPT-o1 marked several important milestones in the generative AI space. The model sparked a tremendous new phase of innovation in reasoning models, which has since materialized in releases such as DeepSeek’s R1 and Alibaba’s QwQ. The remarkable reasoning capabilities of these models are powered by a growing shift of compute from pretraining to post-training. In this essay, we explore the fundamentals behind that transition, highlighting the limitations of scaling pretraining and the emerging techniques in post-training. We also emphasize the shift away from traditional reinforcement learning from human feedback (RLHF) toward newer methodologies that promise to improve model performance and adaptability.

Understanding Pretraining in Foundation Models
