TheSequence

The Sequence Opinion #786: The Great Absorption: When System Code Becomes Model Weights

Many system and agentic capabilities are being productized directly into frontier models.

Jan 08, 2026

There is a distinct, unsettling feeling when you write code for AI agents today. You spend weeks architecting a beautiful orchestration framework—a complex Rube Goldberg machine of Python scripts, vector databases, regex parsers, and prompt chains—only to wake up one Tuesday morning to find that a new checkpoint from OpenAI or Anthropic has rendered 80% of your repository obsolete.
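To make "scaffolding" concrete, here is a minimal sketch of the kind of hand-rolled agent loop the post describes, assuming a generic text-in/text-out model callable. Everything here is hypothetical: the `model_complete` callable, the `CALL(tool, arg)` convention, and the toy `TOOLS` registry are illustrative, not any particular framework's API.

```python
# A minimal sketch (hypothetical names throughout) of hand-rolled agent
# scaffolding: a loop that asks the model to emit CALL(tool, arg) and then
# fishes tool invocations back out of raw text with a regex.
import re

TOOLS = {
    "search": lambda q: f"results for {q!r}",    # stand-in tool
    "calculator": lambda expr: str(eval(expr)),  # toy example only, unsafe
}

# Fragile contract: the model is *asked* to emit CALL(tool, arg), and the
# system recovers it with a regex -- logic living outside the weights.
CALL_PATTERN = re.compile(r"CALL\((\w+),\s*(.+?)\)")

def agent_loop(model_complete, task: str, max_steps: int = 5) -> str:
    """model_complete is any text-in/text-out LLM callable (hypothetical)."""
    transcript = f"Task: {task}\nUse CALL(tool, arg) to invoke a tool.\n"
    for _ in range(max_steps):
        reply = model_complete(transcript)
        match = CALL_PATTERN.search(reply)
        if match is None:
            return reply  # model answered directly; the loop is done
        tool, arg = match.group(1), match.group(2)
        result = TOOLS.get(tool, lambda _: "unknown tool")(arg)
        transcript += f"{reply}\nObservation: {result}\n"
    return "step budget exhausted"
```

A checkpoint that ships with native tool calling collapses the regex contract and most of this loop into the model's own forward pass, which is exactly the absorption described above.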

We are witnessing a relentless historical trend in the generative AI era. It is a specific flavor of Rich Sutton’s “The Bitter Lesson,” playing out in real time. In Sutton’s original formulation, hand-coded features were consistently crushed by general methods that leveraged computation. In our current era, the lesson is slightly different but equally bitter: Hand-coded system scaffolding is consistently crushed by model internalization.

The history of the last three years is the history of capabilities migrating from the “outside” (the system, the prompt, the agent loop) to the “inside” (the weights, the activations, the forward pass). If you are building AI products, understanding this migration isn’t just academic—it is a survival mechanism. You need to know which parts of your stack are permanent infrastructure, and which parts are just temporary scaffolding waiting to be absorbed.

The Software 1.0 vs. Software 2.0 Clash
