TheSequence

🤖 Edge#233: Understanding DALL-E 2

Oct 11, 2022
∙ Paid

In this issue:

  • we explain DALL-E 2;

  • we discuss the DALL-E 2 paper; 

  • we explore DALL-E Mini (now Craiyon), the most popular DALL-E implementation on the market.

Enjoy the learning!  


💡 ML Concept of the Day: Understanding DALL-E 2 

We have gone long enough in our series about text-to-image synthesis without covering the most famous models of all: OpenAI’s DALL-E and DALL-E 2. These models have dominated the headlines about text-to-image generation and have certainly accelerated research in the space. The first version of DALL-E was released in January 2021 and paired a discrete VAE with an autoregressive transformer to generate images from textual descriptions, using CLIP (Edge#219) to rerank its candidate outputs.

DALL-E 2 is the second iteration of this OpenAI architecture, providing significant improvements over its predecessor. Instead of an autoregressive transformer, it combines a CLIP-based prior, which maps a text embedding to a corresponding image embedding, with a diffusion decoder that renders the final image, an approach OpenAI calls unCLIP. From a capability standpoint,
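The two-stage structure described above (text encoder → prior → diffusion decoder) can be sketched numerically. This is a toy illustration of the data flow only, not OpenAI's implementation: the function names, the tiny embedding dimension, and the simplistic "denoising" loop are all placeholders standing in for learned networks.

```python
import numpy as np

EMB_DIM = 8          # toy dimension; real CLIP embeddings are far larger
rng = np.random.default_rng(0)

def clip_text_encoder(text: str) -> np.ndarray:
    """Stand-in for CLIP's text encoder: deterministically hash text to a unit vector."""
    seed = sum(ord(c) for c in text) % (2**32)
    local = np.random.default_rng(seed)
    v = local.standard_normal(EMB_DIM)
    return v / np.linalg.norm(v)

def prior(text_emb: np.ndarray) -> np.ndarray:
    """Stand-in for the prior: map a CLIP text embedding to a CLIP image embedding.
    In DALL-E 2 this is a learned model; here it is an identity placeholder."""
    W = np.eye(EMB_DIM)
    v = W @ text_emb
    return v / np.linalg.norm(v)

def diffusion_decoder(image_emb: np.ndarray, steps: int = 10,
                      size: tuple = (4, 4)) -> np.ndarray:
    """Stand-in for the diffusion decoder: start from pure noise and
    iteratively move toward a target derived from the conditioning embedding."""
    x = rng.standard_normal(size)
    target = image_emb[: size[0]].reshape(-1, 1) * np.ones(size)
    for _ in range(steps):
        x = x + 0.5 * (target - x)   # each step denoises toward the conditioned target
    return x

def generate(text: str) -> np.ndarray:
    text_emb = clip_text_encoder(text)   # stage 1: embed the prompt
    img_emb = prior(text_emb)            # stage 2: predict an image embedding
    return diffusion_decoder(img_emb)    # stage 3: decode the embedding to pixels

img = generate("a corgi playing a trumpet")
print(img.shape)  # → (4, 4)
```

The key design point this sketch preserves is that the decoder never sees the text directly: it is conditioned only on the image embedding produced by the prior, which is what lets DALL-E 2 also support image variations by feeding in a CLIP embedding of an existing image.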

This post is for paid subscribers

© 2025 Jesus Rodriguez