TheSequence

🤖 Edge#233: Understanding DALL-E 2

Oct 11, 2022
In this issue:

  • we explain DALL-E 2;

  • we discuss the DALL-E 2 paper; 

  • we explore DALL-E Mini (now Craiyon), the most popular open-source take on DALL-E.

Enjoy the learning!  


💡 ML Concept of the Day: Understanding DALL-E 2 

We have gone long enough in our series on text-to-image synthesis without covering the most famous models of all: OpenAI’s DALL-E and DALL-E 2. These models have dominated the headlines about text-to-image generation and have certainly accelerated research in the space. The first version of DALL-E, released in January 2021, combined a discrete VAE with an autoregressive transformer to generate images from textual descriptions, using CLIP (Edge#219) to rerank the candidate outputs. DALL-E 2 instead pairs CLIP embeddings with diffusion models to produce more photorealistic images.
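At a high level, DALL-E 2 generates an image in two stages: a prior maps a CLIP text embedding to a CLIP image embedding, and a diffusion decoder turns that image embedding into pixels. The sketch below illustrates only the shape of that pipeline; every component here is a stand-in (random projections), not a trained CLIP encoder, prior, or diffusion model.

```python
import numpy as np

# Illustrative sketch of the two-stage DALL-E 2 pipeline.
# All components are stand-ins (random projections), not real models:
# a real system uses trained CLIP encoders, a learned prior, and a
# diffusion decoder that iteratively denoises.

rng = np.random.default_rng(0)
EMBED_DIM = 512

def clip_text_encoder(caption: str) -> np.ndarray:
    """Stand-in for CLIP's text encoder: caption -> unit-norm text embedding."""
    seed = abs(hash(caption)) % (2 ** 32)  # deterministic per caption (per run)
    vec = np.random.default_rng(seed).standard_normal(EMBED_DIM)
    return vec / np.linalg.norm(vec)

def prior(text_embedding: np.ndarray) -> np.ndarray:
    """Stand-in for the prior: text embedding -> CLIP image embedding."""
    W = rng.standard_normal((EMBED_DIM, EMBED_DIM)) / np.sqrt(EMBED_DIM)
    img = W @ text_embedding
    return img / np.linalg.norm(img)

def diffusion_decoder(image_embedding: np.ndarray, size: int = 64) -> np.ndarray:
    """Stand-in for the diffusion decoder: image embedding -> pixel grid."""
    # A real decoder denoises over many steps; here we project in one shot.
    W = rng.standard_normal((size * size * 3, EMBED_DIM)) / np.sqrt(EMBED_DIM)
    pixels = W @ image_embedding
    return pixels.reshape(size, size, 3)

caption = "an astronaut riding a horse in a photorealistic style"
text_emb = clip_text_encoder(caption)   # stage 1: CLIP text embedding
image_emb = prior(text_emb)             # stage 2: prior maps text -> image embedding
image = diffusion_decoder(image_emb)    # stage 3: decoder renders the embedding
print(image.shape)  # (64, 64, 3)
```

The key design point this mirrors is that conditioning the decoder on a CLIP *image* embedding, rather than directly on text, is what lets DALL-E 2 produce variations of an image while preserving its semantics.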

DALL-E 2 is the second iteration of this OpenAI architecture, providing significant improvements over its predecessor. From a capability standpoint,

This post is for paid subscribers.

© 2023 Jesus Rodriguez