➗ Edge#173: Exploring Conditional GANs

+cGANs to generate images from concepts and GAN Lab

Mar 15, 2022

In this issue: 

  • we explore Conditional GANs;  

  • we overview how Meta AI used cGANs to generate images from concepts;  

  • we explain GAN Lab.   

Enjoy the learning!  

💡 ML Concept of the Day: Exploring Conditional GANs 

Continuing our series on GAN variations, we would like to turn our attention to Conditional GANs (cGANs). In the traditional GAN architecture, the generator learns to synthesize new images, and the discriminator learns to distinguish synthetic images from real ones. One challenge of this classic model is that it is unable to set conditions on what type of image will be produced. For instance, in a GAN architecture that generates images of hand-written digits, there is no way to control which digits will be generated.
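
To make the setup concrete, here is a minimal sketch of the classic, unconditional GAN described above, assuming PyTorch; the layer sizes are illustrative assumptions rather than details from this issue. Notice that nothing in the generator's input lets us ask for a particular digit.

```python
# A minimal sketch of a classic (unconditional) GAN, assuming PyTorch.
# Layer sizes are illustrative, not taken from the original issue.
import torch
import torch.nn as nn

latent_dim = 100      # dimensionality of the noise vector z
img_dim = 28 * 28     # flattened hand-written digit image

# Generator: noise z -> synthetic image. There is no input that lets us
# choose WHICH digit gets generated.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, img_dim),
    nn.Tanh(),
)

# Discriminator: image -> probability that the image is real.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

z = torch.randn(16, latent_dim)   # a batch of random noise vectors
fake_images = generator(z)        # digits of arbitrary, uncontrollable class
realness = discriminator(fake_images)
```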

cGANs solve this challenge by adding a label input that conditions the images produced by the generator. In our hand-written digit generator, we can incorporate new datasets with digits written in a specific style. From that perspective, cGANs are a supervised technique, as they require this additional layer of labeled data. In cGANs, the output of the discriminator is based not only on the differences between synthetic and real images but also on the correspondence between the generated image and the target labels. There are different architectures of cGANs, but…
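
As a rough illustration of this conditioning idea, the sketch below (again assuming PyTorch; the label-embedding scheme and sizes are illustrative assumptions) feeds a class label to both the generator and the discriminator, so the caller can request a specific digit and the discriminator can penalize images that do not match the requested label.

```python
# A minimal cGAN sketch, assuming PyTorch; sizes and the embedding scheme
# are illustrative assumptions, not details from the original issue.
import torch
import torch.nn as nn

latent_dim, img_dim, n_classes = 100, 28 * 28, 10
label_emb = nn.Embedding(n_classes, n_classes)   # learned label embedding

# Conditional generator: [noise z, label embedding] -> image of that class.
generator = nn.Sequential(
    nn.Linear(latent_dim + n_classes, 256),
    nn.ReLU(),
    nn.Linear(256, img_dim),
    nn.Tanh(),
)

# Conditional discriminator: [image, label embedding] -> real/fake score,
# so it judges both realism AND whether the image matches the label.
discriminator = nn.Sequential(
    nn.Linear(img_dim + n_classes, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

labels = torch.randint(0, n_classes, (16,))             # the digits we want
z = torch.randn(16, latent_dim)
g_in = torch.cat([z, label_emb(labels)], dim=1)         # condition the generator
fake_images = generator(g_in)
d_in = torch.cat([fake_images, label_emb(labels)], dim=1)  # condition the discriminator
score = discriminator(d_in)
```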

This post is for paid subscribers
