TheSequence

🔁 Edge#169: Understanding CycleGANs

Mar 01, 2022
In this issue: 

  • we explain CycleGANs; 

  • we walk through the original CycleGAN paper; 

  • we overview Mimicry, a GAN library. 

Enjoy the learning!  

💡 ML Concept of the Day: Understanding CycleGANs 

In Edge#167, we introduced the concept of generative adversarial networks (GANs). In the last part of our series about generative models, we would like to discuss different GAN architectures that have been widely adopted in deep learning solutions. That’s right, GANs are not a single deep neural network architecture; several relevant variations have evolved over the last few years. We will start our journey with CycleGANs, which have become one of the most popular models in image-translation scenarios.  

The goal of CycleGANs is to translate between images of different domains without paired training data. Think about transforming landscape images painted by Van Gogh into the style of Monet. Being able to translate between domains without paired examples is what makes CycleGANs unique. The core architecture of a CycleGAN consists of two GAN models. The generator of the first network maps a real image to a synthetic one in the target domain; let’s say it converts real photographs into Van Gogh-style pictures. The second generator performs the inverse mapping, taking Van Gogh-style paintings and generating a realistic photograph. The discriminators of both networks are trained to distinguish real images from fake ones. In the next section, we will dive into the architecture.  
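To make the two-generator, two-discriminator idea concrete, here is a minimal PyTorch sketch of a CycleGAN generator update, including the cycle-consistency term that ties the two mappings together. The TinyGenerator and TinyDiscriminator modules, the dummy 64x64 batch, and the lambda_cyc weight are illustrative stand-ins and not the original paper's architecture (which uses ResNet-based generators and PatchGAN discriminators).

```python
# Hypothetical, simplified CycleGAN generator step in PyTorch.
# G_XY and G_YX are the two generators (X -> Y and Y -> X);
# D_X and D_Y are the discriminators for each domain.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator (placeholder for a full ResNet generator)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy discriminator that scores how 'real' an image looks."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_XY, G_YX = TinyGenerator(), TinyGenerator()     # X -> Y and Y -> X mappings
D_X, D_Y = TinyDiscriminator(), TinyDiscriminator()

adv_loss = nn.MSELoss()    # least-squares adversarial loss
cyc_loss = nn.L1Loss()     # cycle-consistency loss
lambda_cyc = 10.0          # illustrative weight on the cycle term

def generator_step(real_x, real_y):
    """One generator update: adversarial terms plus cycle consistency."""
    fake_y = G_XY(real_x)          # translate X -> Y
    fake_x = G_YX(real_y)          # translate Y -> X

    # Fool the discriminators: generated images should be scored as real (label 1).
    pred_fake_y = D_Y(fake_y)
    pred_fake_x = D_X(fake_x)
    loss_adv = adv_loss(pred_fake_y, torch.ones_like(pred_fake_y)) + \
               adv_loss(pred_fake_x, torch.ones_like(pred_fake_x))

    # Cycle consistency: X -> Y -> X and Y -> X -> Y should recover the inputs.
    loss_cyc = cyc_loss(G_YX(fake_y), real_x) + cyc_loss(G_XY(fake_x), real_y)

    return loss_adv + lambda_cyc * loss_cyc

# Usage on a dummy unpaired batch of 64x64 RGB images:
x, y = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
loss = generator_step(x, y)
loss.backward()
```

The cycle-consistency term is what removes the need for paired data: each image only has to survive a round trip through both generators, so no aligned (x, y) examples are ever required.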
