🤼 GANs Recap
Last week we finished the GANs series, one of our most popular series so far. Here is a full recap to help you catch up on the topics we covered. As the proverb (and many ML people) says: repetition is the mother of learning ;)
💡 What are Generative Adversarial Networks?
Generative adversarial networks (GANs) are considered one of the most interesting developments in deep learning in recent years. Meta’s (Facebook) chief AI scientist and AI legend Yann LeCun once described GANs as “the most interesting idea in the last ten years of machine learning”. GANs are the brainchild of another deep learning household name: Google-OpenAI-Apple AI veteran Ian Goodfellow. The core idea of GANs would almost sound like fun if we weren’t talking about neural networks: two neural networks compete with each other in a zero-sum game in order to master a task. GANs leverage game theory to make the learning process more efficient. As their name indicates, GANs fall under the umbrella of generative models, although they certainly belong to a unique class within that group.
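To make the two-player game concrete, here is a minimal sketch of an adversarial training loop (our illustrative example, not code from the series): a generator learns to turn random noise into samples resembling a simple 1-D target distribution, while a discriminator learns to tell real samples from generated ones. The sketch assumes PyTorch and uses the common non-saturating generator loss; all network sizes and hyperparameters are arbitrary choices for illustration.

```python
# Minimal GAN training sketch (illustrative; all hyperparameters are assumptions).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-D latent noise to a 1-D "data" sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: maps a 1-D sample to the probability that it is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)  # samples from the target distribution N(4, 1.25^2)
    fake = G(torch.randn(64, 8))          # samples produced by the generator

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_D.backward()
    opt_D.step()

    # Generator step: push D(fake) toward 1, i.e. try to fool the discriminator.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

print(f"generated mean ~ {G(torch.randn(1000, 8)).mean().item():.2f} (target: 4.00)")
```

After a couple of thousand steps, the mean of the generated samples drifts toward the target: the zero-sum game playing out in miniature.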
In Edge#167 (read it without a subscription), we discuss generative adversarial networks and how GANs work; +the original GAN paper by Ian Goodfellow; +TF-GAN.
In Edge#169, we explain CycleGANs; +the original CycleGAN paper; +Mimicry.
In Edge#171, we learn about DCGANs; +the original DCGAN paper; +NVIDIA Imaginaire.
In Edge#173, we explore Conditional GANs; +how Meta AI used cGANs; +GAN Lab.
In Edge#175, we overview StyleGANs; +the original StyleGAN paper; +Open Source StyleGANs.
In Edge#177, we analyze StackGANs; +the original StackGAN paper; +NVIDIA’s GAN Projects.
To keep alternating between research and engineering topics, next week we will start a new series about ML training. Stay tuned!
Subscribe today with 30% OFF to get access to the full archive. Find a topic that matches your interests: