In this issue:
we discuss generative adversarial networks;
we give an overview of the original GAN paper by Ian Goodfellow;
we explore TF-GAN.
Enjoy the learning!
💡 ML Concept of the Day: What Are Generative Adversarial Networks?
Generative adversarial networks (GANs) are considered one of the most interesting developments in recent years of deep learning. Meta's (formerly Facebook's) chief AI scientist and deep learning legend Yann LeCun once described GANs as "the most interesting idea in the last ten years of machine learning". GANs are the brainchild of another deep learning household name: Ian Goodfellow, an AI veteran of Google, OpenAI, and Apple. The core idea of GANs almost sounds like fun if we weren't talking about neural networks: two neural networks compete with each other in a zero-sum game in order to master a task. GANs leverage game theory to maximize the efficiency of the learning process. As their name indicates, GANs fall under the umbrella of generative models, although they certainly belong to a unique class within that group.
How do GANs work?
As mentioned above, the game dynamics in GANs play out between two neural networks: the generator and the discriminator.
The generator learns to generate plausible data. The generated instances become negative training examples for the discriminator.
The discriminator learns to distinguish the generator's fake data from real data. The discriminator penalizes the generator for producing implausible results.
GANs are a connected structure: the output of the generator is used as an input to the discriminator, while the output of the discriminator is used as learning feedback for the generator. GANs have been at the center of many real-world deep learning use cases. Many GAN models have gained recognition for generating training datasets that can be used to train other neural networks. Security is another key use case for GANs, as they have proven effective at simulating attacks on neural networks, which can be used to increase their robustness.
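To make that wiring concrete, here is a minimal sketch in TensorFlow/Keras. The layer sizes, the 28×28 image shape, and the noise dimension are arbitrary choices for illustration, not anything prescribed by the GAN architecture:

```python
import tensorflow as tf

# Generator: maps a random noise vector to a synthetic data sample.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Discriminator: maps a data sample to a single real-vs-fake score.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),  # logit: > 0 leans "real", < 0 leans "fake"
])

# The connected structure: the generator's output is the discriminator's input.
noise = tf.random.normal([16, 64])
fake_images = generator(noise)
fake_scores = discriminator(fake_images)  # feedback signal for the generator
```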
🔎 ML Research You Should Know: GANs' Original Paper
In "Generative Adversarial Networks", a group of authors led by Ian Goodfellow outlined the principles of the GAN architecture, opening the door to a brand new area of research in the deep learning space. The paper was originally published in 2014.
The objective: GANs are a type of generative model that learns as a result of competition between two different neural networks. The original paper showed how the game-theoretic dynamics of a zero-sum game could be used to optimize the learning process.
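For reference, the value function of that zero-sum game, as stated in the original paper, is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D is trained to maximize this value (correctly scoring real and generated samples), while the generator G is trained to minimize it (making D(G(z)) look real).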
Why it is so important: The original GANs paper is considered one of the seminal papers in the history of deep learning. GANs went from an obscure idea to one of the most important architectures in practical deep learning applications in just a handful of years. Hundreds of research papers have since explored GAN theory, but Goodfellow's original paper remains a landmark in the deep learning space.
Diving deeper: Like other generative models, GANs try to learn the distribution of a training set, as well as a mechanism for estimating that distribution. Before GANs, many generative models struggled to estimate complex, computationally intractable probability distributions. In the GANs paper, Goodfellow and his co-authors set out to define a new form of generative model that could address those difficulties.
The key contribution of GANs is to model the learning process as a zero-sum game between two neural networks. The first network is called the generator and is intended to generate samples that match the distribution of the training dataset. The second neural network is known as the discriminator. This network examines samples to determine whether they are real or fake. The discriminator learns using traditional supervised learning techniques, dividing inputs into two classes: real or fake. In the GAN model, the generator is trained to fool the discriminator, while the discriminator tries to determine whether the data produced by the generator is real or not. You can think of the generator as a diamond counterfeiter and the discriminator as the appraiser. Now, how much fun is that? The following diagram illustrates the architecture in the context of an image generation model.
[Diagram: the GAN generator-discriminator architecture. Image credit: Google Developers]
The competition created by the GAN model should drive both the generator and the discriminator to improve their learning policies. The original GANs paper also highlighted some computational advantages of GAN models: the generator can be updated without using real data examples, relying instead on the gradients produced by the discriminator. One of the challenges discussed in the paper relates to training. Because GANs involve two neural networks, they also require two different training processes. Similarly, GANs need to optimize two loss functions instead of one in order to replicate the probability distribution of the original dataset. This remains an active area of research in the GAN space.
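A minimal sketch of what those two coupled training processes can look like in code, using the binary cross-entropy form of the losses (with the non-saturating generator loss that the original paper recommends in practice). The `generator` and `discriminator` models from the earlier sketch, and all hyperparameters, are illustrative assumptions:

```python
import tensorflow as tf

# Two optimizers and one loss helper for the two training processes.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images, generator, discriminator, noise_dim=64):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])

    # Loss 1: the discriminator's supervised objective (real -> 1, fake -> 0).
    with tf.GradientTape() as d_tape:
        fake_images = generator(noise)
        real_logits = discriminator(real_images)
        fake_logits = discriminator(fake_images)
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # Loss 2: the generator's objective. Note that no real data appears here:
    # the learning signal comes entirely from gradients flowing back through
    # the discriminator, as highlighted in the original paper.
    with tf.GradientTape() as g_tape:
        fake_logits = discriminator(generator(noise))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss
```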
The original GANs paper deserves its own place in the history of deep learning. GANs have become a very popular area of research and have been incorporated into many deep learning frameworks and platforms.
🤖 ML Technology to Follow: TF-GAN Brings GANs to TensorFlow
Why should I know about this: TensorFlow-GAN (TF-GAN) is one of the most popular libraries for implementing GANs.
What it is: If you are looking to get started with GANs, TF-GAN might be one of the projects to evaluate. TF-GAN is a lightweight framework for implementing GAN architectures. It was designed on top of the TensorFlow programming model and provides a very simple API for the construction and training of GANs.
Building deep neural networks is typically hard, and GANs take that problem to another level, as they use two different neural networks. From that perspective, training and building GANs is far from a trivial endeavor. TF-GAN simplifies this process by providing a straightforward programming model for building TensorFlow-based GANs.
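To give a feel for that programming model, a typical TF-GAN flow looks roughly like this. This is a sketch based on the library's documented `gan_model` / `gan_loss` / `gan_train_ops` helpers; `generator_fn`, `discriminator_fn`, `real_images`, and `noise` are placeholders you would supply, and exact signatures should be checked against the repository:

```python
import tensorflow as tf
import tensorflow_gan as tfgan

# Assemble the two networks into a single GANModel tuple.
# `generator_fn` and `discriminator_fn` are user-supplied builder functions,
# and `real_images` and `noise` are input tensors you provide.
gan_model = tfgan.gan_model(
    generator_fn=generator_fn,
    discriminator_fn=discriminator_fn,
    real_data=real_images,
    generator_inputs=noise)

# Build both losses at once (here, the Wasserstein variants shipped in
# tfgan.losses).
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss)

# Create the paired train ops for the alternating optimization. TF-GAN's
# helpers follow the TF1-style graph workflow, hence the compat optimizers.
train_ops = tfgan.gan_train_ops(
    gan_model, gan_loss,
    generator_optimizer=tf.compat.v1.train.AdamOptimizer(1e-4),
    discriminator_optimizer=tf.compat.v1.train.AdamOptimizer(1e-4))
```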
From an architecture standpoint, the TF-GAN library includes a series of key components that can be extended independently:
Core: this component includes the main infrastructure needed to train GANs;
Features: this component includes operations such as instance normalization and conditioning, which are common in GAN architectures;
Losses: this component encapsulates different loss and penalty functions that can be used in the optimization of GANs;
Evaluation: this component includes a series of evaluation metrics tailored to GAN architectures. Some example metrics include the Inception Score, Fréchet Distance, and Kernel Distance (a usage sketch follows this list).
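As referenced above, the evaluation utilities are exposed roughly as follows. This is a sketch: the function names follow TF-GAN's documentation, but treat the exact names and signatures as assumptions to verify against the repository:

```python
import tensorflow_gan as tfgan

# `real_images` and `generated_images` are assumed to be batches of images
# already preprocessed to the value range that the Inception network expects.
score = tfgan.eval.inception_score(generated_images)
fid = tfgan.eval.frechet_inception_distance(real_images, generated_images)
kid = tfgan.eval.kernel_inception_distance(real_images, generated_images)
```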
A key component of TF-GAN is the GANEstimator class, which is responsible for assembling the GAN model. Functionally, the GANEstimator class receives parameters such as the loss functions, optimizers, and network-builder functions for both the generator and discriminator networks. It uses these parameters to build and connect the two neural networks. Similarly, the GANEstimator is responsible for training the generator and discriminator networks.
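A sketch of that assembly step, modeled on TF-GAN's published examples (the builder functions and hyperparameters here are placeholders, not canonical values):

```python
import tensorflow as tf
import tensorflow_gan as tfgan

gan_estimator = tfgan.estimator.GANEstimator(
    model_dir='/tmp/tfgan_example',      # hypothetical output directory
    generator_fn=generator_fn,           # your generator builder function
    discriminator_fn=discriminator_fn,   # your discriminator builder function
    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
    generator_optimizer=tf.compat.v1.train.AdamOptimizer(1e-4, 0.5),
    discriminator_optimizer=tf.compat.v1.train.AdamOptimizer(1e-4, 0.5))

# Training and prediction then follow the standard Estimator API:
# gan_estimator.train(train_input_fn, max_steps=10000)
# predictions = gan_estimator.predict(predict_input_fn)
```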
The latest release of TF-GAN includes notable additions such as support for Cloud TPU infrastructure and TensorFlow 2.0, as well as an enhanced set of metrics. Additionally, the release is accompanied by many examples and interactive tutorials, making it relatively easy to get started with.
Practical implementation: One of the most impressive things about TF-GAN is the number of projects using it. Projects such as DeepMind's BigGAN for image generation or GANSynth for musical composition are based on TF-GAN. The TensorFlow community has also embraced the project, which has received a significant number of contributions.
How can I use it: TF-GAN is one of the components of TensorFlow that maintains an independent GitHub repository. The project is available at https://github.com/tensorflow/gan.
TheSequence is a summary of groundbreaking ML research papers, engaging explanations of ML concepts, and exploration of new ML frameworks and platforms. TheSequence keeps you up to date with the news, trends, and technology developments in the AI field.
5 minutes of your time, 3 times a week, and you will steadily become knowledgeable about everything happening in the AI space. Make it a gift for those who can benefit from it.