Generative Adversarial Networks

Introduction

Generative Adversarial Networks (GANs) are a class of deep learning models introduced by Ian Goodfellow and his colleagues in 2014. The core idea behind GANs is to train a generator network to produce data that is indistinguishable from real data, while simultaneously training a discriminator network to differentiate between real and generated data.
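Formally, Goodfellow et al. framed this setup as a two-player minimax game between the generator \(G\) and the discriminator \(D\) over a shared value function:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here \(D(x)\) is the discriminator's estimated probability that \(x\) came from the real data distribution, and \(G(z)\) maps a noise sample \(z \sim p_z\) to a generated data point.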

A common analogy found online is that of an art forger (the generator) who tries to produce convincing fake paintings, and an art critic or investigator (the discriminator) who tries to detect the forgeries.

[Figure: GAN architecture, from Lilian Weng's blog]

GANs vs VAEs

GANs and VAEs are both popular generative models in machine learning, but they have different strengths and weaknesses, and whether one is “better” depends on the specific task and requirements.

Here’s a table summarizing the key differences:

| Feature | GANs | VAEs |
| --- | --- | --- |
| Image quality | Higher | Lower |
| Ease of training | More difficult | Easier |
| Stability | Less stable | More stable |
| Applications | Image generation, super-resolution, image-to-image translation | Image denoising, anomaly detection, signal analysis |

Ultimately, the best choice depends on one’s specific needs and priorities. If one needs high-quality images for tasks like generating realistic faces or landscapes, then a GAN might be the better choice. However, if one needs a model that is easier to train and more stable, then a VAE might be a better option.

Training GANs

Training GANs involves a unique adversarial process in which the generator and discriminator play a cat-and-mouse game: in each iteration, the discriminator is updated to better separate real samples from generated ones, and the generator is then updated to better fool the current discriminator.
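These alternating updates can be sketched on a toy 1-D problem. The example below is purely illustrative, not a production recipe: it assumes a linear generator `G(z) = a*z + b`, a logistic discriminator `D(x) = sigmoid(w*x + c)`, and made-up hyperparameters, with gradients derived by hand so it needs only NumPy.

```python
# Toy GAN on 1-D data (illustrative sketch only).
# Real data ~ N(3, 0.5); generator G(z) = a*z + b with z ~ N(0, 1);
# discriminator D(x) = sigmoid(w*x + c). All names/values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr_d, lr_g = 0.05, 0.05  # learning rates
batch = 64

for step in range(2000):
    # Discriminator step: ascend log D(x_real) + log(1 - D(G(z))).
    x_real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr_d * grad_w
    c += lr_d * grad_c

    # Generator step: ascend the non-saturating objective log D(G(z)).
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    gx = (1 - d_fake) * w          # d/dx_fake of log D(x_fake)
    a += lr_g * np.mean(gx * z)
    b += lr_g * np.mean(gx)

# After training, generated samples should have drifted toward the real mean (3.0).
samples = a * rng.normal(0.0, 1.0, 1000) + b
```

The generator update uses the non-saturating loss (maximize `log D(G(z))` rather than minimize `log(1 - D(G(z)))`), a standard practical choice because it gives stronger gradients early in training, when the discriminator easily rejects generated samples.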

References:

  1. Lilian Weng’s Awesome Blog on GANs
  2. GAN — What is Generative Adversarial Networks
  3. What are the fundamental differences between VAE and GAN for image generation?
  4. Issues with GAN and VAE models
  5. VAE Vs. GAN For Image Generation
  6. Diffusion Models vs. GANs vs. VAEs: Comparison of Deep Generative Models