- Conditional version of generative adversarial nets
- In an unconditioned generative model, there is no control over the modes of the data being generated.
- In a Conditional GAN (CGAN), the generator learns to produce a fake sample with a specific condition or characteristic (such as a class label associated with an image, or a more detailed tag) rather than a generic sample from an unknown noise distribution.
- The conditioning is performed simply by feeding the extra data y into both the generator and the discriminator as an additional input.
- Adversarial learning loss of CGAN:
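The two-player minimax objective conditions both terms of the standard GAN loss on the extra information y (as given in the cited paper):

$$
\min_G \max_D \, V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x \mid y)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z \mid y))\big)\big]
$$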
- Generator (a PyTorch sketch of both networks is given below)
- Input: z (100 dimensions), Output: generated image
- FC [256, 512, 1024]
- Discriminator
- Input: image (generated or real), Output: fake/real
- FC [1024, 512, 256]
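A minimal PyTorch sketch of the two fully connected networks specified above, assuming 28x28 Fashion-MNIST images flattened to 784 values, 10 classes, and conditioning on the label y through a learned embedding concatenated with the input. Activations, dropout, and the embedding size are assumptions for illustration and may differ from the linked notebook:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (z, label) -> 28x28 image; FC sizes 256 -> 512 -> 1024."""
    def __init__(self, z_dim=100, num_classes=10, img_dim=28 * 28):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)  # condition y as a learned embedding
        self.net = nn.Sequential(
            nn.Linear(z_dim + num_classes, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, img_dim),
            nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.label_emb(labels)], dim=1)  # feed y by concatenation
        return self.net(x).view(-1, 1, 28, 28)

class Discriminator(nn.Module):
    """Maps (image, label) -> probability the image is real; FC sizes 1024 -> 512 -> 256."""
    def __init__(self, num_classes=10, img_dim=28 * 28):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(img_dim + num_classes, 1024),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3),
            nn.Linear(1024, 512),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, img, labels):
        x = torch.cat([img.view(img.size(0), -1), self.label_emb(labels)], dim=1)
        return self.net(x).squeeze(1)
```

Because the generator ends in Tanh, the real Fashion-MNIST images should be normalized to [-1, 1] during training.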
- Adversarial learning: refer to generator_train_step() and discriminator_train_step() in the linked notebook (a sketch of both steps is given below)
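A sketch of the two training steps named above, assuming the standard non-saturating BCE losses: the discriminator is pushed toward 1 on real (image, label) pairs and toward 0 on generated pairs, while the generator is pushed to make the discriminator output 1 on its fakes. The function signatures here are illustrative and may not match the notebook exactly:

```python
import torch
import torch.nn.functional as F

def generator_train_step(generator, discriminator, g_optimizer, batch_size, device,
                         z_dim=100, num_classes=10):
    """One generator update: make D classify fake (G(z, y), y) samples as real."""
    g_optimizer.zero_grad()
    z = torch.randn(batch_size, z_dim, device=device)
    fake_labels = torch.randint(0, num_classes, (batch_size,), device=device)
    fake_images = generator(z, fake_labels)
    validity = discriminator(fake_images, fake_labels)
    g_loss = F.binary_cross_entropy(validity, torch.ones(batch_size, device=device))
    g_loss.backward()
    g_optimizer.step()
    return g_loss.item()

def discriminator_train_step(generator, discriminator, d_optimizer, real_images, real_labels,
                             device, z_dim=100, num_classes=10):
    """One discriminator update: real (x, y) -> 1, generated (G(z, y), y) -> 0."""
    d_optimizer.zero_grad()
    batch_size = real_images.size(0)

    # Real samples with their true labels.
    real_validity = discriminator(real_images, real_labels)
    real_loss = F.binary_cross_entropy(real_validity, torch.ones(batch_size, device=device))

    # Fake samples with randomly drawn labels; detach so G is not updated here.
    z = torch.randn(batch_size, z_dim, device=device)
    fake_labels = torch.randint(0, num_classes, (batch_size,), device=device)
    fake_images = generator(z, fake_labels).detach()
    fake_validity = discriminator(fake_images, fake_labels)
    fake_loss = F.binary_cross_entropy(fake_validity, torch.zeros(batch_size, device=device))

    d_loss = real_loss + fake_loss
    d_loss.backward()
    d_optimizer.step()
    return d_loss.item()
```

A typical training loop alternates one discriminator_train_step() and one generator_train_step() per minibatch, with a separate Adam optimizer for each network.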
- Dataset: https://www.kaggle.com/datasets/zalando-research/fashionmnist
- Reference implementation: https://www.kaggle.com/code/arturlacerda/pytorch-conditional-gan/notebook
@article{CGAN,
  title   = {Conditional Generative Adversarial Nets},
  author  = {Mehdi Mirza and Simon Osindero},
  journal = {arXiv preprint arXiv:1411.1784},
  year    = {2014}
}