Code for the paper *Lessons Learned from the Training of GANs on Artificial Datasets*, and beyond.
Some recent state-of-the-art generative models in ONE notebook.
This repo implements every method that matches the following regular expression (see the matching sketch below):

`(MIX-)?(GAN|WGAN|BigGAN|MHingeGAN|AMGAN|StyleGAN|StyleGAN2)(\+ADA|\+CR|\+EMA|\+GP|\+R1|\+SA|\+SN)*`
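As a quick illustration, here is a minimal Python sketch (not part of the repo's code; the example names are chosen for illustration) that checks method names against this pattern:

```python
import re

# The supported-method pattern quoted above.
METHOD_RE = re.compile(
    r"(MIX-)?"
    r"(GAN|WGAN|BigGAN|MHingeGAN|AMGAN|StyleGAN|StyleGAN2)"
    r"(\+ADA|\+CR|\+EMA|\+GP|\+R1|\+SA|\+SN)*"
)

for name in ["MIX-MHingeGAN", "StyleGAN2+ADA+EMA", "WGAN+GP", "DCGAN"]:
    # fullmatch() anchors the pattern to the whole string.
    print(f"{name}: {bool(METHOD_RE.fullmatch(name))}")
# MIX-MHingeGAN: True
# StyleGAN2+ADA+EMA: True
# WGAN+GP: True
# DCGAN: False
```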
- For the GPU implementation, `tensorflow>=2` or `tensorflow-gpu==1.14` is necessary; some modifications to the calculation of IS and FID will also be needed (see my other repos).
- For the TPU implementation, `tensorflow>=2.4` or `tf-nightly` is necessary.
This implementation supports TensorFlow's automatic mixed-precision training, which can dramatically reduce GPU memory usage and training time (see the sketch below). It is therefore recommended to upgrade to Colab Pro in order to use GPUs with Tensor Cores. Training MIX-MHingeGAN with 10 generators and 10 discriminators takes only 1.5 days on a single Tesla V100.
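For reference, a minimal sketch of enabling TensorFlow's mixed-precision API (TF >= 2.4 names; the optimizer and learning rate are illustrative placeholders, not the repo's actual settings):

```python
import tensorflow as tf

# Run compute in float16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Wrap the optimizer with dynamic loss scaling to avoid
# float16 gradient underflow. Adam(2e-4) is a placeholder.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.Adam(2e-4)
)
```

On GPUs without Tensor Cores, mixed precision mainly saves memory; the large speedup comes from Tensor Core hardware, which is why such GPUs are recommended above.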
Coming soon...
- First, disable Stackdriver Logging to avoid unnecessary charges.
- Create Cloud TPUs; the TPU software version should be at least `2.4.0` or `nightly`.
- Fill in `TPU_NAMES` and `ZONE` in the above notebook for TPUs, set the environment variables `LOG` and `DATA`, and run the notebook (a sketch of the underlying TPU connection follows this list).
- Delete the TPUs.
- https://github.com/igul222/improved_wgan_training
- https://github.com/biuyq/CT-GAN
- https://github.com/google/compare_gan
- https://github.com/ajbrock/BigGAN-PyTorch
- https://github.com/taki0112/BigGAN-Tensorflow
- https://github.com/brain-research/self-attention-gan
- https://github.com/ilyakava/BigGAN-PyTorch
- https://github.com/NVlabs/stylegan2
- https://github.com/NVlabs/stylegan2-ada
```
@article{tang2020lessons,
  title={Lessons Learned from the Training of GANs on Artificial Datasets},
  author={Tang, Shichang},
  journal={arXiv preprint arXiv:2007.06418},
  year={2020}
}
```