**StudioGAN** is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze new ideas. This hub provides all the checkpoints we used to create the GAN benchmarks below. Please visit our GitHub repository ([PyTorch-StudioGAN](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN)) for more details.
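
If you want to fetch one of the benchmark checkpoints from this hub programmatically, the `huggingface_hub` client can do so. The snippet below is a minimal sketch; the repository id and checkpoint filename are placeholders, not actual paths on this hub, so substitute the values listed for the model you need.

```python
# Minimal sketch: downloading a StudioGAN benchmark checkpoint from the Hub.
# NOTE: repo_id and filename are hypothetical placeholders -- replace them
# with the actual repository and checkpoint path shown on this hub.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="your-namespace/your-studiogan-checkpoints",  # hypothetical repo id
    filename="path/to/checkpoint.pth",                    # hypothetical filename
)

# Assuming a standard PyTorch .pth checkpoint; load on CPU first and inspect it.
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(checkpoint.keys())
```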

## License

PyTorch-StudioGAN is an open-source library under the MIT license (MIT). However, portions of the library are available under distinct license terms: StyleGAN2, StyleGAN2-ADA, and StyleGAN3 are licensed under the [NVIDIA source code license](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/LICENSE-NVIDIA), and PyTorch-FID is licensed under the [Apache License](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/metrics/fid.py).

## Citation

StudioGAN was established for the following research projects. Please cite our work if you use StudioGAN.

```bib
@article{kang2022StudioGAN,
  title   = {{StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis}},
  author  = {MinGuk Kang and Joonghyuk Shin and Jaesik Park},
  journal = {arXiv preprint arXiv:2206.09479},
  year    = {2022}
}
```

```bib
@inproceedings{kang2021ReACGAN,
  title     = {{Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training}},
  author    = {Minguk Kang and Woohyeon Shim and Minsu Cho and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2021}
}
```

```bib
@inproceedings{kang2020ContraGAN,
  title     = {{ContraGAN: Contrastive Learning for Conditional Image Generation}},
  author    = {Minguk Kang and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2020}
}
```