---
license: apache-2.0
tags:
- stylegan2
- image-generation
---

# AniCharaGAN: Anime Character Generation with StyleGAN2

[![GitHub Repo stars](https://img.shields.io/github/stars/eugenesiow/practical-ml?style=social)](https://github.com/eugenesiow/practical-ml)

This model uses lucidrains's awesome [stylegan2-pytorch](https://github.com/lucidrains/stylegan2-pytorch) library, trained on a private anime character dataset, to generate full-body 256x256 female anime characters. Here are some samples:

![Samples of anime characters and styles generated by the model](images/samples1.jpg "Samples of anime characters and styles generated by the model")

## Model description

The model generates 256x256, square, white-background, full-body anime characters. It was trained with [stylegan2-pytorch](https://github.com/lucidrains/stylegan2-pytorch) for 150 epochs.

## Intended uses & limitations

You can use the model to generate anime characters and then use a super-resolution library such as [super_image](https://github.com/eugenesiow/super-image) to upscale the output (see the sketch at the end of this card).

### How to use

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/practical-ml/blob/master/notebooks/Anime_Character_Generation_with_StyleGAN2.ipynb "Open in Colab")

Install the dependencies:

```bash
pip install -q stylegan2_pytorch==1.5.10
```

Here is how to generate images:

```python
import torch
from pathlib import Path
from torchvision.utils import save_image
from stylegan2_pytorch import ModelLoader

# Download the model weights and config into the directory layout ModelLoader expects
Path('./models/ani-chara-gan/').mkdir(parents=True, exist_ok=True)
torch.hub.download_url_to_file('https://huggingface.co/eugenesiow/ani-chara-gan/resolve/main/model.pt',
                               './models/ani-chara-gan/model_150.pt')
torch.hub.download_url_to_file('https://huggingface.co/eugenesiow/ani-chara-gan/resolve/main/.config.json',
                               './models/ani-chara-gan/.config.json')

loader = ModelLoader(
    base_dir = './',
    name = 'ani-chara-gan'
)

noise = torch.randn(1, 256).cuda()                       # latent noise vector
styles = loader.noise_to_styles(noise, trunc_psi = 0.7)  # pass through the mapping network
images = loader.styles_to_images(styles)                 # call the generator on the intermediate style vectors

save_image(images, './sample.jpg')                       # save the generated image
```

## BibTeX entry and citation info

The model is part of the [practical-ml](https://github.com/eugenesiow/practical-ml) repository.

[![GitHub Repo stars](https://img.shields.io/github/stars/eugenesiow/practical-ml?style=social)](https://github.com/eugenesiow/practical-ml)
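
## Upscaling the output

As mentioned under "Intended uses & limitations", the 256x256 outputs can be upscaled with a super-resolution library. Below is a minimal sketch using [super_image](https://github.com/eugenesiow/super-image) (`pip install super-image`), following the usage shown in that library's README. The EDSR checkpoint `eugenesiow/edsr-base`, the `scale=2` factor, and the file paths are illustrative assumptions, not part of this model.

```python
from PIL import Image
from super_image import EdsrModel, ImageLoader

# Load the image generated by the snippet above (path is an illustrative assumption)
image = Image.open('./sample.jpg')

# Load a pretrained 2x EDSR super-resolution model from the Hugging Face Hub
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)

inputs = ImageLoader.load_image(image)            # convert the PIL image to a model input tensor
preds = model(inputs)                             # run 2x super-resolution

ImageLoader.save_image(preds, './sample_2x.png')  # save the upscaled (512x512) result
```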