---
language:
  - en
tags:
  - stable-diffusion
  - text-to-image
license: creativeml-openrail-m
inference: false
---

# waifu-diffusion v1.3 - Diffusion for Weebs

waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.

## Original Weights

## Gradio & Colab

We also support a Gradio Web UI and a Colab notebook with Diffusers to run Waifu Diffusion: Open In Spaces / Open In Colab

## Model Description

See here for a full model overview.


## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license.
3. You may redistribute the weights and use the model commercially and/or as a service. If you do, be aware that you must include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here.

The current model has been fine-tuned with a learning rate of 5.0e-6 for 4 epochs on 56k Danbooru text-image pairs, all of which have an aesthetic rating greater than 6.0.

## Training Data & Annotative Prompting

The data used for fine-tuning has come from a random sample of 56k Danbooru images, which were filtered based on CLIP Aesthetic Scoring where only images with an aesthetic score greater than 6.0 were used.
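As an illustration, applying the aesthetic-score filter to precomputed scores might look like the sketch below. The scoring model itself is not shown, and the file names and score values are made-up placeholders, not real training data:

```python
# Minimal sketch: keep only images whose precomputed CLIP Aesthetic
# score exceeds the 6.0 threshold used for this fine-tune.
# The (path, score) pairs are hypothetical placeholders.
dataset = [
    ("img_001.png", 5.2),
    ("img_002.png", 6.7),
    ("img_003.png", 6.1),
]

AESTHETIC_THRESHOLD = 6.0

filtered = [path for path, score in dataset if score > AESTHETIC_THRESHOLD]
print(filtered)  # only images scoring above the threshold remain
```

In practice the scores would come from running a CLIP-based aesthetic predictor over the candidate images before filtering.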

Captions follow the Danbooru tagging style.
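Danbooru-style captions are lists of underscore-separated tags, and a common prompting convention is to join such a tag list into one comma-separated string, as in the example prompt further below. The helper here is illustrative, not part of the model's tooling:

```python
def tags_to_caption(tags):
    """Join a list of Danbooru-style tags into a single
    comma-separated caption string (a common prompting
    convention; this helper is hypothetical)."""
    return ", ".join(tags)

caption = tags_to_caption(["1girl", "solo", "aqua_eyes", "baseball_cap"])
print(caption)  # "1girl, solo, aqua_eyes, baseball_cap"
```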


## Downstream Uses

This model can be used for entertainment purposes and as a generative art assistant.

## Example Code

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

# Load the float16 weights; drop revision='fp16' and use
# torch_dtype=torch.float32 for full-precision inference.
pipe = StableDiffusionPipeline.from_pretrained(
    'hakurei/waifu-diffusion',
    torch_dtype=torch.float16,
    revision='fp16'
).to('cuda')

prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=6)["sample"][0]

image.save("test.png")
```

## Team Members and Acknowledgements

This project would not have been possible without the incredible work by the CompVis Researchers.

To reach us, you can join our Discord server.
