Reimu Hakurei committed on
Commit 31c9f74
1 Parent(s): 44d0d4d

Update README

Files changed (1)
  1. README.md +13 -13
README.md CHANGED
@@ -11,22 +11,19 @@ inference: false
 
 # waifu-diffusion - Diffusion for Weebs
 
- waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through [Textual Inversion](https://github.com/rinongal/textual_inversion).
 
- <img src=https://cdn.discordapp.com/attachments/872361510133981234/1016022078635388979/unknown.png?3867929 width=30% height=30%>
- <sub>Prompt: touhou 1girl komeiji_koishi portrait</sub>
 
  ## Model Description
 
 The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
 
- The current model is based on [Yasu Seno](https://twitter.com/naclbbr)'s [TrinArt Stable Diffusion](https://huggingface.co/naclbit/trinart_stable_diffusion), which has been fine-tuned on 30,000 high-resolution manga/anime-style images for 3.5 epochs.
-
- With [Textual Inversion](https://github.com/rinongal/textual_inversion), the embeddings for the text encoder have been trained to align more closely with anime-styled images, reducing the need for excessive prompting.
 
  ## Training Data & Annotative Prompting
 
- The data used for Textual Inversion came from a random sample of 25k Danbooru images, which were then filtered with [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) so that only images with an aesthetic score greater than `6.0` were used.
 
  Captions are Danbooru-style captions.
 
@@ -45,10 +42,10 @@ model_id = "hakurei/waifu-diffusion"
 device = "cuda"
 
 
- pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=True)
  pipe = pipe.to(device)
 
- prompt = "a photo of reimu hakurei. anime style"
  with autocast("cuda"):
     image = pipe(prompt, guidance_scale=7.5)["sample"][0]
 
@@ -57,9 +54,12 @@ image.save("reimu_hakurei.png")
 
 ## Team Members and Acknowledgements
 
- This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/) and the author of the original fine-tuned model that this work was based upon, [Yasu Seno](https://twitter.com/naclbbr)!
-
- Additionally, the methods presented in the [Textual Inversion](https://github.com/rinongal/textual_inversion) repo were an incredible help.
 
  - [Anthony Mercurio](https://github.com/harubaru)
- - [Salt](https://github.com/sALTaccount/)
 
 # waifu-diffusion - Diffusion for Weebs
 
+ waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
 
+ <img src=https://cdn.discordapp.com/attachments/930499731451428926/1017258164439220254/unknown.png width=20% height=20%>
 
  ## Model Description
 
 The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
 
+ The current model has been fine-tuned with a learning rate of 5.0e-6 for 4 epochs on 56k Danbooru text-image pairs, all of which have an aesthetic score greater than `6.0`.
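The paragraph above gives only the headline hyperparameters, so the sketch below shows how a fine-tune like this is typically wired up from the `diffusers` and `transformers` building blocks: encode images into latents, add noise at a random timestep, and train the UNet to predict that noise while the VAE and text encoder stay frozen. This is not the trainer actually used for waifu-diffusion; the learning rate (5.0e-6) and epoch count (4) come from the paragraph above, while the base checkpoint, the `CaptionedImages` stub, the batch size, and the AdamW optimizer are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the actual waifu-diffusion trainer.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

base_model = "CompVis/stable-diffusion-v1-4"  # base checkpoint named in the README
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(base_model, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base_model, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(base_model, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(base_model, subfolder="unet").to(device)
noise_scheduler = DDPMScheduler(beta_start=0.00085, beta_end=0.012,
                                beta_schedule="scaled_linear", num_train_timesteps=1000)

# Only the UNet is trained; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)

class CaptionedImages(Dataset):
    """Stand-in for the 56k Danbooru text-image pairs (random tensors, fixed caption)."""
    def __init__(self, n=8):
        self.input_ids = tokenizer(["touhou hakurei_reimu 1girl solo portrait"] * n,
                                   padding="max_length", truncation=True,
                                   max_length=tokenizer.model_max_length,
                                   return_tensors="pt").input_ids
    def __len__(self):
        return len(self.input_ids)
    def __getitem__(self, i):
        return {"pixel_values": torch.randn(3, 512, 512), "input_ids": self.input_ids[i]}

dataloader = DataLoader(CaptionedImages(), batch_size=4, shuffle=True)
optimizer = torch.optim.AdamW(unet.parameters(), lr=5.0e-6)  # learning rate from the README

for epoch in range(4):  # epoch count from the README
    for batch in dataloader:
        # Encode images to latents and add noise at a random timestep.
        latents = vae.encode(batch["pixel_values"].to(device)).latent_dist.sample() * 0.18215
        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                                  (latents.shape[0],), device=device).long()
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

        # Condition the UNet on the encoded captions and regress the added noise.
        encoder_hidden_states = text_encoder(batch["input_ids"].to(device))[0]
        noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
        loss = F.mse_loss(noise_pred, noise)

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

A real run would layer the usual extras on top of this skeleton (mixed precision, gradient accumulation, checkpointing), but the core objective is the same.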
 
 
 
  ## Training Data & Annotative Prompting
 
+ The data used for fine-tuning came from a random sample of 56k Danbooru images, which were filtered with [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) so that only images with an aesthetic score greater than `6.0` were used.
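As a rough sketch of the filtering rule described above (keep only images scoring above `6.0`), something like the following could be used. The `predict_score` callable stands in for the CLIP-based predictor from the linked improved-aesthetic-predictor repository, and the directory layout and file extension are assumptions rather than the project's actual dataset-preparation code.

```python
# Sketch of the score-threshold filter; `predict_score` is a placeholder for
# the CLIP-based aesthetic predictor, not an API provided by this README.
from pathlib import Path
from typing import Callable, List

from PIL import Image

AESTHETIC_THRESHOLD = 6.0

def filter_images(image_dir: str, predict_score: Callable[[Image.Image], float]) -> List[Path]:
    """Return the paths of images whose predicted aesthetic score exceeds the threshold."""
    kept = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        image = Image.open(path).convert("RGB")
        if predict_score(image) > AESTHETIC_THRESHOLD:
            kept.append(path)
    return kept
```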
 
  Captions are Danbooru-style captions.
 
 device = "cuda"
 
 
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')
  pipe = pipe.to(device)
 
+ prompt = "touhou hakurei_reimu 1girl solo portrait"
  with autocast("cuda"):
     image = pipe(prompt, guidance_scale=7.5)["sample"][0]
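Since the hunk above shows only the changed middle of the README's usage snippet, here it is assembled into a self-contained script. The import lines are reconstructed from the APIs the snippet uses, and the final `image.save("reimu_hakurei.png")` call is taken from the unchanged hunk context shown earlier, so treat this as a best-effort reconstruction rather than the verbatim file.

```python
# Reconstructed, self-contained version of the README's usage example.
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "hakurei/waifu-diffusion"
device = "cuda"

# Loading the fp16 weights roughly halves GPU memory use.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')
pipe = pipe.to(device)

prompt = "touhou hakurei_reimu 1girl solo portrait"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]

image.save("reimu_hakurei.png")
```

Note that the `["sample"][0]` indexing matches the diffusers release this README targeted; newer diffusers versions expose the generated images as `pipe(prompt).images` instead.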
 
  ## Team Members and Acknowledgements
 
+ This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
 
  - [Anthony Mercurio](https://github.com/harubaru)
+ - [Salt](https://github.com/sALTaccount/)
+ - [Sta @ Bit192](https://twitter.com/naclbbr)
+
+ In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).
+
+ [![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai)