AstraliteHeart committed on
Commit 5387f81 · 1 Parent(s): 2c1580e

Update README.md

Files changed (1)
  1. README.md +5 -10
README.md CHANGED
@@ -21,7 +21,7 @@ pony-diffusion is a latent text-to-image diffusion model that has been condition
 
 Special thanks to [Waifu-Diffusion](https://huggingface.co/hakurei/waifu-diffusion) for providing finetuning expertise and advising through the process, without their help this project would not exist.
 
-[Pruned safetensors PyTorch Model(Use this with Automatic1111 or other SD UIs)](https://mega.nz/file/wO0EkC5L#N-IUbBe2e83_hIdepiRjSFg_81So3ZQsskNE4eD0v9A)
+[Pruned safetensors PyTorch Model (use this with Automatic1111 or other SD UIs)](https://mega.nz/file/wO0EkC5L#N-IUbBe2e83_hIdepiRjSFg_81So3ZQsskNE4eD0v9A)
 
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rl-R39aKp42RS6sRsM9cVC2elBFugdDd?usp=sharing)
 
@@ -43,22 +43,17 @@ You can see more samples at [PurpleSmartAI](https://purplesmart.ai/collection/to
 
 The model originally used for fine-tuning is [Stable Diffusion V1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
 
-This particular checkpoint has been fine-tuned with a learning rate of 5.0e-6 for 20 epochs on approximately 1.7M pony, furry and other cartoon text-image pairs (using metadata from derpibooru, e621 and danbooru).
+This particular checkpoint has been fine-tuned with a learning rate of 5.0e-6 for 15 epochs on approximately 3M pony, furry and other cartoon text-image pairs (using metadata from derpibooru, e621 and danbooru).
 
 ## Improvements over previous models
 
 ### Better disentanglement of tag based prompts
 Aka ["using Hidden States of CLIP’s Penultimate Layer"](https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac#:~:text=Using%20Hidden%20States%20of%20CLIP%E2%80%99s%20Penultimate%20Layer), a technique adopted by SD2 which should lead to generally higher quality and more tag-driven outputs.
-In our experiments using penultimate CLIP was not always the best choice, so trying both CLIP skip of 1 and 2 is recommended.
+Compared to pony-diffusion-v3, using penultimate CLIP is generally the best choice, but trying both CLIP skip of 1 and 2 is still recommended.
 
-### Better support for non square images and increase of default resolution to 768px
-This should allow you to generate full body images at 512x768px without triggering "double head" glitches.
+### Improved data quality labeling
+We recommend adding 'derpibooru_p_95' to the prompt and 'derpibooru_p_low' to the negative prompt to improve the quality of generated pony images.
 
-### Removed SFM/3D bias
-The V2 model demonstrated bias toward 3d/sfm visual styles, so special care has been applied to restrict exposure to 3d/sfm images during training.
-
-### Improved data diversity
-We ran multiple experiments finetuning models on 300k pony-only and 600k pony-only datasets; the resulting models demonstrated worse quality for pony-specific data. We concluded that despite more complicated prompting (and lack of pony bias by default), inclusion of a large amount of non-pony-specific, highly ranked, non-photorealistic images generally has a strong positive effect.
 
 ## License
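
The quality-tag recommendation added in this commit can be sketched as a small prompt-building helper. This is a minimal illustration, not part of the model or any library: `build_prompts` is a hypothetical function that simply appends the 'derpibooru_p_95' and 'derpibooru_p_low' tags the README recommends.

```python
def build_prompts(subject: str) -> tuple[str, str]:
    """Return a (prompt, negative_prompt) pair using the quality tags
    recommended in the pony-diffusion README.

    'derpibooru_p_95' nudges generations toward highly ranked images;
    'derpibooru_p_low' in the negative prompt steers away from low-ranked ones.
    """
    prompt = f"{subject}, derpibooru_p_95"
    negative_prompt = "derpibooru_p_low"
    return prompt, negative_prompt


prompt, negative = build_prompts("pony, smiling, outdoors")
print(prompt)    # pony, smiling, outdoors, derpibooru_p_95
print(negative)  # derpibooru_p_low
```

The two strings would then be passed as the prompt and negative prompt fields of whatever SD frontend is in use (e.g. Automatic1111), alongside the CLIP skip setting discussed above.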