nitrosocke committed
Commit: ae5e1fb
1 Parent(s): 58654b7

Update README.md

Files changed (1):
  1. README.md (+4, -3)
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  license: creativeml-openrail-m
  ---
- **Update:** Arcane Diffusion v2 now available!
+ **Update:** Arcane Diffusion v3 coming soon (already in training)!

  This is the fine-tuned Stable Diffusion model trained on images from the TV Show Arcane.
  Use the tokens **arcane style** in your prompts for the effect.
@@ -12,6 +12,7 @@ Sample images used for training:
  Sample images from the model:
  ![output Samples](https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-output-images.jpg)

- Update: Version 2 uses the diffusers based dreambooth training and prior-preservation loss is way more effective. The .ckpt was converted with a script and works with automatics repo.
+ Version 2 (arcane-diffusion-v2): This version uses diffusers-based DreamBooth training, where the prior-preservation loss is far more effective. The diffusers weights were then converted to a .ckpt file with a script so that the model also works with AUTOMATIC1111's webui repo.
+ Training was done with 5k steps for a direct comparison to v1, and the results show that more steps are needed for a more pronounced style. Version 3 will be tested with 11k steps.

- Disclaimer v1 (arcane-diffusion-5k): This model was trained using _Unfrozen Model Textual Inversion_ utilizing the _Training with prior-preservation loss_ methods. There is still a slight shift towards the style, while not using the arcane token.
+ Version 1 (arcane-diffusion-5k): This model was trained using _Unfrozen Model Textual Inversion_ together with the _Training with prior-preservation loss_ method. There is still a slight shift towards the style even when the arcane token is not used.
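
Since the README tells users to include the **arcane style** token in their prompts, here is a minimal sketch of loading the model with the diffusers library. The prompt text, fp16/CUDA settings, and sampling parameters are illustrative assumptions, not settings taken from this commit.

```python
# Minimal sketch: generating an image with the "arcane style" token via diffusers.
# The prompt, dtype, and device choices below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nitrosocke/Arcane-Diffusion",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "arcane style, portrait of a magical princess with golden hair"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("arcane_princess.png")
```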
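The Version 2 note says the diffusers weights were converted to a .ckpt with a script but does not name it. As an assumption only, the sketch below invokes the converter bundled with the diffusers repository (scripts/convert_diffusers_to_original_stable_diffusion.py) with placeholder paths; it is an illustration, not the exact command used for this model.

```python
# Hypothetical sketch of the diffusers -> .ckpt conversion step described above.
# Script name and flags follow the converter shipped in the diffusers repository;
# the local paths are placeholders, not the actual ones used for arcane-diffusion-v2.
import subprocess

subprocess.run(
    [
        "python",
        "scripts/convert_diffusers_to_original_stable_diffusion.py",
        "--model_path", "./arcane-diffusion-v2",            # local diffusers-format weights
        "--checkpoint_path", "./arcane-diffusion-v2.ckpt",  # single-file checkpoint for the webui
    ],
    check=True,
)
```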