ZeroCool94 committed
Commit 6ace42a
1 Parent(s): e04ea2c

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -42,7 +42,7 @@ As the model is fine-tuned on a wide variety of content, it’s able to generate
 
 If you find our work useful, please consider supporting us on [OpenCollective](https://opencollective.com/sygil_dev)!
 
-This model is still in its infancy, so feel free to give us feedback on our [Discord Server](https://discord.gg/UjXFsf6mTu) or on the discussions section on huggingface. We plan to improve it with more, better tags in the future, so any help is always welcome 😛
+This model is still in its infancy and is meant to be continually updated and trained on more data over time, so feel free to give us feedback on our [Discord Server](https://discord.gg/UjXFsf6mTu) or in the discussions section on Hugging Face. We plan to improve it with more and better tags in the future, so any help is always welcome 😛
 [![Join the Discord Server](https://badgen.net/discord/members/fTtcufxyHQ?icon=discord)](https://discord.gg/UjXFsf6mTu)
 
 
@@ -83,7 +83,7 @@ image.save("fantasy_forest_illustration.png")
 ## Available Checkpoints:
 - [Sygil Diffusion v0.1](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.1.ckpt): Trained on Stable Diffusion 1.5 for 800,000 steps.
 - [sygil-diffusion-v0.2](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.2.ckpt): Resumed from Sygil Diffusion v0.1 and trained for a total of 1.77 million steps.
-
+- [sygil-diffusion-v0.3_1860778_lora.ckpt](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.3_1860778_lora.ckpt): Resumed from Sygil Diffusion v0.2 and trained for a total of 1.86 million steps.
 ## Training
 
 **Training Data**:
@@ -92,7 +92,7 @@ The model was trained on the following dataset:
 
 **Hardware and others**
 - **Hardware:** 1 x Nvidia RTX 3050 8GB GPU
-- **Hours Trained:** 630 hours approximately.
+- **Hours Trained:** 710 hours approximately.
 - **Optimizer:** AdamW
 - **Adam Beta 1**: 0.9
 - **Adam Beta 2**: 0.999
@@ -105,7 +105,7 @@ The model was trained on the following dataset:
 - **Lora unet Learning Rate**: 1e-7
 - **Lora Text Encoder Learning Rate**: 1e-7
 - **Resolution**: 512 pixels
-- **Total Training Steps:** 1,770,717
+- **Total Training Steps:** 1,860,778
 
 Developed by: [ZeroCool94](https://github.com/ZeroCool940711) at [Sygil-Dev](https://github.com/Sygil-Dev/)
 
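
For readers who want to try the checkpoints referenced in the hunks above, here is a minimal usage sketch with 🤗 Diffusers. It assumes the `Sygil/Sygil-Diffusion` repo also ships diffusers-format weights (the `image.save("fantasy_forest_illustration.png")` line in the second hunk header suggests the README already carries such an example); the prompt and output filename are placeholders, not the README's exact snippet.

```python
# Minimal sketch, not the README's exact snippet: load the Sygil Diffusion
# pipeline from the Hugging Face Hub and generate one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Sygil/Sygil-Diffusion",    # repo hosting the checkpoints listed above
    torch_dtype=torch.float16,  # assumes a GPU; drop for CPU-only runs
)
pipe = pipe.to("cuda")          # assumes a CUDA device is available

prompt = "a fantasy forest illustration, highly detailed"  # placeholder prompt
image = pipe(prompt).images[0]
image.save("fantasy_forest_illustration.png")
```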
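The training hyperparameters touched by the last two hunks (AdamW, betas 0.9/0.999, LoRA learning rates of 1e-7) map directly onto a standard PyTorch `AdamW` configuration. The sketch below is illustrative only; the parameter groups are placeholders, not Sygil-Dev's actual training code.

```python
# Illustrative mapping of the README's listed hyperparameters onto
# torch.optim.AdamW; the parameter tensors are stand-ins, not real model weights.
import torch

lora_unet_params = [torch.nn.Parameter(torch.zeros(4, 4))]          # placeholder
lora_text_encoder_params = [torch.nn.Parameter(torch.zeros(4, 4))]  # placeholder

optimizer = torch.optim.AdamW(
    [
        {"params": lora_unet_params, "lr": 1e-7},          # Lora unet Learning Rate
        {"params": lora_text_encoder_params, "lr": 1e-7},  # Lora Text Encoder Learning Rate
    ],
    betas=(0.9, 0.999),  # Adam Beta 1 / Adam Beta 2
)
```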