Crosstyan committed on
Commit b946723
1 Parent(s): 7395ca4

update new models

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +7 -1
  3. bp_mk3.safetensors +3 -0
  4. bp_mk5.safetensors +3 -0
.gitattributes CHANGED
@@ -1,3 +1,4 @@
 *.png filter=lfs diff=lfs merge=lfs -text
 *.bin filter=lfs diff=lfs merge=lfs -text
 *.ckpt filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
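The added pattern routes every `*.safetensors` file through Git LFS, just like the existing `*.png`, `*.bin`, and `*.ckpt` rules, so the new checkpoints in this commit are stored as small pointer files rather than full blobs. As a rough, illustrative sketch (not part of the repo, and only an approximation of Git's own attribute matching), the pattern check amounts to simple glob matching:

```python
# Rough sketch: approximate which committed files fall under the LFS rules
# declared in .gitattributes (Git performs the real matching itself).
from fnmatch import fnmatch

lfs_patterns = ["*.png", "*.bin", "*.ckpt", "*.safetensors"]  # from .gitattributes above
committed_files = ["README.md", "bp_mk3.safetensors", "bp_mk5.safetensors"]

for path in committed_files:
    tracked = any(fnmatch(path, pattern) for pattern in lfs_patterns)
    print(f"{path}: {'Git LFS pointer' if tracked else 'regular blob'}")
```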
README.md CHANGED
@@ -16,6 +16,12 @@ widget:
 
 # BPModel
 
+## Update
+
+2023-01-02: I wasted more GPU hours training it a little bit more (overfitting). Check out [bp_mk3.safetensors](bp_mk3.safetensors) and [bp_mk5.safetensors](bp_mk5.safetensors). Prepare your own VAE! Update your WebUI if you can't load [safetensors](https://github.com/huggingface/safetensors).
+
+## Introduction
+
 BPModel is an experimental Stable Diffusion model based on [ACertainty](https://huggingface.co/JosephusCheung/ACertainty) from [Joseph Cheung](https://huggingface.co/JosephusCheung).
 
 Why does this model even exist? There are loads of Stable Diffusion models out there, especially anime-style models.
@@ -145,7 +151,7 @@ LaTeNt SpAcE!
 Use [`bp_1024_with_vae_te.ckpt`](bp_1024_with_vae_te.ckpt) if you don't have a VAE and text encoder on hand; still,
 the EMA weight is not included and it's fp16.
 
-If you want to continue training, use [`bp_1024_e10_ema.ckpt`](bp_1024_e10_ema.ckpt), which is the EMA weight
+If you want to continue training, use [`bp_1024_e10_ema.ckpt`](bp_1024_e10_ema.ckpt), which is the EMA UNet weight
 in fp32 precision.
 
 For better performance, it is strongly recommended to use Clip skip (CLIP stop at last layers) 2.
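The update note assumes a WebUI that can read safetensors. If you want to load the new checkpoints outside a WebUI, a minimal sketch with the `safetensors` PyTorch helper might look like the following; it assumes `torch` and `safetensors` are installed and that `bp_mk5.safetensors` has been downloaded locally, and the key names it prints depend on how the checkpoint was exported.

```python
# Minimal sketch: read a safetensors checkpoint as a plain dict of tensors.
from safetensors.torch import load_file

state_dict = load_file("bp_mk5.safetensors")  # assumed local download from this repo

# Peek at a few entries; key names depend on how the checkpoint was exported.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```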
bp_mk3.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97848d7d80b242a1483d0307509b422fee12a0e7096ff202397a6e395a71aea9
+size 1719136903
bp_mk5.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f87dabceffd299a8f3f72f031829338e34ad3c1e2541815af08fa694d65fb4c0
+size 1719136903
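Both ADDED entries are Git LFS pointer files: the actual weights live on the LFS server and are identified by the sha256 `oid` and byte `size` recorded above. A small sketch for verifying a downloaded copy against those values (hash and size copied from the `bp_mk5.safetensors` pointer; the path is assumed to be a local download):

```python
# Sketch: check a downloaded file against the oid/size from its LFS pointer.
import hashlib
import os

EXPECTED_OID = "f87dabceffd299a8f3f72f031829338e34ad3c1e2541815af08fa694d65fb4c0"
EXPECTED_SIZE = 1719136903
PATH = "bp_mk5.safetensors"  # assumed local download

assert os.path.getsize(PATH) == EXPECTED_SIZE, "size mismatch"

digest = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("bp_mk5.safetensors matches its LFS pointer")
```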