ZeroCool94 committed on
Commit 7699903
1 Parent(s): 23b65e4

Update README.md

Files changed (1): README.md (+4 -2)
README.md CHANGED
@@ -39,11 +39,13 @@ This model is still in its infancy and it's meant to be constantly updated and t
 
 ## Available Checkpoints:
 - #### Stable:
+  - None
+- #### Old Checkpoints:
   - [vae.sygil_muse_v0.1.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.sygil_muse_v0.1.pt): Trained from scratch for 3.0M steps with **dim: 128** and **vq_codebook_size: 256**.
   - [maskgit.sygil_muse_v0.1.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/maskgit.sygil_muse_v0.1.pt): Maskgit trained from the VAE for 3.46M steps.
   - [vae.sygil_muse_v0.5.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.sygil_muse_v0.5.pt): Trained from scratch for 1.99M steps with **dim: 128** and **vq_codebook_size: 8192**.
 - #### Beta:
-  - [vae.87000.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.87000.pt): Trained from scratch for 87K steps with a higher **vq_codebook_dim** and **vq_codebook_size** than before.
+  - [vae.195500.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.195500.pt): Trained from scratch for 195K steps with a higher **vq_codebook_dim** and **vq_codebook_size** than before.
   - [maskgit.39000.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/maskgit.39000.pt): Maskgit trained from the VAE for 39K steps using the hyperparameters `heads 16` and `depth 22` for testing. These values have a large impact on performance and increase VRAM usage, so this checkpoint is for testing only; quality increased a lot and required much less training, which is what we want, but we still need to find a balance between quality and performance.
 
 Note: Checkpoints under the Beta section are updated daily, or at least 3-4 times a week. While the beta checkpoints can be used as they are, only the latest version is kept on the repo and older checkpoints are removed when a new one is uploaded.
@@ -73,7 +75,7 @@ The model was trained on the following dataset:
 - **heads:** 8
 - **depth:** 4
 - **Random Crop:** True
-- **Total Training Steps:** 87,000
+- **Total Training Steps:** 195,500
 
 Note: On Muse we can change the image_size or resolution at any time without having to train the model from scratch again. This allows us to first train the model at a low resolution with the same `dim` and `vq_codebook_size` so it trains faster, and then increase the `image_size` to a higher resolution once the model has trained enough.
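
For context, the hyperparameters called out in the diff (dim, vq_codebook_size, heads, depth) map onto constructor arguments in the muse-maskgit-pytorch `VQGanVAE` and `MaskGitTransformer` classes. The sketch below is only an illustration of how the v0.1 checkpoints might be loaded, assuming they are compatible with that library's standard API; the file paths, `seq_len`, the transformer `dim`, the generation `image_size`, and the state-dict layout of the maskgit checkpoint are assumptions, not documented in this commit.

```python
# Hedged sketch: loading the v0.1 checkpoints, assuming the standard
# muse-maskgit-pytorch VQGanVAE / MaskGit API. Paths and several values below
# are placeholders, not taken from this repo.
import torch
from muse_maskgit_pytorch import VQGanVAE, MaskGit, MaskGitTransformer

# VAE hyperparameters from the checkpoint notes (v0.1: dim 128, vq_codebook_size 256)
vae = VQGanVAE(
    dim = 128,
    vq_codebook_size = 256,
)
vae.load('vae.sygil_muse_v0.1.pt')  # checkpoint file from this repo

# Transformer hyperparameters from the training details (heads 8, depth 4);
# num_tokens matches the codebook size, seq_len and dim are assumptions.
transformer = MaskGitTransformer(
    num_tokens = 256,
    seq_len = 256,   # assumed: (image_size / VAE downsampling factor) ** 2
    dim = 512,       # assumed model dimension
    depth = 4,
    heads = 8,
)

base_maskgit = MaskGit(
    vae = vae,
    transformer = transformer,
    image_size = 256,       # assumed generation resolution
    cond_drop_prob = 0.25,  # classifier-free guidance dropout
)

# Assumption: the maskgit checkpoint is a plain state dict that can be restored
# directly onto the MaskGit module (which is a torch.nn.Module).
base_maskgit.load_state_dict(torch.load('maskgit.sygil_muse_v0.1.pt', map_location = 'cpu'))

images = base_maskgit.generate(
    texts = ['a painting of a lighthouse at sunset'],
    cond_scale = 3.0,  # classifier-free guidance scale
)
```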
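The note about raising `image_size` mid-training is consistent with the VAE being fully convolutional: in muse-maskgit-pytorch the training resolution is an argument of the trainer rather than being baked into the `VQGanVAE` weights. Under that assumption, a checkpoint trained at low resolution can be resumed at a higher one, as in the sketch below; the dataset folder, batch size, and step count are placeholders.

```python
# Hedged sketch of the low-to-high resolution curriculum described in the note,
# assuming the muse-maskgit-pytorch VQGanVAETrainer API.
from muse_maskgit_pytorch import VQGanVAE, VQGanVAETrainer

# Same dim and vq_codebook_size as the existing checkpoint; only the
# training resolution changes.
vae = VQGanVAE(dim = 128, vq_codebook_size = 8192)
vae.load('vae.sygil_muse_v0.5.pt')  # resume from the existing checkpoint

trainer = VQGanVAETrainer(
    vae = vae,
    image_size = 512,             # higher resolution than the earlier low-res phase
    folder = '/path/to/images',   # placeholder dataset path
    batch_size = 4,
    grad_accum_every = 8,
    num_train_steps = 100_000,    # placeholder step count
).cuda()

trainer.train()
```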