Update README.md
README.md CHANGED
@@ -12,11 +12,11 @@ inference: false
 These weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the 🧨 diffusers library, [come here](https://huggingface.co/CompVis/stabilityai/sd-vae-ft-ema).

 ## Decoder Finetuning
-We publish two kl-f8 autoencoder versions, finetuned from the original [kl-f8 autoencoder](https://github.com/CompVis/latent-diffusion#pretrained-autoencoding-models).
-The first, _ft-EMA_, was resumed from the original checkpoint, trained for 313198 steps and uses EMA weights.
-The second, _ft-MSE_, was resumed from _ft-EMA_ and uses EMA weights and was trained for another 280k steps using a
-on MSE reconstruction (
-To keep compatibility with existing models, only the decoder part was finetuned; the checkpoints can be used as a drop-in replacement for the existing autoencoder
+We publish two kl-f8 autoencoder versions, finetuned from the original [kl-f8 autoencoder](https://github.com/CompVis/latent-diffusion#pretrained-autoencoding-models) on a 1:1 ratio of [LAION-Aesthetics](https://laion.ai/blog/laion-aesthetics/) and LAION-Humans, an unreleased subset containing only SFW images of humans. The intent was to fine-tune on the Stable Diffusion training set (the autoencoder was originally trained on OpenImages) but also to enrich the dataset with images of humans to improve the reconstruction of faces.
+The first, _ft-EMA_, was resumed from the original checkpoint, trained for 313198 steps, and uses EMA weights. It uses the same loss configuration as the original checkpoint (L1 + LPIPS).
+The second, _ft-MSE_, was resumed from _ft-EMA_, also uses EMA weights, and was trained for another 280k steps using a different loss, with more emphasis
+on MSE reconstruction (MSE + 0.1 * LPIPS). It produces somewhat "smoother" outputs. The batch size for both versions was 192 (16 A100s, batch size 12 per GPU).
+To keep compatibility with existing models, only the decoder part was finetuned; the checkpoints can be used as a drop-in replacement for the existing autoencoder.

 _Original kl-f8 VAE vs f8-ft-EMA vs f8-ft-MSE_
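For readers who want to see the ft-MSE objective concretely, here is a minimal sketch of a loss matching the description above: plain MSE plus a 0.1-weighted LPIPS perceptual term. The `lpips` package, the `net="vgg"` backbone choice, and the function name `ft_mse_loss` are illustrative assumptions, not the actual training code behind these checkpoints.

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

# Perceptual distance; the VGG backbone here is an assumption, not confirmed.
perceptual = lpips.LPIPS(net="vgg")

def ft_mse_loss(recon: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MSE + 0.1 * LPIPS, as described for the ft-MSE decoder finetuning.

    Inputs are expected in [-1, 1] with shape (N, 3, H, W).
    """
    mse = F.mse_loss(recon, target)
    return mse + 0.1 * perceptual(recon, target).mean()
```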
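And a minimal sketch of what "drop-in replacement" can look like with the 🧨 diffusers library, assuming a diffusers-format VAE repo such as `stabilityai/sd-vae-ft-mse` (swap in the ft-EMA checkpoint if preferred); the base-model id and prompt are illustrative.

```python
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load the finetuned VAE and pass it to the pipeline in place of the
# autoencoder bundled with the base Stable Diffusion checkpoint.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    vae=vae,
)

image = pipe("a portrait photo of a smiling person").images[0]
image.save("sample.png")
```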