sayakpaul (HF staff) committed
Commit 2b2693f
1 parent: 33871e9

description.

Files changed (1):
  app.py (+7 -3)
app.py CHANGED
@@ -11,10 +11,14 @@ This Space lets you convert KerasCV Stable Diffusion weights to a format compati
 
 * The Space downloads a couple of pre-trained weights and runs a dummy inference. Depending on the machine type, the entire process can take anywhere between 2 and 5 minutes.
 * Only Stable Diffusion (v1) is supported as of now, in particular this checkpoint: [`"CompVis/stable-diffusion-v1-4"`](https://huggingface.co/CompVis/stable-diffusion-v1-4).
-* Only the text encoder and UNet parameters are converted, since only these two elements are generally fine-tuned.
 * [This Colab Notebook](https://colab.research.google.com/drive/1RYY077IQbAJldg8FkK8HSEpNILKHEwLb?usp=sharing) was used to develop the conversion utilities initially.
-* You can choose NOT to provide `text_encoder_weights` and `unet_weights` in case you don't have any fine-tuned weights. In that case, the original parameters of the respective models (text encoder and UNet) from KerasCV will be used.
-* You can provide only `text_encoder_weights` or `unet_weights` or both.
+* Whether to provide both `text_encoder_weights` and `unet_weights` depends on the fine-tuning task. Here are some _typical_ scenarios:
+
+    * [DreamBooth](https://dreambooth.github.io/): both the text encoder and the UNet
+    * [Textual Inversion](https://textual-inversion.github.io/): the text encoder only
+    * [Traditional text2image fine-tuning](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image): the UNet only
+
+    **If neither `text_encoder_weights` nor `unet_weights` is provided, nothing will be done.**
 * When providing the weights' links, ensure they're directly downloadable. Internally, the Space uses [`tf.keras.utils.get_file()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) to retrieve the weights locally.
 * If you don't provide `your_hf_token`, the converted pipeline won't be pushed.
 
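The "directly downloadable" requirement matters because `tf.keras.utils.get_file()` fetches whatever bytes the URL serves: a Hugging Face "blob" page URL, for instance, returns an HTML page, while the corresponding "resolve" URL serves the raw file. A minimal sketch of normalizing such a link (the helper name and the example repo/file names are illustrative, not part of the Space's actual code):

```python
def to_direct_download(url: str) -> str:
    """Rewrite a Hugging Face 'blob' page URL into a direct 'resolve' link.

    Illustrative helper only: other hosts (Google Drive, Dropbox, etc.)
    have their own schemes for direct-download links and need separate
    handling. URLs without '/blob/' are returned unchanged.
    """
    return url.replace("/blob/", "/resolve/")


# Hypothetical fine-tuned UNet weights hosted on the Hub:
page_url = "https://huggingface.co/user/repo/blob/main/unet.h5"
print(to_direct_download(page_url))
# → https://huggingface.co/user/repo/resolve/main/unet.h5
```

The resulting `resolve` URL can then be passed as `unet_weights`, and `tf.keras.utils.get_file()` will download and cache the raw file locally.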