---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
inference: false
---

A finetuning equivalent that needs less VRAM than finetuning Stable Diffusion itself. It is faster if you already have all the images downloaded, and the results take up less space since you only need the CLIP text encoder's token embeddings rather than a full model checkpoint.

A notebook for producing your own "stable inversions" is included in this repo, but I wouldn't recommend using it (the results are poor). It does run on the Colab free tier, though.

[Link to the notebook for you to download](https://huggingface.co/crumb/genshin-stable-inversion/blob/main/stable_inversion%20(1).ipynb)

Loading this into a diffusers-based notebook like [Doohickey](https://github.com/aicrumb/doohickey) might look something like this:

```python
import torch
from huggingface_hub import hf_hub_download

stable_inversion = "user/my-stable-inversion" #@param {type:"string"}

# Download the learned token embedding table from the Hub
inversion_path = hf_hub_download(repo_id=stable_inversion, filename="token_embeddings.pt")

# Swap the finetuned embeddings into the notebook's CLIP text encoder
# (text_encoder is assumed to be already loaded by the notebook)
text_encoder.text_model.embeddings.token_embedding.weight = torch.load(inversion_path)
```
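The snippet above assumes `text_encoder` has already been loaded by the notebook. For reference, the swap is nothing more than replacing an `nn.Embedding`'s weight table; here is a minimal toy sketch of that mechanic (sizes and variable names are illustrative, not the real CLIP dimensions):

```python
import torch
import torch.nn as nn

# Toy stand-in for CLIP's token embedding table: 10 tokens, 4-dim vectors.
token_embedding = nn.Embedding(10, 4)

# Pretend these are the finetuned embeddings loaded from token_embeddings.pt.
new_weights = torch.randn(10, 4)

# Swap the table in, wrapped as a Parameter so the module stays trainable.
token_embedding.weight = nn.Parameter(new_weights)
```

After the swap, any token id looked up through `token_embedding` resolves to a row of `new_weights`, which is exactly what happens to prompts passed through the patched text encoder.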