Move samples to the end, link to blog and docs (#2)
Commit: f88dbe16b1a0b96b63444144c07c60036824ea15

README.md CHANGED
@@ -11,12 +11,7 @@ inference: true
 ---
 
 # LoRA text2image fine-tuning - https://huggingface.co/pcuenq/pokemon-lora
-These are LoRA adaption weights trained on base model https://huggingface.co/runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset.
-
-![img_0](./image_0.png)
-![img_1](./image_1.png)
-![img_2](./image_2.png)
-![img_3](./image_3.png)
+These are LoRA adaption weights trained on base model https://huggingface.co/runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset.
 
 ## How to Use
 
@@ -41,3 +36,12 @@ pipe.to("cuda")
 image = pipe("Green pokemon with menacing face", num_inference_steps=25).images[0]
 image.save("green_pokemon.png")
 ```
+
+Please, check [our blog post](https://huggingface.co/blog/lora) or [documentation](https://huggingface.co/docs/diffusers/v0.15.0/en/training/lora#text-to-image-inference) for more details.
+
+## Example Images
+
+![img_0](./image_0.png)
+![img_1](./image_1.png)
+![img_2](./image_2.png)
+![img_3](./image_3.png)
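The body of the "How to Use" section falls in the collapsed region between the two hunks; only the tail of its code block (`pipe.to("cuda")`, the `pipe(...)` call, and `image.save`) is visible as diff context above. For orientation, here is a minimal sketch of what that snippet plausibly looks like, assuming the diffusers v0.15-era `load_attn_procs` API from the linked documentation; the exact code in the model card may differ.

```python
# Hypothetical reconstruction of the collapsed "How to Use" snippet.
# Assumes diffusers ~v0.15, where LoRA adapters are attached to the UNet
# via load_attn_procs(); only the last three lines are confirmed by the diff.
import torch
from diffusers import StableDiffusionPipeline

model_path = "pcuenq/pokemon-lora"  # this repo's LoRA adapter weights

# Load the base model the adapter was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Attach the LoRA attention processors to the UNet.
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")

# These two lines appear verbatim in the diff context above.
image = pipe("Green pokemon with menacing face", num_inference_steps=25).images[0]
image.save("green_pokemon.png")
```

In newer diffusers releases the same adapter can also be attached with `pipe.load_lora_weights(model_path)`, which is the approach the current documentation favors.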