Update README.md
README.md
CHANGED
tags:
- lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - AlexeyGHT/Stable_Diffusion_v1.4_lora

These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4, fine-tuned on the AlexeyGHT/Iris dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

## Intended uses & limitations

#### How to use

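The snippet below is a minimal sketch of how LoRA adapter weights like these are typically loaded with the diffusers library. It assumes the weights were saved in the standard format produced by the diffusers text-to-image LoRA training script; the prompt and output filename are only placeholders.
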
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the LoRA weights in this repository follow the standard layout
# written by the diffusers text-to-image LoRA training script.
base_model = "CompVis/stable-diffusion-v1-4"
lora_repo = "AlexeyGHT/Stable_Diffusion_v1.4_lora"

pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.load_lora_weights(lora_repo)  # on older diffusers versions: pipe.unet.load_attn_procs(lora_repo)
pipe.to("cuda")

# Placeholder prompt; replace it with one that matches the fine-tuning domain.
prompt = "a photo of an iris flower"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("iris_lora_sample.png")
```
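Note that the adapter contains only the low-rank weight updates: the CompVis/stable-diffusion-v1-4 base checkpoint is still required and is fetched by `from_pretrained` as usual.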

#### Limitations and bias

The adapter inherits the limitations and biases of the underlying CompVis/stable-diffusion-v1-4 model; see its model card for details. [TODO: provide examples of latent issues specific to this fine-tuning and potential remediations]

## Training details

The LoRA adapter was trained on the AlexeyGHT/Iris dataset with CompVis/stable-diffusion-v1-4 as the frozen base model. [TODO: add training hyperparameters]