|
--- |
|
license: creativeml-openrail-m |
|
library_name: diffusers |
|
tags: |
|
- stable-diffusion |
|
- stable-diffusion-diffusers |
|
- text-to-image |
|
- diffusers |
|
- diffusers-training |
|
- lora |
|
base_model: runwayml/stable-diffusion-v1-5 |
|
inference: true |
|
--- |
|
|
|
|
|
|
|
|
# LoRA text2image fine-tuning - PQlet/lora-narutoblip-v1-ablation-r64-a16 |
|
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the Naruto-BLIP dataset. Example images generated with the fine-tuned weights are shown below.
|
![img_0](./image_a_man_with_glasses_and_a_shirt_on.png) |
|
![img_1](./image_a_group_of_people_sitting_on_the_ground.png) |
|
![img_2](./image_a_man_in_a_green_hoodie_standing_in_front_of_a_mountain.png) |
|
![img_3](./image_a_man_with_a_gun_in_his_hand.png) |
|
![img_4](./image_a_woman_with_red_hair_and_a_cat_on_her_head.png) |
|
![img_5](./image_two_pokemons_sitting_on_top_of_a_cloud.png) |
|
![img_6](./image_a_man_standing_in_front_of_a_bridge.png) |
|
|
|
|
|
## Intended uses & limitations |
|
|
|
#### How to use |
|
|
|
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then apply the LoRA weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("PQlet/lora-narutoblip-v1-ablation-r64-a16")
pipe = pipe.to("cuda")

image = pipe("a man with glasses and a shirt on").images[0]
image.save("example.png")
```
|
|
|
#### Limitations and bias |
|
|
|
The model inherits the limitations and biases of the runwayml/stable-diffusion-v1-5 base model. Because the LoRA weights were fine-tuned on a small, stylistically narrow anime dataset, generations tend to adopt a Naruto-like style even for unrelated prompts and may underrepresent subjects that do not appear in the training captions.
|
|
|
## Training details |
|
|
|
The LoRA weights were fine-tuned on the Naruto-BLIP dataset (Naruto anime images paired with BLIP-generated captions). As reflected in the repository name, this ablation run uses LoRA rank r=64 and alpha=16.