|
--- |
|
license: creativeml-openrail-m |
|
library_name: diffusers |
|
tags: |
|
- stable-diffusion-xl |
|
- stable-diffusion-xl-diffusers |
|
- text-to-image |
|
- diffusers-training |
|
- diffusers |
|
base_model: stabilityai/stable-diffusion-xl-base-1.0 |
|
inference: true |
|
--- |
|
|
|
|
|
|
|
|
# Text-to-image finetuning - daehan17/try1 |
|
|
|
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the prompt `a cute samsung president LEE`:
|
|
|
![img_0](./image_0.png) |
|
![img_1](./image_1.png) |
|
![img_2](./image_2.png) |
|
![img_3](./image_3.png) |
|
|
|
|
|
Special VAE used for training: `madebyollin/sdxl-vae-fp16-fix`.
|
|
|
|
|
## Intended uses & limitations |
|
|
|
#### How to use |
|
|
|
A minimal sketch with 🤗 Diffusers (the repo id `daehan17/try1` is taken from this card's title; loading the `madebyollin/sdxl-vae-fp16-fix` VAE mirrors the training setup described above):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the fp16-safe VAE that was used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

# Load the finetuned pipeline and move it to GPU.
pipeline = DiffusionPipeline.from_pretrained(
    "daehan17/try1", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Generate an image with one of the example prompts above.
prompt = "a cute samsung president LEE"
image = pipeline(prompt=prompt).images[0]
image.save("output.png")
```
|
|
|
#### Limitations and bias |
|
|
|
The model inherits the limitations and biases of its base model, **stabilityai/stable-diffusion-xl-base-1.0**. Because it was finetuned on a narrow dataset of Pokémon-style images with BLIP captions, generations tend to skew toward that cartoon style regardless of the prompt, and the model may not be suitable for photorealistic output or for depicting real people. Use is subject to the CreativeML OpenRAIL-M license.
|
|
|
## Training details |
|
|
|
The model was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on **lambdalabs/pokemon-blip-captions**, a dataset of Pokémon images paired with BLIP-generated captions. The `madebyollin/sdxl-vae-fp16-fix` VAE was used during training in place of the stock SDXL VAE to avoid fp16 numerical issues. Other training hyperparameters were not recorded in this card.