---
base_model: stabilityai/stable-diffusion-3.5-medium
library_name: diffusers
license: other
instance_prompt: an icon of trpfrog
widget:
- text: an icon of trpfrog eating ramen
  output:
    url: image_1.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_2.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_3.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_4.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_5.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_7.png
tags:
- text-to-image
- diffusers-training
- diffusers
- sd3
- sd3-diffusers
datasets:
- trpfrog/trpfrog-icons
- Prgckwb/trpfrog-icons-dreambooth
---
# SD3 DreamBooth - Prgckwb/trpfrog-sd3.5-medium
<Gallery />
## Model description
**Note:** this model is the same as `Prgckwb/trpfrog-sd3.5-medium-lora`.

These are `Prgckwb/trpfrog-sd3.5-medium` DreamBooth weights for [stabilityai/stable-diffusion-3.5-medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium).
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `an icon of trpfrog` to trigger the image generation.
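Every prompt must contain the trigger phrase, as in the example prompts above (e.g. `an icon of trpfrog eating ramen`). A minimal sketch of a helper that prepends it (`build_prompt` is a hypothetical name, not part of this repository):

```python
# Trigger phrase this DreamBooth model was trained on
TRIGGER = "an icon of trpfrog"

def build_prompt(subject: str = "") -> str:
    """Prepend the DreamBooth trigger phrase to an optional subject description."""
    return f"{TRIGGER} {subject}".strip()

print(build_prompt("eating ramen"))  # -> an icon of trpfrog eating ramen
print(build_prompt())                # -> an icon of trpfrog
```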
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import torch
from diffusers import AutoPipelineForText2Image

# Load the DreamBooth-finetuned pipeline in half precision
pipeline = AutoPipelineForText2Image.from_pretrained(
    'Prgckwb/trpfrog-sd3.5-medium',
    torch_dtype=torch.float16,
).to('cuda')

# The trigger phrase `an icon of trpfrog` must appear in the prompt
image = pipeline('an icon of trpfrog').images[0]
image.save('trpfrog.png')
```
## License
This model inherits the license of the base model. Please adhere to the licensing terms described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium/blob/main/LICENSE.md).