---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
  - text-to-image
  - diffusers-training
  - diffusers
  - sd3
  - sd3-diffusers
  - template:sd-lora
instance_prompt: niul, colorful object, engraved with NiUl
widget:
  - text: niul, colorful object, engraved with NiUl
    output:
      url: image_0.png
  - text: niul, colorful object, engraved with NiUl
    output:
      url: image_1.png
  - text: niul, colorful object, engraved with NiUl
    output:
      url: image_2.png
  - text: niul, colorful object, engraved with NiUl
    output:
      url: image_3.png
---

# SD3 DreamBooth - hjvision/models

<Gallery />

## Model description

These are hjvision/models DreamBooth weights for stabilityai/stable-diffusion-3-medium-diffusers.

The weights were trained using DreamBooth with the SD3 diffusers trainer.

The text encoder was not fine-tuned.

## Trigger words

You should use `niul, colorful object, engraved with NiUl` to trigger the image generation.

## Use it with the 🧨 diffusers library

```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the DreamBooth-tuned pipeline in half precision and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('hjvision/models', torch_dtype=torch.float16).to('cuda')
image = pipeline('niul, colorful object, engraved with NiUl').images[0]
```
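
If the full pipeline does not fit on your GPU, diffusers' model CPU offloading can be used instead of moving everything to CUDA up front. A minimal sketch, assuming the `accelerate` package is installed:

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'hjvision/models', torch_dtype=torch.float16
)
# Keep submodules on the CPU and move each one to the GPU only while it runs
pipeline.enable_model_cpu_offload()

image = pipeline('niul, colorful object, engraved with NiUl').images[0]
```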

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).

## Intended uses & limitations

### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```
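
Until the snippet above is filled in, here is a minimal sketch of running inference, assuming the repository contains the full pipeline and using SD3's default sampling settings (28 steps, guidance scale 7.0); the seed and output filename are illustrative:

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'hjvision/models', torch_dtype=torch.float16
).to('cuda')

# Fix the seed so the same prompt reproduces the same image
generator = torch.Generator(device='cuda').manual_seed(0)

image = pipeline(
    'niul, colorful object, engraved with NiUl',
    num_inference_steps=28,  # SD3 pipeline default
    guidance_scale=7.0,      # SD3 pipeline default
    generator=generator,
).images[0]
image.save('niul.png')  # illustrative output path
```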

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]