---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
  - text-to-image
  - diffusers-training
  - diffusers
  - lora
  - sd3
  - sd3-diffusers
  - template:sd-lora
instance_prompt: a photo of [V] object
widget:
  - text: A photo of [V] object
    output:
      url: image_0.png
  - text: A photo of [V] object
    output:
      url: image_1.png
  - text: A photo of [V] object
    output:
      url: image_2.png
  - text: A photo of [V] object
    output:
      url: image_3.png
---

# SD3 DreamBooth LoRA - anmittal1/camera2-sd3-lora-1

## Model description

These are anmittal1/camera2-sd3-lora-1 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.

The weights were trained using DreamBooth with the SD3 diffusers trainer.

Was LoRA for the text encoder enabled? False.

## Trigger words

You should use `a photo of [V] object` to trigger the image generation.
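Because `[V]` is the learned identifier, the full trigger phrase should appear verbatim inside longer prompts. A tiny pure-Python illustration (the helper and the style text are hypothetical, not part of the training setup):

```python
TRIGGER = "a photo of [V] object"  # trigger phrase for this LoRA


def build_prompt(style: str) -> str:
    """Append descriptive text while keeping the trigger phrase intact."""
    return f"{TRIGGER}, {style}"


print(build_prompt("studio lighting, 85mm lens"))
# -> a photo of [V] object, studio lighting, 85mm lens
```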

## Download model

Download the `*.safetensors` LoRA in the Files & versions tab.

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "anmittal1/camera2-sd3-lora-1", weight_name="pytorch_lora_weights.safetensors"
)
image = pipeline("A photo of [V] object").images[0]
```

## Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke

For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.
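As one illustration of weight scaling, the adapter can be fused into the base weights at reduced strength with the standard `fuse_lora` helper (a sketch, not the authors' recommended settings: the 0.6 scale is arbitrary, and the snippet skips itself when torch/diffusers or a CUDA device is unavailable):

```python
prompt = "A photo of [V] object"  # trigger phrase for this LoRA

try:
    import torch
    from diffusers import AutoPipelineForText2Image

    has_gpu = torch.cuda.is_available()
except ImportError:  # torch or diffusers not installed
    has_gpu = False

if has_gpu:
    pipeline = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    ).to("cuda")
    pipeline.load_lora_weights(
        "anmittal1/camera2-sd3-lora-1",
        weight_name="pytorch_lora_weights.safetensors",
    )
    # Bake the LoRA into the base weights at 60% strength; this speeds up
    # inference slightly and can be undone with pipeline.unfuse_lora().
    pipeline.fuse_lora(lora_scale=0.6)
    image = pipeline(prompt).images[0]
    image.save("fused_sample.png")
```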

## License

Please adhere to the licensing terms described in the stabilityai/stable-diffusion-3-medium-diffusers model card.

## Intended uses & limitations

### How to use

```py
# TODO: add an example code snippet for running this diffusion pipeline
```
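Until the authors fill in their own snippet, here is a minimal, memory-conscious sketch (assumptions: the diffusers, transformers, and peft packages are installed; `enable_model_cpu_offload()` trades speed for lower peak VRAM; the descriptive prompt suffix is illustrative; the snippet skips itself when no CUDA device is present):

```python
trigger = "a photo of [V] object"  # trigger phrase from this card
prompt = f"{trigger}, studio lighting"  # suffix is an illustrative assumption

try:
    import torch
    from diffusers import StableDiffusion3Pipeline

    runnable = torch.cuda.is_available()
except ImportError:  # torch or diffusers not installed
    runnable = False

if runnable:
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    )
    pipe.load_lora_weights(
        "anmittal1/camera2-sd3-lora-1",
        weight_name="pytorch_lora_weights.safetensors",
    )
    # Keep submodules on CPU and move each to the GPU only when needed,
    # lowering peak VRAM at some cost in speed.
    pipe.enable_model_cpu_offload()
    image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
    image.save("sample.png")
```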

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]