---
license: openrail++
library_name: diffusers
tags:
  - text-to-image
  - stable-diffusion-xl
  - stable-diffusion-xl-diffusers
  - diffusers
  - lora
  - template:sd-lora
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A mushroom in [V] style
widget:
  - text: ' '
    output:
      url: image_0.png
  - text: ' '
    output:
      url: image_1.png
  - text: ' '
    output:
      url: image_2.png
---

# SDXL LoRA DreamBooth - abby101/test

<Gallery />

## Model description

These are abby101/test LoRA adaptation weights for runwayml/stable-diffusion-v1-5.

## Download model

### Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke
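The LoRA weights file can be downloaded and dropped into your UI's LoRA folder (the exact destination depends on the UI; for AUTOMATIC1111 this is typically `models/Lora`). A minimal sketch using `huggingface_hub`:

```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights file from this repository;
# copy the resulting .safetensors file into your UI's LoRA folder.
lora_path = hf_hub_download(
    repo_id='abby101/test',
    filename='pytorch_lora_weights.safetensors',
)
print(lora_path)
```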

### Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base pipeline and apply the LoRA weights from this repository.
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('abby101/test', weight_name='pytorch_lora_weights.safetensors')

# Generate an image using the trigger phrase.
image = pipeline('A mushroom in [V] style').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
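As a minimal sketch of weighting and fusing, assuming the pipeline from the snippet above (the scale value is illustrative):

```py
# Scale the LoRA's influence at inference time (values between 0 and 1).
image = pipeline(
    'A mushroom in [V] style',
    cross_attention_kwargs={'scale': 0.8},
).images[0]

# Or bake the LoRA into the base weights for slightly faster inference.
pipeline.fuse_lora(lora_scale=0.8)
```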

## Trigger words

You should use `A mushroom in [V] style` to trigger the image generation.
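For example, reusing the pipeline from the snippet above (the extra prompt text is purely illustrative):

```py
# Keep the trigger phrase verbatim inside a longer, more descriptive prompt.
image = pipeline(
    'A mushroom in [V] style, glowing in a misty forest, highly detailed',
).images[0]
```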

## Details

All files are available under the Files & versions tab of this repository.

The weights were trained using the 🧨 diffusers Advanced DreamBooth Training Script.

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: False.

Special VAE used for training: None.

## Intended uses & limitations

### How to use

```py
# TODO: add an example code snippet for running this diffusion pipeline
```
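In the meantime, a minimal sketch, assuming the same setup as the diffusers snippet earlier in this card:

```py
import torch
from diffusers import AutoPipelineForText2Image

# Load the base pipeline, apply the LoRA weights, and generate an image.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('abby101/test', weight_name='pytorch_lora_weights.safetensors')

image = pipeline('A mushroom in [V] style').images[0]
image.save('mushroom.png')
```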

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]