---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of bakso
widget:
  - text: A photo of bakso in a bowl
    output:
      url: image_0.png
  - text: A photo of bakso in a bowl
    output:
      url: image_1.png
  - text: A photo of bakso in a bowl
    output:
      url: image_2.png
  - text: A photo of bakso in a bowl
    output:
      url: image_3.png
tags:
  - text-to-image
  - diffusers-training
  - diffusers
  - lora
  - template:sd-lora
  - stable-diffusion-xl
  - stable-diffusion-xl-diffusers
---

# SDXL LoRA DreamBooth - adhisetiawan/sdxl-base-1.0-indonesian-food-dreambooth-lora

<Gallery />

## Model description

These are adhisetiawan/sdxl-base-1.0-indonesian-food-dreambooth-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using DreamBooth.

LoRA for the text encoder was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
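
The adapter repository does not ship the VAE itself. If you see numerical artifacts (e.g. black images) when running the base model in fp16, you can load the same fp16-safe VAE for inference. A minimal sketch using the standard diffusers `AutoencoderKL` API; swapping the VAE at inference is optional and an assumption here, not something this card prescribes:

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Optionally use the same fp16-safe VAE that was used during training
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")
```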

## Trigger words

You should use `a photo of bakso` to trigger the image generation.
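
In practice this means every prompt should contain the trigger phrase verbatim; the extra wording below is only illustrative:

```python
# Illustrative prompts built around the trigger phrase "a photo of bakso"
prompts = [
    "a photo of bakso in a bowl",
    "a photo of bakso on a wooden table, warm lighting",
]
```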

## Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
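
You can also fetch the weights file programmatically with `huggingface_hub`. The file name below (`pytorch_lora_weights.safetensors`) is the default emitted by the diffusers DreamBooth LoRA training script and is an assumption here; check the Files & versions tab for the actual name.

```python
from huggingface_hub import hf_hub_download

# Download the LoRA weights file to the local HF cache (file name is assumed)
lora_path = hf_hub_download(
    repo_id="adhisetiawan/sdxl-base-1.0-indonesian-food-dreambooth-lora",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)
```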

## Intended uses & limitations

#### How to use

```python
import torch
from diffusers import DiffusionPipeline

# Load Stable Diffusion XL Base 1.0
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# Optional: offload model components to the CPU to save GPU memory
# (if you enable this, skip the .to("cuda") call above)
# pipeline.enable_model_cpu_offload()

# Load the trained DreamBooth LoRA weights
pipeline.load_lora_weights("adhisetiawan/sdxl-base-1.0-indonesian-food-dreambooth-lora")

images = pipeline(
    "a delicious takoyaki on a plate", num_images_per_prompt=4, guidance_scale=8
)

# Display the generated images (display() is available in notebook/IPython environments)
for image in images.images:
    display(image)
```
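
After loading, the strength of the adapter can be reduced, or the adapter removed entirely, without rebuilding the pipeline. A brief sketch using standard diffusers calls; the scale value 0.8 is an arbitrary example:

```python
# Weaken the LoRA effect by passing a scale through the cross-attention kwargs
images = pipeline(
    "a photo of bakso in a bowl",
    num_images_per_prompt=1,
    guidance_scale=8,
    cross_attention_kwargs={"scale": 0.8},  # 1.0 corresponds to full LoRA strength
).images

# Remove the adapter to recover the plain SDXL base model
pipeline.unload_lora_weights()
```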

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]