
SDXL LoRA DreamBooth - jcjo/peb-sdxl-lora

Model description

These are jcjo/peb-sdxl-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

Download model

Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, and Invoke.

  • LoRA: download [PEB.safetensors here 💾](/jcjo/peb-sdxl-lora/blob/main/PEB.safetensors).
    • Place it in your models/Lora folder.
    • On AUTOMATIC1111, load the LoRA by adding <lora:PEB:1> to your prompt. On ComfyUI just load it as a regular LoRA.
  • Embeddings: download [PEB_emb.safetensors here 💾](/jcjo/peb-sdxl-lora/blob/main/PEB_emb.safetensors).
    • Place it in your embeddings folder.
    • Use it by adding PEB_emb to your prompt. For example, a <peb0><peb1> woman (you need both the LoRA and the embeddings, as they were trained together for this LoRA). A full example prompt is sketched after this list.
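
Putting the pieces together, a full AUTOMATIC1111 prompt might look like the sketch below (this assumes the LoRA file is saved as PEB.safetensors; adjust the lora tag to match your filename):

a photo of <peb0><peb1> woman, black suit, walking on a sidewalk, looking at viewer, smiling, <lora:PEB:1>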

Use it with the 🧨 diffusers library

from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
        
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jcjo/peb-sdxl-lora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='jcjo/peb-sdxl-lora', filename='PEB_emb.safetensors', repo_type="model")  # assumed embedding filename; adjust if the repo stores it under a different name
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
        
image = pipeline('a photo of <peb0><peb1> woman, black suit, walking on a sidewalk, looking at viewer, smiling, waving, (8k, RAW photo, best quality, masterpiece:1.2), (realistic, photo-realistic:1.37), professional lighting, photon mapping, radiosity, physically-based rendering, octane render.').images[0]
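
The snippet below is a hedged sketch of common follow-up steps; the sampler settings and seed are illustrative, not values tuned for this LoRA.

generator = torch.Generator(device='cuda').manual_seed(42)  # fixed seed for reproducible results
image = pipeline(
    'a photo of <peb0><peb1> woman, black suit, walking on a sidewalk, looking at viewer, smiling',
    num_inference_steps=30,   # illustrative sampler setting
    guidance_scale=7.5,       # illustrative guidance strength
    generator=generator,
).images[0]
image.save('peb_example.png')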

For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.
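
As a minimal sketch of the weighting and fusing options mentioned above (the 0.8 scale is an arbitrary example, not a tuned value for this LoRA):

# apply the LoRA at reduced strength for a single call
image = pipeline(
    'a photo of <peb0><peb1> woman, black suit, walking on a sidewalk',
    cross_attention_kwargs={'scale': 0.8},
).images[0]

# or merge the LoRA into the pipeline weights at a chosen scale
pipeline.fuse_lora(lora_scale=0.8)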

Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: for this model, use <peb0><peb1> (e.g. a photo of <peb0><peb1> woman).

Details

All Files & versions.

The weights were trained using 🧨 diffusers Advanced Dreambooth Training Script.

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
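
Because training used the madebyollin/sdxl-vae-fp16-fix VAE, loading the same VAE at inference time can avoid fp16 artifacts. A minimal sketch (optional; the earlier snippet works without it):

from diffusers import AutoPipelineForText2Image, AutoencoderKL
import torch

# load the fp16-safe VAE used during training
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0', vae=vae, torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('jcjo/peb-sdxl-lora', weight_name='pytorch_lora_weights.safetensors')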

