---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
datasets:
- AdamLucek/oldbookillustrations-small
language:
- en
---

# LoRA Weights on Old Book Illustrations for Stable Diffusion XL Base 1.0

These are LoRA adaptation weights for [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). The weights were fine-tuned on the [AdamLucek/oldbookillustrations-small dataset](https://huggingface.co/datasets/AdamLucek/oldbookillustrations-small).

LoRA for the text encoder was enabled: **True**.

Special VAE used for training: [madebyollin/sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) (see the inference sketch at the end of this card for pairing it with the LoRA).

## Example Images

*"An Apple"*

*"A Flower Wreath"*

*"A view down an alley in New York"*

*"An office setting with a desk and papers on it, with a view out the window above the desk into the town"*

## Intended uses & limitations

#### How to use
COLAB Notebook Here
```python
from diffusers import DiffusionPipeline
import torch

# Load Stable Diffusion XL Base 1.0
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True
).to("cuda")

# Optional: offload submodules to the CPU to save some GPU memory
# (if you use this, skip the .to("cuda") call above)
pipe.enable_model_cpu_offload()

# Load the trained Old Book Illustrations LoRA weights
pipe.load_lora_weights("AdamLucek/sdxl-base-1.0-oldbookillustrations-lora")

# Generate an image
prompt = "An Apple"
image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    height=1024,
    width=1024,
).images[0]

# Save the image
image.save("SDXL_OldBookIllustrations.png")
```

#### Limitations and bias

**Note**: See the original [Stable Diffusion XL Base 1.0 page](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) for additional limitations and biases.

**Note**: This is a first training run with these hyperparameters; they have not been extensively tuned.

## Training details

**Video Overview**

Trained on a single A100 using the [Diffusers LoRA training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py), documentation available [here](https://huggingface.co/docs/diffusers/main/en/training/lora).

Training script used:

```
accelerate launch train_text_to_image_lora_sdxl.py \
  --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
  --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
  --dataset_name="AdamLucek/oldbookillustrations-small" \
  --validation_prompt="An inventor tinkers with a complex machine in his workshop, oblivious to the setting sun outside" \
  --num_validation_images=4 \
  --validation_epochs=1 \
  --output_dir="output/sdxl-base-1.0-oldbookillustrations-lora" \
  --resolution=1024 \
  --center_crop \
  --random_flip \
  --train_text_encoder \
  --train_batch_size=1 \
  --num_train_epochs=10 \
  --checkpointing_steps=500 \
  --gradient_accumulation_steps=4 \
  --learning_rate=1e-04 \
  --lr_warmup_steps=0 \
  --report_to="wandb" \
  --dataloader_num_workers=8 \
  --allow_tf32 \
  --mixed_precision="fp16" \
  --push_to_hub \
  --hub_model_id="sdxl-base-1.0-oldbookillustrations-lora"
```
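Because the [madebyollin/sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) VAE was used during training, you may also want to pair the LoRA with that VAE at inference. The snippet below is a minimal sketch rather than part of the original Colab; it assumes a recent diffusers release where an `AutoencoderKL` instance can be passed directly to the pipeline constructor:

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the same fp16-safe VAE that was used during training
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)

# Build the SDXL pipeline around that VAE, then attach the LoRA weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")
pipe.load_lora_weights("AdamLucek/sdxl-base-1.0-oldbookillustrations-lora")

image = pipe("A Flower Wreath", num_inference_steps=50).images[0]
image.save("flower_wreath.png")
```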
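To blend the illustration style with the default SDXL look, or to return to the unmodified base model, recent diffusers releases expose `fuse_lora`, `unfuse_lora`, and `unload_lora_weights` on the pipeline. A rough sketch, continuing from the pipeline above and assuming those methods are available in your diffusers version:

```python
# Fuse the adapter into the base weights at reduced strength for a subtler style
pipe.fuse_lora(lora_scale=0.7)
image = pipe("A view down an alley in New York", num_inference_steps=50).images[0]
image.save("alley_lora_scaled.png")

# Undo the fusion and drop the adapter to recover the plain SDXL base model
pipe.unfuse_lora()
pipe.unload_lora_weights()
```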