---
license: openrail
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- stable-diffusion-xl
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
datasets:
- frank-chieng/maggieq
metrics:
- character
library_name: diffusers
inference:
  parameters:
    negative_prompt: 3d render
widget:
- text: >-
    professional fashion close-up portrait photography of a young beautiful
    maggie Q in the city at night, Nikon Z9, bokeh
  example_title: example1 maggieQ
- text: >-
    RAW candid cinema, maggie Q portrait in a field, 16mm, ((remarkable
    color)), (ultra realistic)
  example_title: example2 maggieQ
pipeline_tag: text-to-image
---
# Character Maggie Q SDXL

## Overview
Character LoRA Maggie Q is a LoRA trained on the SDXL 1.0 base model, a latent text-to-image diffusion model. The model has been fine-tuned with a learning rate of 1e-5 over 3000 total steps at a batch size of 4 on a curated dataset of high-quality Maggie Q images. This model is derived from Stable Diffusion XL 1.0.
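For reference, a training run with these hyperparameters could be sketched with the LoRA example script shipped in the diffusers repository. This is an assumption-laden sketch, not the author's actual command: the script name and flags come from diffusers' `text_to_image` examples, and the 1024 resolution is assumed from SDXL defaults; adjust paths and options to your setup.

```shell
# Hypothetical invocation of diffusers' SDXL LoRA example script,
# using the learning rate, step count, batch size, and dataset named above.
accelerate launch train_text_to_image_lora_sdxl.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --dataset_name="frank-chieng/maggieq" \
  --resolution=1024 \
  --train_batch_size=4 \
  --learning_rate=1e-5 \
  --max_train_steps=3000 \
  --mixed_precision="fp16" \
  --output_dir="sdxl_lora_maggie_Q"
```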
## Model Description
- Developed by: FrankChieng
- Model type: Diffusion-based text-to-image generative model
- License: CreativeML Open RAIL++-M License
- Finetuned from model: Stable Diffusion XL 1.0 base
## How to Use

- Download the LoRA model here; the model is in `.safetensors` format.
- Include "maggie Q" in a natural-language prompt to get realistic result images.
- You can use any generic negative prompt, or use the following suggested negative prompt to guide the model toward high-aesthetic generations:

  poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off, low contrast, underexposed, overexposed, bad art, beginner, amateur, distorted face

- Additionally, prepend the following to prompts to get high-aesthetic results:

  masterpiece, best quality
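The prompt guidance above can be sketched as a small helper that prepends the quality tags and bundles the suggested negative prompt (`build_prompt` is a hypothetical helper name, not part of the model's API):

```python
# Suggested quality prefix and negative prompt from this model card.
QUALITY_PREFIX = "masterpiece, best quality"
NEGATIVE_PROMPT = (
    "poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, "
    "extra limbs, disfigured, deformed, body out of frame, bad anatomy, "
    "watermark, signature, cut off, low contrast, underexposed, overexposed, "
    "bad art, beginner, amateur, distorted face"
)

def build_prompt(subject: str) -> str:
    """Prepend the suggested quality tags to a natural-language prompt."""
    return f"{QUALITY_PREFIX}, {subject}"

prompt = build_prompt("maggie Q portrait in a field, 16mm")
print(prompt)  # masterpiece, best quality, maggie Q portrait in a field, 16mm
```

The resulting `prompt` and `NEGATIVE_PROMPT` strings can be passed directly to the pipeline call shown further down.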
Google Colab
## 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.18.2:
```shell
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, and `accelerate`, as well as the invisible watermark:

```shell
pip install invisible_watermark transformers accelerate safetensors
```
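To check whether an installed diffusers version satisfies the 0.18.2 minimum, a minimal stdlib comparison can be sketched as follows (`version_tuple` and `meets_minimum` are hypothetical helper names; this naive parser ignores pre-release suffixes, so use `packaging.version` for anything stricter):

```python
def version_tuple(v: str) -> tuple:
    # Compare only the numeric major.minor.patch components.
    return tuple(int(part) for part in v.split(".")[:3])

def meets_minimum(installed: str, minimum: str = "0.18.2") -> bool:
    """Return True if the installed version is at least the minimum."""
    return version_tuple(installed) >= version_tuple(minimum)

print(meets_minimum("0.19.3"))  # True
print(meets_minimum("0.17.1"))  # False
```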
Running the pipeline (if you don't swap the scheduler, it will run with the default EulerDiscreteScheduler; in this example we swap it to EulerAncestralDiscreteScheduler):
```shell
pip install -q --upgrade diffusers invisible_watermark transformers accelerate safetensors
pip install huggingface_hub
```

```python
from huggingface_hub import notebook_login
notebook_login()
```
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_model = "frank-chieng/maggieQ"

# Load the SDXL base pipeline, then attach the LoRA weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(lora_model, weight_name="sdxl_lora_maggie_Q.safetensors")
pipe.to("cuda")

prompt = "professional fashion close-up portrait photography of a young beautiful maggie Q at German restaurant during Sunset, Nikon Z9"
negative_prompt = "3d render"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    target_size=(1024, 1024),
    original_size=(4096, 4096),
    num_inference_steps=28,
).images[0]
image.save("maggieQ.png")
```
## Limitations

This model inherits the limitations of Stable Diffusion XL 1.0.