---
library_name: diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
  - text-to-image
license: openrail++
inference: false
---

## What is different about this fork from the original (h1t/oms_b_openclip_xl)?

The code has been modified to work with the current diffusers release, 0.27.2 (if needed, pin it with `pip install diffusers==0.27.2`).
The behavior remains the same. Enjoy.

```diff
- OMSPipeline.from_pretrained('h1t/oms_b_openclip_xl', ...)
+ OMSPipeline.from_pretrained('kaeru-shigure/oms_b_openclip_xl', ...)
```
```diff
--- a/diffusers_patch/models/unet_2d_condition_woct.py
+++ b/diffusers_patch/models/unet_2d_condition_woct.py
@@ -35,7 +35,7 @@ from diffusers.models.embeddings import (
     Timesteps,
 )
 from diffusers.models.modeling_utils import ModelMixin
-from diffusers.models.unet_2d_blocks import (
+from diffusers.models.unets.unet_2d_blocks import (
     CrossAttnDownBlock2D,
     CrossAttnUpBlock2D,
     DownBlock2D,
@@ -159,6 +159,7 @@ class UNet2DConditionWoCTModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMi
         conv_out_kernel: int = 3,
         mid_block_only_cross_attention: Optional[bool] = None,
         cross_attention_norm: Optional[str] = None,
+        subfolder: Optional[str] = None,
     ):
         super().__init__()
--- a/diffusers_patch/pipelines/oms/pipeline_oms.py
+++ b/diffusers_patch/pipelines/oms/pipeline_oms.py
@@ -8,6 +8,7 @@ from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokeniz

 from diffusers.loaders import FromSingleFileMixin

+from huggingface_hub.constants import HF_HUB_CACHE, HF_HUB_OFFLINE
 from diffusers.utils import (
     USE_PEFT_BACKEND,
     deprecate,
@@ -17,6 +18,7 @@ from diffusers.utils.torch_utils import randn_tensor
 from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.pipelines.pipeline_utils import *
 from diffusers.pipelines.pipeline_utils import _get_pipeline_class
+from diffusers.pipelines.pipeline_loading_utils import *
 from diffusers.models.modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT

 from diffusers_patch.models.unet_2d_condition_woct import UNet2DConditionWoCTModel
@@ -164,7 +166,8 @@ class OMSPipeline(DiffusionPipeline, FromSingleFileMixin):
         sd_pipeline: DiffusionPipeline,
         oms_text_encoder:Optional[Union[CLIPTextModel, SDXLTextEncoder]],
         oms_tokenizer:Optional[Union[CLIPTokenizer, SDXLTokenizer]],
-        sd_scheduler = None
+        sd_scheduler = None,
+        trust_remote_code: bool = False,
     ):
         # assert sd_pipeline is not None

@@ -279,7 +282,7 @@ class OMSPipeline(DiffusionPipeline, FromSingleFileMixin):

     @classmethod
     def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
-        cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
+        cache_dir = kwargs.pop("cache_dir", HF_HUB_CACHE)
         resume_download = kwargs.pop("resume_download", False)
         force_download = kwargs.pop("force_download", False)
         proxies = kwargs.pop("proxies", None)
```
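
If you need code that tolerates both older and newer diffusers releases rather than pinning 0.27.2, a guarded import along these lines (a sketch, not part of this fork) handles both module layouts:

```python
# Sketch: recent diffusers releases (including 0.27.2) expose the UNet blocks
# under diffusers.models.unets; older releases used diffusers.models directly.
try:
    from diffusers.models.unets.unet_2d_blocks import CrossAttnDownBlock2D
except ImportError:  # older diffusers layout
    from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D
```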

# One More Step

The One More Step (OMS) module was proposed in *One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls* by Minghui Hu, Jianbin Zheng, Chuanxia Zheng, Tat-Jen Cham, et al.

By adding one small step on top of the sampling process, we can address the issues caused by flaws in current diffusion schedules without changing the original model parameters. This also allows some control over low-frequency information, such as color.

Our model is versatile and can be integrated into almost all widely used Stable Diffusion frameworks. It is compatible with community favorites such as LoRA, ControlNet, Adapter, and foundational models.

## Usage

OMS is now supported in 🤗 diffusers via a customized pipeline ([GitHub](https://github.com/mhh0318/OneMoreStep)). To run the model (especially with the LCM variant), first install the latest versions of the diffusers library as well as accelerate and transformers:

```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate
```
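
Since this fork was patched against diffusers 0.27.2, a quick version check (optional) can save debugging time later:

```python
# Optional sanity check; this fork targets diffusers 0.27.2.
import accelerate
import diffusers
import transformers

print(diffusers.__version__, transformers.__version__, accelerate.__version__)
```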

Then clone the repo:

```bash
git clone https://github.com/mhh0318/OneMoreStep.git
cd OneMoreStep
```
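
The `diffusers_patch` package imported below lives in this repo, so run Python from the repo root, or add the clone to your path first (the path below is a placeholder):

```python
# Placeholder path: adjust to wherever you cloned OneMoreStep.
import sys
sys.path.append('/path/to/OneMoreStep')
```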

### SDXL

The OMS module can be loaded with the SDXL base model stabilityai/stable-diffusion-xl-base-1.0, and all SDXL-based models and their LoRAs can share the same OMS module (this fork: kaeru-shigure/oms_b_openclip_xl).

Here is an example for SDXL with LCM-LoRA. First, import the related packages and choose an SDXL-based backbone and LoRA:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Load the SDXL backbone in fp16
sd_pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    add_watermarker=False,
).to('cuda')

# Swap in the LCM scheduler and load the LCM-LoRA weights
sd_scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config)
sd_pipe.load_lora_weights('latent-consistency/lcm-lora-sdxl', variant="fp16")
```
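
Optionally, the LoRA weights can be fused into the base weights with diffusers' standard `fuse_lora()`; this is not required, just a minor inference-speed optimization:

```python
# Optional: fuse the LCM-LoRA into the backbone weights
# to avoid per-step LoRA overhead.
sd_pipe.fuse_lora()
```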

Next, import the customized OMS pipeline to wrap the backbone and add OMS for sampling. We have uploaded the .safetensors files to the Hugging Face Hub. There are currently two choices for the SDXL backbone: the base OMS module with an OpenCLIP text encoder (h1t/oms_b_openclip_xl, patched for diffusers 0.27.2 in this fork as kaeru-shigure/oms_b_openclip_xl) and the large OMS module with two text encoders following the SDXL architecture (h1t/oms_l_mixclip_xl).

```python
from diffusers_patch import OMSPipeline

# Wrap the SDXL backbone with the OMS module from this fork
pipe = OMSPipeline.from_pretrained(
    'kaeru-shigure/oms_b_openclip_xl',
    sd_pipeline=sd_pipe,
    torch_dtype=torch.float16,
    variant="fp16",
    trust_remote_code=True,
    sd_scheduler=sd_scheduler,
)
pipe.to('cuda')
```

After setting a random seed, we can easily generate images with the OMS module.

```python
prompt = 'close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux'
generator = torch.Generator(device=pipe.device).manual_seed(1024)

# LCM-style sampling: low guidance scale and only 4 steps
image = pipe(prompt, guidance_scale=1, num_inference_steps=4, generator=generator)
image['images'][0]
```
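
The pipeline returns a dict whose `images` entry is a list of PIL images, so the result can be saved directly (the filename is arbitrary):

```python
# Keep the OMS result on disk to compare with the backbone-only run below.
image['images'][0].save('oms_on.png')
```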

*(sample output: SDXL + LCM-LoRA with OMS)*

Alternatively, we can bypass the OMS module and generate an image using only the backbone:

```python
# Same call with the OMS step disabled via oms_flag
image = pipe(prompt, guidance_scale=1, num_inference_steps=4, generator=generator, oms_flag=False)
image['images'][0]
```

*(sample output: SDXL + LCM-LoRA without OMS)*
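
To inspect the difference, the two results can be stitched side by side with PIL (assuming the filenames used above):

```python
from PIL import Image

# Save the backbone-only result, then build a side-by-side comparison.
image['images'][0].save('oms_off.png')
left, right = Image.open('oms_on.png'), Image.open('oms_off.png')
grid = Image.new('RGB', (left.width + right.width, max(left.height, right.height)))
grid.paste(left, (0, 0))
grid.paste(right, (left.width, 0))
grid.save('oms_compare.png')
```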

For more models and more features, such as diverse prompts, please refer to the [OMS repo](https://github.com/mhh0318/OneMoreStep).